One day, Anna Robbins and John Campbell were sitting at the table in Robbins’ office talking about artificial intelligence.
“I wonder if it could write my syllabus for me,” said Robbins, the president and dean of theology of Acadia Divinity College, part of Acadia University in Wolfville, Nova Scotia, Canada.
Campbell, the school’s director of technology for education, looked at her and said, “Well, let’s see.”
The pair fired up ChatGPT, gave a line or two about the institution, and asked it to write a 12-week syllabus for a course on the ethics of AI.
“It produced this thing,” Robbins said. “And we just looked at each other in silence and said, ‘This has to be a research project.’ Because it was pretty good, what it gave us.”
They refined the project, creating a frame for ChatGPT that included the learning outcomes for the master of divinity program; the school’s approach to education, which draws on Bloom’s Taxonomy as well as Indigenous ways of knowing; and other key aspects of the institution.
Then they asked it to design learning outcomes for a course on the ethics of AI.
“They were really excellent. They were as good as, if not better (I have to say humbly) than what I would write as a professor,” Robbins said. “And it would take me hours sometimes to tweak the language to get them just right.”
Using the learning outcomes, ChatGPT wrote a full syllabus for a course on the ethics of AI, including texts, readings and assignments. Later, it generated video lectures with an avatar of Robbins that can teach in 80 languages and was so realistic that her family couldn’t tell at first glance it wasn’t her.
The result? This fall, six students of Acadia Divinity College are enrolled in an entirely AI-generated course. (They are being graded, pass/fail, on their evaluation of the course itself, not on the content of the class.)
It’s one of the first experiments at Acadia using AI in theological education — something Robbins thinks is crucial for theologians and educators to understand.
“We have a creative team around here, and so when we saw some of what was happening with the release of the large language models like ChatGPT, we just started playing with it — first of all, for fun, because we found it entertaining,” Robbins said.
“But then we realized very quickly that there was something pretty profoundly disruptive in this technology.”
Robbins spoke with Faith & Leadership’s Sally Hicks about the AI-generated course experiment as well as other ways this technology is already changing education. The following is an edited transcript.
Faith & Leadership: It sounds as if you’re at the forefront of thinking about this big new world.
Anna Robbins: I hadn’t really expected it, but there we are. It comes quite naturally in some ways, because as a school, our strategic vision includes this idea of launching a futuring hub, where we would do research about the future church, the future landscape.
Futuring is an exercise that businesses have done for a long time, as have other large organizations, but not the Christian world so much. Certainly, not theological educational institutions or churches. I worked in the U.K. for a long time and was involved in doing some futuring exercises there looking at the future of the church. My whole area of study was theology and contemporary culture, so I was always interested in where things are going next.
We have research going on in that hub, looking at trends, using AI tools to help us identify some of the trends alongside sociological and theological research.
The more we saw research coming out, the more it seemed pretty obvious that AI was something everyone was going to have to wrestle with. So we thought, “What are we waiting for? It’s quite accessible. There are many things we could do with it.” So the question was, “Well, let’s see all that’s possible for theological education and for the church with AI; then, we’ll start to look at what of this is desirable.”
Being first out of the gate, a lot of people say, “Why are you so in love with AI?” I say, “We’re not in love with AI. We’re fascinated with it. We think it’s going to have a huge impact on culture.”
We need to be out there saying, “What can it do, and what can be helpful to our school and to our churches? What could be harmful? What do we not want to push into?”
But we can’t make those decisions if we don’t get out there and experiment to see what’s possible and to imagine what could be done with it that would be good for our mission and for the kingdom mission of God in Christ in the church.
In futuring strategy, you have the possibility to shape a desired future and not only be subject to whatever future emerges. Being first out of the gate seemed to us to be a real gift. And when I say first out of the gate, in many ways we’re not, but probably in theological education, we’re among the first.
In the history of technology, ethical reflection has lagged far behind technological development. Theological educators bring to the development of technology an immediately and inherently reflective frame, I think, a morally reflective frame.
We then began to just toot the horn everywhere and say, “Everybody needs to play with this. Everybody needs to try it. Let us show you what we’re doing so you can actually see the disruptive power that this is going to have on education and think about how we are, as theological institutions, going to engage it.”
Also, we need to help equip our pastoral leaders to engage and to use it well, because we found very quickly that it is being used very widely in the churches in all kinds of different ways.
So we’re able to facilitate some of those conversations.
F&L: How is the AI-created course going with your avatar lecturing?
AR: It’s a research project, so we’re not intervening until the course is over, and we’ll do focus group interviews with the participants.
Our pedagogical style is we do asynchronous videos for the lecture pieces. Then we do synchronous meetings every week in a flipped classroom mode to go through the material and see if there are any questions, to deal with it in a more experiential way together.
Immediately, the time saved for us could be phenomenal, because you give it your material and it generates the video. You don’t have to sit and create your own video and edit your videos and so on. So that’s a win right out of the box, along with the writing of learning outcomes.
At this moment, my avatar teaches in 80 languages. If we think about this, this is phenomenal, right? We’re talking about Pentecost for theological education.
Can the AI actually generate that content in a reliable and accurate way? That’s part of what we’re testing out as well. At the end of their time with it, I will sit down and go through the material and have a look at what it has generated.
Because we’re a Baptist school, one of the first things it generated was a [Baptist-oriented] reading list, but that was far too narrow for our context. We said, “We’re not that kind of Baptist. We’re historically much broader than that.” It responds, and all of this happens in seconds. This is not a lengthy process.
At the end of the day, what are the efficiency savings for faculty? People say, “Well, you don’t need faculty anymore.” Technically not. My tech director could easily run a whole seminary off the side of his desk with a handful of AI avatars and so on.
People say, “Well, you’re just trying to get rid of pastors and professors.” Absolutely not. It’s just that if there are ways of using the technology that are efficient and better than what we’re doing, then why not do that? Professors can have more time to research and think and mentor and be with students in a way that they’ve often complained they’ve not been able to do because the workload is so heavy.
There are a lot of advantages we think could be gained, but the experiment is really for us to find out what’s good and what isn’t so good. I’m really, really going to be interested in reflecting on the student experience, because to me, that’s the key.
F&L: How were the students chosen for this experiment?
AR: We just put it out to the student body: “You can have free tuition for this course if you’d like to participate in this experiment.” It was all very clear to them. We had a high level of interest from students, and we have six enrolled in it.
F&L: This may sound like a simplistic question, but do you worry that it’s going to teach them something that’s wrong?
AR: These are graduate students. They’re very capable people. They were told from the outset the nature of the research and the nature of what was happening. I did hear a couple of things — “Well, how do we know if what it’s teaching us is right?” And we laughed, because we said, “How do you know if what your professor is teaching you is right?”
People worry that using AI is going to kill critical faculties; it actually can sharpen them. The students came into this course with their critical thinking antennae on full alert, and we think they should go into every situation with their critical thinking antennae on full alert.
F&L: Acadia is committed to Indigenous ways of learning. What is this, and how does it interact with the AI piece?
AR: As a school, we’ve had a decadelong partnership with NAIITS, which is an Indigenous learning community. It’s had a huge impact on our understanding of how we learn to walk well with the original inhabitants of the land.
In Canada, a lot of reconciliation and racial justice pieces revolve around the Canadian Truth and Reconciliation Commission, which explored the relationship between settler people and Indigenous people in Canada, particularly through the residential schools. Pressing into the Canadian TRC calls to action for the churches and for theological education involves indigenizing the curriculum in some ways. This is about decolonizing the curriculum and the institution.
We’ve been pushed to consider heutagogy, education as discovery, not simply a transfer of knowledge. I think AI offers an interesting mix in that discussion, actually. If students can generate their own courses and generate their own learning in a course of discovery, there may be some coherence there, potentially, with Indigenous ways of knowing, but there’s some work on that that would need to be done.
F&L: This is a fascinating conversation for me personally, as an editor and a writer. For many people in my professional world, this feels very threatening.
AR: It probably is one of the industries under the most immediate threat. I think part of it depends on how we learn to use the tools. I use it for writing a lot. We have it programmed now to generate all my donor letters.
The writer is still necessary, I believe, and I still don’t like what it cold-writes for me. I still like to do a draft. I will give it to ChatGPT, and I’ll say, “Can you make this better? Can you fix this part?” And it’ll come back, and I’ll say, “Actually, that’s worse.” And then I’ll go back to what I had before. It is this kind of co-creation.
It’s really good with technical writing. I’ll say, “I need a policy on this and this and this. Please include this, this and this.” Bang, there it is. Of course, I still need to go in and add things or tweak things and that sort of thing. But it’s a huge time saver.
I think that there will still be a place for all kinds of human creativity, because it will become more and more rare.
I know enough of it now that I think I could use a series of prompts and within a matter of a few hours, I could have a Ph.D. written. I spent three years doing a Ph.D. What is the point when this is so easily at our fingertips? Do we need to look at education, higher education, what we’re looking for in the equipping of professors, in a very different way?
We’re having a little bit of fun at this kind of level that we’re using here, but there are some big, big questions that it raises.
F&L: One question that comes to mind is that you are co-creating and you are influencing it now, but if you’ve already spent three years writing a Ph.D. dissertation, you know what to bring to your AI-generated one. But if that person who’s writing a first major academic work has only used AI, is it different? Is it better? Is it worse?
AR: It’s different. I’m hesitant to say it’s better or worse. I can’t say how that will land, but why would someone spend all that time if they don’t have to? We’ve had discussions in our university senate here, and they say, “Well, the students need to learn to critically think.” I’m like, “Well, why will they not critically think by using AI? They can.”
I use AI all the time to do arguments about the validity of the New Testament or the reality of the resurrection. It’s very good at arguing. If I say, “OK, I want to get more up to speed with Hume, so you be Hume and I’ll be me and we’ll argue,” we do this, and it’s a lot of fun, actually.
There are ways that it can train your knowledge and your critical acumen without necessarily having to sit in the library for three or five years. And I don’t know what we’ll be looking for in the future, but I know we’re already looking for professors who are much more able to turn their hand to a lot of different things.
Yes, they’re excellent specialists in their fields, but they’re not here just to be rarefied scholars. They’re here for the church and whatever that means, investing in the lives of students and adjusting to what’s happening in the big wide world.
That’s where part of our decolonizing work really helps with innovation. I think decolonization and innovation are really great partners, because when you decolonize, as you know, you’re recognizing that we don’t know everything and we’re not in control of everything. Actually, we’re in control of almost nothing.
I have a bunch of knowledge that I got from hanging out in libraries, reading about white guys in the 20th century. Why is that the pinnacle of knowledge? There’s a leveling to that where we can say, “Well, if having a Ph.D. isn’t the great pinnacle we thought it was, what makes a really good theological professor?”
We still want thinkers. I don’t know that it’s going to look the same, though. I think it is decolonizing work when I look at my shelf of books — which is smaller than it used to be, and continues to be ever smaller — as a reflection of how little we actually know. But we’ve convinced ourselves that we know a lot.