I can remember being fascinated by Koko the gorilla when I was a kid. Researchers had taught Koko a vocabulary of about a thousand signs, and she had learned to communicate with humans. Often these signs were straightforward and even simple: a sign for a favorite food, or to name her pet kitten (“All Ball”). Scientists debated, and still debate to this day, whether Koko had any real fluency with language. She scored in the range of a human 3-year-old on cognitive and language tests, an incredibly impressive result that suggests, to some, the presence of a mind in a sense we could recognize, even if it was a limited mind. But Koko died at 46 years old, having never progressed further than that.
I thought about Koko recently while watching the Apple TV+ series Extrapolations. That show is provocative and also incredibly frustrating. It has an interesting premise; it imagines a world in which technology continues to progress at a furious pace while the climate deteriorates around us, making life difficult for humans and everything else living on earth. It plays out the consequences of inaction on climate change. This promising starting point is thwarted, though, by lots of corny dialogue, strange plotting, and distracting cameos. Meryl Streep, for example, voices a whale—the last whale, living in the year 2046—speaking through a whale/human translation device. (See? Corny, strange, distracting.) The implication behind the whale scenes is that whales have an enormous, exotic, but familiar intelligence, and that the only thing keeping us from understanding each other is the language barrier. It’s one of many things about the show that left me shaking my head in disbelief, but that I also found myself returning to in my thoughts for days and weeks to come.
Both Koko and Streep’s whale—and other subjects of study like parrots, octopuses, dolphins, SETI, and even dogs—are examples of the human desire to find intelligence somewhere besides ourselves. A lot of our science, like a lot of our popular culture, begins from the conviction that we cannot possibly be the only intelligent beings. We want companionship in the void of a mostly empty universe. And, undoubtedly, there is intelligence to be found everywhere. Gorillas don’t need to speak human languages to show their intelligence; like many primates, they show evidence of certain kinds of reasoning, tool use, and complex social life. Octopuses have a beguilingly alien intellect that we barely understand. Whales—even the ones not voiced by Meryl Streep—call out to each other across vast distances in ways that suggest a shared understanding of meaning. These days it’s all the rage to point out that plants can talk to each other through their roots and by making sounds imperceptible to human ears, sharing resources and information. There’s intelligence everywhere.
The most recent object of our human search for intelligence is machines. For years now, we have been hearing that the artificial intelligence revolution is just around the corner. In the past year, with the release of ChatGPT and other sophisticated large language models, we seem to have reached an inflection point. AI can do genuinely impressive things: pass the bar exam, drive a car, score highly on AP tests, and (just announced yesterday as a new offering from Google) write news stories for journalists. Perhaps some of the most impressive and terrifying feats have to do with image and video generation; the image at the top of this post, in fact, was auto-generated by an AI program embedded in the Substack platform. (For each post, I have the option to upload an image, find one among the images licensed by Substack, or have AI generate one.) I’m not sure what it’s supposed to be, exactly, but it’s certainly a provocative image, given the prompt I gave the program, which was “artificial intelligence” with a command to create a photorealistic image. It’s not hard to imagine a future where the epidemic of fake news is helped along by AI-generated “deepfakes,” making it harder and harder to tell fiction from reality. (We should ask whose purposes these capabilities might serve.)
This is one of many breathless warnings being shouted about AI: AI is going to replace attorneys, we are told. It is going to make teachers obsolete. AI is going to flood the market with cheap fiction, realistic-seeming but totally fabricated photographs, and ten-page essays generated in ten seconds for high school students to turn in as if they were their own, without bothering to learn anything. Prognosticators are warning that AI will be the end of numerous industries, the harbinger of a golden age of human leisure, or one of the horsemen of the apocalypse.
In my own field of education, the anxiety about AI has mostly centered on its potential to eradicate the essay as a medium of learning and teaching. Essays have long served as key building blocks of education; I can remember learning about the five-paragraph essay in the sixth grade, and honestly, the kinds of writing that get assigned from elementary school to graduate school today don’t deviate very much from that same format. I think a lot of my own success as an academic, both as a student and as a scholar, is attributable to my mastery of essay-writing. I’m not the smartest in the room, or the hardest-working, but I can write, and in this business that counts for a lot. Teachers at all levels tend to be the kinds of people who were good at learning and the tasks of learning, and who have a lot of positive associations with school. So it makes sense that as a group, we are nervous about a new technology that can perform the menial—but supposedly essential—task of writing an essay.
What will happen, folks wonder, when book reports are as easy as typing a prompt into a box? What will it mean when the standard compare-and-contrast essay can be done by machine? What about term papers, theses, and other forms of performing knowledge in written form?
I get the anxiety, but I’m also less worried about this than other people seem to be. Yes, AI can write a very passable essay, but it has been possible for a long time to game the system if you want to. Millions of papers are freely available on the web, searchable by topic and length, to the degree that there is a whole industry devoted to plagiarism detection. (Students might or might not know that every assignment they submit to most learning management systems is screened for plagiarism and given a score, sometimes displayed in green, yellow, or red, suggesting how much of it has been pilfered from known pre-written sources.) I have had students turn in copy-and-pasted Wikipedia articles before, or lightly edited versions of each other’s papers. These are the easy ones to catch; some students (though none that I have ever taught, to my knowledge) have paid others to write their papers for them. Given all of these ways of fulfilling an assignment without actually writing it, an AI-generated essay doesn’t seem like such a big deal. Yes, it’s cheating, but there have always been lots of ways to cheat.
Some of the most thoughtful pedagogues over the years have gotten around this problem by making assignments especially creative, esoteric, or weird, so that it’s unlikely anyone will find a pre-written essay on the same topic, or that someone on the internet who hasn’t been involved in the class would be able to dash off an essay on the topic for pay. The same techniques might work for AI. Or, some folks assign a creative task (a podcast, a painting, a slideshow) and then require a short essay explaining how the task was completed. There are ways around AI, the same way there have always been creative ways to engage students apart from the essay. The kinds of essays that AI is good at writing haven’t been state-of-the-art teaching tools for a long time.
But beyond that, I think it’s worth asking what kind of essay an AI is capable of generating. I have read a number of them, including high-school-level literature reviews, long-form journalism, blog posts, and breaking news stories. To me, they all seem pretty transparently nonhuman. They have a surface-level mundanity about them that reminds me of someone stalling for time until they can fetch a thought to organize things around. It’s like a class presentation by someone who read the book but can’t think of anything smart to say about it. They regurgitate facts but do little in the way of complex comparison or analysis. Their sentence structures are simple. Maybe I’ve been fooled and have read AI-generated content without even knowing it, but at best, AI these days seems capable of imitating bad human writing.
This mundane and unremarkable quality of AI is a feature, not a bug. It’s designed to sound like everything else; the way it’s produced virtually ensures that writing produced by an AI is going to sound pretty middle-of-the-road and fail to say anything original. Most AI devoted to writing these days is built on a large language model, in which an algorithm is fed huge quantities of text (billions upon billions of words, often more) and then asked to imitate that text. The AI learns which words are likely to follow other words, which texts are in conversation with each other, and what kinds of things people have said on a topic. Then it spits out a new version of that. This is why AI sometimes produces racist, sexist, or otherwise appalling content: it has been trained on things written by humans, which are often racist, sexist, or otherwise appalling. AI is a mirror, not a window.
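To make that idea concrete, here is a toy sketch of next-word prediction (my own illustration, not anything drawn from how ChatGPT is actually built): a tiny “bigram” model that records which words follow which in its training text and then samples from those counts. Real large language models use neural networks and unimaginably more data, but the mirror-like quality is the same.

```python
# A toy bigram "language model": an illustration of next-word
# prediction, not how real systems like ChatGPT actually work.
import random
from collections import defaultdict

# A tiny training corpus (real models train on billions of words).
corpus = (
    "the whale sang to the other whale and the whale listened "
    "the gorilla signed to the researcher and the researcher signed back"
).split()

# Record which words follow which; repeated pairs become more likely.
following = defaultdict(list)
for word, next_word in zip(corpus, corpus[1:]):
    following[word].append(next_word)

def generate(start: str, length: int = 8) -> str:
    """Generate text by repeatedly sampling a plausible next word."""
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))  # e.g. "the researcher and the whale sang to the other"
```

Notice that everything this little model can ever say is, by construction, a recombination of what it has already read. That is the mirror.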
In this regard, I’m skeptical of claims that AI will become “intelligent” any time soon, if ever. How would that happen? At their core, AIs—even really sophisticated ones like ChatGPT and its successors—are parrots. They can mimic, but they cannot yet create. It’s not clear how they could make the leap from really skilled copying to truly intelligent creativity. There are people way smarter and better trained than me out there right now working on this problem, and maybe they’ll crack it. But I think it’s more likely than not that AI will end up more like Koko the gorilla than like a human being. It will be able to wow us with its ability to speak our language in a certain kind of way, but it won’t be able to help us see anything that we didn’t already know, even if we didn’t know we knew it.
This is not to dismiss the potential of AI—far from it. It is, and will continue to be, a powerful tool. Like microscopes and telescopes that help us see better than our eyes, and cars and airplanes that help us travel faster than our legs, and books that help us learn and remember better than our brains, AI will be a tool that helps us understand all kinds of things about the world better, not least of all ourselves. It’s already helping to detect cancers, find lost and hidden things, predict the weather, and—yes—write essays. It will only get better, and more powerful. But I wonder whether it will ever really be able to know more than we can tell it.
At the Iliff School of Theology, where I teach, we have an AI Institute that is tackling some of the big questions about AI from a theological perspective. (I am not part of that AI Institute, and my opinions here certainly aren’t endorsed by the people who are, who have a far more sophisticated understanding of these things than I do.) One of the key theological questions about AI is, paradoxically, what it means to be human. When a machine can do things that only humans have been able to do, like write an essay or drive a car, what makes a person a person? Is there something theological about the answer to that question? Is there something about human ways of knowing and creating that is uniquely human, that cannot be mimicked by a machine? I think so. Can machines help us understand better what is unique about being human? I think so. Will machines and humans collaborate in ways that make theological reflection more important? I think so.
If AI helps us to ask these kinds of questions, and to appreciate the gift of humanity in all of its diversity and wonder, then I think it will be one of our greatest creations. If AI can help us know ourselves and appreciate each other, then it might contribute more to human flourishing than we suspect. For the same reasons, it might also hurt us, and drive wedges between us. Either way, AI will be teaching us something about ourselves. But I don’t think it will be the intelligent companion we have been searching for—not really. AI will amplify the best and the worst of humanity, but it will always be human, knowing what we know and saying what we say, remixing us for our own consumption. It will teach us about ourselves, telling us things that we will want to know—but also things we won’t.
Yes, you can write!