Can Artificial Intelligence Expand Our Capacity for Human Learning?
A conversation with Gardner Campbell
As educators, we've all experienced the rise of new technologies, along with the process of sorting out how each one may affect our work and our lives. Is the coming of AI any different? If so, how can we approach AI wisely?
Here, Gardner Campbell, associate professor of English at Virginia Commonwealth University and a widely known technology thought leader, considers issues and concerns surrounding AI, identifies helpful resources, and offers some grounding thoughts on human learning as we embark on our AI journey in education.
Mary Grush: Does the shift to AI bring up radically new questions that we've never had to ask before, especially in the education context?
Gardner Campbell: The short answer is yes! But my answer requires some clarification. I'll try to provide a high-level overview here, but that means I'll probably be oversimplifying some things or raising questions that would need at least another Q&A to address.
Grush: I think most of our readers will understand your emphatic "yes" answer, but of course, please give us some background.
Campbell: Throughout history, general intelligence — meaning primarily the ability to reason, but viewed by many as also including qualities like imagination or creativity — has been considered the thing that distinguishes human beings as a species. Psychologists call this array of traits and capabilities "g" for short. It follows, then, that if computers can be said to be intelligent — to be described with values akin to reason, imagination, or creativity — then that "human" distinction collapses. And if that distinction collapses, any use of the word "human," any appellation tied to our uniqueness as a species, has to be re-examined.
The next question, then, is whether ChatGPT, Bing, Bard, Caktus.ai, Poe, et al. are intelligent in ways that involve reason, imagination, or creativity. My own view, as well as that of many experts in the field, is that they are not. They are not — or not yet — capable of what researchers call AGI, or artificial general intelligence, which is comparable to human intelligence in the ways I just mentioned — possessing reason, imagination, or creativity. That's why it's more accurate to call ChatGPT et al. "generative AI," as a way of distinguishing what these affordances can do from "AI" in the full sense of AGI, which is not what they can do.
Grush: So if ChatGPT and other so-called "AI" platforms aren't really performing along the lines of human-variety general intelligence, why do we call them AI at all?
Campbell: Aside from sheer hype, I'd point to two main reasons. First, the large-language-model design of generative AI, while in many respects little more than autocomplete on steroids, is the first computing technology that stimulates, to this potentially dangerous degree, what cognitive psychologists call overattribution. To put it simply, when one interacts with one of these "bots", there is the strong impression, even the unshakable conviction at times, that one is talking to someone, someone who is in fact intelligent in the human sense.
Overattribution means more than just anthropomorphizing, say, our automobiles, by giving them cute names. It means ascribing motivations, intentions, reason, creativity, and more, to things that do not possess those attributes.
And second, because human beings are social animals, we are always looking for companions. In fact, human culture results from the way individual intelligences share, with others, an environment of thought and creativity. Generative AI bots are engineered to present themselves to us as companions.
No matter how many times these bots repeat their scripted warnings that they are not in fact human — that they have no intentions, motivations, or original thoughts — they continue to use "I" to refer to themselves. And they construct answers in the form of smoothly intelligible language — out of their statistical analyses of how human beings use language, mind you! The extent to which these "I" statements and smoothly crafted language constructs resemble not only human speech but expert and thoughtful human speech is shocking to anyone interacting with them for the first time, and even many times afterward.
When we encounter this use of language, we can't help inferring personality — and usually competence or authority — because human language represents not only the world but also the personality and perceptions of the person who's communicating something about the world. Even in my description here, I've slipped into language suggesting that generative AI technologies "do" something, that they have a "self," though of course I know better.
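To make that "autocomplete on steroids" characterization concrete, here is a toy sketch in Python: a bigram model, far simpler than any real large language model and using an invented one-line corpus, that generates text purely from statistics of which words tend to follow which.

```python
from collections import Counter, defaultdict
import random

# Toy "autocomplete" sketch: count which word follows which in a tiny corpus,
# then generate a continuation by sampling likely next words. Real large
# language models are vastly more sophisticated, but the basic move is the
# same: predict plausible next words from statistics of human language use.
corpus = "the students read the book and the students wrote about the book".split()

next_words = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_words[current][following] += 1

def continue_text(start: str, length: int = 5) -> str:
    words = [start]
    for _ in range(length):
        options = next_words.get(words[-1])
        if not options:
            break  # no observed continuation for this word
        choice = random.choices(list(options), weights=list(options.values()))[0]
        words.append(choice)
    return " ".join(words)

print(continue_text("the"))  # e.g. "the students read the book and"
```

The scale is incomparably larger in real systems, but the underlying move is the same: continue the text with statistically likely words, with no understanding attached.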
That's the historical context, though all too brief, that I wanted to include here. We've seen all this before, to a lesser extent, with the "Eliza effect," named for Eliza, a computer program Joseph Weizenbaum wrote in the 1960s. Modeled on the nondirective therapy developed by Carl Rogers, Eliza appeared to converse with you simply by reflecting aspects of your questions and answers back at you. People who knew and wholeheartedly believed that Eliza was nothing more than a clever computer program nevertheless found themselves spending hours with Eliza, engrossed in what seemed to be the companionship of a tireless conversational partner. Weizenbaum [https://www.historyofinformation.com/detail.php?id=4137] was so alarmed by this that he ended up writing a book, Computer Power and Human Reason, about his concerns.
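For readers who have never seen how little machinery the Eliza effect requires, here is a minimal sketch in Python of the reflection trick described above. It is not Weizenbaum's original program, and the pronoun table is illustrative only.

```python
# A minimal Eliza-style "reflection" sketch: swap a few pronouns and mirror
# the user's statement back as a question, in the Rogerian manner.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my",
}

def reflect(statement: str) -> str:
    """Mirror a statement back as a question."""
    words = statement.lower().rstrip(".!?").split()
    mirrored = " ".join(REFLECTIONS.get(word, word) for word in words)
    return f"Why do you say {mirrored}?"

print(reflect("I am worried about my students."))
# -> Why do you say you are worried about your students?
```

Even a trick this simple was enough to keep people at the keyboard for hours, which is Weizenbaum's cautionary point.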
Grush: Is anything different now? Why are we still so drawn into these generative AI interactions?
Campbell: It's the sheer scale and elegance of the thing that's inspiring a very rapid uptake among millions of users. You enter a question and get back a short, apparently well-written essay answering your query in personal and professional-sounding prose. Or if you experiment with Bing, as an example, you'll get back answers that are more chatty, sometimes even a bit sassy or edgy, peppered with emojis. Those types of exchanges, done frequently enough, overcome our awareness that we are actually "conversing" with a computer program.
And the illusion of companionship is irresistible, in some cases because there's also narcissism at work on the human end of the exchange. Because the illusion is so pervasive and convincing, people tend to believe that it's not only real but somehow accurate, like the daily horoscope but infinitely customized to whatever is on your mind.
Grush: So the scale of AI adoption and acceptance is at least beginning to give familiar issues new force. What else are we going to encounter that may raise entirely new issues as we move deeper into AI?
Campbell: Three things, for starters. First, there's the potential for a destructive and irreversible detachment from reality as the culture becomes a hall of mirrors, some of them fun-house distortions. And human beings may in turn normalize the illusions because we have a need to believe in something. Geoffrey Hinton, who pioneered the ideas on which generative AI is based, recently resigned from Google [Early May, 2023] over his concerns [https://www.nytimes.com/2023/05/01/technology/ai-google-chatbot-engineer-quits-hinton.html], stating that "It is hard to see how you can prevent the bad actors from using it for bad things". Generative AI's smoothly written, personable answers go down easy. Too easy!
Second, there are what researchers call emergent phenomena, by which they actually mean, "Oh, we didn't expect that to happen!" We've already seen troubling instances of generative AI making up false "facts" with spurious citations and, at worst, suggesting that people should leave their spouses or commit suicide. The generative AI developers insist they're continuing to improve "guardrails" that will prevent false or otherwise harmful or polarizing answers from appearing, but significant damage has already been done, and I am not convinced that human beings will agree on what's "harmful" or "polarizing," especially when complex issues are involved. These are questions that must be addressed by human beings deliberately, openly, and deeply.
And third, the organizations using humanity as a test bed for these transformative leaps into perilous territory are huge for-profit corporations that, historically speaking, do not always aim for the betterment of humankind, to put it mildly and somewhat euphemistically. We're already hearing about AutoGPT, the capability to use generative AI to execute an entire sequence of tasks or problem-solving assignments. These technologies don't understand context, implications, or connotations, yet they'll be presented as tremendous time-saving conveniences. Can we trust them?
Grush: What you've been talking about here are all areas that should concern higher education institutions, but are there other concerns you might express that are even more specific to the teaching and learning context?
Campbell: Another emphatic yes! One thing that's coming up in the teaching and learning context is that the potential for superficial, thoughtless work or, at worst, cheating increases dramatically as the cost goes down with machine-based generation. We're already seeing this happen. And along with that potential, we will also see education institutions relying on generative AI to automate their own communication and education tasks in ways that will drain meaningful human interaction from what we will continue to call, less and less authentically, "teaching and learning". In the end, so-called "evergreen" courses will run themselves, and automated grading of machine-generated assignments will result in self-certifying meaninglessness. Many of these things are already happening, but generative AI will accelerate them by bringing costs down dramatically while permitting vast duplication of template-based course design.
And finally, as we become accustomed to machine-generated language, images, and so on, there are huge implications for the worth of human intellectual labor, and for the ability of creators of any kind to earn a fair profit from their labor. So, higher education will no longer be able to say either that it encourages human creativity and thoughtfulness, or that it prepares learners for the workplace, as both of these areas will be substantially changed.
Grush: In all these concerns, are you referring mostly to "bots" — or are there other forms of generative AI that may emerge with potential issues for education?
Campbell: I've been referring to chatbots primarily so far, but these concerns extend to image generation — Midjourney, DALL-E, and the like — as well as to the emerging video and voice generation technologies. Deepfakes are especially concerning, but the larger issues always involve the ways we as human beings define and share what we consider reality.
Grush: So looking even more deeply into education specifically, what are a few more of the questions you find yourself asking as you see AI emerging in teaching and learning?
Campbell: I ask myself how we can teach students not only how or when or why to use these technologies, but how to exercise their own good judgment in using them. The checklists and guidelines we offer our students are good, but there's no substitute for building wisdom. In fact, building wisdom should be at the center of what we term "education."
And I ponder how we can use this AI phenomenon as an opportunity for re-examining the ways we think about our institutional missions, and indeed about how education might best contribute to meaningful human flourishing in the present and for the future.
And I wonder how we can use the advent of AI as a "teachable moment" about what it means to be human, and about how human beings' innate search for understanding might be better encouraged and supported.
It's an interesting side note that as humans, we have worked to establish some safeguards and shared understanding internationally around nuclear weapons. How might we try to do something similar in education to get our human minds around AI, quickly and at scale, before our teaching and learning systems, higher education institutions, and even society itself suffer irreversible damage?
Grush: You've been tracking most of the conversations about generative AI in education. Of course, we can't cover all that research in a brief Q&A, but are there select resources that you think might be potential guideposts for educators? And allowing for change in this developing environment, can we use this set of resources like a compass, not a map?
Campbell: One of my go-to experts is Gary Marcus, whose newsletter "The Road to AI We Can Trust" has been greatly helpful. Rowan Cheung's "The Rundown" gathers many sources of information in one newsletter, and that's also tremendously helpful. And I continue to read David Weinberger, a clear-headed writer whose work on the Web influenced me greatly. Of course, there are many other thoughtful, articulate writers who should be included in a comprehensive bibliography of AI literature.
Two recent New Yorker essays by Ted Chiang are essential readings, in my view. One [https://www.newyorker.com/tech/annals-of-technology/chatgpt-is-a-blurry-jpeg-of-the-web] is a particularly thoughtful analysis of the cultural erosion and impoverishment that may result from generative AI, even if the "bad actors" don't take advantage of it. The other [https://www.newyorker.com/science/annals-of-artificial-intelligence/will-ai-become-the-new-mckinsey] warns that "…The desire to get something without effort is the real problem".
One more important resource that I'd urge my colleagues to use is their own judgment, based on their experiments with these technologies before they employ them in teaching and learning. Be careful not to reveal any personal information — yes, read the privacy policies and terms of service carefully! But get onto these platforms and try asking questions and follow-up questions, as well as posing difficult problems. See what you think. Take a look at experiments like the Khan Academy's "Khanmigo" prototypes, discussed by Sal Khan in a TED Talk. And then read Peter Coy's New York Times op-ed on that project, in which Coy finds out he can manipulate Khanmigo into simply giving him the answers, instead of tutoring him Socratically, merely by "playing dumb".
Grush: How would you describe human learning? Could AI in education help build the capacity for thinking and learning about more complex things? If so, could AI actually pave the way for understanding?
Campbell: For me, human learning is among the most extraordinary phenomena our universe has to offer, more beautiful and awe-inspiring than galaxies, nebulae, or any other natural phenomena. Human learning, especially in the way it can empower insight, lies at the very center of my own experience of meaning, of purpose, and indeed of love. One of the reasons I so love to write and think about the work of John Milton, the focus of my doctoral work, is that Milton placed an extremely high value on the human capacity for learning as the very core of what it means to be human. He even wrote a pamphlet on education reform!
I do think the current conversations surrounding AI can help focus our attention on the essential term "understanding," a capability generative AI does not have. Just what is understanding, and how far should we as educators proceed to teach with AI technologies we can employ but not truly understand ourselves? Are there frameworks, levels, or modes of understanding we'd be willing to work within to achieve or measure against certain curricular goals? We need even more robust conversations among educators about the notion of understanding: what we mean by it and how learners might demonstrate it.
Of course, there are always new tools, applications, and, in fact, whole new fields of generative AI to explore and — safely — experiment with. For example, there's an emerging field called "prompt engineering," the study and practice of eliciting the most useful and accurate results from generative AI. In essence, it's the study of how to pose questions that will get good and relevant answers. That's an interesting thing to think about, because it may have implications for teaching our writing students about good, clear expository prose.
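As a small, hypothetical illustration of the kind of revision prompt engineering involves, compare a vague prompt with a refined one. The prompts below are invented, and no particular AI service or API is assumed.

```python
# Two example prompts; only the wording differs, not any underlying service.
vague_prompt = "Tell me about the French Revolution."

refined_prompt = (
    "In three paragraphs written for first-year undergraduates, explain the "
    "main economic causes of the French Revolution, cite two primary sources, "
    "and note one point that historians still dispute."
)
```

The refined version specifies audience, length, scope, and evidence, which are the same qualities we ask of good expository writing.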
Grush: We've already touched on the scaling of AI. Are there any additional comments you'd like to make about scale?
Campbell: Indeed, I could make many more comments! But for now, I'll summarize by saying that, unfortunately, I see extraordinary dangers ahead as scaling up the availability and use of generative AI also scales up most or perhaps all of the most misguided approaches to the digital age that higher education has pursued for decades. For a more extensive exploration of what I believe to be these disastrous missteps within higher education, I invite readers to start with my 2009 article "A Personal Cyberinfrastructure" and go from there, especially by viewing my blog writings — for example, The Odyssey Project: Further Discoveries. Also see videos of my keynote presentations on my YouTube channel.
Grush: In order for fruitful and beneficial applications of AI in education to occur, what would help? I know there are several institutional components that might need to "come along" as AI adoption continues — maybe assessment, digital competencies, core curriculum revisions, or alignment with strategic plans, as examples. Is there one thing you think institutions could concentrate on that would be most useful, something that may even help us discover how AI could expand the capacity for human learning?
Campbell: There are many areas that need to be ready for change. To that end, as a proposed first step, networks of colleges and universities might declare "The Age of AI" as a theme for the upcoming academic year, and devote themselves to networked learning experiences around that theme, both within their institutions and across that interinstitutional network. I don't mean speaker series and symposia alone. An institutional commitment to asking the difficult questions should encourage substantive, thoughtful experiences integrated within and across the curriculum, and be inclusive of all students, faculty, and staff.
Expanding the capacity for human learning is a tall order, but it is the real goal. The choices we as educators make now for AI adoption can mean the difference between disaster and continued progress. Let's hope the choices are still ours.