

Campus Technology Insider Podcast March 2023

Listen: AI and the Future of Writing Instruction

Rhea Kelly: Hello, and welcome to the Campus Technology Insider podcast. I'm Rhea Kelly, editor in chief of Campus Technology, and your host.

Much has been made of plagiarism concerns around the use of ChatGPT in education, and there's no doubt that generative AI technology will impact the role of writing both in higher education and in society in general. But as my guest Mark Warschauer points out, the use of AI for writing and communication presents an inherent contradiction: Those who can best write with AI will be those who can best write without it, because they'll need to be able to write good prompts, evaluate the AI output, and edit the resulting text into a usable final product. Warschauer is a professor of education and informatics at the University of California, Irvine, and founder of UCI's Digital Learning Lab. We talked about the potential of AI for teaching and learning, overcoming faculty skepticism about AI tools, research questions that should be asked about AI in education, and more. Here's our chat.

Hi, Mark, welcome to the podcast.

Mark Warschauer: Hi Rhea, great to talk to you.

Kelly: So I thought maybe to start, you could just introduce yourself briefly and talk a little bit about your background and your work.

Warschauer: Thank you. I'm a professor of education and informatics at the University of California, Irvine. Basically, I've been doing research on technology, literacy, and learning since the mid-1990s. I conducted one of the first dissertations on online learning, from 1995 to 1997. And ever since then, I've been focused on how new digital media, including AI, is changing how we communicate, and what that means for diverse learners, language development, literacy development, and learning.

Kelly: So of course, with your focus on technologies like AI, I wanted to ask you about ChatGPT. So much attention has been paid to the plagiarism concerns around this new technology. Where do you stand on that? Is this a bad thing? Is it a useful tool? What do you think?

Warschauer: I like to frame this, Rhea, as what I call the June-July contradiction. In June, when students are finishing up their classes, they may be punished by their instructors for using AI as a writing tool. But in July, when they're out on the job market, they can be punished by their employers or potential employers for not knowing how to use AI as a writing tool. Because employers care a lot more about efficiency and productivity than they do about authenticity. So I think our job is to help students get from here to there.

Kelly: I love how you brought up the workforce skills angle with that. So plagiarism aside, I'm just curious how you think this will impact writing instruction in general. Because even if employers appreciate the skills that go along with utilizing AI, is ChatGPT a crutch that impedes learning a good base of writing practices?

Warschauer: Great question, Rhea. And here we get to another contradiction, which I call the "with it or without it" contradiction. On the one hand, people who write and communicate in professional life will definitely want to take advantage of AI's powerful affordances, so they will need to learn how to do so. But ironically, those who can best write with AI will be those who can best write without it, because they'll need to be able to write good prompts, to corroborate and evaluate what comes out of it, and to edit AI output. So if learners use it too soon and too extensively, it can definitely become a crutch that will rob them of the opportunity to develop foundational skills. I think a good analogy here is the graphing calculator in mathematics instruction. If children depend on calculators at too young an age, they'll be robbed of opportunities to develop basic knowledge and skills that will help them in the long run. On the other hand, by the time they are taking more advanced math in high school, it is only natural to deploy graphing calculators to do the grunt work. But even then, high school students are often given assignments and assessments both with and without calculators, so they are learning how to do math with them and without them. I think we need to develop a framework to help students learn how to use AI writing tools well in a developmentally appropriate way. We're suggesting what we call a five-part framework that will implement these goals over time. We believe that students need to: One, understand tools such as ChatGPT and how they function. Two, know how to access and navigate them. Three, learn how to effectively prompt them. As you may know, prompt engineering itself is becoming a big deal. Four, learn how to corroborate their output.
And five, learn how to effectively and ethically incorporate that output into their own work, which includes transparently describing how output from ChatGPT or other tools was used in their final product. So this is what we all have to figure out together, when and how, at what age you start introducing this in what way, so students develop both the foundational skills and the skills to exploit these powerful new technologies.

Kelly: Evaluating the output, especially, is something I feel is difficult to do without that good base of knowing what constitutes good writing.

Warschauer: Exactly.

Kelly: What do you think is most interesting about ChatGPT, just from a higher education perspective, or areas where you see the most potential in teaching and learning?

Warschauer: To me, I think the most interesting thing about ChatGPT is what influence these tools will have on the role of writing, not only in the university, but also in society. Over the past 50 years, with the advent of an information society, writing has gone from the skill of a fairly small privileged elite to an essential skill for today's world. And studies today, for example, have shown that even engineers spend the majority of their time writing. Now, new technologies have always affected literacy. Think, for example, how the invention and diffusion of the printing press devalued the skill of reading out loud for a public audience. But changes are happening especially quickly right now, to reading and writing and literacy in this new online era. So on the one hand, you might say writing will become a trivial skill that we'll offload to AI. Or it may be that ready access to AI-generated mediocre written content will make skilled writing even more important. It's really impossible to predict how this is going to play out. Related to that, in higher education, of course, writing is not just writing. We're not using it only as a communication tool that students will need for their future, but we're also using it as a way to stimulate and evaluate thinking ability and to help both students and teachers reflect on what students have learned. So there are going to be a lot of really tricky questions over the coming years: whether we're going to double down on the teaching and learning of writing, or we're going to focus on other ways of expressing knowledge and thinking ability. And moving beyond writing, I think one of the areas with the most potential in teaching and learning is computer programming and coding. As in writing, ChatGPT can powerfully help people in their coding.
But again, ironically, it takes a good programmer to best make use of it, to identify where it's producing something helpful and where it's producing something that's completely wrong. So again, we have this "with it or without it" contradiction that will affect many areas of teaching and learning. If I can give one more example, think of business schools, which are teaching students to generate market analyses, reports, and plans. I guarantee you that, just three months into the public distribution of ChatGPT, AI-generated versions of these are already almost ubiquitous in the business world. So we're going to have to figure out how these are all going to be integrated into instruction. We live in interesting times.

Kelly: Yeah, definitely. Do you encounter a lot of skepticism among other faculty, and what do you think is the best way to get buy-in from those skeptics on embracing AI for teaching and learning?

Warschauer: Well, of course there's skepticism, Rhea. And I've always thought that the best way to get buy-in from faculty on embracing new tools is to get them to start using them in their own professional lives. So again, I go back to the example of online learning. It's ubiquitous now, but there was a lot of skepticism in the past. We found that once faculty had been involved in productive online learning communities themselves, and could see how to use those tools to promote their own learning and that of their colleagues, they were more likely to want to incorporate those online tools in their own teaching. So I would suggest taking the same approach with generative AI. Let's show instructors how to use it in their own teaching and research: for developing syllabus suggestions, lesson plans, and reading lists, for experimenting with generating responses to student writing, for developing quiz questions, and a myriad of other ways. And I think once they understand how it can help them, they'll also be inclined to think more creatively about how it can help their students.

Kelly: Do you have any concrete tips for specific ways ChatGPT might be used in a class as a learning tool?

Warschauer: Well, there are certainly a lot of things that students can do, whether or not these tools are being used by their teachers. If students are going to research a topic, they can generate a list of resources or citations on that topic. It may or may not be accurate, but it's a starting point. They can generate a list of possible angles to take for an assignment. They can generate multiple choice quiz questions to test themselves on what they already know about a topic. They can use it to generate feedback on their own writing. And add to that all the ways that instructors can use it. I've seen some interesting examples recently where students are asked to compare writing that was done by humans and writing that was done by AI. For example, both AI and the students themselves were asked to summarize longer pieces of writing, and then the students had to compare them and see what the strengths and advantages of each were. I'm also involved in second language learning and research on second language learning. There have been some of the same debates and discussions about Google Translate, where at first people wanted to ban it from the classroom, and then people realized, certainly in terms of written translations, it's ridiculous to ban it because it's going to be used ubiquitously. So rather than telling students not to use it, people who taught translation brought it into the classroom: okay, let's start with something translated by Google, and then let's edit it, compare it, and see how it can be improved. Well, I can tell you, at least for certain languages, such as Spanish and English, ChatGPT is far better than Google Translate.
So again, there are ways it can be integrated, similar to the way we've used other tools like online learning, Google Translate, and calculators: rather than trying to ban them, critically look at their output compared to what humans can do.

Kelly: I think earlier you mentioned ethical concerns. So I wonder if you could talk a little bit about those sorts of ethical concerns around ChatGPT that students need to understand.

Warschauer: So I spend a fair amount of time on Reddit. And as you may know, it's mostly young people on Reddit. I go to both the ChatGPT and OpenAI subreddits. And one of the most common themes there is students getting caught for having turned in ChatGPT's output as their own, and what they should do about it. And the most common advice given by other students is to deny, deny, deny, because it's very hard to prove. That's obviously problematic. I think we need to teach students about the ethics of these tools. In a certain way, they've been democratized: everybody could always cheat on essays by paying somebody to write something for them, but now even those without the means and resources to do that can essentially ask ChatGPT to do it. So there's no way around it. Of course there are tools, as I'm sure you and your listeners are aware, that are designed to detect ChatGPT output. But they're not 100% foolproof, and there are always ways to get around them. For example, there are paraphrasing websites, so you could run something through ChatGPT and then run it through a paraphrasing tool to change it further, or change a few words yourself. So we're going to be in a never-ending losing battle, I think, if we try to address this purely technologically. We're going to have to pay more attention to teaching students about the ethics of this, and helping them understand that this is important not only for their classroom but for their careers: using AI and AI output in an honest, ethical, and effective way is a real value to them, to their lives, and to society.
And again, for somebody who comes from the field of second language learning, this has often been a contradiction for immigrant students and foreign students. They're always taught that to be really good at English, they should imitate native speakers. But when they find language examples from native speakers and put them in their writing, they get dinged for plagiarism. So teaching them what it means to imitate or incorporate other people's speech or writing in your own has always been a challenging thing to begin with, and this just adds to the challenge. But I don't think there's any way we can get around it. So for instructors, especially writing instructors, understanding the ethical ways of incorporating output from ChatGPT, and what needs to be disclosed, and how and why, is going to be important. And the central element of that is, of course, transparency, so that how you used it becomes clear to the reader.

Kelly: As the founder of UCI's Digital Learning Lab, I'm thinking you must have thought about potential research on generative AI in education. So I'm curious, what kinds of research questions do you think, you know, people need to be asking or pursuing in this field?

Warschauer: Great question. And I love thinking about this and talking about this. So we've been spending a lot of time on this in my lab, really on two related questions. One is, how can we use ChatGPT in our own research, and the other is, what kinds of research do we want to do on ChatGPT. As for the first question, we have grants from the National Science Foundation to develop interactive learning materials for children. More specifically, we're partnering with PBS Kids to create dialogic versions of their television programming. You might be familiar with how, when kids watch a TV show, a character like Dora the Explorer or Mickey Mouse pops up, asks them questions, and waits for their response. The problem is that Dora or Mickey doesn't know what the kid is saying. Well, we're creating dialogic versions of these shows that kids can watch on an iPad or a laptop, where the character asks them a question, but we're programming it using a conversational agent, like Siri or Alexa, so the character can understand the kid's response and then dialogue about it with them. And we find that kids learn a lot better this way. So we've immediately started to experiment with how we can use ChatGPT in this process, and we're experimenting with using it to develop good questions for kids, or using it as a way to interpret and reply to children's answers. And we're finding that it's actually a lot more effective at that than other tools we've been using, like the engines behind Google Assistant, Siri, or Alexa. And we want to take that further by creating situations where children can generate their own stories using generative AI. So there are all sorts of ideas for this. But on to the other part: we're also talking about what research we want to conduct on ChatGPT. And because this is so new, we're really focusing on R&D.
We want to partner with instructors to develop tools, curricula, and pedagogical approaches for effective integration of ChatGPT into the classroom. And we have two projects we've initiated for that. The first is a partnership with the UCI School of Engineering. They have a class called "Communications for the Profession" that all engineering majors have to take, so we're working with the instructors of that course on these issues. It's what we call design-based implementation research, in which we develop some things together, try them in the classroom, further refine and develop them, and then disseminate them so they can be used much more broadly. And we have a similar partnership underway related to high school English language arts and social studies, where we're developing partnerships with a couple of high schools in Orange County to work with teachers on these things. Now, we're developing not only curricula but also tools. You may have noticed, Rhea, that a zillion companies have sprung up that offer an online interface so that businesses can communicate with ChatGPT, using OpenAI's API, for specific purposes, like writing social media posts, or presentations, or emails. But nobody's created anything like that for education yet. I'm sure they will, and we're not trying to create the big tool that everybody across the country is going to use. But we are creating some small tools that will serve our purposes. For example, we're creating a Chrome plugin so that students, whether they have an account or not, can interact with ChatGPT for specific constrained purposes that we set up: they might click on a certain box to get feedback on their writing according to an evidence-based rubric, or to get quizzes on something. So this work is just getting started, but we're quite excited about it.
It's taking a lot of time and energy, and everybody's enthusiastic about it.
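Warschauer describes a tool that lets students reach ChatGPT only for "specific constrained purposes," such as rubric-based feedback on writing, via OpenAI's API. As a rough illustration only, here is a minimal sketch of how such a constrained interface might work; the rubric text and function names are hypothetical, not UCI's actual tool.

```python
import json
import os
import urllib.request

# Hypothetical rubric text; the transcript only says "an evidence-based rubric."
RUBRIC = (
    "Evaluate the student's draft on: (1) thesis clarity, "
    "(2) use of evidence, (3) organization. Give feedback only; "
    "do not rewrite the draft for the student."
)

def build_messages(draft):
    """Build the constrained chat messages: the system prompt limits the
    model to rubric-based feedback rather than open-ended conversation."""
    return [
        {"role": "system", "content": RUBRIC},
        {"role": "user", "content": draft},
    ]

def request_feedback(draft, model="gpt-3.5-turbo"):
    """Send the constrained request to OpenAI's chat completions endpoint.
    Requires an OPENAI_API_KEY environment variable."""
    payload = json.dumps(
        {"model": model, "messages": build_messages(draft)}
    ).encode()
    req = urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer " + os.environ["OPENAI_API_KEY"],
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

The design point is that the student never writes the prompt: the interface fixes the system message, so the tool stays a feedback aid rather than an open chat.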

Kelly: How long are these studies? What kind of timeline are we talking about?

Warschauer: We're starting both of these studies now. And we think they might go for a couple of years, through two iterative cycles of academic years, where we develop something, pilot it through a year, and fully refine it. Then in the summer we develop it again, start to implement it, and then we're ready for broader distribution. But we practice open science, where we share our tools, our resources, and our ideas. So we're never in the mode of, okay, we have to finish this and perfect it before we tell anybody what we're doing, because they're going to steal it. As soon as we have anything related to this, we're going to share it with people, and we're starting that already. We have a new initiative, called Pens and Pixels, that I'm happy to tell you a little bit about. We're anticipating, I can't announce it yet, but we're anticipating a grant to organize a national conference on the role of generative AI in both education and educational research. The conference is planned for July 13th, and it will be online. We will also use the resources to create a curated set of resources to help educators, educational researchers, and students navigate this terrain. The model we're drawing on for that is the Online Learning Research Center website that we created in March 2020, which helped thousands of educators around the world transition to online learning. By the way, we came up with the name Pens and Pixels by asking ChatGPT for some suggestions.

Kelly: I love that. It's actually a great name.

Warschauer: It suggested "Pen and Pixel," singular, which I really liked. But the URL for that was already taken, so we edited it to Pens and Pixels, which is also pretty good, and it will be out soon for both the conference and the website.

Kelly: That is a great idea. I can think of a number of institutions that put these sorts of websites together in the early days of the pandemic, to have resources for the emergency remote teaching that was going on available in one place. And to have something like that for this just sounds amazing.

Warschauer: Yes, and I think the easy thing to do is create a website and just throw everything you come across up there, and I've seen some things like that, you know, that have 200 articles on ChatGPT. Those obviously aren't very helpful. The more difficult thing is to create something that's really curated, that separates the wheat from the chaff, or whatever the expression is. Because that takes some work. And again, I think the Online Learning Research Center is still a useful website on online learning. We have, for example, rubrics there for instructors and program directors and others to evaluate online courses. We have a lot of great tools there as well. And we're looking forward to doing the same thing on generative AI.

Kelly: Well, I can't wait to see that. Thank you so much for coming on. This was great.

Warschauer: Thank you for having me.

Kelly: Thank you for joining us. I'm Rhea Kelly, and this was the Campus Technology Insider podcast. You can find us on the major podcast platforms or visit us online. Let us know what you think of this episode and what you'd like to hear in the future. Until next time.
