Campus Technology Insider Podcast February 2023

Listen: AI in Education: Will We Need Humans Anymore?

00:08
Rhea Kelly: Hello and welcome to the Campus Technology Insider podcast. I'm Rhea Kelly, editor in chief of Campus Technology, and your host.

ChatGPT is groundbreaking, but it's also merely the first in what will likely be a series of innovations built on foundational developments in artificial intelligence, machine learning, and natural language processing that are going to change the world. Higher education is already feeling the impact of generative AI technology in terms of plagiarism and instructional design concerns, but these challenges also come with immense opportunities to personalize learning and streamline time-consuming tasks. For this episode of the podcast, I spoke with Mark Schneider, director of the U.S. Department of Education's Institute of Education Sciences, about how AI is transforming education and the evolving role of humans in an AI-powered future. Here's our chat.

Hi, Mark, welcome to the podcast.

Mark Schneider: Thank you for having me.

Kelly: So I thought maybe I should have you kind of introduce yourself and your role at the IES.

01:24
Schneider: So I'm Mark Schneider, I'm the director of the Institute of Education Sciences. It's a science agency housed within the U.S. Department of Education. It does research; we have two research centers, the National Center for Education Research and the National Center for Special Education Research. We have the National Center for Education Statistics, and NCES also runs the National Assessment of Educational Progress. In higher ed, it also runs IPEDS, the Integrated Postsecondary Education Data System, which your readership may have some familiarity with. And the fourth center is the National Center for Education Evaluation and Regional Assistance, which runs things like the What Works Clearinghouse, which maybe some of you know about, and also ERIC, which is a repository of education publications.

02:26
Kelly: Yeah, those are all very familiar sources of great information, research, and data. So you recently published a blog post about OpenAI's ChatGPT natural language chatbot, titled "Do We Need Humans Anymore?" And that's a provocative question, especially coming out of the, you know, Department of Education. So that seems like a good place to start: with these developments in AI and machine learning, are we going to need humans anymore?

02:57
Schneider: Well, of course, one chooses a title to provoke interest, right? I hope we need humans, but the question is, what's the role of humans in this new world? So I, along with probably thousands of other people, have now written about, played with, and blogged about ChatGPT. It's very interesting. So let me answer this in a couple of ways. The first one is, like everybody else who's been playing with ChatGPT, there's an incredible awe factor, right? I can't believe that this works this way and it can do this. And then the second one is like, wow, this is really boring prose and there are mistakes in it. Now, it's gonna get better; this is the way AI works, it's gonna get better. But it's clearly going to transform the role of humans. It's not going to make them go away, but it's going to transform it. And for me, the question is, what does literacy mean? What does reading and writing mean in a world in which this chatbot is the first of many generations that we're going to be living through in the next couple, three years? So here, for example, the boring prose is grammatically correct with no spelling errors, right? So that's a major step forward for a lot of writing. But the next question, though, is that there are errors built into it. So let's say, I don't know what the real percentage is, but let's say 20% of the things that it generates are wrong. But unless you're a human with an installed database and experience, you don't know what that 20% is, right? And that's the challenge. So we can't rely on it, maybe anytime, but we certainly can't rely on it now, to produce factually correct material 100% of the time. Which means that humans need different kinds of skills in order to parse this information and say, "Well, that didn't work. I don't think that's true." And then what would you do?
Well, you go to Google now, which is really funny, right? So the chatbot may, sooner or later, eliminate that "I'm gonna go to Google to find out what the real thing is." So it's an interesting, interesting thing. The other thing that I find really challenging is that we've been working on improving writing. We've been trying to figure out how to run competitions and prizes to invent AI-assisted writing tutors for kids in, you know, high school, middle school, in part because their teachers cannot give …. The art of writing is the art of rewriting. But if you're a middle school or high school teacher, you might have a hundred kids, and it's just literally too time consuming to actually do, you know, the kind of hands-on editing feedback. So we started about a year ago thinking about what kind of competition we could run, like an XPRIZE kind of thing, to develop an online AI-assisted writing tutor. Well, guess what: on November 30th, when ChatGPT came out, that whole plan is like, whoa, back to the drawing board. You know, what do we need to do in a world where ChatGPT is going to be a potential writing tutor? So we have to rethink that. It's a lovely place to be. We thought we were gonna make breakthroughs, and someone beat us to it by a longshot. And now we gotta go back and think, well, this is a new development, and we need to update our priors, update our direction, update our goals, because there's this major AI breakthrough that's probably a hell of a lot better than what we were gonna get in our competition.

07:13
Kelly: So what are some of the other kinds of challenges that come up specific to higher education? I mean, definitely big questions about digital literacy. But what about things like combating plagiarism, or changing the way we assess student work?

07:31
Schneider: So let's do the plagiarism thing. I think we're gonna end up with two things. One is, how big a redefinition of plagiarism do we need? Right? So I was an academic, I was university faculty for a long time. And long, long ago, we had to worry about, for example, cutting and pasting out of, well, actually, before my time, encyclopedias, but then, you know, Google and the internet, right? And then Wikipedia, and all these things were tools that people used. And quite frankly, faculty also often got sloppy about cutting and pasting. So what's plagiarism in this new world? I mean, you know, we want to be careful about that. And then, of course, it's always been an arms race between the infinite creativity of students in terms of, you know, looking for shortcuts, and the honesty that's incumbent on faculty to address. So at Stony Brook, where I worked, we, like many other places, had our students submit papers to Turnitin. Right? And that was one of the earliest ways of checking for plagiarism, and it still exists. But then students would go to paper mills, right? They would hire someone to write their paper. So one of the funniest stories for me was that someone turned in a paper, you know, that they obviously got from a paper mill. But you know how I knew it came from a paper mill? Because the very last page of the paper was the bill from the company.

09:26
Kelly: Oh, my gosh.

09:27
Schneider: Yeah. So the student didn't even bother reading the paper, right? Just got the paper and handed it in. And if the student had read it, they would have noticed that the last page was the bill and would have ripped it off. But again, this was an easy case of plagiarism. And as you know, companies are starting to use the same natural language processing that the chatbot uses to detect whether or not a chatbot wrote it. And you know, this is the reality. So this is the arms race part of it. And this is where the line about "do we need humans anymore" came from. So a chatbot writes the paper, and a chatbot checks it for plagiarism, and then ultimately a chatbot grades the paper, right? So where do humans fit in all this? For me, obviously, it's a glib line designed to capture a very real process.

10:41
Kelly: Do you think a hierarchy will develop in the way AI-created and human-created information is valued?

10:50
Schneider: Wow. Okay, let me think about that. Well, obviously, the goal for the chatbots is that you can't tell the difference. The other thing that is interesting to me: we just announced an AI institute that IES has funded with NSF. And it's about speech and language pathologies. This is for younger kids, but it seems to me like where AI is gonna go. So there are about six or seven parts to this, and I'm only going to mention three of them, because those are the three that I actually understand. The other ones are pretty technical. So the fundamental problem is that there are not enough speech pathologists. And as a result, there are millions of kids with speech and language pathologies that aren't adequately diagnosed and treated. So part of the problem is that there are not enough SLPs, speech and language pathologists, in schools. That's problem number one. And number two is that they spend about 60% of their time doing paperwork, right? And this is an incredible problem. So of the three parts of this institute that we just funded, the first part is a universal screener. We have very good diagnostics for speech and language pathologies, but they require time, they require attention, and our SLP professionals spend more than half their time filling out forms. So students are not being adequately diagnosed. So the first job of the institute is to create an AI-assisted screener, where students talk to, you know, an avatar, talk to whatever, and then we get a much more accurate diagnosis of what the pathologies are. So the first part is assessment; the second part is designing a treatment program. What kind of exercises does this kid need?
And how do we optimize them for the individual need of that student and the individual progression of that student as they deal with their speech and language pathology? That, to me, is the model. Obviously we're working on this for SLP, but this is the model of individualized instruction that I think AI is going to let us accomplish. Right? So individualized diagnosis and assessment, individualized treatment. I mean, think about it: all of education is really some process of identifying what a student's needs are, their interests, you know, holes in their education, what they don't know, what they do know. And then let's get individualized instruction, let's get individualized testing, let's get individualized attention to the shortcomings that the screener identifies. I'm so happy with this, because this to me is the model we need to move away from, again, air quotes, the "factory model" of education that dominates. Right? So AI, to me, can potentially be groundbreaking in moving us away from factory models and into individualized instruction. The third part of this goes back to the 60% of the time that people spend filling out paperwork. Well, the third part is, guess what? How can chatbots do the paperwork? It would change the dynamic, going back to what we were talking about earlier. So use a chatbot to generate a draft of the humongous amount of paperwork that a teacher needs to fill out. The teacher now spends an hour instead of 10 hours editing it, making sure that it's correct, you know, adding information that the chatbot might have missed, correcting what the chatbot might have gotten wrong, but it's like 1/10 of the time. So now, if this all works, and again, this is all an experiment, this is all cutting edge.
If this works, all of a sudden, you know, the amount of paperwork time goes from 60% to, let's say, 20%. And now we're freeing up 40% of the teachers' time to do what they should be doing. And I can imagine this in post-sec, in K-12, in early childhood. I mean, it's really just a challenge to our imagination.

16:14
Kelly: Let's talk about the personalization aspect. Because in the news recently, there was a company that used the ChatGPT technology to map it to a math textbook, to provide tutoring that is basically based on the textbook. I thought that was a super interesting development; they actually called it MathGPT, which I think is funny. But what is the potential there, you know, because that could reasonably be done for any textbook.

16:49
Schneider: So I think we have no idea where this is gonna go. Right? So just by way of analogy, we spent 10 years or more building the mRNA technology, right? And that was a platform of dozens and dozens of companies and people, you know, perfecting mRNA, and that was the foundation. And then all of a sudden COVID came, and instead of 10 years, 10 months later we got a series of vaccines built on mRNA. And now there's gonna be a malaria vaccine built on it, there are going to be God knows how many different kinds of vaccines built off of that foundation that we spent 10 years building. The way I think about this is that we spent, in some cases, dozens of years building up AI, machine learning, large language modeling, data science, and we built this foundation. ChatGPT is the first thing that hit the public consciousness in a huge way. But it doesn't exist except for this foundation that we spent decades building. So just like the malaria vaccine is gonna come off this foundation, right? Well, I'm not in a position to guess, I mean, I could start guessing. But one thing I know for sure is that ChatGPT was just the first one that got into the public consciousness, and there are gonna be many other things, because that foundation is strong, and that foundation has been laid, right? And things are just going to pop off, built on that foundation, in a probably increasingly rapid invention cycle. Some of them are going to be crazy, some of them are going to be like, wow, how have we ever lived without this? Some of them, like ChatGPT, are going to be like, wow, that is the future. What do I do now, you know, to deal with the future, which is now? I think it's gonna be a wild ride. And, probably like you, I believe that technology can be harnessed for all kinds of good things.
I believe we're going to win; ultimately, it's all going to be on the balance of good. But first of all, it's gonna be bumpy, and second of all, it's gonna be wild.

19:19
Kelly: Yeah. Do you think, you know, because just yesterday, ChatGPT announced a subscription service, and it made me wonder, could that become something that every college student has to buy as if it were another textbook or, you know, specialized course software?

19:37
Schneider: You know what they say, there's no such thing as a free lunch, right? So somehow we have to pay for these. Microsoft of course has invested $10 billion in this, and they're intending to incorporate it in all their products. I have no idea what the cost implications of that are gonna be. But, you know, it's gonna happen, and we're gonna have to figure out how to pay for it, and people are gonna make money out of it, there's no question about it. But people make money out of cars, people make money out of textbooks, people make money out of podcasts. Right? So the whole issue is, things aren't free, they have to be paid for somehow.

20:23
Kelly: Right, I mean, I guess the question is, will it become a tool that's so integrated into every, you know, sort of college-level course, that it's something that every student would buy?

20:37
Schneider: Well, again, we'll see, right? I mean, hardware is that way, right? Students often have laptops, right, or tablets, or something. I mean, there's a cost to that. But it's hard to imagine, you know, post-secondary or college education going forward without hardware.

21:00
Kelly: I was also curious about the impact on institutional research; I especially wanted to ask you this with your background. When you're collecting data on, say, how well institutions are educating students, and AI is finding its way into how students are doing their coursework, and kind of disrupting the way work is assessed, does that complicate how that data can be interpreted?

21:28
Schneider: We know that skills matter, right? And in fact, if you look at the earnings outcomes of college students, the skills that they mastered while they're in school, which often parallel their majors, are extremely correlated with earnings outcomes. Some skills are more valuable. And right now, they tend to be bundled into courses, right? So engineers have a set of skills, but they also have an engineering degree, right? And there's a premium for computer science, or materials engineering; they're premium producing. And the courses themselves are really surrogates, or holding companies, if you will, for the skills that you acquire. So I think what you're saying, and it's an interesting question, is, if this is a new skill, where does it fit? And what is the demand for it going to be, ultimately? Look, this thing was unveiled on November 30. So I have no idea, none of us have any idea, how the job market is going to respond to this, right? But I think it's a skill set that matters, and it's more than just knowing what prompts to use to get the right answer; you also have to have the intelligence and the literacy skills to turn what it generates into something that's real and usable. I think there's gonna be a premium for that.

22:58
Kelly: What would you say to people who are resistant to embracing this technology?

23:07
Schneider: So, as you know, several school districts, including New York, have just said you can't have this in schoolwork. And I think that's wrong. Right? I think it's a tool, and a potentially incredibly powerful tool. And trying to, you know, stop it is just not going to work. The real challenge is, how do we use it for human purposes? How do we use it to advance education? How do we use it to make education better? How do we use it for individualized instruction? And it may be the case that we just need, like New York City's doing, and I hope it's not permanent, a pause, you know, for a while, while we think about these things and figure out how to reintegrate it in a measured, productive way. As compared to the way things usually work in the U.S., which is like, hey, it's the Wild West, man. Here's the tool, I'm just going to use it, and let's just see what happens. I'm just going to throw a bunch of spaghetti at the wall, and some of it will stick and some of it won't, right? And that's what I love about the American economy and American education: okay, yeah, we'll experiment. We'll try this. And then ultimately, things settle down, and we get better at it. And we can figure out the right uses for these things. So I think banning it, if it's a permanent ban, is a terrible mistake. If it's like, hey, we need the rest of this year to figure this out, because, as they say, we can't fix this airplane while we're flying it, right? We need to land the airplane. We need to look at it. We need to rebuild it and then take off again. I mean, if people feel like we need a pause before we turn it loose on high schools, I could understand that. But nobody's gonna be able to ban this forever. It's just too powerful a tool, and we need it.

25:21
Kelly: Well, thank you so much for coming on.

Schneider: It's been my pleasure.

Kelly: Thank you for joining us. I'm Rhea Kelly, and this was the Campus Technology Insider podcast. You can find us on the major podcast platforms or visit us online at campustechnology.com/podcast. Let us know what you think of this episode and what you'd like to hear in the future. Until next time.
