

Campus Technology Insider Podcast December 2021

Listen: Cutting Through Ed Tech Hype in Favor of Research-Driven Improvements

Rhea Kelly: Hello and welcome to the Campus Technology Insider podcast! I'm Rhea Kelly, editor in chief of Campus Technology, and your host.
How will emerging technologies impact the future of education? While it's easy to get caught up in the hype around trends such as the metaverse and artificial intelligence, true progress comes in slow, incremental improvements in using technology to inform teaching and learning. That's according to my guest Neil Heffernan, professor of computer science and director of the Learning Sciences and Technologies Graduate Program at Worcester Polytechnic Institute. In this episode of the podcast, we talk about augmented reality, intelligent tutoring systems and the need for better research infrastructure in ed tech. Here's our chat.   
Hi Neil, welcome to the podcast.

Neil Heffernan: Thanks for having me, Rhea.

Kelly: So I thought maybe first, you could introduce yourself and your background and tell me a little bit about the work that you do.

Heffernan: I'm a professor of computer science at Worcester Polytechnic Institute. I guess I'm best known as the guy who founded ASSISTments with my spouse, Christina Heffernan. During the pandemic we were up to about half a million children doing their nightly homework and daily math assignments online. We run all sorts of experiments to try to figure out how to optimize student learning, and we write lots of papers about it.

Kelly: Great. Yeah. I'm planning on asking a lot more about that in a little bit. But I wanted to start with a buzzword that I've been hearing a lot lately: the metaverse. So can you talk a little bit about what the metaverse is, and its potential for education?

Heffernan: My first instinct is to throw all sorts of cold water on this. We have these trendy, hyped things that just kind of bubble up, and I'm like, what, we're all talking about the metaverse because Mark Zuckerberg decided to rename his company? As far as I can tell, this is just yet another trendy thing. In education we get these fads; maybe when I was a kid it was whether you're left-brained or right-brained, which was totally what everyone wanted to figure out, and then do education differently because of it. So I'm kind of negative on this super-hype element. At the same time, I am a professor who does artificial intelligence and tries to figure out the next cool ways we can use it. But when I explain how I actually use artificial intelligence, I think we're so far away from some of this hype, like 100 years away, right? Sure, AI has improved in certain ways, and I teach AI, but the practical applications of this seem to be way over the top.

Kelly: Yeah, well, I see a lot of people drawing the parallel to the virtual world Second Life, which years ago had so much hype around it. There were universities spending hundreds of thousands of dollars to build their virtual campuses, and then it just seemed to fizzle. So is this metaverse idea a Second Life 2.0? Is it different? Is it potentially more viable? Or is it the same thing over again?

Heffernan: So I think it's the same thing all over again, the hype. I kind of relate this to the field of intelligent tutoring systems. In theory, if you were going to have some metaverse, Second Life kind of thing, the next conversation is going to be: how does education look different in this metaverse, right? And I'm on Zoom all day long, or in Gather Town for some of our conferences, so I feel like I kind of know. But you know what this reminds me of? I was at an NSF meeting where all of us were funded faculty members, and there was a competition for what's going to be the next big idea in educational technology. One of the entries was augmented learning, augmented reality sort of stuff. I was pitching small improvements in the ways we give feedback to kids while they're doing their math homework. And I lost the contest to the virtual reality, augmented reality pitch, because everyone's running around with their smartphone and that Pokémon app that was so popular, where you hold up your phone and look through it. But when I go look at real education, the use of these technologies is so pitifully far behind the hype. So maybe I'm just grumpy.

Kelly: So what do you think some of the obstacles are? Could it be that the hardware isn't advanced enough to make this an experience that's conducive to learning? What's keeping AR and VR from really advancing?

Heffernan: Well, for instance, the best augmented reality application in education that I know of is from a friend of mine, Ken Holstein at Carnegie Mellon, who put a Microsoft HoloLens on the teacher's head. As she looks out at her classroom, she sees above every kid's head some data about what's happening while they're using the computer, so she can figure out who to go talk to. Oh, there's that Rhea girl, and above her head there's a confusion mark, because the AI has figured out that Rhea seems to keep typing in wrong answers and not succeeding, so it directs the teacher to her. Ken Holstein did a great job: he ran a randomized controlled trial, turning the display on or off, and also varying whether the teacher had the HoloLens on her head at all. It turns out that if your teacher has the HoloLens on her head and can see everything you're doing, even if it's powered off and the kids don't know, you work harder when you think your teacher can see what the heck you're doing. But interestingly, he was also able to show that when it was actually on, it was better, i.e., kids learned more on the tests they did at the end of the week, or whatever the dependent measure was. So I thought that was a pretty practical application of augmented reality. But that doesn't sound metaverse-y to me. Or does it? I don't know.

Kelly: Yeah, it's one of those words that maybe means something different to everyone who uses it. But that's pretty interesting. And I was going to ask how you see AR, VR and AI being intertwined, so that's kind of a perfect example. So why don't you tell me the basics of ASSISTments, what it's all about?

Heffernan: Okay, so ASSISTments is, in one sense, just a simple platform: any math teacher can go to it and assign kids to do their homework online. The big thing we did, and what got us invited to the White House, was shown when SRI did a $3 million evaluation of our project, using 44 schools that were randomly assigned to either use ASSISTments or not. They found that teachers changed the way they went over homework, kids learned more, and we started closing achievement gaps. But what were we doing? Really simple stuff. When your math teacher back in the day said, here, do all the evens tonight, two to 24, we let the teacher assign whatever math problems they were already assigning, but the child would see on their page: open your book to page two and do problem number seven. The key thing, though, is that they type in an answer and get feedback right then, so they don't wait until tomorrow. It turns out simple stuff like giving kids immediate feedback was very successful. During the pandemic, most of our use has come from two free textbooks used in America, one called EngageNY, otherwise known as Eureka Math, and the other called Open Up Resources, otherwise known as Illustrative Mathematics. All these teachers using those two free books are assigning their work through us. And then we're doing tiny little bits of AI improvement, in ways that none of these teachers even bother noticing. We've crowdsourced different hint messages from multiple teachers, and we're randomizing those hint messages. So if you, Rhea, were a kid who had been assigned work, you might get Miss Jones's hint message; you wouldn't even know it was Miss Jones's.
But the kid sitting next to you might get Miss Smith's message. And then we're doing fancy reinforcement learning, somewhat silently, trying to figure things out. In fact, my PhD student was just telling me, we're pretty sure that what we call Teacher B in this paper is better for low-knowledge kids and Teacher C is better for high-knowledge kids. But you'd look at it and say, I see no AI here. What's the AI? It's definitely not sexy like a metaverse augmented reality world.
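The hint-message experimentation Heffernan describes, crowdsourcing hints from teachers and using reinforcement learning to figure out which hint works for which students, can be sketched as a simple contextual bandit. Everything below is a hypothetical illustration under assumed names and success rates; the epsilon-greedy strategy is one common choice, not a description of ASSISTments' actual implementation.

```python
import random

class HintBandit:
    """Epsilon-greedy contextual bandit: pick which teacher's crowdsourced
    hint to show, tracked separately for low- vs. high-knowledge students."""

    def __init__(self, hints, epsilon=0.1, seed=0):
        self.hints = hints
        self.epsilon = epsilon
        self.rng = random.Random(seed)
        # Per-context running stats: context -> hint -> [successes, trials]
        self.stats = {c: {h: [0, 0] for h in hints} for c in ("low", "high")}

    def choose(self, context):
        """Mostly exploit the best-known hint; explore with probability epsilon."""
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.hints)
        return max(self.hints, key=lambda h: self._rate(context, h))

    def update(self, context, hint, solved_next_problem):
        """Record whether the student solved the next problem after this hint."""
        s = self.stats[context][hint]
        s[0] += int(solved_next_problem)
        s[1] += 1

    def _rate(self, context, hint):
        s, n = self.stats[context][hint]
        return s / n if n else 0.5  # neutral prior for unexplored hints

# Simulated ground truth (made up): teacher_B's hint helps low-knowledge
# students more, teacher_C's helps high-knowledge students more.
true_p = {("low", "teacher_B"): 0.6, ("low", "teacher_C"): 0.4,
          ("high", "teacher_B"): 0.5, ("high", "teacher_C"): 0.7}

bandit = HintBandit(["teacher_B", "teacher_C"], seed=42)
rng = random.Random(7)
for _ in range(5000):
    ctx = rng.choice(["low", "high"])
    hint = bandit.choose(ctx)
    bandit.update(ctx, hint, rng.random() < true_p[(ctx, hint)])
```

After enough trials, the bandit's per-context estimates recover the simulated pattern: teacher_B's hint looks best for low-knowledge students and teacher_C's for high-knowledge ones.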

Kelly: Do you think certain subjects work better for this kind of AI help? With math, you obviously have objective answers. But what about history, or things that are a little more subjective, I guess?

Heffernan: If we were going to expand into another area, we'd go hit the easier-to-mathify stuff like physics and chemistry before trying to do English or history. Right now we're excited because we're getting practice doing some natural language processing work, which is just a fancy way of saying getting the computer to try to understand the words a child just wrote in the open-ended response portions. But it's so much easier for us because most of the time, or half the time, we also have computer-gradable items alongside, so we have a really good idea of how the student is doing, and that can help us in various ways. Also, the value proposition to teachers is much higher, because otherwise, if you go into history or English, you wind up having to do all this multiple-choice stuff, and in math you don't have to.

Kelly: You talked a little bit about research, like testing out which hint messages work the best for which types of students. What other research is possible through this kind of platform?

Heffernan: Well, one of the things we're doing is we've just partnered with two entities. On the student's screen, in addition to the button that says, hey, give me a piece of help, the one that might show Miss Jones's hint underneath, we're experimenting with what's under that button, and we'll put another button on the screen that says, I want to talk to a human tutor. I don't know if it's a 25-year goal, but my goal is to have bots that can be much more interactive than just, hey, which hint message should we give out? Can we make a little bot that can help a child in a slightly more interactive manner? So we're working with these other entities that provide human tutoring via text. I was just listening to a talk this morning by a guy from Princeton who does natural language processing and reinforcement learning at the same time, and he said, boy, that's a hard problem, Neil; we're far away from being able to do that well. We somehow think that because we can talk to Alexa, this should be easy, but making an Alexa-style conversation about your real math problem, whoa, that's so far away right now. What I want to do is be the platform that helps run the experiments to learn what works. That's what we're good at.

Kelly: That's interesting. So I know there are chatbots used basically to let people access a knowledge base for questions they might have, say, for university admissions or the financial aid office.

Heffernan: Yeah.

Kelly: And those seem to work pretty well in terms of people just needing to get information without waiting for the office to open in the morning, things like that. But what is it about a math problem that makes it harder to have an interactive conversation with an AI?

Heffernan: Well, one of the biggest problems is that the AI field doesn't have good open datasets it can go bang on. Around 2012, with Fei-Fei Li's important ImageNet dataset, a publicly released dataset, we all saw what happens: the fact that Google knows what's a dog and what's a cat in an image all came from the ImageNet data release, which inspired hundreds, if not thousands, of machine learning researchers to say, ooh, I can build a better system that will outperform what Fei-Fei Li's team initially did, right? So having good datasets the field can work on matters, and there are no good publicly released open datasets of human tutorial dialogue. I'm having to collect them, and that's holding the field back. It's also the case that the sort of dialogue you were referring to, what the field calls "task-oriented" dialogue, is similar to helping you book a plane flight or buy flowers for your significant other. With those task-oriented dialogues, we can all understand that there are gobs of people sitting in call centers answering these questions, and slowly the companies are figuring out how not to have to pay humans to intervene as much, so they can automate it all. We don't have anything near that scale of data to learn how to do this tutoring and then slowly make computers do it. The other thing is: how do you know whether a dialogue is good? Those task-oriented dialogues are really easy; their dependent measure is, did they get the task done? Did they buy the thing? And let's minimize the amount of time it takes. In learning, you shouldn't be minimizing how long it takes.
You have to maximize the odds that the student will succeed on the next problem. That's human learning. You can't use a metric like, let's just minimize time, because you could just tell the child, "Type 12." That would probably be a really bad dialogue. Why 12, right? Even if we assume it's right. So this is harder. Plus, that signal of learning is also kind of weak: lots of kids will still get the item wrong, even if they've learned a little something. So those are some of the differences.
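Heffernan's point about reward design can be made concrete with a toy sketch: the same "Type 12" dialogue scores well under a task-oriented metric but poorly under a learning metric. The field names and the time penalty weight below are hypothetical, chosen only to illustrate the contrast.

```python
def task_reward(dialogue):
    """Task-oriented metric: reward finishing the task, penalize time taken.
    The 0.01-per-second penalty is an arbitrary illustrative weight."""
    return (1.0 if dialogue["task_done"] else 0.0) - 0.01 * dialogue["seconds"]

def learning_reward(dialogue):
    """Tutoring metric: did the student solve the NEXT problem unaided?"""
    return 1.0 if dialogue["next_problem_correct"] else 0.0

# "Type 12": the bot just dictates the answer. Fast, task nominally done,
# but the student learns nothing and misses the next problem.
type_12 = {"task_done": True, "seconds": 10, "next_problem_correct": False}

# A real tutoring exchange: much slower, but the student succeeds next time.
real_tutoring = {"task_done": True, "seconds": 300, "next_problem_correct": True}

assert task_reward(type_12) > task_reward(real_tutoring)       # time metric prefers "Type 12"
assert learning_reward(real_tutoring) > learning_reward(type_12)
```

The time-based metric ranks the dialogues in exactly the wrong order, which is why a tutoring system has to optimize next-problem success instead.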

Kelly: So when we've got all these emerging technologies, how do you think people should be thinking about their potential for education? How do they avoid missing out on something that really might have potential? Maybe they're skeptical about AI or the metaverse, but there's a kernel there that should be worked on. How do you sort out which technologies to invest in?

Heffernan: Well, coming back to your readership, the Campus Technology sort of people: all these universities are adopting things like early warning systems. We faculty members should probably get some quick data, like, oh, that Rhea girl in my class, she's not doing very well. What we really need to do, though, is make sure we're running the experiments necessary to figure out what works. If all we're doing is guessing who we should reach out to, and focusing on that, we're going to quickly get into a heap of ethical problems about who we're targeting with these e-mails. The part that isn't done nearly well enough in these early warning systems is the experimentation. Okay, we think we should intervene with this cohort of people for whatever reason. What you can definitely do fairly is run experiments: test different messages to send to the student, or different messages to send to the teacher, and then figure out what actually causes kids to finish the class. There's way too much hype in believing big data is going to solve these problems. If you don't have the experimental infrastructure to run the experiments, you will never learn what works.
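The kind of experiment Heffernan advocates, randomizing which message students receive and measuring who finishes the class, reduces to a standard two-proportion comparison. A minimal sketch, with completion counts that are made up purely for illustration:

```python
from math import sqrt, erf

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-sided z-test: did message B change the completion rate vs. message A?"""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(p_pooled * (1 - p_pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via the error function).
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical counts: 120 of 200 students finished the class after
# receiving message A; 150 of 200 finished after receiving message B.
z, p = two_proportion_z(120, 200, 150, 200)
```

With these illustrative counts the difference (60% vs. 75% completion) is large enough to be statistically significant, which is exactly the kind of conclusion the "experimental infrastructure" is there to support.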

Kelly: Yeah, so I guess the investment needs to be in the research side, then?

Heffernan: Yeah, well, fortunately for me, we just got a $2 million grant from the U.S. Department of Education; its Institute of Education Sciences just funded five different platforms. This is the first time the Department of Education has funded platforms as such, because they recognize these platforms need better research infrastructure so they can help the many, many education researchers with their thoughts and dreams. And I have a side of my work funded by the National Science Foundation and by Schmidt Futures, which is Eric Schmidt's money, the guy that ran Google; he's donated a bunch of money to us to help run experiments. I think they have their heads turned on straight. They're asking, what are the things we can do that will get us a 2% improvement year after year after year? They're not thinking, oh, let's throw in the metaverse sort of craziness. They're totally convinced that the way we're attacking this is potentially good. It's not nearly as sexy, but if we can figure out from all these different teachers the types of messages that are effective, we can probably get reliable improvements. I shouldn't say quickly, super slowly, but reliably. I kind of think about how we as a society got really good at the internal combustion engine: GM and Detroit, year after year, kept making improvements that made electric cars not nearly as useful, until now they clearly are. We invested so much in those old technologies. Anyway, I'm a big fan of small, iterative improvements with experiments.

Kelly: What would be on your wish list for, I don't know, the future of education?

Heffernan: I guess my wish list is to help more educators be empirically driven. When you think about it, we have 1,700 schools of education giving education degrees. Most of your listeners probably know someone who got their degree in education someplace where they had to sit in a math methods class, and those math methods classes aren't very empirically driven. Wouldn't it be cool if we could use the sorts of platforms that I run, and that other people run, to collect data on things like what makes for a good feedback message when a child is not doing very well? We want pre-service teachers to learn about that sort of stuff so they can be better teachers. Wouldn't it be cool if we were making our platforms, the ones the kids are using, and our users better at the same time, with a tight loop around that? Our nation's education system will get better, not just the computer systems the kids are using. We want to improve the system as a whole, and teachers are an insanely important part of that.

Kelly: Yeah, so basically having the ed tech that helps the students while also generating the data to inform the teachers at the same time.

Heffernan: Yeah, that to me is a core thing of what we do compared to many other ed tech products. McGraw-Hill has this thing called ALEKS, and Carnegie Learning has a thing called MATHia. Many of these computer tutoring systems take the idea that, hey, if a child has learned something, we should let them go on to the next idea. Doesn't that sound intuitively appealing? Sure it does, until you realize what that does: tomorrow, when the teacher comes into school, everyone is on a different page, so you can't do group activities. We at ASSISTments never let a child race ahead through the curriculum. Our goal is not to accelerate you through the curriculum; our goal is to give data back to the teacher, so she can see, hey, last night on the homework, oh my god, eight kids still have no idea what we're talking about, I should do something about that. Then she can be a better teacher and not just begin today's lesson oblivious to whether her kids got it, when clearly yesterday's lesson didn't land, because lots of her kids went home, did their homework and still don't know what they're doing. We want to help there. That's probably more important than the small incremental improvements we can make in which hint messages to give out. If we can get teachers to pay attention to the data, that will probably be a much bigger impact.

Kelly: Yeah. Well, thank you so much for coming on.

Heffernan: Oh, you're very welcome. Thanks so much for having me. Appreciate it.

Kelly: Thank you for joining us. I'm Rhea Kelly, and this was the Campus Technology Insider podcast. You can find us on Apple Podcasts, Google Podcasts, Amazon Music, Spotify and Stitcher, or visit us online. Let us know what you think of this episode and what you'd like to hear in the future. Until next time.
