Campus Technology Insider Podcast January 2025
Listen: AI in Education: Moving from Trust-Building to Innovation
Rhea Kelly 00:00
Hello and welcome to the Campus Technology Insider podcast. I'm Rhea Kelly, editor in chief of Campus Technology, and your host.
What's the state of artificial intelligence in education for 2025? It's all over the place, according to Ryan Lufkin, VP of global academic strategy at Instructure. While innovative adopters are experimenting with ways to help students engage with AI tools, others may be stuck in the idea of AI as an avenue for plagiarism and cheating. And while it's important to build trust in the technology, perhaps it's time for educators and students alike to put the power of AI to work. We talked with Lufkin about building AI literacy, international AI adoption, personalizing the academic experience, and more. Here's our chat.
Hi Ryan. Welcome to the podcast.
Ryan Lufkin 01:02
Hey, thanks for having me.
Rhea Kelly 01:04
So I thought we could just start by telling me about yourself. Tell me about your background and your role at Instructure.
Ryan Lufkin 01:10
Yeah. So my name is Ryan Lufkin. I'm the vice president of global academic strategy here at Instructure, the makers of Canvas. I always like to call that out, because Canvas tends to be a household name even if Instructure isn't. I've been in this role for about two years, but I've been in ed tech for, gee, it's actually 25 years now. Back in '99 I worked for a tech startup called Campus Pipeline, doing the first customizable, personalizable HTML portal for higher ed, to help bring student data to life and really engage students. That kind of kicked off my career. I've spent the vast majority of it in ed tech, both on the administrative technology side, the SIS and things like that, and on the LMS side, now at Instructure for the last almost seven years. So I live and breathe ed tech. It's what I get excited about. And as we talk about some of the different trends that are impacting education, hopefully you can tell it's something I'm passionate about.
Rhea Kelly 02:14
For sure. I love that, going back to the early days of portals.
Ryan Lufkin 02:18
I know. Back then, an HTML portal was truly groundbreaking.
Rhea Kelly 02:25
Yeah. So we're here to talk about AI, and I imagine that's a big part of your life right now; it's one of the biggest trends in ed tech, I think. So I thought I'd ask a tough question, maybe too huge of a question: how would you characterize the current state of AI in higher education?
Ryan Lufkin 02:44
It's interesting, because I think it's all over the place, really, and I tend to look at it through the experience of my kids. I'm lucky enough to have a 20-year-old who's a sophomore in college and a 14-year-old who's an eighth grader, and I look at how their schools are talking to them about AI, and they're very, very different. My daughter is given much more guidance in how to use AI as a tool. AI literacy is actually part of her curriculum, along with the understanding that it's a tool we're going to be using well into the future, so the question is how to enable using it. For my son, it's much more viewed as a cheating tool he should not use, and he's not being given much guidance on how to use it at all. And that's creating a chasm, really. I'm fortunate enough in my role to travel across the globe, and we see that kind of inconsistency all over the world. You've got those very innovative adopters who are trying to figure out how to use it in the classroom, how to really help students engage with the tools. And then you have a lot of people who are still stuck in the idea that it's a cheating tool with no place in education, and frankly, a fear about it replacing educators or replacing jobs in the future is inhibiting that. So, inconsistent across the globe, but I think gaining momentum.
Rhea Kelly 04:13
Do you see it as a K-12 versus higher ed thing, or is it really just a school-by-school thing?
Ryan Lufkin 04:17
It's a school-by-school thing, though I think the hesitancy is more prevalent in K-12. And honestly — not to cast any disparaging statements about K-12 educators — a lot of them came out of COVID, where they moved mountains, and just as things were getting back to normal, they were hit with AI as one more thing to learn. They're already incredibly time-poor and incredibly overworked, so expecting all of them to immediately become AI experts is not realistic. But many of them are embracing these tools and saving themselves time, so it's not fair to paint with broad brushes. It's just that what we're seeing is spotty all over the globe. And again, it's not necessarily K-12 versus higher ed, and it's not region versus region. On November 30th of 2022 we all started in the exact same spot with generative AI across the globe, and we've just seen different levels of embracing that technology pretty much everywhere.
Rhea Kelly 05:18
What do you expect to come in 2025 with regard to AI, compared to the past year?
Ryan Lufkin 05:26
Yeah, I talk a lot about the fact that we've been in the trust-building phase, right? I have a presentation I give where I put up a slide with the Terminator, and HAL from 2001: A Space Odyssey, and all of these different AIs. We've been trained to believe that AI is going to be evil, that when it shows up, it's going to be bad. And a lot of the early headlines were people doing some prompt engineering and getting AI to say alarming things: my AI threatened me, AI told me I should kill myself, things like that. When you really look at the prompting that went into it, it takes a lot of manipulation of the tools to get that response. So it's feeding on fears that have been sown since the 1980s, and even before that, with War of the Worlds, really back in the 1950s. That idea that aliens, AI, these are all bad things, so when they arrive, naturally we have that fear. So we've been in the trust-building phase, really helping people understand what AI is, what it's capable of, what it's good at. And I think now we're starting to move en masse beyond that fear phase and into the question of how we put this innovative tool to work, how we start saving ourselves time. I had an educator say, I'm the one that wants to write poetry, I'm the one that wants to paint pictures; it should be doing the tough work. And I was like, it can; we just have to move beyond the easiest implementation cases. I think we're moving that way. I think we're starting to put AI to work. And these models are getting smaller, more affordable, and easier to implement, which is one of the reasons they're so appealing and approachable. So I think 2025 is going to be a very productive year, where we actually start putting AI to work in really effective ways.
Rhea Kelly 07:12
It's amazing how many Terminator references come up, like, practically every day it feels like.
Ryan Lufkin 07:18
Oh, it does. It does. And this idea that they're going to become sentient. I'm not sure why humans have this complex where we're pretty sure that if AI ever becomes sentient, it's going to wipe us out. Why do we feel that way? Why don't we think it's going to be our friend, or somebody that really roots for humanity to do more and better? I just think it's an interesting complex we've created for ourselves culturally.
Rhea Kelly 07:40
So you mentioned how you travel around the world talking about AI, and I'm curious what you've learned from that international perspective, beyond just the variety in the state of AI in different places.
Ryan Lufkin 07:53
Yeah, it's so interesting. We actually had Martha Castellanos from Area Andina University in Colombia on our podcast last week, and it was fascinating, because she has very deep insight into the cultural aspects that impact Colombian society certainly, but society as a whole, and she has such a positive outlook on AI specifically and on its ability to personalize learning, that ability to really engage students who are very difficult to engage. They serve students across Colombia, in urban areas and rural areas, students dealing with poverty and crime, and students on the other end of the spectrum who are very wealthy. So how do we personalize those education experiences? How do we make sure we're engaging with students, identify quickly when they might be going off track, bring them back into the fold, and give them help when they need it? She had just such an amazing perspective on that. And we see these thought leaders everywhere. One of the cool things, like I mentioned, is that we all started at the same starting line: very few institutions or individuals had access to generative AI prior to the end of 2022. Those who get over the fear, embrace it, and start using these tools are really setting themselves up for success, and those who bury their heads in the sand or hope it will go away are doing themselves a disservice. AI is here to stay, like the internet, like the calculator, like so many innovations before it. And I love seeing, across Europe, across Asia, across North America, schools of all different sizes innovating. This isn't just the MITs and the Harvards and the incredibly well-funded schools. This is approachable innovation for schools of all sizes, wherever you are in the world.
Rhea Kelly 09:44
Are there any particular countries you think are doing it better than, you know, the US in their approach to AI?
Ryan Lufkin 09:51
Yeah. The Philippines has been interesting, because the Philippines actually had a mandate for more certificate programs to upskill their workforce and their students across the country, to prepare them for the tech jobs coming that way. It's been interesting to watch the universities there apply AI in ways that are incredibly innovative for reaching these students. They face a lot of the same challenges I talked about with Colombia, but they're able to create a more personalized experience, create outreach for these students, and really teach them how to use AI in a way that students in the US may not be learning as rapidly. So they're using it to close those knowledge gaps and really accelerate skill growth for their region. And incidentally, Instructure is actually opening an office in the Philippines, so we're both helping drive that upskilling and benefiting from it. That's kind of an incredible closing of the loop.
Rhea Kelly 10:56
So you mentioned the potential for personalizing learning, and I'm curious what you think the biggest areas of potential are for the use of AI in teaching and learning.
Ryan Lufkin 11:08
Yeah, to me, that personalization experience is where we're just scratching the surface. We know from the data that all learners learn differently, and all students are going through different things across their academic experience. There are times when a student isn't successful simply because the way an educator teaches doesn't click with them. Put AI into that instance, and it can fill those gaps. It can say, hey, you're more of a visual learner; what if we supported you with some visual elements that maybe the educator isn't providing? And I don't ever see a world where educators aren't part of this process. I say this all the time; I wish I could exclamation-point it. Educators are the magic. They're the storytellers, the ones who create those engaging moments with students. AI allows us to fill the gaps, expand our reach, and take care of the administrative tasks so we can do more: focus on the teaching, the things that create those aha moments for students. That's what's important. And I think the more we explore different ways to personalize that experience, catch students in a timely fashion when they might be going off track, and understand what their aptitudes really are so we can focus in those areas, more as a guide, the better. I look back at my own journey: I was a six-year college student, because I didn't know what I wanted to do, so I changed my major and shifted around, and I was also a first-generation college student, so I didn't have guidance from my parents on how to choose courses. What should I do? Where should I go? The idea that AI can help understand a student, guide a student, and really help them find success more quickly and more affordably is just incredible. And again, we're just starting to scratch the surface of what's possible. We've been gaining the trust, and now hopefully we start putting some of these tools into play.
Rhea Kelly 13:08
What do you think it takes? You know, is it a matter of ed tech providers getting the right tools together, integrating across campus systems? It just seems like there's a lot to be done to be able to take advantage of all that potential.
Ryan Lufkin 13:22
Yeah, it's both, honestly. It's collaboration, right? And that's where it's been so interesting, because we've added a number of AI features into Canvas with the feedback of our schools on what they want to see. But by and large, they're driving a lot of the innovation, and part of it is that they want to feel ownership. They want to feel trust in the large language models being applied. And we do everything we can to build that trust with the tools we use. We use AWS's Bedrock large language models, so we're not using third-party large language models, and we publish what we call our nutritional facts cards, like you would find on a box of cereal, with all the facts about the large language model being used, to help build that trust. I think the biggest piece is going to be the innovation driven internally at universities and colleges, and by all means K-12 districts, and even within companies that are trying to train and support learning in their organizations. They've got to have the vision, and we as vendors need to be there to support them with the technology that can facilitate it. Every time we have these conversations, we uncover new use cases in ways I think are just remarkable. People say, well, could it do this? And we're like, yeah, it probably could. Let's make it do that. Let's try it. Zach Pendleton, who's our chief architect, is someone I always like to plug, because he's kind of like Willy Wonka that way: he says, we can figure that out. We bring those ideas back to his team, and they start playing with things to figure out how you'd apply a large language model, how you'd carve out the data that would make it smart enough to accomplish these tasks. It's such an innovative time. It's so exciting. If you embrace it, if you get excited about it, man, there's so much opportunity. I honestly get excited about it everywhere I speak.
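[Editor's note: to make the nutritional facts idea concrete, here is a minimal sketch of the kind of information such a card might capture, expressed as structured data. The field names and values are illustrative assumptions, not Instructure's published schema.]

```python
from dataclasses import dataclass

# Illustrative sketch of an AI "nutritional facts" card as structured data.
# Field names are assumptions for illustration, not Instructure's schema.
@dataclass
class NutritionFactsCard:
    feature_name: str               # the AI feature being described
    base_model: str                 # which large language model powers it
    model_host: str                 # where inference runs
    data_shared_outside_host: bool  # does customer data leave that boundary?
    used_to_train_models: bool      # is customer data used for model training?
    human_review: bool              # can educators review or override output?

card = NutritionFactsCard(
    feature_name="Discussion Summaries",
    base_model="A Bedrock-hosted large language model",
    model_host="AWS",
    data_shared_outside_host=False,
    used_to_train_models=False,
    human_review=True,
)
print(card)
```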
Rhea Kelly 15:22
Yeah, I love that Willy Wonka comparison. It's kind of magical, you know?
Ryan Lufkin 15:27
You gotta have the dreamers, right? That's what it takes: the dreamers, the ones who are looking at new ways of doing things. AI can be applied to the traditional model, streamlining tasks and knocking the corners off of a user's experience in a lot of different ways. But it's the things we haven't even thought of, the new processes, the new ways of doing things, that are really interesting. And I think one of the hardest parts with education is that it works within a pretty rigid framework of regulation and requirements. You've seen institutions like Western Governors University, whose competency-based education model predates AI, run into issues with guidelines around funding and seat time. They were saying, our model doesn't actually focus on seat time; it's about getting students to competency quicker, so we're measuring competency, and you're measuring the wrong metric. It took them going back and forth with the US Department of Education to really figure that out and get some changes made. We're going to run into the exact same things with AI and our perception of AI: you didn't follow the exact traditional path to mastery of a skill or to understanding, but that's okay, and AI is helping you get there in ways we haven't figured out yet. But like you said, schools have to be pushing that evolution, and we have to be working with the government around it. Some countries, as kind of a knee-jerk reaction, jumped to provide more regulation rather than waiting until we had a little more insight into how AI was being applied, and I think that will be a detriment to those countries' schools. We saw the same pattern with the initial bans on AI: at the beginning of 2023, schools all over the globe were banning AI, and then those bans slowly went away as people said, okay, that's not realistic; how do we apply these tools? I think government regulation is doing the same thing in a lot of ways, and some of that knee-jerk regulation might get dialed back. There's certainly a need for regulation, but we've got to make sure we're not stifling innovation.
Rhea Kelly 17:45
That's like when you talk about automating workflows: you don't necessarily want to just automate the exact same thing you were doing before.
Ryan Lufkin 17:54
Yeah.
Rhea Kelly 17:54
But, you know, reimagining how the work should be done is a whole other thing that's…
Ryan Lufkin 17:59
Exactly, that's spot on.
Rhea Kelly 18:02
So what pitfalls remain? You know, what should people still be worried about when it comes to using AI?
Ryan Lufkin 18:08
Well, the interesting thing, again not to over-mention Zach Pendleton, is a metaphor he came up with early on about eating our vegetables. We already worry about student data privacy, accessibility, and security, and we already have regulations and guidance around those things, FERPA and the like, to make sure we protect them. We just need to make sure, as we apply these AI tools, that we're eating our vegetables and meeting those same requirements. We've got the guidelines; let's just make sure we don't do something dumb. Certainly student data privacy is always one of the biggest issues, along with educator intellectual property, university intellectual property, things like that. How do we make sure we don't put those at risk? Ultimately, though, I think the biggest challenge is making sure students are using AI to enhance learning, not to avoid learning. Everybody's worried we're going to see a future where students use AI to do their homework and educators use AI to grade it, so it's AI teaching AI and no one's getting smarter in the process. That's what we have to avoid. We need these tools to streamline our processes, save us time, and make us better, not worse. And knowing that's an issue, and that it's such a concern for education as a whole, I think we'll get there, but it'll be a process.
Rhea Kelly 19:41
I came across a quote, something you said on another podcast, the EdUp Experience, where you said, "We're about to face a wave of AI feral children hitting higher education — students who know how to use these tools but don't know how to use them ethically." And then you emphasized the importance of AI literacy, and of training both students and faculty on that literacy. I love that term, "AI feral children." Could you dive more into what you were talking about there?
Ryan Lufkin 20:13
Yeah. There's a video I came across years ago of a little girl who was handed a magazine, and she tried to scroll on the magazine, then looked at her finger, wiped it on her shirt, and tried to scroll again. From her perspective, she was using an iPad; she was used to scrolling in that digital experience. And it shows how digitally native students are. As we get older, we don't all have that same perspective on the digital world; we tend to look at the world through our own experiences. We saw it through COVID, when people would say, oh, my student's missing this or that. Well, that was your experience, and they're going to have a different experience, so let's make the most of theirs. What we're seeing now is that we've got these digitally native students. I see it through my own son, whom I mentioned earlier. I look at him and his friends, and they truly don't understand why ChatGPT would be bad but Grammarly or these other tools are good. To them, they're all just tools in their arsenal for accomplishing their tasks, and if the tasks can be accomplished with these tools, why wouldn't they use them? They just truly don't understand. So when you've got a world where primary education is not teaching AI literacy, including AI ethics and when to use these tools appropriately, these students are going to crash as a wave onto higher education, and it'll be up to college educators to correct the bad behaviors students have already built and, in some cases, remediate the learning that hasn't occurred because they've been using these tools unethically or inappropriately. So we've got to work together, and that's why I think the barrier between K-12 and higher ed should just be abolished. We've seen more and more joint enrollment, and I think we draw kind of an artificial barrier, when for a student, their learning journey is one journey, and each school just happens to be a chapter in it. We need to work together better as educational institutions, whatever level we are, and understand that we're playing a role in a student's educational journey. If we're not teaching those ethics early in that journey, it suffers everywhere else across the rest of the book. So we've got to work together on that. And yeah, calling them AI feral children may be a little alarmist, but I do feel like we've got to get a better focus on teaching these students. In elementary school, whatever level it is, AI literacy needs to become part of our core curriculum at a very early age, just like digital literacy, which we've been pushing for quite some time. Students are using these tools. Why are we going analog when these students are already carrying digital devices in their pockets? Let's fix the problem, use the tools we have, and raise awareness.
Rhea Kelly 23:08
Yeah, it just makes me wonder, is this something that would ever be included in, like, the Common Core, you know?
Ryan Lufkin 23:14
I think it should be, honestly. We've talked a lot in the past about coding being included in the Common Core, this idea that you need to understand digital language. AI literacy, including the ability to write AI prompts, is very rapidly becoming a core part of our society, and we're better off as a society if people can actually recognize deepfakes. It's really hard for people to spot deepfake video and images if they don't know that AI is capable of generating them; they take them at face value. But if you're taught at a very young age what AI is capable of, man, you're going to be a lot more skeptical when you see those videos, when you see images of the Pope in a puffy jacket. You're going to understand that it might not be real and question it. So I think as a society it's incredibly important that we address that.
Rhea Kelly 24:02
Yeah, I hope we get there.
Ryan Lufkin 24:04
I hope we get there too.
Rhea Kelly 24:08
So what can you tell me about Instructure's strategy when it comes to AI, sort of moving forward?
Ryan Lufkin 24:14
Yeah, it's been interesting, because we work very closely with AWS. We like to say Canvas was born in the cloud 12 years ago, in partnership with AWS. And AWS has built what they call their Bedrock large language models, seven large language models that they host, so we're not passing data outside of AWS, where Canvas is also hosted. One of the cool things, especially lately, is that they've been able to start shrinking these large language models, making them more compact and more affordable. Early on, people got very excited about AI and were more concerned about the security and privacy issues than the cost, and what we've seen is that these tools, across an organization, can be incredibly expensive. So we need to find ways to roll them out that aren't going to drive up the cost of technology for students. We've rolled out a number of features in Canvas, everything from translation tools, to insights tools that are easy analytics tools for educators to use, to discussion summaries, where instead of having to read through hundreds of discussion posts, an educator can click a button and it'll summarize the conversation. Those are the basics, and we're going to continue to add features that don't drive up the base price of our product, because ultimately this needs to be affordable for everyone. The second piece is that we're built on the LTI standard, which I always compare to Legos for people who don't know it. It's a common language that lets education tools work together: if you have a third-party app with an LTI integration, it plugs directly into Canvas, and as they do updates and we do updates, it doesn't break the system; they work together seamlessly. What we've been able to do is really expand our LTI plumbing, for lack of a better term, so that schools can build their own tools or plug in third-party tools, everything from Microsoft Copilot to Praxis Pria, which is a good partner of ours, and those tools can actually power interactions throughout Canvas. It allows that flexibility. We're seeing more and more schools say, hey, I'm going to stand up my own large language models, within AWS as well or on campus, and let my students experiment and build with this. I believe it was the University of Central Florida that built their own search tool across Canvas on top of their large language model. So allowing that flexibility, that choice, is really key for us, and we've gotten a lot of great feedback. Again, we don't do much without the guidance of our customers. We run customer advisory boards and advisory councils where they tell us what they want and what direction they want us to go, and we heard loud and clear that they were worried we'd run too fast down the road with AI and create issues. We certainly saw other vendors in the space do that. So we said, you know what, we're going to be very measured and very deliberate with how we roll out these tools, and we've gotten great feedback about that. The trust is the biggest piece for us. Like I said, we're in the trust-building phase.
We don't want to squander that. We want to make sure we build that trust together. And at the same time, we've got schools like the University of Michigan and MIT and others all over doing amazing things with AI, both with our tools and with the tools they're plugging into Canvas.
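[Editor's note: as a rough sketch of the discussion-summary pattern Lufkin describes, here is what a call to a Bedrock-hosted model might look like, keeping the text within AWS. This is a minimal illustration under assumptions, not Instructure's implementation; the model ID, prompt shape, and summarize_discussion helper are hypothetical.]

```python
import json
import boto3

# Minimal sketch: summarize course discussion posts with a model hosted
# in AWS Bedrock, so the text never leaves the AWS boundary. The model
# ID and prompt are assumptions for illustration only.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def summarize_discussion(posts: list[str]) -> str:
    # Concatenate the posts into a single summarization request.
    prompt = (
        "Summarize the main themes, open questions, and points of "
        "disagreement in these course discussion posts:\n\n"
        + "\n---\n".join(posts)
    )
    response = bedrock.invoke_model(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # assumed model choice
        body=json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 400,
            "messages": [{"role": "user", "content": prompt}],
        }),
    )
    payload = json.loads(response["body"].read())
    return payload["content"][0]["text"]

# Usage: summary = summarize_discussion(["First post text...", "Second post text..."])
```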
Rhea Kelly 27:38
Are there any tools that universities are creating that you would want to bring in and make part of Canvas?
Ryan Lufkin 27:45
Honestly, there's so much around building courses and making courses engaging. One of the exciting things we've talked about is that we want more show and tell. I want to see more of what schools are doing. Like I said, I was at Area Andina down in Bogota in October, and they were showing me some of their courses, which they've customized, and I'm like, I want to show this to everyone. Can you record a video of this so I can take it back and show everybody? That's what I love about education: it's so collaborative. You don't ever have to start anything from scratch in Canvas. Through Canvas Commons, you can share learning objects, everything from a course to a module to a quiz. Whatever you want to do, somebody's done it, and they want to share it. We want to encourage that more with the AI tools: if schools are building AI tools, how do they share them with other members of the community? People get excited about it, and nobody's starting from scratch. We build better together than we ever do by ourselves.
Rhea Kelly 28:51
Yeah, that's pretty cool, like a repository of tools and best practices that people have….
Ryan Lufkin 28:55
Yes. From the very beginning, that's one of the things I've loved: I've never worked for another ed tech company where, whatever you want to do, somebody's done it, they've recorded themselves doing it, and it's available on the community. All you have to do is a Google search, and you'll pull up a video of one of our advocates saying, oh, you didn't know how to do that? Let me show you. It's amazing.
Rhea Kelly 29:14
All right, I think that's a good place to end it. Thanks so much for coming on.
Ryan Lufkin 29:18
Well, thanks for having me. This has been a great conversation.
Rhea Kelly 29:24
Thank you for joining us. I'm Rhea Kelly, and this was the Campus Technology Insider podcast. You can find us on the major podcast platforms or visit us online at campustechnology.com/podcast. Let us know what you think of this episode and what you'd like to hear in the future. Until next time.