AI and Our Next Conversations in Higher Education
A Q&A with Instructure's Ryan Lufkin
In recent years, technology industry press coverage has focused largely on the new and amazing capabilities AI offers. It seems like our dream functionalities have been delivered, with more yet to be imagined. And the play of tech giants on the world stage has been both entertaining and a little scary. This may feel like everything you could want in a major technological shift — but is it?
Happily, in the education market, we have another perspective. We still hear the voices of leaders asking us to consider our best use and adoption of the technology, just as they always have with any groundbreaking technology applied in education. One such voice is Ryan Lufkin, vice president of global strategy for Instructure, makers of the market-leading Canvas learning platform. Here, CT asks Lufkin how the focus of AI topics in education will move in the coming months, from the latest cool features and functions to the rigorous examination of implementations aimed at supporting the enduring values of our higher education institutions.
When transformative technologies finally become established and familiar to us, our conversations focus less on the technologies themselves and more on the best strategies to solve problems with them. (Image by AI: Microsoft Image Creator by Designer.)
Mary Grush: In higher education, how will our discussions of AI change in the coming months?
Ryan Lufkin: In 2026, the AI conversation in education will shift from experimentation to accountability — and that's a good thing.
Grush: It sounds like a really good thing! What are some areas where that shift will likely be manifest?
Lufkin: Institutions will need to focus on governance, including transparency, vendor selection and management, ethics, and academic integrity, while also showing what has actually improved.
Grush: That's such an extensive range of things to consider. Overall, what's the key, most important factor as the AI conversation in education shifts, as you say, from experimentation to accountability?
Lufkin: Without a doubt it's our absolute requirement for student data privacy in training AI tools.
That is a hard and fast rule. And if you aren't a vendor who's experienced in the higher education space, you might think that rule is negotiable, and it's absolutely not. So, at Instructure we spend a lot of time working with our partners and our universities to say, look, as you're choosing vendors, or as you're building this AI infrastructure, you need to make data security, data privacy, and data accessibility the non-negotiable requirements for any of those processes.
And at Instructure we look at the LMS as the best framework for managing AI tools: one that allows innovation with AI while not undermining those most basic requirements of privacy protection for students.
Recently we've seen numerous schools starting to embrace this notion of accountability, as opposed to what we initially saw as individual users or individual departments rolling out AI experimentation. What we're seeing now with our customer institutions is much more organized, well-managed implementations across their campuses.
Grush: How does that market knowledge spread?
Lufkin: We're starting to see a lot more collaboration among institutions, ways of looking to see what other institutions are doing. We're seeing more helpful models from institutions like Arizona State University, which has done some great things with collaboration and knowledge sharing. ASU created an Agentic AI and the Student Experience summit at the end of this past year that was probably the best AI conference I've been to as far as seeing what schools are doing to start implementing AI in pedagogy, in the actual teaching space.
Grush: ASU's CIO Lev Gonick commented, in a December 16, 2025 RANT Podcast episode hosted by Eloy Ortiz Oakley, that the AI conference organizers saw a big surge of interest in that conference: instead of the predicted 200-or-so registrations, there were 650+ attendees from 25 countries, including more than 100 speakers. It's a wonderful illustration of how higher education comes together to share and shape knowledge about emerging education applications.
Lufkin: And we've seen more great examples from schools like Florida State University and others that are starting to, at scale, roll AI implementations out across their programs.
The most credible use cases and model product implementations to be shared will be the ones that take real work off educators' plates, speed up and strengthen feedback for learners, and lead to measurable gains in teaching and learning outcomes. So look for those.
Grush: I'd like to hear a little bit more about Canvas, your company's own learning platform, and how it impacts AI in the education space.
Lufkin: One of the things that's really impactful about Canvas specifically is that it was designed as an open platform from the very beginning. And so we have more than 600 APIs into the product, and we work with common standards, like those from 1EdTech.
Canvas is a defined building-block system, so other vendors can plug directly into those APIs. That's one of the reasons top AI providers like Anthropic, Google, Microsoft, and others are working with Instructure: we can provide an actual framework for them to integrate their tools.
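As a concrete illustration of what plugging into those APIs looks like: Canvas exposes a token-authenticated REST API, and a third-party tool can call endpoints such as `GET /api/v1/courses` to list a user's courses. The sketch below only builds the request rather than sending it, and the domain and token are placeholders, not real credentials.

```python
from urllib.parse import urlencode
from urllib.request import Request

# Placeholder values for illustration; a real integration would use your
# institution's Canvas domain and an access token provisioned by an admin.
BASE_URL = "https://canvas.example.edu/api/v1"
ACCESS_TOKEN = "example-token"

def build_course_list_request(per_page: int = 50) -> Request:
    """Build (without sending) a Canvas REST request to list the caller's courses.

    Canvas uses standard Bearer-token authentication, which is part of what
    lets third-party tools integrate without bespoke plumbing.
    """
    query = urlencode({"per_page": per_page})
    return Request(
        f"{BASE_URL}/courses?{query}",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        method="GET",
    )

req = build_course_list_request()
print(req.full_url)
print(req.get_header("Authorization"))
```

The same Bearer-token pattern applies across the rest of the Canvas API surface, which is why one integration framework can cover many vendor tools.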
At Instructure, we have developed our own agentic AI model, the intelligent Ignite Agent. But we provide flexibility for our schools: if they want to use the Ignite Agent to perform tasks within Canvas, they can, or they can use Microsoft, or Google, or whatever they choose. And we further partner with AWS so those tools can be hosted through Amazon Bedrock, AWS's managed service for foundation models. This provides an additional level of security and scalability through AWS as a hosting partner, the same hosting partner that Canvas itself uses.
So that level of flexibility drives a degree of innovation in the learning management tool that nobody else in the market currently provides.
And it's all because, from the very beginning, we said that we're not going to succeed by creating a walled garden. That's not what our customers want, and it's not what's best for them. And that has really proven out over time as well. So as we move toward supporting the Model Context Protocol (MCP) to organize data so that large language models can plug into the learning management system more effectively, we have schools that are simply over the moon with that approach.
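For readers curious what organizing LMS data for large language models via MCP might look like in practice: under the Model Context Protocol, a server advertises each capability as a "tool" with a name, a human-readable description, and a JSON Schema describing its inputs, which the model uses to decide when and how to call it. The tool below is purely hypothetical, not an actual Instructure interface; it only sketches the shape of such a descriptor.

```python
import json

# Hypothetical MCP-style tool descriptor for an LMS capability. The tool
# name and fields are invented for illustration; only the overall shape
# (name / description / inputSchema as JSON Schema) follows the MCP spec.
list_assignments_tool = {
    "name": "list_assignments",
    "description": "List upcoming assignments for a course the user is enrolled in.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "course_id": {"type": "string", "description": "LMS course identifier"},
            "due_before": {"type": "string", "format": "date-time"},
        },
        "required": ["course_id"],
    },
}

# MCP messages travel as JSON, so the descriptor must serialize cleanly.
print(json.dumps(list_assignments_tool, indent=2))
```

Because the schema travels with the tool, a model can validate and construct calls without any Canvas-specific code on the model side, which is the point of a shared protocol.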
That's the bottom line for our customers. Instructure has developed a large number of features for Canvas ourselves, and we will continue to do so. But the real flexibility comes from building the open technology ecosystem that our schools want. So that's what we're supporting.
Grush: Looking now to broader societal trends and the changing nature and needs of learners, what general directions will be or should be factored in, to our next conversations about using AI in higher education?
Lufkin: In 2026, the "new traditional learner" is becoming the default. Job tenures are shorter, careers are longer, linear paths are rarer, and people will engage with education throughout life. Instructure's recent State of Learning and Readiness Report, "Building a Future-Ready Workforce," found that 64 percent of U.S. workers plan to change jobs within the next two years. That kind of mobility will continue to raise expectations for flexible, work-relevant pathways and accelerate the global shift toward stackable credentials.
Grush: Those trends represent some very complex and important changes for colleges and universities. And I think by now there's an expectation that AI will play a critical role in change at those levels. But will we get to the point soon — as we have in the past with other transformative technologies — when AI has established its impact and you won't hear it spoken about as much? When it just becomes part of the environment… not a separate thing that you have to keep defining.
Will our conversations finally land on the values for education — rather than placing further concentration on the workings of AI?
Lufkin: Of course understanding AI is important, but that literacy will be there. We're building sophistication in working with AI every day. So yes, 100 percent: in the coming months you can expect the conversations about AI itself to fade into the background, given our experience with it. We won't have to talk so much about AI. But it will be there supporting our most strategic challenges, as we've mentioned. And that's where our conversations will be.
Grush: What should higher education leadership consider as they try to prepare for these high-level changes and the likely predominance of AI-driven strategies?
Lufkin: I'd want higher education leadership to keep in mind that the strongest examples of AI use won't necessarily be the campuses running the flashiest pilots, but those demonstrating responsible and broad, consistent adoption aligned with real outcomes.
About the Author
Mary Grush is Editor and Conference Program Director, Campus Technology.