Campus Technology Insider Podcast April 2025
Listen: How to Assess Your Institution's AI Readiness
Rhea Kelly 00:00
Hello and welcome to the Campus Technology Insider podcast. I'm Rhea Kelly, editor in chief of Campus Technology, and your host. And I'm here with Kathe Pelletier, Senior Director of Community Programs at Educause, to talk about assessing institutional AI readiness in higher ed. Kathe, welcome to the podcast!
Kathe Pelletier 00:26
Hey. Thanks, Rhea. Nice to be here.
Rhea Kelly 00:29
So Educause just recently came out with a higher education generative AI readiness assessment. Can you kind of give a brief overview of what that is?
Kathe Pelletier 00:39
Yes, so it's actually our version two, and it's exciting to be able to build on the first one, which we developed, gosh, maybe a year ago or so. And we knew at that time that things were moving so quickly that we were going to have to come back and open it up again and look at both the content and the approach to the assessment. So I'm really excited about version two, which has a couple of components I think will be important for the audience to know about. The first, really, is just the purpose of the assessment amidst the complexity of the environment that we're still in. When you talk about AI readiness on your campus, that often leads to, you know, eyeballs swirling back in their heads and people saying, like, I don't even know what is even happening on campus, much less, you know, what could we be ready for. So we really tried to communicate in the setup of the assessment that we want institutions to be talking amongst themselves around what their ideals and goals and vision for AI on their campus are. Is it, you know, maximizing AI in your classroom? Is it an enterprise data model? Is it making sure that you are working with your vendor partners to ensure that you have ethical functionality? You know, so that can look a lot of different ways. So we really want institutions to level set first around what their goals are, and then from there, the assessment becomes a jumping-off point that helps provoke conversation about aspects of readiness that they may not have considered, and then leads them into a path of, you know, let's work with our teams to build readiness toward these areas. So the conversation piece, I think, is really critical. The other piece that I'm really excited about, that we've added this time, is part of the conversation piece, and that is some high-level templated ideas for action planning. 
So once you receive results, then you say, you know, oh gosh, we're maybe not as ready as we thought. Or there are some areas that we really have to shore up. Or we're really doing great on governance, so let's zoom into that and build on that strength we already have. So the action planning resources really are intended to extend that conversation to what's next, not just, oh, now we know that we're either doing great or have a ways to go. But really, again, engaging across the campus around how to step into the plans that you need to make to get to the future of AI that you're hoping for. And then finally, we have resources that are available for folks to use to educate themselves, either on what's happening in the field and what the landscape looks like, or other tools that we might have that can help folks with that readiness.
Rhea Kelly 03:41
So also recently, the second annual Educause AI landscape study came out, which I gotta say, is one of my favorite things. It just has so much interesting data in there. So I'm curious now how that informed you in revising the readiness assessment, you know, year over year.
Kathe Pelletier 04:01
Yeah, that's a great question, and I will have to tell our researcher, Jenay Robert, that the landscape report is one of your favorite things. It's one of my favorite things too. I think it's really a fascinating read and really normalizing, I think, in terms of where folks are on the spectrum of implementation and how they're approaching strategy, and so I invite our listeners to take a peek at that as well. But we were at an interesting time when we were developing this version two, where the landscape study was in the field, and we were just beginning to see data as we were developing this new version. So we really leaned a lot on the first landscape study and the findings from that, but we did have access to some of the early insights from the second annual study that we could include in the design of this new assessment tool. And that was one of many resources that we leveraged: when we opened up the hood and looked at the content, we did a scan of, probably, gosh, I want to say, 10 to 15 other readiness instruments and other frameworks around readiness that were already out there in the field. And we scanned the themes, and really looked at what the pieces were that were standing out as really common, and the things that might be missing from some of those instruments as well. And then we convened a panel of our Educause members who work with AI in various ways, and we invited them both to prioritize those elements that we were seeing and to give a voice to what they mean, how they would interpret them, and how they would want to see them represented in this kind of assessment. So the landscape study, I think, helped to spark some ideas about what we might want to include, but we really looked broadly as we were refining the content areas.
Rhea Kelly 06:06
One of the data points that stood out to me from that landscape study was that only 22% of respondents reported having an institution-wide approach to AI-related strategy. And I think the report noted that that implies, you know, that progress and experiences with AI on campus remain uneven across the board. So with this assessment tool, should that be a starting point for AI strategy? Is it a status check for AI strategies that are already underway? Sort of, how should institutions be using it, you know, according to where they are on that journey?
Kathe Pelletier 06:43
Yeah, that's such a great call out. And, you know, wanting to lean into that context-specific approach for this idea of readiness and AI strategy, I would say, objectively, it's good to have an institution-wide approach to AI strategy, because that helps institutions be aligned in their efforts and work collectively toward something that's ever changing, and, you know, having a North Star I think is really critical. That being said, some institutions are so decentralized that having an institution-wide approach at this point in AI's maturation as a technology is probably less possible than it might be at a smaller, centralized institution. So I think it's about acknowledging your institutional context. It might be, we want to have a centralized approach but we don't yet, and using the assessment tool is a great way to get there, because assuming you're getting all the right people around the table and having that conversation institution-wide, that can inform the development of an institutional strategy. At the same time, you might recognize that right now, we are siloed, or we are decentralized, and that's just where we are, and that's okay, and maybe there's a group of colleagues where there's a lot of energy around AI. So maybe it's in the teaching and learning space that you pull your teaching and learning folks together and really wrap your heads around where you might want to take some next steps, so that's an approach too. So I love the idea of pairing the use of the assessment tool with a thorough read of the landscape study, because I think that can also signal places where you might be, quote, unquote, behind your peers, or you might be leading, and still feel like you're behind, but you really might be ahead of the curve. 
And that institutional approach, I think, is one thing that we'll see in future years; that number will probably go a little bit higher. But again, there might be reasons that your institution isn't yet there, and really starting with where you are and where you want to get next is the thing that I think will help institutions avoid the paralysis of, like, oh gosh, there's so much to do. But just putting one foot in front of the other in a coordinated, intentional way, I think, is critical.
Rhea Kelly 09:29
I love that you mentioned having the right people around the table, because that leads right into my next question: who should be at the table? Like, who is it important to have as part of this assessment process?
Kathe Pelletier 09:41
Yeah, my first instinct is to say everybody, but I know I've talked to some folks on campuses, and they're like, we have a work group made up of 70 stakeholders, and it's so big, and it's, you know, so massive, that we can't get anything done because there are so many voices in the room. So you don't want to go overboard when you include everyone. But really, I think another insight from the landscape study that's worth mentioning here is that we are still hearing that there is a lack of communication or lack of awareness across departments around what's happening related to AI. And that's gotten a little bit better since our last study, but we're still seeing the bubbles that people are in, from their various vantage points. And so one way to think about who to include might be to go through the various sections of the readiness assessment and think about who might be the stakeholders that represent each of those sections. So, you know, one example might be making sure you have the folks that are responsible for professional development for faculty at the table, and thinking about literacy. Then think about student literacy of AI, and who are the individuals who might be supporting that. And then you've got your governance pieces, and who might be at the table there, and really making sure that you've got folks, or a set of groups, that either represent or might be responsible for the various sections of the assessment. And that actually ties into the idea of maybe a hub-and-spokes model, where you have a small or medium-sized group that's really at the table for the assessment, but they are representing the other larger groups that might be working with other teams on some of this work. And so it's a way to not have literally 70 people in your session, but you can make sure that you're representing the groups that you need to. 
And then, that being said, maybe it's a great idea to have 70 people, or 100 people, or your entire staff, take the assessment, because one of the interesting things I think that can happen is that you see where there are differences in responses. So I might think our governance structure is really strong: we have data definitions, our data management is really on track, and, you know, we're really ready for the more sophisticated uses of data that we might need with AI. But my peer over in the data science area might say, like, oh, man, we've got a long way to go, and we really need to shore this up. And so having those points of difference, I think, is really important to dig into and to understand, to help you understand each other's perspectives and the needs of the various stakeholder groups. So if you have many people taking the assessment, there's that much more opportunity to see those perspectives come out.
Rhea Kelly 12:48
Can you break down the specific areas that are covered in the assessment? Just briefly, I know you mentioned a few, but what does the scope look like?
Kathe Pelletier 12:58
Yeah, so the first section is about strategy, and that really is about how AI aligns with your institutional priorities and long-term planning. And so there you might see questions related to, you know, do you have an institutional strategy, or how does your work with AI connect to your institutional mission or your strategic plan, and that kind of thing. I've mentioned governance; that's another section, and that's inclusive of data governance, but more broadly, how are you making decisions about AI? And so that might include policies and guidelines. It might include how you're thinking about ethics. It might include frameworks for decision-making that are guiding AI adoption. And then the third one is all on technology. So what kind of infrastructure do you have? What's your security approach? Any interoperability considerations as you're connecting systems or feeding data into an AI model? And then the fourth one is around workforce, and that's, you know, thinking about faculty and staff readiness, the professional development pieces that I mentioned before, and then any other support structures that really help enable the ongoing literacy development of your workforce, and just workforce development overall, and those considerations as AI is thrown into the mix. And then finally, we have a section around teaching and learning, and that is really looking at AI's role in your curriculum, in your pedagogy, and as you're thinking about student success. So those are the complete sections of the assessment. We had a fantasy of creating kind of a choose-your-own-adventure through the assessment, so that the approach would be truly personalized based on the context of your institution. And we just couldn't work out a way to create that so that it wasn't confusing and over-engineered. 
But again, I really want to bring it back to this idea that the assessment is intended to help you, in your context, make good decisions about where you want to go next. And so, for example, this may be a silly example, but if you really aren't using AI in teaching and learning, and you've already decided that that's not a direction you're headed, maybe that section isn't as important as some of the other pieces, but it is available for you in case you would like to peek over there and see what's going on. Or maybe a more realistic example might be, you're not planning on creating an enterprise data model, and so maybe some of the technology pieces may be less important for you, because you're not connecting your data sources, you're not connecting various other systems. But yet again, it kind of signals to you, if that was something that you were interested in down the road, the pieces there are things that you would need to get your arms around before you went there. So it's both a way to look at how you're doing right now, and also a way of giving you a direction if you wanted to make some pivots in your strategy.
Rhea Kelly 16:09
Are there areas you expect institutions might be especially ahead or behind on in general?
Kathe Pelletier 16:16
Gosh, that's such a great question. You know, one thing that comes up a lot is governance, and maybe technology too, and I really think it depends on where your emphasis has been. So some institutions fit the kind of institutional persona of charging ahead and pushing the boundaries and really testing new use cases and believing that the governance will follow. So kind of learning as they go, and learning what systems need to be connected, or what data they might need, or how they might need to treat those data differently, and building those structures and those processes and those systems as they go. On the other hand, there are some other great examples of institutions that are saying, we are going to be really cautious with our use right now until we are confident that we have really comprehensive structures for governance and decision-making and policy and ethical frameworks, and really building the culture of the institution around, you know, ethical and responsible use of AI, and then pulling in the AI systems and tools. And so they might really be ahead with the governance pieces, but maybe less advanced in the actual implementation of AI initiatives. So that's been an interesting dynamic that we've seen. But pulling from our AI landscape report, again, I would say the biggest need that we see is around literacy for stakeholders. You know, we're seeing that students are using AI more than faculty are. And you know, if faculty really aren't prepared to either teach with AI or even understand how students are using it in appropriate and maybe inappropriate ways, then we're kind of at an impasse. 
And, you know, with so much implication for AI in the future workforce, or the current workforce, I think the more faculty education there is to help infuse it into curriculum where appropriate, and to offer opportunities for students to be learning how to learn and work with AI alongside them, is going to be a really important piece.
Rhea Kelly 18:32
So what happens once the assessment is completed and you, like, click submit, I'm imagining. Like, what happens next?
Kathe Pelletier 18:43
All your problems are solved. So what happens next is that you will receive an e-mail, and it will show you the content of your responses. The way that we set up the assessment, you'll see it's kind of a squishy Likert scale of, we're doing really well on this, or we're kind of okay but could do a little bit better, or we haven't really started, or we're really at the beginning of this. So that's how the scoring works, and we really wanted it to be squishy on purpose, so that you're not getting caught up in numbers or, you know, really a quantitative measurement, but more of a directional sense of, you know, we're really great, or we're okay but need some work, or, again, this isn't important, or we really have a lot of work to do. So you'll see all of your answers reported out. You also will receive a great list of resources that can be some next steps for you to explore with the colleagues that you've taken the assessment with, bringing us back to the really important opportunity for discussion: this is where you might find that my answers are different than yours, and let's have a conversation about it. So that report will allow you to really think about, or talk about, where are we aligned, and where might we have different perspectives? So you'll be able to do that with the report. You also get the resources. And then I believe the action plan prompts are also in the report that you receive, so you can take that and decide if you want to facilitate that conversation; you have the structure to be able to do that as well.
Rhea Kelly 20:24
And then what should institutions' next steps be? Is this just a trigger for another round of conversation? Or, like, what are those action steps? What would be some concrete next steps?
Kathe Pelletier 20:36
Yeah, that's a great question. You know, I keep saying conversation, I keep saying discussion, and that really is the power of the assessment. I think that the content of the assessment is important, obviously, as it points to things that institutions need to be thinking about. But the really key value is this prompt for discussion. And so you mentioned the action plan, and what we've done for that is take what we know about foresight and apply it to this action plan template, or set of prompts. One of the things about foresight that's really powerful is that it allows you to think about not only possible futures, but also the preferred future that you really want to happen. So there are prompts within our template that guide institutions to be thinking about what their ideal future is in terms of AI at their institution. And so that's one of the conversations that you might have. And then there are several other prompts that invite institutions to think about, you know, what are the momentum points that will really enable the actions that they need to take to get to that preferred future? What might be some barriers or sticky parts that will be more challenging, that they'll need to overcome? And what are the priority levels of each of those actions? So it really helps them to create, in their own context, the actions that they will want to identify and prioritize and then turn into, you know, project plans eventually, or initiatives that will get them closer to the next level of readiness in the areas that they're focusing on.
Rhea Kelly 22:23
So the assessment doesn't tell you what to do, but it gives you the tools to determine what those next actions should be.
Kathe Pelletier 22:31
Yes.
Rhea Kelly 22:31
Yeah, in your context.
Kathe Pelletier 22:33
Yeah. When we were developing the assessment, we really wanted to be able to give a recipe that others could follow, maybe based on the way they scored, or other things that we knew about the different aspects of AI readiness. And then we just kept coming back to the importance of institutional context, and how, even if we could pull off some sort of magical recipe that we could send to the people who are taking the assessment, it might not really be that beneficial, because it wouldn't be considered within the context of the institution. And so, you know, maybe there will be a point in the future when we're able to be a little bit more prescriptive with some actions. But among the resources that we're providing, one example that's included is actually an action plan that we created with a panel related to generative AI. So it's just focused on generative AI. But in that particular resource, there are great examples and lists of actions at the individual level, the department level, the institutional level, and then across the institution, that you might take to move forward with a future that includes generative AI. So that's a great source for some ideas for those actions, as well as, you know, the landscape report and other things. So we're trying to offer the raw material that institutions then can take and pull from as it relates to their own institutional setting.
Rhea Kelly 24:15
Thank you for joining us. I'm Rhea Kelly, and this was the Campus Technology Insider podcast. You can find us on the major podcast platforms or visit us online at campustechnology.com/podcast. Let us know what you think of this episode and what you'd like to hear in the future. Until next time.