Campus Technology Insider Podcast February 2025
Listen: More Optimism, Less Distrust: Educause's 2025 AI Landscape Study
Rhea Kelly 00:07
Hello and welcome to the Campus Technology Insider podcast. I'm Rhea Kelly, editor in chief of Campus Technology and your host. And I'm here with Jenay Robert, senior researcher at Educause, to talk about the latest Educause AI Landscape Study. Jenay, welcome to the podcast!
Jenay Robert 00:24
Thanks so much. It's a pleasure to be back. Always a joy to chat with you.
Rhea Kelly 00:28
The Educause 2025 AI Landscape Study just came out a couple of weeks ago. Could you give a brief overview of the study: its history, your methodology, that kind of thing?
Jenay Robert 00:40
Yeah. In regards to the history, this really started in late 2023, which is when we started to feel like it was time to do our first landscape study. If you try to do research like this too early, when something shows up on the stage of higher ed, people are still trying to get their feet under them. But by late 2023 we felt like, okay, this is a good time to get started on this line of research. And I have to say, it's one of the proudest moments for me and my team, because we went from a meeting with Susan Grajek, who was the VP of Research at that point, in October 2023, when she said, "I'd like to see this happen," to getting it published by February 2024. So it was this huge team effort, and really fun. And we had so much fun that we decided to do it again for 2025. We followed a similar research plan. Like all of our research, this starts with listening to the community: figuring out what's keeping all of you up at night, what are the things you really need to know. As an aside, sometimes you'll see me pop up at a random AI summit or something like that, which might be a very practitioner-focused event, and usually what I'm doing is just trying to see what's happening on the ground, connect with the community, and get research ideas. So that's really the story behind the research.
Rhea Kelly 02:07
Yeah, so this being the second year of the study, that makes it super exciting to be able to make some comparisons between this year and last. Were there any standout differences?
Jenay Robert 02:17
Yeah, there were a few interesting things, some a little more subtle, some not so subtle. On the subtle side, we're seeing perhaps a slight shift toward enthusiasm for AI tools: fewer respondents saying that they're mostly cautious, that they're indifferent, or that they don't know how they feel, and a few more respondents saying that they're optimistic or that they have a mix of caution and enthusiasm. In general, even though this is a subtle finding, I like to point it out, because I think we tend to hear that people are either 100% for or 100% against AI tools at our institutions. And while those extremes do exist, our data don't support the idea that most people fall on one side or the other. Most people are really somewhere in between, and perhaps leaning a little bit toward optimism. Another piece: we asked our respondents what they think about the future of AI in higher education, and this year, 55% of respondents predicted that academic dishonesty would increase in the future, which is a decrease of nine percentage points from 2024 (so roughly 64% last year). So I do think, and this is also based on the conversations I'm having with colleagues, that some of that initial panic is starting to subside. While we're still quite concerned about academic integrity issues, people are really thinking about how we can leverage this point in history to rethink assessment practices.
Rhea Kelly 04:00
That's an interesting shift, sort of more enthusiasm and less distrust overall.
Jenay Robert 04:06
Yeah.
Rhea Kelly 04:07
Oh, I know that there were some new questions, because I noticed some new results that I didn't remember from last year. So I'm curious about that process: how you went about deciding which new questions you needed to add.
Jenay Robert 04:21
Yeah, the biggest change there would be the new section devoted to AI use cases. This is really trying to drill down from that high-level strategic planning view; I think the 2024 study was very much at that high level. And that came from the community. Like I said, I'm often lurking at these events or having conversations with our members, and that was the biggest thing we were hearing from them: yes, the strategy is important, yes, leadership is important, but we really need to know what's happening on the ground. How are people actually using these tools? And it is a tough process, I'll be honest with you. I want to ask all the questions all the time, but we have to keep the survey to a manageable length. So essentially we started by building in this new section and then made the hard decisions about which questions to cut after that.
Rhea Kelly 05:17
Yeah, that's something I've sensed as well: people almost seem tired of being stuck in the strategy stage and feel the need to just do something.
Jenay Robert 05:28
Yeah, we get a lot of questions. You know, I was traveling all over the world presenting on those research results from 2024, and one of the big questions I would get consistently was: okay, so how is this playing out? What are we actually seeing? And while I was able to speak to that colloquially, just out of observations I'd had from chatting with members, I didn't have great data to support any of those claims. So I think that's really the direction we're trying to shift with our research, both in this study and in upcoming AI studies. Of course, we're already talking about the next round of AI research, so I think you'll see future Educause AI research really digging into those deeper questions.
Rhea Kelly 06:15
So last year, I remember one of the big findings was the high rate of "I don't know" answers. So I'm curious if that has changed a year later. Like, do we know more about AI?
Jenay Robert 06:26
The answer is: a little bit, maybe. I went back and looked at the "I don't knows" specifically across the two years, and I'd say there weren't huge shifts. There was nowhere we saw a massive decrease, and in some cases there was maybe a slight bump up. But I can point out a couple of areas where there was some decline in the "I don't know" responses. In 2024, 20% of respondents said they didn't know if AI was impacting their institutions' policies. In 2025, that decreased to 12%. So, perhaps a little more awareness about how AI is impacting policy. And that's actually a really great shift, because, as we talked about last year on your podcast, and as we've been talking about with everybody, it's so important to communicate across the institution how policies are changing, or how policies are perhaps not changing but are still applicable to new use cases that involve AI. So we're really happy to see that shift in the community. Another one was related to folks not being sure whether their institution was preparing its data to be AI-ready; that number decreased a little bit. And then there was the question about the adequacy of cybersecurity policies and guidelines to address AI-specific risks. This is a really nice statistic that shifted: in 2024, 40% said they didn't know, and that number dropped to 20% in 2025. Now, there's still a lot of work to be done there, because I think the majority of our respondents still think those policies are not adequate, but at least more of them are in the loop on what's happening at their institutions. And again, I'll reiterate that treating "I don't know" as a significant result is so important, because the people who take our surveys are the end users of these tools. If they're not sure about the policies or the impacts of these technologies at their institutions, that's an important gap for leadership to pay attention to and fill people in on.
Rhea Kelly 08:41
I thought it was interesting that this year's study touched on how institutions are accommodating new AI-related costs. Could you talk more about what's going on there?
Jenay Robert 08:51
Sure, and this was, again, something that came out of discussions with members, attending events, just keeping my ear to the ground. We touched on it a little in last year's survey, where we asked whether institutions were partnering with third parties, for example, to help fund pilots or whatever initiatives they had going on. We wanted to get a little more clarity on that this year. First, and this is important to note: about a third of respondents said that their institution is not accommodating new AI-related costs at all. And, speaking of the "don't know" result, over 40% said they don't know how their institution is accommodating new AI-related costs. What that says to me is that either budgetary impacts are not very significant across higher ed right now, or maybe they are but they're not being talked about widely. So that's something institutions can dig into individually: what are the impacts, how are we accommodating them, and are we raising awareness across the institution about budget that is or is not available for this type of work? For those respondents who said their institution is accommodating new AI-related costs, a little over half said it was primarily by reallocating budget previously spent on other things. We then asked respondents to describe those sources in the open-ended comments, so we know there are things like discretionary funds, flexible technology budgets, and innovation budgets. For the most part, what I'm seeing is a little bit of shuffling of flexible monies that institutions can apply to exploring AI. And finally, in 2024, 63% of executive leaders said they weren't working with anybody to help fund AI-related costs. In 2025, that dropped to 56% of executive respondents. The reason we focus on executives here is that we think they're probably best placed to understand what's happening with financial partnerships at their institutions. So seeing that number drop, in terms of people who aren't working with third parties, certainly points to some increase in those types of partnerships.
Rhea Kelly 11:22
It's kind of interesting that AI costs would be seen as something special, like needing flex funds or whatever, and not just part of, say, the cost of your learning tools. Like they haven't quite become part of what's considered your normal array of tools. Do you get that sense?
Jenay Robert 11:40
Yeah. I mean, if you think about being on the ground in procurement or something like that, it's a question of: do we need to swap in a new tool? Do we need to turn on a new feature? And that would require at least some shuffling in the budget. So I think it makes sense, in that way, that that's what we're hearing from institutions: for the most part, they're not necessarily going out and trying to get new money; they're just reallocating. We used to subscribe to this tool, and now we're going to shift over this way. But where you might see a need for new money is if you really want to be intentional about evaluating new tools, or about bringing in new AI literacy programs, for example, training for faculty. You're either going to have to eliminate something else that's been important up until this point, or you're going to have to find some new money to support that. And I think we see this in the cases of institutions who are, for example, hiring new executive leaders or new staff. That money certainly has to come from somewhere.
Rhea Kelly 12:50
It also makes me wonder what kinds of costs we're talking about. Because hiring new leaders focused on AI would be a different type of thing than investing in, say, a ChatGPT rollout across your whole institution.
Jenay Robert 13:06
Yeah, and that would be a really interesting avenue for the future work I mentioned, where we really want to dig deeper into what's happening on the ground. That's one of the frustrating limitations of these big surveys: I want to know all the things, and I can't ask all the things. So I do think getting a little deeper into the financial impacts would be very interesting for the future.
Rhea Kelly 13:31
Yeah, for sure, because there have been so many examples, like CSU's recent huge AI initiative. And I can imagine being a small college, just thinking, how are they coming up with the money to do that?
Jenay Robert 13:45
Yeah.
Rhea Kelly 13:45
Interesting stuff. So could you talk about institutions' perceptions of AI risks versus benefits? Like, what are people still worried about, and what are they most optimistic about?
Jenay Robert 13:57
Yeah, this is maybe one of my favorite things in this report, because, well, I'll just go into it: everybody seems kind of concerned about all the risks, and everyone seems kind of optimistic about all the potential. This is always a risk when you ask people to rate a list of things; they can say, yes, I'm very concerned about everything, and I'm very optimistic about everything. And in some ways, that seems like it's not an interesting finding, because, oh yeah, we're all worried about everything. But I think in the case of AI, this really is a very important finding, because it validates the feeling in the community right now that people are drinking from the fire hose. They don't know where to focus their attention. Everything seems important and urgent and interesting, and there's potential and there's risk. So there's a long list of risks and a long list of potential uses that you can find in the report, and the punchline is that everyone's worried about everything and everyone's excited about everything. So I think this is really a case where you've got to drill down into what's happening in your local context. What are your faculty excited about? What are your students excited about? Every institution has a different focus and a different approach to education. Are you a liberal arts school? Are you a research-intensive environment? With all of that context in mind, building off of the work we've done and investigating locally would be really helpful.
Rhea Kelly 15:39
Yeah, it makes sense that the risks and benefits would be very individual to the institution's unique circumstances.
Jenay Robert 15:47
Well, there are still risks we all agree on, of course, right? We still all agree that data privacy is a huge risk, cybersecurity is a big risk, and so is the ethical use of these tools. Understanding where human creativity begins and ends, and are we encroaching on that? Are we impacting the way students are able to learn to think critically? These are all things that I think the community very much agrees on, but there are just so many of them, and trying to decide where to focus, and how to shore up your own policies and guidelines and practice, can be legitimately challenging.
Rhea Kelly 16:29
Were there any risks that people were less concerned about than last year? I think you mentioned plagiarism concerns had decreased. Am I getting that right?
Jenay Robert 16:38
Yes, indirectly. I don't know that I would say they're less concerned about plagiarism; this came up when we asked about people's impressions and predictions of the future. In that sense, we can say that our respondents think there will be less of an impact on plagiarism in the future, that there won't be this big, terrifying fallout from AI. But with regard to all those other risks, we didn't quantify those in last year's survey. In last year's survey, we had open-ended comments asking the community: what are all the things you're concerned about? And we actually used those data to build out the quantitative, closed-ended questions on the 2025 survey. I have to say, though, if you go back and look through those percentages, folks are quite concerned about all the risks. You're going to see something like 90% of people saying they're concerned, or 80%, so I imagine that if we had quantified last year, it would have been hovering around the same numbers.
Rhea Kelly 17:59
So are there any other findings that stood out for you?
Jenay Robert 18:03
I mean, that was definitely one of them, all of us drinking out of the fire hose. It felt good, to me at least, to feel validated: okay, we all feel the same way. We're all very concerned and very excited. Something else I want to point out, because it isn't as big of a finding in the report: in that risk section, we asked an open-ended question to elicit more risks beyond what we had listed. And one thing that came out of there, in just a few comments but I think very important to pay attention to, is that there's an emerging friction or tension between some faculty and some staff with regard to how they feel about AI. One of the respondents described this as a lack of collegiality. When I wrote this section of the report, I pointed out that, based on our Horizon Report research (and it's common sense too, we don't need research to tell us this), we're living in a time when people are increasingly divided. So if there's another thing on our campuses that could act as a divisive point of concern, we want to pay attention to that, try to catch it early, and bring people together as early as possible. And as much as I said that the majority of our respondents live in an in-between space and don't feel 100% for or against using AI at our institutions, every institution is different, every department is different, every individual is different. So we do see those big differences on our campuses in varying degrees. That's something I want to point out to our community: try to get a handle on how much of that is happening at your institution, and perhaps get ahead of any lack of progress due to it. And then my colleague and co-author on this report, Mark McCormack, wrote a phenomenal section, one of the last sections of the report, about differences between smaller and larger institutions and this emerging digital divide at the institutional level with regard to AI. Mark pointed out that smaller and larger institutions are quite similar in things like their motivations for using AI or their optimism about AI, but that larger institutions (and again, this is logical in many ways, but having the data is really important for raising awareness about it) seem to have more resources and more capabilities to implement AI. I want to highlight this finding not only because it's interesting and validates some of the things we've all suspected are happening, but because this type of divide between institutions can trickle down to students. At the end of the day, if we're really trying to look ahead, this is the type of thing that can eventually widen digital divides between students. I don't know that we're at that point just now, but as we see more implementation of AI tools and more teaching of students about AI (if not how to use it, at least what it is and how to be aware of the implications), that could drive an increase in the digital divide among students.
Rhea Kelly 21:33
Okay, so I have a question out of curiosity, which you can opt out of answering if you like. Have you tried using AI tools to help analyze data from the survey?
Jenay Robert 21:45
I have not. Our data policies at Educause would actually prohibit me from using any of the tools currently at my disposal. But having said that, we're always looking for that next step. As a researcher, I'm very interested in figuring out what the future of research looks like, specifically social science research, because that's what I do. So I'm always looking for those tools as they develop; walled-garden tools, local tools stored on local drives, would, I think, be the way to go for me as a researcher, and that's certainly what most of my colleagues are using. But I just haven't explored that yet in practical terms. I will also say I'm very excited about perhaps someday not needing to take three weeks to code all of the open-ended data that comes from the Horizon Report in particular. I know that's not the report we're talking about right now, but that is the most time-intensive research for me. So, someday, if there's a tool that can do it well enough.
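For readers unfamiliar with qualitative coding, the task Robert describes is tagging each open-ended survey response with one or more themes from a codebook, then tallying the themes. Here's a minimal, hypothetical Python sketch of that idea using simple keyword matching; the codebook terms and sample responses are invented for illustration and are not Educause's actual data or workflow.

```python
# Hypothetical sketch of codebook-based coding of open-ended survey
# responses: tag each response with matching themes, then tally them.
# The codebook keywords and responses below are invented for illustration.
from collections import Counter

CODEBOOK = {
    "data_privacy": ["privacy", "personal data", "surveillance"],
    "academic_integrity": ["cheating", "plagiarism", "dishonesty"],
    "digital_divide": ["divide", "access", "equity"],
}

def code_response(text: str) -> list[str]:
    """Return every codebook theme whose keywords appear in the response."""
    lowered = text.lower()
    return [theme for theme, keywords in CODEBOOK.items()
            if any(keyword in lowered for keyword in keywords)]

responses = [
    "I worry about student data privacy and surveillance.",
    "Plagiarism detection feels like an arms race.",
    "AI could widen the digital divide between institutions.",
]

# Tally how often each theme appears across all responses.
tallies = Counter(theme for r in responses for theme in code_response(r))
for theme, count in tallies.most_common():
    print(f"{theme}: {count}")
```

In practice, human coders (or, as Robert imagines, a sufficiently trustworthy local AI tool) would resolve ambiguity and context that keyword matching cannot, which is exactly why the manual process takes weeks.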
Rhea Kelly 22:52
That would be, that would be a good use case for generative AI, I feel like, but yeah, those open-ended questions are brutal. So final question, what recommendations should institutions walk away with from this study?
Jenay Robert 23:06
I'll beat the same drum I've been beating for the last year, since the previous study, and say: communicate, communicate, communicate. There's so much going on at our institutions, in pockets and in silos, and we're not necessarily reaching across those silos to communicate about the work we're doing, the AI use cases that are popping up, or how policies and guidelines support or prohibit some of those uses. So that is number one, communicate. Make sure that the right people are at the table when making decisions about AI, and this includes students. You hear this a little bit here and there, but it's sometimes hard to include students in those decision-making loops, so do it to the best of your ability. On Educause Shop Talk, a few months ago now I think, we had a couple of folks from Penn State talking about their AI Student Advisory Committee. So look for examples like that from the community and see what you can do to bring in the student voice as much as possible. I have a lot of people who say: I just don't know where to start. This is really interesting research, but I don't know anything about AI. Oftentimes they're talking about generative AI, but the same applies to the full umbrella of AI tools. For folks listening who are in that boat, I would say: start just by educating yourself about the technology. I've spent a lot of time over the last couple of months watching YouTube tutorials from faculty who teach classes on AI. What even is an algorithm? If that's a question you don't know the answer to, it's worth just googling "AI 101" or "generative AI 101" and figuring it out from there (a tiny illustration follows below). Subscribe to one or two bite-size blogs or newsletters. Attend an AI summit. I've been to a couple of AI summits over the last couple of years that are just phenomenal. A week ago now, maybe, I was at the AI2 Summit from the University of Florida, and that's where you hear all sorts of amazing things that are happening on the ground: in the classroom, in research, in partnerships between colleagues at our institutions and colleagues in industry, in training for faculty and staff. So attending an AI summit is something I would recommend. And then, of course, check out all the other Educause resources we have. There are events constantly; if you look for the Educause event finder, which is linked in the full report, there are always AI-related events coming up. There's an Educause library dedicated to AI, so all of our AI resources can be found in one easy place; just google "Educause library artificial intelligence" and it should come up. There's an incredible community group on the Educause Connect community platform, and they are constantly helping each other with troubleshooting and developing new tools and policies. Everything you can imagine is happening in there. We actually had a couple of the leaders from that community group on Educause Shop Talk recently, and they're doing some amazing work. And reach out to me if you have more questions. I'm always happy to hear from our members.
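For anyone starting from "what even is an algorithm?", one way to think of it is as a precise, step-by-step recipe a computer can follow. Here's a classic toy example in Python (purely illustrative, not from the report): binary search, which finds a value in a sorted list by repeatedly halving the range it has to look at.

```python
# A tiny example of an algorithm: binary search. It finds a target value
# in a sorted list by repeatedly halving the range it has to examine.
def binary_search(items: list[int], target: int) -> int:
    """Return the index of target in the sorted list, or -1 if absent."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2      # look at the middle element
        if items[mid] == target:
            return mid            # found it
        if items[mid] < target:
            lo = mid + 1          # target can only be in the upper half
        else:
            hi = mid - 1          # target can only be in the lower half
    return -1                     # not in the list

print(binary_search([2, 3, 5, 7, 11, 13], 7))  # prints 3 (the index of 7)
```

Each step is unambiguous and repeatable, which is all "algorithm" really means; generative AI systems run on much larger and more elaborate algorithms, but the same basic idea of a mechanical procedure applies.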
Rhea Kelly 26:38
Yes, definitely. Well, and for that question of finding out what an algorithm is, a piece of advice I gave someone recently is: go ask AI. It'll give you a pretty good answer. You always have to evaluate the output, but it's an interesting way to learn more about what's possible and what it can tell you.
Jenay Robert 26:57
Yeah, find your chatbot of choice, and YouTube. I mean, not everybody feels comfortable signing up for a chatbot, or having a chat with a chatbot. I get that. But search for some of these things on YouTube and learn a little more, attend some webinars. Start as small as you have to, but you have to start, because AI is becoming more and more ubiquitous. One of the results I didn't talk about was that we asked survey respondents the extent to which AI is impacting various areas of the institution, like teaching and learning, technology, and business operations. The big finding last year was that AI is impacting all of these areas, and not a tiny amount, a pretty significant amount. And the big headline this year is that AI is still impacting all these areas, and more than it was last year. The percentage of respondents saying yes, AI is impacting this area, increased across the board, with the exception of teaching and learning, which was already maxed out at ninety-something percent. But every other area we asked about increased. So it's not going anywhere. Don't imagine that this is going to stop. And at the very least, I argue that we have an ethical responsibility to our students to prepare them for a world where generative AI tools are around. They need to know what exists in the world around them, and how to trust, or not trust, information coming from these tools. At the very least, that's an ethical responsibility we have in higher ed.
Rhea Kelly 28:48
Thank you for joining us. I'm Rhea Kelly, and this was the Campus Technology Insider podcast. You can find us on the major podcast platforms or visit us online at campustechnology.com/podcast. Let us know what you think of this episode and what you'd like to hear in the future. Until next time.