
Transcript

Campus Technology Insider Podcast March 2024

Listen: Getting Comfortable with "I Don't Know": Educause's 2024 AI Landscape Study

Rhea Kelly  00:08
Hello and welcome to the Campus Technology Insider podcast. I’m Rhea Kelly, editor in chief of Campus Technology, and your host.

Recently Educause released its inaugural AI Landscape Study, which polled the higher education community about AI strategic planning and readiness, policies and procedures, impact on the workforce, and the future of AI in higher education. For this episode of the podcast, I sat down with report author and Educause Senior Researcher Jenay Robert for a deep dive into some of the thinking behind the study, what the survey findings tell us about institutions' AI journeys, and how "I don't know" might be the theme of the day when it comes to AI. Here's our chat.

Hi Jenay, welcome to the podcast.

Jenay Robert  00:59
Hi, thank you so much. I'm really excited to be here.

Rhea Kelly  01:03
So Educause recently released a pretty extensive survey report on the AI landscape in higher education, which is great, I think it's so interesting. But when you're talking about AI, I always like to start with definitions. So could you talk about how you defined AI for the purposes of this study?

Jenay Robert  01:23
Yeah, and this is a great starting point. I have to say that I'm just back home from the very first Educause Innovation Summit, where we focused for a day and a half on AI. And this was a big piece of setting the stage there too. It was actually a big takeaway for a lot of the attendees, saying, wow, I realize that I'm probably not meaning the same thing when I say AI as other people are when they say AI. So it's a great starting point for any conversation around AI. In the survey instrument itself, right in the front matter, which I know everyone reads very carefully, we defined AI using the definition from the National Artificial Intelligence Initiative Act of 2020. Briefly, that reads, "a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments." So we were really trying to focus on that broad umbrella of artificial intelligence.

Rhea Kelly  02:26
I also noticed that the report mentions how common it is for individuals to conflate general AI with generative AI. So how did that impact your interpretation of the survey results? I mean, people were obviously given the definition, but that doesn't mean their brains weren't still conflating those two things.

Jenay Robert  02:45
Yeah, exactly. And like I said, I know everyone reads that front matter really carefully. So we know people are probably skipping it, and we know that even if they did read it carefully, it's probably hard to tease those two things apart. But generally, keeping that in mind as we go through the results of the study, we know that for the most part, a lot of respondents have generative AI in mind. Even for someone like you or me, who knows there's a difference and that operationally things are happening differently for different types of AI at our institutions, generative AI still happens to be top of mind. So for any survey, you have these, in research nerd speak, limitations of the study, but really it's about framing: understanding the extent to which you can generalize findings, and so forth. So as we go through those results, we keep that in mind, and then have conversations like this where we say, hey, we can look at these findings in this way or that way, but we know there's probably a focus on generative AI when people are answering. So what's happening with the other kinds of AI? We're trying to tease that out a little bit more in more qualitative work.

Rhea Kelly  04:06
Did you get a feeling for the overall state of AI in higher ed from this survey? Because it reminds me of a few years ago, when digital transformation was a hot topic, and Educause broke that down into a journey of three stages: first you learn about digital transformation, then you plan, and then you do. So I'm wondering if we could apply those stages to where institutions are when it comes to AI.

Jenay Robert  04:33
Yeah, I think it's definitely a useful framework in this context. And certainly at the more local level, it's going to be a great way to try to pinpoint where you are as an institution. In terms of the national or international landscape, I think institutions are kind of doing everything at once. AI, particularly generative AI, has come at us over the last year with such speed: the way our students are using it, the opportunities that we see, and the risks that we see. It's such a fast-accelerating area that we're having to build the plane as we fly it. We're trying to learn about all these different pieces and parts, we're trying to learn the definitions, we're trying to see the opportunity and understand the risks, but at the same time, stakeholders are already bringing the technology to action. So we don't necessarily have time to go through a deep learning phase, a deep planning phase, and then an action phase. We have to figure it all out at the same time, which for me is part of the excitement of it. It's so interesting and exciting to see this different way that we're having to deal with this. But it's also nerve-wracking, because some of these things that are happening are very high stakes, and the risk of getting it wrong is much higher when we have to do it all at once.

Rhea Kelly  06:01
From that AI summit you mentioned, did you get a similar sort of takeaway from the participants there?

Jenay Robert  06:10
Definitely. So I actually talked about these research results in one of the first sessions of the summit. And one of the big findings I pulled out in my talk was that we see this really high rate of "I don't know" responses throughout the survey. That wasn't particularly surprising in itself: this is a landscape study, we're trying to hit a lot of topics, this is something emerging, and it's reasonable to think that many stakeholders wouldn't be in the know. But to have questions where you see 30 or 40% of respondents saying, I don't know the answer to this question, I don't know what's happening at my institution. That's kind of striking, right? So I brought that out to the folks who attended the summit, and by the end of the second day, people were saying, I think "I don't know" is the theme of the week. It's just where we are with this technology: we all don't know so many things. And that's one of the big values I see in research like this and in going to AI summits and connecting with other people in the space. It makes us feel a little bit better, first of all, that I'm not the only one who doesn't know what the heck is going on on campus. But it also gives us an opportunity to connect and figure out, okay, where do we go from here? How can we become more in the know?

Rhea Kelly  07:38
I guess getting comfortable with "I don't know" might be kind of challenging for some people. You know, there's a discomfort there.

Jenay Robert  07:48
Yeah, for sure. And we are all here for our students primarily, right? At the end of the day, we want to do the best we can for our students. And when we're so passionate about what we do, as everyone in higher education is, it's very unsettling to be in a place of "I don't know," because to us, every day is such an important day: it's a day that we have an opportunity to serve our students. To be in a place where we're not sure we're doing that to the very best of our ability, that takes us off balance a little bit. So I think that's one of the ways it's quite unsettling. But then, in some cases, "I don't know" can lead to pretty catastrophic outcomes. One result in the report looked at the extent to which folks felt that the privacy and security policy at their institution was sufficient to address AI-related concerns. For a large portion of stakeholders not to know that information could have very serious outcomes. It could mean people are bringing in technologies that are not safe, that are not protecting the privacy of our students. So it comes at us in multiple ways.

Rhea Kelly  08:54
Yeah, I was just going to ask about the stakes, because you mentioned that there are some high-stakes things going on. One of them obviously is the privacy issue, and then, I suppose, the need to serve students and support student success. Is there anything else that contributes to that high-stakes feeling?

Jenay Robert  09:12
Yeah, so you've definitely hit on two of them, right? Thinking about serving our students to the best of our ability, making sure that they're actually learning: that's where the whole academic integrity discussion comes in. People are really fearful that they're not going to be effectively teaching students because students are using these tools in ways that bypass the learning process. Privacy and security, of course, have been huge themes over the last few years, and they've become even more important for our institutions because every single day, our society and our institutions are more data-rich and more data-reliant. Many times in the work that we do at Educause, our members say that data is just as valuable as currency and we need to protect it accordingly. I would argue that it's even more valuable than currency, because in a lot of cases you're looking at human capital. So those are definitely two of the big ones. And then third, there are access and equity issues. I think this has less impact in the early stages than it could as we progress further, but as we see more and more AI tools being integrated into teaching and learning, we need to be concerned about reinforcing various types of biases. We need to be concerned about access to education: does it become harder for some people to access education because they don't have access to these tools? Are we widening the digital divide? The accessibility of these tools is also still in its early phases. While accessibility should be a foundational component of any tool that's created, we know that digital tools are not usually created with accessibility at their foundation. So that's another layer. In the access and equity world, that's the big third high-risk area, in my mind.

Rhea Kelly  11:13
What do the survey results tell us about what institutions need to be doing when they're formulating their AI strategy, developing policies, and navigating implementation?

Jenay Robert  11:26
Yeah, so I already touched on this "I don't know" theme of the week. I tend to joke when I present on these data: okay, another presentation where we talk about silos in higher ed. It's become a bit of a standard. But in this case, I think the problem is exacerbated even more. You've got people in highly technical areas who really understand the technology and its capabilities, folks in data privacy and data security areas who are experts in those things, teaching and learning folks who are trying to figure out what to do with this on the ground, and administrators who are trying to lay out policy and guide the ship. Everyone across the institution has some stake in this; it touches everybody. So bridging those gaps in communication and getting both top-down and bottom-up involvement in policy and guidelines and strategy is really, really important. And then, connected to that but separate, I'm really encouraging folks to collect local data. We've linked the full survey instrument we used in the report, so one option for any institution is to just take that instrument, use it at your institution (and I'll say, if anyone does that, please email me, because I want to know about it), and see what you get. At the summit, we talked in some small groups about doing things like creating student advisory groups, which is great for any topic, not just AI. So those are the two big things: really shoring up communication across silos, and collecting local data to figure out where your stakeholders are, where your local community is. Every institution is so different and so unique that pairing local data with a larger study like ours is really important.

Rhea Kelly  13:28
So now's the time to survey your constituents about AI and then benchmark against the Educause report?

Jenay Robert  13:37
Yeah, and find out what the unique needs are for the folks at your institution. Depending on where you are geographically, or even just the culture of your school, the types of students you have, your faculty's acceptance of technology and the extent to which they're integrating it, these all play such a role. How much funding you have matters too: some institutions have the funding and the capability to hire a new VP-level AI director, for example, while other institutions are just trying to get by with staff who have already gone through cuts in the last couple of years and are figuring out, okay, now we have to add these responsibilities on top of existing job roles. So there's such a wide range of institutions and people and needs that collecting that local data is really vital.

Rhea Kelly  14:39
Yeah, I love how you touched on how it's impacting job descriptions. That actually leads right to my next question, because there's a section of the report that focuses on the higher education workforce and how AI is changing that up. Do you think we're going to see AI making its way into more job titles?

Jenay Robert  14:59
I hope so, but not in a blanket way. I think this really connects to conversations we've had at Educause in the past around data governance, because everything related to AI is just data heavy; it's the nature of the technology. So I do hope that data governance, data privacy, and data security start making it into formal job descriptions, and then, as appropriate, AI too. For this question in particular, the jury is still out in some ways. We're still trying to figure out how prominent the role of AI is going to be at our institutions, and that's why I'm hedging a little in terms of whether I expect to see AI in all job roles. I think our community needs to make some decisions about how they want to use AI first, and it's not really my place to impose that. So I'll be interested to see the extent to which that happens. But what I do hope is that if an individual is asked to take on a significant role relating to AI, that role is codified in their job description. And I mentioned some reasons for that in the report. Folks are burned out in higher ed. We've got two recent workforce reports out of the research team at Educause and another workforce report upcoming, covering, let's see if I can remember off the top of my head, the teaching and learning workforce, the IT workforce, and the privacy and security workforce. Across all three areas, we see folks saying they're overworked, they're overwhelmed, they're burned out. And what we're seeing now is a lot of new responsibility being added for folks without that formal job description, which creates more opportunity for burnout. It can also limit access to the resources folks need to get that work done. So it's kind of a yes/maybe answer to your question. But definitely, if people are doing these things, they should be formally recognized and have the space and the resources to do them.

Rhea Kelly  17:21
I believe that in the teaching and learning workforce study, AI was one of the top time-consuming parts of people's jobs, and perhaps contributing to that burnout, I wonder.

Jenay Robert  17:36
Yeah, the teaching and learning area in particular is really interesting. A few of us had a conversation at the summit this week about the impact on teaching and learning specifically. Of course, on a regular basis, faculty are always developing courses, keeping up with course content, modifying course content. That's standard practice and part of the job of teaching. I've taught courses in the past; I know that's the case. But the splash that generative AI in particular has made over the last year is really creating a need for that reworking and retooling, sometimes in very dramatic ways. Sometimes a course is completely outdated overnight. So I think that's where we're seeing faculty, yes, always needing to update their courses, but now in this very extreme way. And there aren't more hours in the day, there isn't less research to do, there isn't less student advising to do, there aren't fewer committees (there will never be fewer committees at our institutions). Nothing has adjusted. It's just, oh, by the way, before you start that class in three weeks, you're going to need to completely rewrite it. So I think that's where there's a unique challenge for teaching and learning folks.

Rhea Kelly  18:53
So were there any findings from the survey that surprised you?

Jenay Robert  18:58
Every time somebody asks me this, I think, I'm not sure what would surprise me anymore. When you've done survey research for quite some time, you start to realize that anything is possible; anything can happen. But there were definitely some things that were really interesting, where I thought, oh, okay, I can see that, that's cool, or not cool as the case may be. One is that stakeholders in frontline roles, thinking about faculty and staff, may not have a very accurate perception of their institutional leaders' perceptions of AI. The data point to this idea that leaders are feeling a mix of optimism and caution: some of them are pretty optimistic, and a few are super pessimistic. But when you disaggregate those data by job role, you can see that the frontline folks are more likely to say that their leaders are pessimistic. This is interesting and a little concerning, in the sense that if people in frontline roles are working really hard to react to AI challenges, to retool their curriculum, or whatever the case may be, they should know that they have the support of their leadership. And again, this is a great place to collect local data, because at your institution it might be different; but at least the large study is pointing to this potential for miscommunication about that general orientation toward AI. Linked to that, there was a very similar difference in terms of who folks thought was leading AI strategy at their institution. When we broke the stakeholder groups down into leaders versus frontline folks, leadership were more likely to say that leaders were in charge of AI strategy, and people on the front lines were more likely to say that frontline people were in charge of AI strategy. That mismatch could seem inconsequential at a surface level, but if you dig deeper, it could contribute a little more to that sense of isolation, the disconnection from the resources and support you would need. And then there's that "don't know" across each category. I would love for people to read through the report and, every time they see a "don't know," jot it down: oh, 30% don't know this, 28% don't know that. It just reinforces that need for communication and collaboration across units and job roles.

Rhea Kelly  21:38
In a way, I imagine seeing all those "don't knows" would make me feel better about not knowing things.

Jenay Robert  21:46
Yeah, right? I mean, it's the general state right now. We're all kind of figuring this out. And in higher education in particular, we're in an industry where we value expertise, we value this idea that some people are very much in the know. So I can't tell you how many times in my work over the last year related to AI, I would ask Educause members for their input on various AI topics, and one of the first things they'd say is, well, I'm not an expert. And over the course of the year, I've learned that one of the first things I have to say is, no one's an expert in this. There's truly no one I can go to who claims expertise in a particular question, whether that's the best way to implement generative AI tools in a writing course or anything else. There's just no right answer to some of these things. And that too is reflected in the study. Toward the end of the report, we talk about appropriate and inappropriate uses of AI in higher ed. I specifically created that question because I understand that these things are fluid, that the word appropriate means different things to different people, but as a community, we are starting to figure out how we want to use these technologies. So there again, in the spirit of "I don't know," the interesting thing is we saw things pop up on both sides of that coin. Assessing student work, for example, was listed in both columns: some people said it's a great grading tool, and other folks said it should never be used to evaluate student work. And there were people who said AI should never be used for anything at all in higher ed. So I think in many ways, we're all just figuring it out together, and there's not a single expert in this just yet.

Rhea Kelly  23:45
Yeah, I wanted to talk a little about opportunities and risks, but I think we've touched on the risks already. So what are some of the opportunities that institutions are seeing, from their perspective?

Jenay Robert  23:57
Yeah, I think the biggest one people are excited about and talking about is the opportunity to serve our students better. And we see this in a couple of different places in the study. In the beginning of the report, I talk a little bit about the motivation institutions had for engaging in AI-related strategy, and then the goals of that strategy. And there's a really interesting result here: institutions were very much motivated by this need to keep up, basically. Saying, you know, students are already bringing this technology to the classroom, and we don't want to fall behind the times, that sort of thing. But in terms of goals, those are very much focused on student success: things like preparing students for the future workforce, creating new and better ways of teaching and learning, and so forth. So in terms of opportunity, the major one is what we can do for better personalized learning experiences, streamlining things students don't need to spend their time on, meeting students where they are, 24/7 support for students. As part of the summit this week, we did a little hackathon where folks designed what would be, like, a dream tool they would bring, and many of them focused on student success and supporting students holistically, even outside of the classroom. And then, a distant second, third, fourth to that, we see people interested in opportunities to reduce workloads and streamline operations at institutions. That's an area of mixed opinion: will AI really help us streamline our workloads, or will we just offload some stuff and pick up some new things? Our editor at Educause made the observation that we all thought the washing machine was going to free up our time, but look how that turned out. An appropriate comparison.

Rhea Kelly  26:06
Are there any other takeaways you think are important to mention?

Jenay Robert  26:12
Yeah, I really just want to reinforce this idea that the jury's still out; we still haven't made these decisions. And I really want folks to feel that they can take an active role in the future of AI in higher ed. We spent a lot of time talking about that, both in the research and at the summit this week. All too often, we're so busy in our institutions that we're just trying to keep up with what's coming at us. But through research like this, and through things like our Horizon Reports and our Horizon Action Plans, we're trying to give our community the tools they need to create the future they want to see, instead of being recipients of whatever the world decides. So that's probably one of my biggest takeaways. The other is that I would love for folks to really pay attention to what is not being talked about in the popular discourse. There's lots of talk about academic integrity and students cheating and what we should be limiting, and concern about data privacy and security. We don't see as much as we need to around the access and equity issues, thinking about the accessibility of the tools. I know that people kind of hand-wave a lot of times about reinforcing various biases, but those are some of the most central issues we need to keep top of mind. And the other one I always like to mention, even though I sometimes forget because it's not much in the discourse, is the environmental impact of large-scale computing of any kind, including AI. I hope to see that become something talked about a little bit more soon.

Rhea Kelly  28:03
Yeah, definitely some very real problems out there that tend to get overshadowed by the teaching and plagiarism concerns. Those are important, but definitely not the only important thing.

Jenay Robert  28:17
Yeah, absolutely.

Rhea Kelly  28:21
All right. Well, thank you so much for coming on. It was great hearing all about that survey.

Jenay Robert  28:26
Thank you so much. It was great to chat with you. And I'm happy to chat about any research anytime. I'm a huge research nerd. So any research that you want to have a conversation about — I'm there.

Rhea Kelly  28:42
Thank you for joining us. I'm Rhea Kelly, and this was the Campus Technology Insider podcast. You can find us on the major podcast platforms or visit us online at campustechnology.com/podcast. Let us know what you think of this episode and what you'd like to hear in the future. Until next time.
