Campus Technology Insider Podcast February 2024

Listen: Could Institutional Policies on Generative AI Hold Back Its Transformative Potential?

Rhea Kelly  00:08
Hello and welcome to the Campus Technology Insider podcast. I'm Rhea Kelly, editor in chief of Campus Technology, and your host.

David Wiley is well known as the co-founder and chief academic officer of Lumen Learning and a long-time advocate of open educational resources and access to educational opportunity. But if you follow him on LinkedIn or on his Improving Learning blog, it's clear that he also does a lot of thinking and speaking and writing about generative AI. For this episode of the podcast, we spoke about why generative AI is the logical successor to OER, AI's impact on instructional design, exciting AI developments on the horizon, and why it's too early for universities to write policies for generative AI usage. Here's our chat.

Hi, David, welcome to the podcast.

David Wiley  01:03
Thank you so much for inviting me to be here.

Rhea Kelly  01:06
So you are well known for your advocacy on open educational resources, open content. And lately, you've been prolific in blogging and speaking about generative AI. I think the parallels that you've made between those two realms are pretty interesting. I thought we could start there and have you talk a little more about that.

David Wiley  01:26
Sure. Well, let me start by saying thanks again for the invite. You know, in the late 1990s, when I was really starting this work, open content — or what eventually came to be known as OER — was really the best tool available for increasing access to educational opportunity. And so I became a really vocal advocate for using open content and OER to do that. But the end goal was always increasing access to educational opportunity — it wasn't to promote OER in and of itself, if that makes sense. There's a confusion of the means with the ends that happens if you forget that your end goal is trying to increase educational opportunity, and instead think that you're an OER advocate. Because what will inevitably happen, and what has happened now, is that eventually some innovation will come along that provides even greater access to educational opportunity. And you want to be able to move seamlessly into that future where you're advocating for the advance that's going to provide the most access to educational opportunity. You don't want to be stuck in the past advocating for something that used to be the best way to do it, but isn't the best way to do it any longer. So in that sense, I think of generative AI as being kind of the logical successor to OER. I think they're connected in the sense that I see them very much from the perspective of using them to increase access to educational opportunity.

I think there are two other connections worth mentioning. One is that about a decade ago, I created this five Rs framework that a lot of people use to talk about and define what the word "open" in open educational resources means. Those five Rs are retain, reuse, revise, remix, and redistribute. And we won't do a whole tutorial on the five Rs right here. But I think there is an interesting connection here, in that you can apply that five Rs framework to model weights, which are kind of like the source code of generative AI models, as well. So if you have permission, from a legal standpoint, to download the model weights (which would be retain), to revise and remix those model weights via fine-tuning, or maybe indirectly through RAG or some other process like that, to use that updated model in any way that you want to use it, and also to share your updated model weights with others, then we would say that generative AI model is open in the same sense that open educational resources are open. So I think there's an interesting connection there for us to think about too. To what extent does it make sense for us to advocate for models being open in a five Rs kind of sense? And there's already a lot of that kind of activity happening on places like Hugging Face, which I kind of struggled to take seriously for a while just based on its name, because it's named for the little hugging face emoji. But it's a huge community where people share open source models, download them, fine-tune them and make updates to them, and reshare them there. They compare them to each other. There's a leaderboard of which open source generative AI model is the most effective one right now. So some of that activity is already going on, and I think that's an interesting connection between pre-generative AI OER and what's happening right now. The second connection has to do with copyright, which is a separate issue that I expect we're going to want to talk about.
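For readers who want to see what those five R permissions look like in practice, here is a minimal sketch using the Hugging Face transformers library. The model and repository names are illustrative; any openly licensed model on the Hub works the same way, and the fine-tuning (revise/remix) step is only gestured at in a comment.

```python
# A minimal sketch of the five Rs applied to model weights, assuming the
# Hugging Face `transformers` library is installed. Model and repo names
# are illustrative, not recommendations.
from transformers import AutoModelForCausalLM, AutoTokenizer

# Retain: download the open model weights and keep your own local copy.
model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model.save_pretrained("./my-local-copy")
tokenizer.save_pretrained("./my-local-copy")

# Revise/Remix: fine-tuning the weights on your own data (for example,
# with the Trainer API or a PEFT method) would go here.

# Redistribute: share your updated weights on the Hub for others to
# reuse. The repository name below is hypothetical.
# model.push_to_hub("my-org/gpt2-biology-tutor")
```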

Rhea Kelly  05:18
Definitely, because that was exactly where I was gonna go next. I think you have kind of a unique perspective on that issue of copyright, and it's been in the news so much lately with lawsuits and so on. So what do you think is happening, and where is it going?

David Wiley  05:34
I hope what's currently happening is indicative of where it will go. To date, the US Copyright Office has been really consistent in asserting that products created by generative AI tools are not eligible for copyright protection. They can't be copyrighted. It's not that you might choose to release them under an open license the way that you would back in the days of OER; they're just public domain from the get-go. The purpose of copyright, the way it's described in the Constitution, is to provide an incentive for creators to create. And AI doesn't need an incentive. That's the only reason, stated in the copyright clause of the Constitution, for that specifically enumerated right that Congress has to grant copyrights. So it seems, from my perspective, like the Copyright Office has been making the right calls so far. And I hope it continues to go that way. If you think about all the products of generative AI not even being eligible for copyright, then it connects back to OER in the sense that anything that ChatGPT, or Claude, or Bard, or DALL-E, or any of these tools create, it's all OER. You can do those five R activities to all of their products.

And so for a couple of decades now, there's been a lot of fretting and hemming and hawing in the OER community about what we call the sustainability of OER initiatives. Who's going to create the new OER that's needed? Who's going to go back and do the work of updating and maintaining and improving the OER that somebody created seven years ago? Where's the funding going to come from to support all those people? But all of that changes with generative AI. Before generative AI, we asked who would do that work and how we would incentivize them: Who will create an open textbook about biology? And then five years later, who's going to update it, maintain it, improve it? Now, when you need to know something on any topic in biology, you can just go to the generative AI and ask about it and generally get a really reasonable answer. And if you're using a model that's been specifically crafted to be smarter about biology than a general model has been, you can get even better answers. A traditional textbook is kind of a snapshot in time of one person or a small group of people's understanding of a concept. They've explained it in a specific way, and they've written it down and captured it, so that other people can come along, maybe even after the author is dead, and read that and see where they were coming from as they described what it meant. It really contains a single explanation, a single description, right? Whereas with generative AI, when you ask for an explanation, if it doesn't make sense to you, you can just ask for another. If the example it gives doesn't resonate with you, you can just get another. If you think you understand but you're not sure, you can ask it to ask you review questions and then have it give you feedback on your answers. There's this fundamental difference between pre-generative AI instructional materials and post, in that the one is a static snapshot of understanding, whereas with generative AI, the defining characteristic is your ability to dialogue with it in an interactive way.

Rhea Kelly  09:14
It kind of sounds like, I mean, do you think there's a place for traditional textbooks still, or is generative AI going to kill that format?

David Wiley  09:23
Well, I guess it depends on what you mean by kill. If transforming into a butterfly kills the caterpillar, then yes, generative AI will probably kill the traditional textbook, right? But it's hard to imagine a medium-term future where every textbook isn't augmented by some kind of generative AI capability that gives students the opportunity to have all the explanations they could ever want, and all the different examples on different topics they could want. Maybe I want my examples to be about hiking and basketball and jazz and amateur radio, and you want your examples to be about something else, because that's what you're into. And to be able to do infinite open-ended review, where I can get specific feedback about my answer. Once that capability exists in the world, maybe you can imagine a short-term future without it, but it's hard to imagine a medium-term future where that's not just table stakes for any kind of educational materials offering.

Rhea Kelly  10:41
So it's really a matter of, for someone who is going to author a textbook, maybe part of the job is training the AI model, so that it is optimized for learning that material.

David Wiley  10:56
Yeah, you know, my graduate training is partly in instructional design. Those of us who come from that world have been championing the value of having clear learning objectives associated with whatever you're doing, whether it's a lesson you're teaching in class or a chapter you're writing. If you don't have a really clear sense of what the learning goal is for the student, what they should be able to do at the end of this experience, then it's hard to create good assessment. It's hard to create good activities and good content for students to engage with to get them there, if we're not really clear about where "there" is, right? I'm teaching a class on generative AI in education this semester. And in this class, we're looking at a couple of prompt engineering frameworks. And it's been kind of interesting, the degree to which those prompt engineering frameworks mirror some of the frameworks that were developed for writing learning objectives back in the 60s and 70s, right? The idea is that you should think about an objective as having multiple parts, and each of those parts plays a specific role. In a learning objective you cover: Who's the audience? What's the behavior they should engage in? What conditions should they have to perform it under? And to what degree do you want them to perform it? So for example, maybe I want a third grader to be able to multiply two-digit numbers without a calculator, with 90% accuracy. Right? That's the ABCD kind of framework for writing outcomes that Mager developed way back in the day. It turns out that prompt engineering is a lot like that. There are these frameworks that specify what the parts of an effective prompt are, and what needs to go into a prompt. And it seems like there's going to be, again, a kind of transformation of these learning outcomes into prompts in a way that can drive generative AI to create activities, to create static content, to create assessments, to create feedback, according to the prompt that comes through. Back in the day, when you were done creating your learning objectives, you took a deep breath, because you knew you had to go develop assessments for every single one of those learning objectives. And once you developed all the assessments, you had to develop the content and the activities that were going to prepare people to succeed on the assessments, so they could demonstrate to you that they had mastered the outcome you were hoping they had mastered. Whereas in the future, that handoff may happen much more quickly, right? When I have a good learning outcome written, and I've transformed that into a good prompt, it may be that the next 80% of the work is done by the model. You might still have peer review and some of those other quality assurance processes on the back end. But the big chunk of work is going to be done by generative AI.
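To make the parallel concrete, here is a hypothetical sketch of turning a Mager-style ABCD objective, using the third-grade multiplication example above, into a prompt. The function, field names, and template are our own illustration, not a standard framework or API.

```python
# A hypothetical sketch: composing a generative AI prompt from the four
# ABCD parts of a learning objective. Names and template are illustrative.

def objective_to_prompt(audience: str, behavior: str,
                        condition: str, degree: str) -> str:
    """Turn an ABCD-style learning objective into a prompt asking a
    model to draft aligned practice items with feedback."""
    objective = f"{audience} will be able to {behavior} {condition}, with {degree}."
    return (
        "You are an instructional designer. The learning objective is: "
        f"'{objective}' Draft five practice problems aligned to this "
        "objective, each with a worked solution and feedback for the "
        "most common wrong answers."
    )

print(objective_to_prompt(
    audience="A third grader",
    behavior="multiply two-digit numbers",
    condition="without a calculator",
    degree="90% accuracy",
))
```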

Rhea Kelly  14:06
Well, I have two questions now. But first, can we dive into maybe an example of the prompt engineering? One of the things that I've found is that it's helpful to ask the AI to roleplay, so that it's acting as, say, an instructional designer. Is that one of the steps that you've found too? Or, like, I want some more details.

David Wiley  14:27
Yeah, there are a couple of different frameworks. I'll just pull one up while we're talking here. I mean, OpenAI has posted their own guide to prompt engineering, the way they think it should be done, which has some great advice like that in it. But there was a really great article on Medium a couple of weeks ago, from a woman who won — here, I've got it pulled up — the article was called "How I Won Singapore's GPT-4 Prompt Engineering Competition." In it, she talks about what she calls the CO-STAR framework. CO-STAR stands for context, objective, style, tone, audience, and response. And in this article, she goes through what each of those means for her. So the context is, you know, "you are a teaching assistant in an undergraduate statistics course." You're setting up that context in some way. And the tone is that you're going to be very supportive and really encouraging. And the audience is first-year undergraduates who haven't had a math course in several years, because they stopped out of school for a while. The thing that's maybe most interesting about these prompt engineering models is the idea of the response at the end, because you can ask the model to respond in different ways. Maybe you want it to respond with just a paragraph of text that a human being could read. But maybe you want it to respond in JSON, or in XML, or in some format that a program could pick up and process and do something else with.
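Here is a minimal sketch of what a CO-STAR prompt with a machine-readable response might look like, assuming the openai Python client (v1.x) and its JSON mode; the model name and all of the prompt content are illustrative.

```python
# A minimal sketch of a CO-STAR prompt requesting JSON output, assuming
# the `openai` v1.x client and an OPENAI_API_KEY in the environment.
# Model name and prompt content are illustrative.
from openai import OpenAI

client = OpenAI()

co_star_prompt = """\
CONTEXT: You are a teaching assistant in an undergraduate statistics course.
OBJECTIVE: Explain the difference between a population and a sample.
STYLE: Clear and concrete, with one everyday example.
TONE: Supportive and encouraging.
AUDIENCE: First-year undergraduates who haven't taken a math course in years.
RESPONSE: Return JSON with the keys "explanation" and "example".
"""

completion = client.chat.completions.create(
    model="gpt-4-turbo-preview",  # any model that supports JSON mode
    response_format={"type": "json_object"},  # forces valid JSON back
    messages=[{"role": "user", "content": co_star_prompt}],
)
print(completion.choices[0].message.content)
```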

Rhea Kelly  16:14
Do you think generative AI could actually replace human instructional designers?

David Wiley  16:22
Well, you should never say never. But I do think there are important questions in the instructional design process around what should be taught and what should not be taught. You know, if you're doing a first-semester economics course, what topics belong in an introductory economics course? I think some of those fundamental questions, what should we teach, what should we omit, what perspectives should we take on it, are values questions that humans are in a better position to answer. But once you make some of those values decisions, these are the things we should talk about, these are the broad strokes of how we should talk about them, you can hand that off to a human being to write a draft, but you could just as easily hand it over to generative AI to write a draft at that point. It's a draft-creating machine. But there are a bunch of decisions that have to be made before the draft stage of the process.

Rhea Kelly  17:33
So how do you think higher ed institutions should be approaching all of this? You know, I've seen lots of universities announcing that they have developed AI guidelines for teaching and learning. Then you have Arizona State University, which recently announced its big partnership with OpenAI. Should more universities be pursuing partnerships like that? Or how do you think it should play out?

David Wiley  18:02
Well, first, I have to admit that I'm kind of worried that it's too early for us to be writing policies. These tools have been in popular use for about a year, right? We haven't even started the process of imagining their transformative potential. All the ways we talk about using generative AI right now are kind of like talking about horseless carriages, right? We're thinking about the way that we have always done things in the past, and maybe how generative AI could make it more efficient, or faster, or cost less. And all of those are great, but they're not transformative; it's just kind of more of the same, but faster, cheaper, better. We need more time to search for and experiment with and discover some of the transformative capabilities that these tools definitely have. I mean, think about the mid 90s, when the internet was hitting higher ed. And people were saying, "Oh, I know exactly what I could do with the internet." And it was all the things that they had done before. Like, I could distribute my syllabus using the internet. I have to distribute my syllabus, that's the thing that I've always done, and this might make it a little faster or easier; now I don't have to print them. So I can put my syllabus online and distribute it that way. Right? It took some time living with the possibilities of the internet to really start to imagine the kinds of things that we could do. And I would argue that 25 years later, higher ed is still really lagging behind some of the more interesting things we could be doing with the internet in education. But at least we've had some time to think about that and to explore, and we haven't had policies about "this is how you can use the internet in your teaching" that put you in a box and say this is out of bounds, and this is clearly in bounds. So I just worry about the policies coming along too early, because higher ed is just infamously slow to change. And once a policy is on the books, I feel like it's going to be devilishly hard at some point in the future to revoke that policy or to amend it in some way. Because even two years from now, or three years from now, it'll be the way we've always done it. And so if schools feel like they have to create a policy, I hope they put a time bomb inside it of some kind: it's only good for two years, or for some period of time, and then it has to be rewritten. It cannot be readopted in its current form at that point. Something has to change. Because we just don't know all the things we need to know to make effective policy. It would be too easy to preclude ourselves from doing things that would be really powerful, just because we can't see over the horizon to what they are at this point.

Rhea Kelly  21:22
So any policy should definitely be a living document that kind of gets revisited often.

David Wiley  21:29
Yeah, absolutely. You had mentioned the big partnership announcement between ASU and OpenAI. I think that is absolutely a portent of the future. I think, from the procurement side of things, generative AI is the new LMS. I'm not saying that generative AI will replace the LMS, or that it has the same features as the LMS. That's not what I'm saying. I'm saying, from a procurement perspective, 25 years ago, no institution had a learning management system. Today, every single college and university has a learning management system. They figured out how to budget for it, they figured out how to run it, they figured out how to support it. It's too important a piece of infrastructure for them not to have. Now, some institutions dragged their feet and some were early movers, but every school has an LMS now. Generative AI will be exactly the same. And I think it will even follow the same kind of contours. Like with learning management systems today, there are several big vendors, they compete for contracts, there are these decision-making processes where an LMS selection committee gets spun up, and they review the options against criteria and pick one. And then typically what happens, at least today, is the vendor hosts it in their cloud somewhere. They provide the technical support and hosting, and they give uptime guarantees and all the SLA kind of work that needs to happen around a piece of core technology infrastructure. And I think that will absolutely happen for the majority of colleges and universities with their generative AI tools. However, there are always institutions of a certain kind who don't want to buy things from vendors — they want to build their own, and they want to host their own, and sometimes they want to form big consortia and do that together. You might think about Sakai, in the learning management system space, right? Schools with a lot of resources and a lot of technical capability and certain outlooks toward the world just want to make their own and host their own and be in control of their own. And you'll see stuff like that happen in the generative AI space as well. There will be consortia of these more technical, better-resourced schools that will come together, probably not to build foundation models, but to work together on fine-tuning and refining models that they will host and run and provide to consortium members as a benefit of membership in the consortium. If you just look at the last 25 years of the LMS market, I think that is a great roadmap for where generative AI is heading on campus, except I don't think it's going to take as long.

Rhea Kelly  24:39
That's really interesting. So it's basically going to become part of the tech stack that is centrally managed, I suppose, by IT, with all of that governance process in place for selection and implementation. That kind of sounds like policy to me: if you're going to be doing a technology implementation of that scale, there are going to be boundaries, I guess. How is that going to affect the capacity for innovation?

David Wiley  25:14
Well, remember back in 1998, when schools were adopting WebCT, there was no faculty committee. It didn't work then the way it does now, because it was early days, right? And it was a lot more, I don't know if fast and loose is the right phrase, but there were a lot more things that were unknown, and a lot more experimentation happening. On a lot of campuses, one college would have adopted WebCT, another college inside the same university might have adopted Blackboard, and somebody else was using ANGEL. It wasn't even coordinated on a campus, right? It took a decade or more for people to start to think that actually, this is a critical piece of infrastructure. It should not be Moodle on a desktop computer, pushed under a desk in somebody's office, running the LMS that supports the entire College of Business. Right? That's how it started during the experimentation phase, but eventually the technology matured, the hosting capabilities that were available matured, and people's understanding of how important that infrastructure was developed more. I think all those things are coming for generative AI, but we're not there yet. It's like 1998 for generative AI right now. There's a lot of experimentation. There are a lot of different things going on. I've got students in my class, some of them are using ChatGPT, some of them are using Claude, some of them are using Bing Chat, and there are different reasons why they've each chosen those. And I've purposefully not tried to force any consolidation, because I don't know, right? We're all learning these things together and trying to feel our way forward. I think it's too early for very restrictive policies here.

Rhea Kelly  27:14
How do you balance some of the risk concerns, maybe privacy risks and things like that, with the desire to not constrain use with guidelines?

David Wiley  27:29
Well, I think you can inform people fully and effectively without constraining their choice, right? I think really empowering them to exercise their agency is a function of making sure that they understand the choices they're making. So I think it is important to talk about risks, pros, cons, etc. But I think it's too early to cross the line beyond informing and empowering, into constraining and confining.

Rhea Kelly  28:05
So I wanted to make sure we talked about the equity side of things. Because when you have a huge research university like ASU, obviously they can put a lot of resources toward this exploration and experimentation. But what about small colleges or community colleges? If they can't afford to invest in this essential part of the tech stack, is that an equity gap that's going to impact students?

David Wiley  28:29
Yeah, it would 100% be an equity gap. As with all technology, and actually as we just saw with OpenAI's Developer Day not that long ago, every year the models become more powerful and less expensive. That's the trend that technology typically takes, right? It gets less expensive and more powerful. I think that trend will eventually make it possible, maybe not for every college to run the latest version of ChatGPT that was just released last week, but there will absolutely be multiple vendors offering affordable models, fine-tuned for higher ed use cases, that will cost something on the scale of what the LMS costs. That cost has to get down to the place where it's affordable for the institution, no matter which institution it is. Because even the smallest colleges and universities — I live in West Virginia, where we have several colleges that have fewer than 2,000 full-time students — they all have a learning management system. The market will make these models available in a way that those institutions can afford. On the flip side, if the LMS is one metaphor, I think another interesting metaphor for thinking about generative AI, as technical infrastructure for society more broadly, is broadband. I will just give away the whole game and say I very clearly remember my 14.4k modem that I used to connect to the internet back in the day, and how exciting it was when 28.8 was a thing, and how exciting it was when 56k was a thing, and the internet just kept getting faster and faster. Eventually, society, at least in the United States, came to this consensus that access to high-speed internet is something that's just really important for everyone to have. It's important for school, it's important for work, it's important for all these reasons. And so today, there are state-level and federal programs that subsidize access to broadband, because there's a recognition that it's important for every student to be able to do their homework online, and for entrepreneurs to start businesses, and for people to be able to do remote work, or whatever it might be. It's just important enough that we provide subsidies for it. And I think you'll see that generative AI will end up being in the same category. I can't believe that we're that far away from programs to subsidize access to these tools as well. Obviously you have to have an internet connection to access them, but I think they're arguably more powerful and more important to have access to, maybe even more than the internet.

Rhea Kelly  31:32
So I want to end with a looking-forward kind of question: What would you say is the most exciting trend in generative AI to watch this year?

David Wiley  31:46
Well, I think there are still so many technical advancements that need to be made. Maybe they're not the most fun trends to follow, because they are kind of technical, but two come to mind from just the recent past. The first is that the model architecture that powers models like ChatGPT is called a transformer. It's very powerful, obviously, in a bunch of ways, because ChatGPT blew all of our minds the first time we tried it. But just in the last little bit, I want to say two months, another technical architecture called Mamba has been proposed that looks to have a lot of the same kinds of benefits as the transformer, but to be faster, less expensive to run, cheaper to train, things like that. That could lead to some very exciting things happening in the future. The second is about reinforcement learning from human feedback. RLHF is the way that, for the last year, we've been fine-tuning models and teaching them to behave more the way we want them to behave. Just recently, a new technique called direct preference optimization, DPO, has been proposed that looks like it works as well as RLHF, but again, faster, cheaper, easier. So whatever the thing is that comes out next that's powered by Mamba and fine-tuned using DPO, and whatever these other new techniques are, people might not know about all the things under the hood, but they'll know, "Oh my gosh, this is even better and even faster and even cheaper than ChatGPT was." It's still kind of early days; I know it's not early days for AI broadly, but for what we think of as large language models, it's early days. And even though GPT-4 blows all of our minds, it could be so much more powerful. So I think there's still a lot of fundamental, under-the-hood, basic research to come that will be really exciting this year, and it will result in new products that all of us get really excited about.
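For the curious, here is a toy sketch of the DPO loss from the paper that introduced it (Rafailov et al., 2023): instead of training a separate reward model and running reinforcement learning, DPO directly nudges the model to prefer the human-chosen response over the rejected one, relative to a frozen reference model. The tensor values below are made up for illustration.

```python
# A toy sketch of the direct preference optimization (DPO) loss.
# Inputs are summed log-probabilities of the preferred ("chosen") and
# dispreferred ("rejected") responses under the policy being trained
# and under a frozen reference model. Values below are made up.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             reference_chosen_logps, reference_rejected_logps,
             beta: float = 0.1):
    """Minimized when the policy prefers chosen over rejected responses
    by a wider margin than the reference model does."""
    chosen_logratio = policy_chosen_logps - reference_chosen_logps
    rejected_logratio = policy_rejected_logps - reference_rejected_logps
    return -F.logsigmoid(beta * (chosen_logratio - rejected_logratio)).mean()

# Toy usage on a batch of two preference pairs:
loss = dpo_loss(torch.tensor([-10.0, -8.0]), torch.tensor([-12.0, -9.0]),
                torch.tensor([-11.0, -8.5]), torch.tensor([-11.5, -9.0]))
print(loss)  # a scalar the optimizer would backpropagate through
```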

Rhea Kelly  34:14
Thank you for joining us. I'm Rhea Kelly, and this was the Campus Technology Insider podcast. You can find us on the major podcast platforms or visit us online at campustechnology.com/podcast. Let us know what you think of this episode and what you'd like to hear in the future. Until next time.
