Campus Technology Insider Podcast April 2024

Listen: Inside Arizona State University's OpenAI Partnership

Rhea Kelly  00:11
Hello and welcome to the Campus Technology Insider podcast. I'm Rhea Kelly, editor in chief of Campus Technology, and your host. In January, Arizona State University announced a major partnership with OpenAI to explore the potential of ChatGPT in education. For this episode of the podcast, we caught up with ASU CIO Lev Gonick to find out more about that collaboration, how the university is approaching the use of generative AI across campus, and what the key takeaways have been so far. Here's our chat.

Hi Lev, welcome to the podcast.

Lev Gonick  00:53
Thanks, Rhea.

Rhea Kelly  00:55
So let's start with kind of the beginning. How did the partnership with OpenAI come about? Was it initiated by ASU? Or did OpenAI come knocking?

Lev Gonick  01:06
Well, a lot of other people in higher education and I were doing our level best to try to get some attention from folks in Silicon Valley around all things generative AI. And I was definitely among those trying to navigate the labyrinth of this new disruptive group of folks, who had, you know, a very small startup at the time with no signage on their building. But in the end, through a network of long-standing relationships with people close to the team at OpenAI, I managed to get an opportunity to introduce Arizona State University to OpenAI. And it turns out that many of the folks we broke bread with, had lunch with, already knew quite a bit about Arizona State University, our commitment to student inclusion and success, as well as the credit, if you will, that we had in the innovation space. And from there, we had some very, very exciting visioning projects, and then got down to brass tacks on sorting through what the elements of the partnership might actually look like.

Rhea Kelly  02:37
So was it kind of like you were knocking on their door, saying, Hey, you guys should be paying attention to higher education?

Lev Gonick  02:45
Yes. I think when all of us, back in November, now just about 16 months ago, began seeing ChatGPT being offered, and then quickly thereafter a number of other large language models you could dialogue with, we all saw great potential. My pitch directly to the team there was that, like OpenAI, ASU has an ambition to leverage technologies at scale in support of our mission, which includes this, I would say, unprecedented commitment to supporting an inclusive orientation to students and learners and their journeys, and using all the technologies in service of that goal. And that resonated. I knew it was going to resonate with the team at OpenAI, because I know what the nonprofit had sought to do, I know what the research team was working on, and it had a similar global ambition to be a force for advancing humankind. We also, like they, have plenty of folks who think there are all kinds of reasons why that might not be the only outcome of using the technology, or in our case, of our approach to learning. But it was definitely a case where I made the pitch around alignment of our vision, our mission, and our commitment to using technologies to drive a positive outcome.

Rhea Kelly  04:35
Could you kind of outline the basic elements of the partnership?

Lev Gonick  04:40
Yeah, they're really just threefold. One is that we actually contracted for licensing of a new product, a product we had a wee bit to do with, which is called ChatGPT Enterprise. A lot of us, 16 months ago, when ChatGPT, the consumer version, was released, got excited, but then got very concerned that things like the intellectual property of the university, or the privacy and confidentiality of our students' health information, and other things that are part of our ecosystem, all needed some assurances, some guardrails, that would allow us to leverage the power of the large language models in a way that protected the assets of the enterprise, the enterprise in this case being Arizona State University. We were the first university to actually contract enterprise licenses in that regard. But in addition to that, we have committed to, and are regularly interacting with, their technical architects around the things we need to see happen in higher education. It's not just the things that are now in the enterprise product. There are all kinds of very important things that relate to the research program of the university, in terms of technical workspaces, and to how to actually roll this out to hundreds of thousands of students and, at ASU, almost 12,000 courses. Can you imagine leveraging this in a more intelligent way than simply letting everyone run off and try to develop their own approach to using a GPT for, again, 300 physics courses, or 12,000 courses across the institution? There has to be a way to build out enterprise tools. So we're helping with the technical requirements, if you will, for that kind of work. And we don't think it's just for higher education.
We think it's not only for education: for all enterprises that have complex organizational models with lots of products being created, this is going to be important. So we're part of a team that is working to support the technical requirements. And then we have a whole set of efforts to really help in thought leadership with OpenAI: inviting them to the events we're invited to, as well as to our own events, where we get to outline the kinds of aspirations, in our case tied to our mission and to our charter, and other critical insights; and us getting invited to their gatherings, whether those are executive briefings all over the world or more quiet conversations with their team in San Francisco. Those are all part of the partnership effort. And the truth is, we've also committed to say that if something really interesting comes up, we don't want to foreclose the opportunity to explore it. So, for example, we didn't anticipate early on working in the health education space, but because ASU is now embarking on the design of ASU Health, with two schools related to the needs we have here in Arizona, we've engaged OpenAI, who in turn said, "Great, we don't yet have a thought partner or design partner for that kind of work. We'd love to continue to build on that." And of course, we brought many other stakeholders to the table, and that work is progressing.

Rhea Kelly  08:31
So I understand that you started with kind of an open call for interested faculty and researchers and staff to submit their ideas for leveraging ChatGPT Enterprise. And I'm curious kind of how you evaluated those submissions. You know, what were you looking for?

Lev Gonick  08:48
Right, so we have actually completed one full round of challenge grants. There were really two ways we thought of designing the engagement when it came to distributing licenses. One was to just let a thousand flowers bloom: give licenses to everybody and see what happens. I'm a student of technology adoption, that's actually what my academic interests are in, and it turns out there's a fairly predictable curve: a lot of enthusiasm early on, then in some ways a valley of despair, then a tail in terms of people using it and using it consistently, and then, if it works, some kind of a plateau. To avoid the "let's just give everybody a license and see what happens" approach, we decided to frame what we called impact areas, areas where we were looking to have impact, and then called on the campus community, the staff and the faculty, and I'll say a word about the students in a second, to respond with creative ways in which they could advance their research or their service roles at the university by leveraging those licenses to that effect. And what I've done here with my team is we've created, I think a first in the nation, but certainly of consequence to us here at ASU, a dedicated AI acceleration team. So all grant recipients not only got the licenses, they also have internal consulting services: technical, UI/UX, data science, all kinds of important capabilities. Because this is not about hitting a button.
This is really what we would simply call a data wrangling challenge: you need experts to support the areas where you're trying to do more than a one-evening experiment, a set of parlor tricks you can do with your ChatGPT. That's certainly been the way we've done it. And we had nearly 200 great ideas come forward; in that first go-around we supported well over 100 of them. Those are underway. We announced on January 18th, released the internal challenge grant on February 1, closed it on February 9, reviewed and got back to folks, and here we are just about 30 days into the process. And there are all kinds of great stories. In fact, there's a story that just got released yesterday about work going on with one of the grant recipients in his English composition class, already all kinds of exciting insights for him and his students that we're sharing more broadly across the campus. And then next week, our second challenge, again with about a two-and-a-half-week proposal turnaround, will tee up a whole bunch of new projects. This time, in addition to our staff and faculty attending to challenge areas, we're inviting all interested students to submit responses as well, either on their own or together with their research faculty colleagues, their teachers, or their staff colleagues. And that should take us into the beginning of the fall semester, when we'll have all kinds of new exciting things to share, which I can't share with you just yet.

Rhea Kelly  12:26
It sounds like speed is really important, because the technology keeps changing. So you have to get through the proposals and the evaluations and the approvals pretty quickly, or risk things changing before you get a chance to start those projects.

Lev Gonick  12:43
It is, and Rhea, the reality is, with the introduction of any new technology, we don't want to turn this into a high-stakes, high-risk experience. We want to turn this into what you would want to see at a university: discovery and experimenting, and a chance, if you're going to fail, to fail quickly and relatively cheaply. Rather than having academic debates about whether it's good or bad, whether it does or doesn't do this, or whether it's ethical, let's just get on with giving it a try. We've set up guardrails that I think are reasonable. We have a faculty ethics committee that is reviewing each of the proposals that come through, so it's not the IT group that's doing this review. We have a whole panel of researchers and administrative staff reviewing all the proposals as they come in, and we're turning them around. Our hope is to turn this into rolling work as the opportunities come forward. And what we're hoping is that we will build momentum in a way that is focused on these North Star commitments: our commitments to student success, but also, obviously, to advancing competitive research grants. I just got a great note from one of our faculty colleagues today saying he's just put in a $20 million NSF grant. The work we're doing with him shows, again, the investment of the institution in his research program. And at ASU, Rhea, just to let you know, we have 20 large language models. The one we're talking about today, OpenAI and the enterprise version of ChatGPT, is just one of the 20.
All of this call for participation from across the campus is also affording us a very important opportunity to find the right kinds of technologies to support the research program. For the faculty member I referred to, who just submitted that $20 million grant, it turns out the OpenAI opportunity wasn't a great fit, for a lot of technical reasons. But we took the opportunity to help him and his lab out by connecting them to some terrific local high-performance computing, through a local partnership we have, around which we've opened up a series of large language models to support research-intensive, computationally intensive uses of generative AI. So at ASU, it's not only about scale and about mission, it's also our commitment to what we call principled innovation. We're innovating with a purpose in mind, with a responsibility in mind, and in doing that, trying our best to engage the whole campus community.

Rhea Kelly  15:36
Could you tell me a little bit more about any of the initial projects that have been approved? Like you mentioned that English composition class — what's going on in there?

Lev Gonick  15:45
Well, there's a prehistory to this, because at ASU, first-year English Comp is a requirement. It's also very large: We have 20,000 students each year who go through first-year Comp. Figuring out how to actually support that kind of quality exchange led the writing center teams, on multiple of our campuses, to look at redesigning the way the English Composition class is delivered. For the last couple of years, we've been trying to tackle that redesign challenge, because it's such a scaling issue for us. We've been using a number of different techniques that our writing center faculty have designed, which focus on where it makes sense to use the AI as, as it were, a writing buddy, and where it makes sense to have a human in the middle of that experience. So this is part of a multi-year effort. And part of it is, we know there is a sensibility around helping students understand things that are important, like grammar. But there are also important things that go on in these writing composition classes that relate to helping students understand the importance of finding their voice. In some of these classes, we've used the AI to recommend a series of alternative ways of constructing paragraphs, letting students choose either their original version or selectively using what the AI is recommending, and then using the time in the classroom, with the professor, to ask: Why did you choose this particular construction, either borrowed from or supported by AI, or why did you choose to keep your original voice?
That conversation, that dialogue, is really important to students understanding the importance of their voice, and to owning their voice in that work, while at the same time not running away from the opportunity to leverage the power of the technology to support creative work. So that is one of the projects underway. I'll just say, Rhea, for those of your listeners who are interested, all of these stories are available at ai.asu.edu. There are literally dozens of them. As the projects in the first wave of grants related to OpenAI unfold, many, many of them will be chronicled and shared on that ai.asu.edu website.

Rhea Kelly  18:57
Oh, that's great. That'll be a fantastic resource for really anybody.

Lev Gonick  19:01
For everybody. I mean, that site is serving for things like policies, technology choices, ethics considerations, case studies, stories being told, videos of our students, and faculty testimonials about what they're using. We just had a great event here last week, which we called FOLC Fest; the FOLC is the Future Of Learning Community. There were many, many presentations by our faculty and students who've been going through these first early stages. And the truth is, they were able to share what excited them, what disappointed them, where they think the technology needs to get better. Some of the students were enthusiastic; others were quite cynical. All the things you would imagine, but it was, I think, a really good community gathering; over 1,000 people came to that event from across ASU. And so, you know, I think that's the way we roll here at ASU.

Rhea Kelly  20:10
So if you've got hundreds of projects going on on a rolling basis across multiple large language models, how do you evaluate which ones, you know, have potential for broader implementation?

Lev Gonick  20:24
Well, those decisions are all made with our partners, our academic partners. The whole question of what works in the classroom is for our academic colleagues, and of course for the provost and her team. We're here largely in service to them, but also as thought partners and design partners, and we have an excellent relationship with the provost and her team in that work; we learn a lot, and hopefully we're able to share things that are relevant to their world as well. On the technology front, the commitment we have here, which I think is informed by our long-standing institutional commitment, is that we're not waiting. I think there is a challenge right now: The world will divide into those who waited for perfect and those who got going when it was good enough. There's a debate right now as to whether or not we're at the "good enough" point. ASU is all in on good enough, with all the guardrails that we think are thoughtful and intentional, by our design, and we know we're going to learn a lot because the technology is changing. The good news is we're in the room, helping not only OpenAI but a number of our other technology partners frame the needs that we have here. And rather than just waiting for things that can be bought on a subscription basis across the entire university to solve, say, a challenge with getting through math using AI, or getting through biology using AI, we're actually designing our own approaches to that work and offering an opportunity for startups, as well as the usual leaders in the space, to join us in that journey. Just as an example, ASU has created a fabulous new way of learning STEM subjects, which is called Dreamscape Learn.
It's an immersive, fully virtual reality environment that allows students to experience the discovery or exploration of solving a major crisis, at the species level or the global level, and to solve those issues as a team, as a group of explorers, while also learning the science associated with them. Well over 20,000 of our biology students, for example, have been through that process, and now we're working on how to introduce AI into that environment, so that every student, or at least every student group, has in their virtual reality experience essentially an expert, or a study partner, or a creative partner, who can help prompt them. And we are tuning these so that it's not about getting the answer, because we won't give the answer; that's how the machines are being tuned. But they can actually help keep students engaged, because we know a couple of things about how students learn. Student engagement is one, and students love the virtual reality immersive experience. Keeping them engaged is about prompting them on a regular basis, and so the AI is able to serve really as a prompter. And over time, it's about helping students take more ownership of their own learning journey by working with their AI tutor to help solve the problems at hand, and not just waiting for tasks to come down from the professor.

Rhea Kelly  24:12
Have there been any early lessons learned so far? Particularly on the technology side, in terms of managing a project like this, or even just specific to the technology itself?

Lev Gonick  24:25
Well, yeah, here's the one I've spoken a lot about to my peers across the country, and around the world: We have two choices. We can try to graft onto our existing data teams, our UI/UX teams, our educational technology teams, two or three people who know something about AI, and kind of say good luck. And I think that's what most universities are going to do, to be honest with you. Or, and this is what we've actually learned and something to share, you can do what we did and create a dedicated AI Acceleration team. Of course, it takes a university perhaps of our size to establish a team of nearly 20 of these professionals to work on it. But I do think that the only way to accelerate through this work is to own more of it as the university, and certainly for our large schools, and those who are going to try to differentiate their offerings in the AI era, dedicated teams are going to be hugely important. The second lesson learned is, try as quickly as possible to hitch your wagon to the most innovative faculty groups that are there. For example, next week we will be announcing that ASU will be issuing the first degree in the country in AI and entrepreneurship, in our W.P. Carey School of Business. That allows us to keep that ecosystem and pipeline of development focused not just on playfulness and experimenting, but also on being in service and support of, essentially, what our faculty and ultimately our student success need to be geared towards. W.P. Carey has always been one of our great champions in this kind of technology-driven work, and we expect many other schools to follow suit, as well as many other universities to try to do so.
But the question is, are you, as the enterprise technology team, positioned to help? Or are you going to basically say, we don't have capacity to do that work, because we're overstretched doing all of the legacy care and feeding? I think that's the challenge to technology teams: You have to find a way to be of service to where the campus is, but always be prepared to support the campus in where it's going.

Rhea Kelly  27:14
That's something I think we don't hear about very much, and definitely not enough: the idea that you're going to have to have a dedicated team for AI innovation support.

Lev Gonick  27:27
There was a time when what became known first as a course management system, and then as a learning management system, was the preserve of one person on a team who could figure out how to do a little bit of line coding, and so that person became the administrator of the original set of tools that, you know, three faculty were using to create a course. Well, at most schools that's evolved into teams. The team may not be large, but it's usually more than one person who, by the way, has six other jobs. There will be a time, and I think it'll be in the relatively near future, when AI becomes foundational and central to the redesign of much of the way the university works. How we're organized to work is, I think, really important for technology leaders to lean into, rather than hoping that this is a passing fad, that AI is actually going to just go by the wayside. I do think it's been about 25 years since we've had a significant disrupter to the way we're organized in service of supporting our campuses. If we reflect back on what changes we needed to make then, I think technology leaders would be well advised to make a run at figuring out how to prepare to shift and reorganize, to be of service as each university and college tries to meet the opportunities and the challenges of the AI era.

Rhea Kelly  29:17
So what would you say are your long-term goals for generative AI? What do you have your sights set on, on that sort of, I don't know, 10-year horizon?

Lev Gonick  29:27
I can't see 10 years out. I think some people think I can; from time to time, I've had an opportunity to offer some vision. I do think there is a very long arc of transformation underway, but that's probably more on a 50-year kind of time horizon. I do think our human relationships to machines are, now, by focus and intentionality, and by market forces and technology forces, going to be the central issue for us to be solving. It's part of the maturity of the technology curve that is now knocking at the door. Actually, I don't think it'll be knocking at the door; I think it's ultimately going to be basically breaking down the doors of our institutions. So certainly, figuring out how we as humans organize ourselves, how our teams are organized to help students with their work and our research labs with their scientific discovery, I think all of that is going to be in radical change mode for the next three to five years, certainly. And there is a real gap, a real challenge here, which is that those universities that are able to make the shift will be in a different category than those who are unable to make it. I don't mean this to be shrill, but I do mean it to be a clarion call: Be careful how much internal debate you want to have before you start experimenting. Because you're only going to learn through experimenting what you need to do for the next three to five years. And no time is going to be the perfect time. Making sure that you have the right guardrails and security, and that people become knowledgeable through trial and error, is probably what needs to happen in the very near term.
But over the long term, there will be very robust, personalized, customizable tutoring for students and for people, whom we call here at ASU "learners." That is to say, people will not only see YouTube as a place to learn how to fix your washing machine, but will have a built-in tutor that can actually support you as you go try to fix it, or in any of a million other ways in which we've now simply become dependent. What did we do before there was a YouTube video on how to fix things? The same kind of conversation is going to come up in the very near future about the ways in which we've come to depend on AI. I think search is going to be completely turned upside down; that's already well underway right now. Given the ways we've become dependent on search, I think search and generative AI are entering a very interesting and disruptive moment. Lots of business models are going to get broken and reinvented, and new ones will emerge along the way. I think certainly all services to students are largely going to have options for a generative AI experience. We know here at ASU that students are voting with their feet. There are certain groups of students who absolutely want human touch, and there will always be that here at ASU, but students are very comfortable triaging the first tier of questions they have by interacting with machines. They've been conditioned to do so in their consumer lives. And to be honest with you, they're also sometimes happier to interface with the machine, where they don't have to worry about whether they think it's a silly question, or how many times they need to have the question answered over and over again until they better understand it. Those are all places where machines have unlimited patience and unlimited opportunities to provide examples.
Those are all the different ways I think we as humans and machines are going to evolve over time, and it's really all about augmenting our human experience.

Rhea Kelly  33:53
Thank you for joining us. I'm Rhea Kelly, and this was the Campus Technology Insider podcast. You can find us on the major podcast platforms or visit us online at campustechnology.com/podcast. Let us know what you think of this episode and what you'd like to hear in the future. Until next time.
