Campus Technology Insider Podcast April 2023

Listen: How Generative AI Will Enable Personalized Learning Experiences

00:08
Rhea Kelly: Hello and welcome to the Campus Technology Insider podcast. I'm Rhea Kelly, editor in chief of Campus Technology, and your host.

Imagine a learning environment that, much like a Star Trek Holodeck, changes based on a user's individual requirements. It understands the learner's strengths and weaknesses, anticipates next steps, recommends the best learning content, moves at the learner's pace, and removes unnecessary friction within the mechanics of learning. With today's advancements in generative AI, that vision of personalized learning may not be far off from reality. For this episode of the podcast, we spoke with Dr. Kim Round, associate dean of the Western Governors University School of Education, about the potential of technologies like ChatGPT for learning, the need for AI literacy skills, why learning experience designers have a leg up on AI prompt engineering, and more. And get ready for more Star Trek references, because the parallels between AI and Sci Fi are futile to resist. Here's our chat.

Hi, Kim. Welcome to the podcast.

01:21
Kim Round: Hi, Rhea. I'm so happy to be back and chatting with you.

01:25
Kelly: Yeah, I think you're actually our second repeat guest, so one of the first. I love it.

01:31
Round: Oh, my goodness, I feel so honored. And I love what we're talking about today.

01:36
Kelly: Yeah, so I was thinking that with your background in learning experience design, I'd love to hear your take on the potential of technologies like ChatGPT, just from that perspective.

01:48
Round: Yes, I've been doing a lot of thinking about it lately. And of course, every time you jump on LinkedIn, or get notifications from the major publications like Campus Technology, there's a lot of discussion about it. So I'm going to preface my comments today with the caveat that the machine is learning while we're learning about the machine. It's almost like building the plane while we're flying it. But I'm endlessly curious and cautiously optimistic.

I think we need to really understand AI's benefits and limitations. People have talked about that, certainly, as we get into the user privacy and information side of things. I know a lot of campus IT people listen to this podcast, and that would resonate with them, because how machines collect and curate and learn their way into data can raise some ethical concerns.

But now that I've talked about the risks, let me talk a little bit about the things I'm excited about, understanding that we're really in the early days of the evolution of generative AI, and we as human partners are in the early days of learning how to best leverage it, both as educators and as learners. We've seen some hype cycles come and go around various technologies, but I think this is a really interesting one. Akin to the web and the Google search engine, in my mind, this has the potential to disrupt learning, in a good way, and also to cause humans to upskill for the workplace. So even though responsible use of generative AI is still evolving, there's the potential to remove unnecessary friction from the learning experience, so that students can focus on higher-level critical thinking, and to really level the playing field across the board.

Let me share a personal anecdote, and I'm going to date myself a little bit. Back when I was an undergrad, laptops had not yet emerged and the web was not quite available, and I was doing my term papers on an electric typewriter. How much of my time was really being spent on the mechanics of learning, versus the critical thinking that perhaps my professor envisioned in assigning that term paper? As I typed into the night, worried about running out of Wite-Out, and really revisited my time management skills, I'm not sure I was having the learning experience my professor envisioned for me.

So I think generative AI has the potential to push student learning higher, moving learners from remembering to understanding, applying, analyzing, evaluating, and creating — that Bloom's Taxonomy piece — and to help learners become better researchers, better curators, and better decision-makers. That really moves the learning experience from a transactional "sit and get" to a transformational learning experience. So that's the potential I'm excited about, and we'll have a lot to watch as things emerge.

05:20
Kelly: Yeah, for sure. It's so interesting to watch everything develop. And I like how you referenced the electric typewriter — do you see ChatGPT as part of the natural evolution from the typewriter through, you know, the computer?

05:39
Round: I do. And again, I feel like I'm really dating myself in talking about it, but yes, I think it's going to become that critical skill set, just as the millennials learned how to use the web and discriminate among the information they were being given, and that became infused in their learning experiences. I think this generation, and the next generations to come, are going to show us new and interesting ways to build those skill sets. And yes, I absolutely believe it's part of the future of education. I also believe it's the future of work. So there are going to be some really interesting implications there.

06:31
Kelly: In a previous episode of this podcast, I actually spoke with Mark Schneider from the Department of Education's Institute of Education Sciences, and we were talking about ChatGPT. One of the things he was most excited about was AI's potential to personalize learning. And I know that's a topic that's right up your alley. So could you paint a picture of what's possible? What does an AI-enabled personalized learning experience look like?

07:04
Round: That was a great episode, by the way, Rhea — I really enjoyed it. But I'll give a little bit of background around my passion for personalized learning, because I've been thinking about it for about 30 years, actually. And it really stemmed from — I think everybody can look to their family, look to their friends, and see how leveling the playing field and providing equity is one of education's really wicked problems, to use a design thinking term.

When I think about the journey I went on: 30 years ago, there wasn't a lot known about high-functioning autism, and we were navigating our oldest son's entry into school. We knew something was a little different there, but he performed really brilliantly at home, and things were personalized for him there, right? But when you get into a mainstream organization, whether it be K-12, or higher ed, or the workplace — in this case, we became aware that the school saw him differently. And it was truly a mirror of how the mainstream would see him as well. The story ends extraordinarily well; he was very lucky, I was very lucky, and we were able to support him by curating that personal learning experience with tutors and speech therapists and the heroes that really showed up for him.

So two issues really came to mind for me during that time. First, if we don't personalize, access to opportunity and fulfilling potential is really at risk — and not just in the first grade; it follows through higher ed and the workplace. That's one of the reasons I'm so passionate about WGU: one of our tenets is that we believe in the inherent worth of every individual, and we provide pathways to opportunities. Second, I began to think about other people with varied needs who don't have access to the resources we did. And of course, educators are working with so many students with different needs at a time — how might we help them? So support for neurodiversity is one issue, along with cultural approaches, student preference, social emotional learning, and diversity, equity, and inclusion — these mindsets are all necessary to really level the playing field.

So for me, when I think about a personalized learning environment, I think about the learning environment as a partner that seamlessly leverages learner superpowers while scaffolding their needs. It can anticipate the next step, recommend the best content, and move at the learner's pace, supporting those cultural, SEL, DE&I, and neurodiversity pieces, and removing that unnecessary friction in the mechanics of learning, so the student can truly focus on the task at hand. The interface actually disappears, while the adjustments are happening in the background. Whether that looks like a website that people log into, or a virtual or augmented reality they experience within a classroom, or even — if you're a Star Trek fan — the Holodeck, walking into a room that changes the environment based on the user's need: that's the vision I have for a personalized learning environment. I'm sure we'll see some really exciting ones emerge.

10:44
Kelly: And do you think AI will be able to analyze the data coming out of a student's coursework or other indicators, and then shape the learning experience automatically, in a way?

11:03
Round: Yeah. I think it's the marriage of AI and learning analytics. That's a really powerful combination, where the AI can take what the learning analytics already knows, and present and create content based on what the learner really needs. And that's all happening in the background, unbeknownst, perhaps, to the learner. But we have to do that right. We have to make sure the information we're presenting is accurate.
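To make that marriage concrete, here is a minimal illustrative sketch in Python — the learner profile fields, the analytics signals, and the prompt wording are all hypothetical assumptions, not a description of any particular product — showing how signals a learning analytics system already holds might steer what a generative model is asked to create:

```python
# Hypothetical sketch: learning-analytics signals steering a generative model.
# The LearnerProfile fields and prompt wording are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class LearnerProfile:
    weakest_skill: str    # e.g., inferred from assessment item analysis
    reading_level: str    # e.g., estimated from prior submissions
    interest_area: str    # e.g., a learner-stated preference

def build_content_request(profile: LearnerProfile) -> str:
    """Turn analytics signals into a content request for a generative model."""
    return (
        f"Create three practice problems on {profile.weakest_skill}, "
        f"written at a {profile.reading_level} reading level, using "
        f"examples drawn from {profile.interest_area}. "
        "Include worked solutions."
    )

profile = LearnerProfile(
    weakest_skill="interpreting scatter plots",
    reading_level="ninth-grade",
    interest_area="wildlife ecology",
)
# In a real system this prompt would be sent to a generative model, and the
# response would be checked for accuracy before it ever reached the learner.
print(build_content_request(profile))
```

The design point from the conversation is that all of this plumbing stays in the background: the learner sees only the personalized content, while the profile-to-prompt adjustments remain invisible.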

11:40
Kelly: It's almost like a textbook that reinvents itself for whoever's reading it.

11:45
Round: Absolutely.

11:47
Kelly: I mean, this technology is developing so quickly. That idea of a textbook that is geared toward every individual — are we close to that becoming reality? Or is it a long way away? What's your sense of the timeline?

12:06
Round: I don't think it's a long way away. I think we already have the building blocks, and we're refining them. And forms of this are already in play. What was it — seven or eight years ago, there was some really great press coming out about Jill Watson at Georgia Tech. She was the class chatbot, and then the students found out that she was actually an AI. So we've seen some pieces of this emerging already.

To bring us to the next level, as I said, it's really that marriage of learning analytics and generative AI — and AI is moving so quickly. With only the learning analytics approach, it might not be as easy to present course content with those multiple personalized pathways. It really takes the AI to come in and curate and present and synthesize. So as generative AI matures, I think it can recommend that content.

But some of the limitations we're bumping into, that we need to solve for: the accuracy is still evolving. It seems, as I've worked with it and read through different resources, that if we see it as an entity, when it doesn't know something, it seems to make something up. So that's a little bit difficult. AI doesn't understand human beings well yet. It has to learn more about adapting to cultural aspects, and perhaps reading levels, or even where somebody is in the world.

And again, going back to that Star Trek reference, which I hope many of your listeners will understand, I think a lot about Data in The Next Generation, and the fact that even though he's self-aware — he's an android, he's self-aware — he doesn't know what it means to be human. But his programming has pushed him to try. And so he's paired with human partners to try to solve problems. That team of, say, Data and Picard, or Data and the other crew members, becomes a really powerful combination as we're solving problems. So there's that. And then I think also, as humans, we need to learn how to ask the questions in the right way, to fully harness the power of the AI partner.

14:50
Kelly: Yeah, well, the thing about Data was, a recurring theme for him was learning how to use language, right? Like how to understand idioms or figures of speech, you know, things that aren't…

15:08
Round: Telling a joke!

Kelly: Yeah.

Round: Remember when he tried to learn how to tell a joke?

15:13
Kelly: I do. That was so funny.

Round: The social connection.

Kelly: And that's such a parallel with ChatGPT, because it's all about making the language sound natural — understanding and communicating in a natural way.

15:31
Round: Yes, absolutely.

15:34
Kelly: How do you think generative AI is changing the way we think about and teach digital literacy? Because I think maybe we need a term called AI literacy. Can you talk about how that idea of digital literacy might be changing?

15:54
Round: I love thinking about this. One of my heroes is Dr. Chris Dede from Harvard, and he's also the co-PI for the NSF AI Institute for Adult Learning and Online Education. They're doing some really great work. And the way he puts it is, we need to teach students not to let anyone or anything do their thinking for them. That's, I think, a huge part of the literacy question.

So practically — and probably a little bit obvious to those of us who are exploring AI — we can ask the machine to generate something, but we have to look at these outputs with a critical eye. And we need to learn more about prompt engineering and how to craft the ask, because sometimes the output has limited use. But we've pivoted in the past. When we think about Google: as humans, we needed to learn to interact with search engines and write keyword searches to get the types of results we wanted. Professions have emerged around search engine optimization and reverse engineering, and this may very well be a new profession.

One point I think is really important in how we educate people about AI literacy, in terms of scaffolding from that search engine experience, is that search engines give us results we can choose from, whereas an AI is going to make those curation decisions for you. It gives us an answer, and sometimes that's accurate, and sometimes it isn't.

If it's okay, I'll take you through an interaction I had this morning, because I wanted to test this a little bit. I was working with ChatGPT, and I wanted to summarize Dr. Dede's major contributions to the AI research field — so I was asking the AI about AI research. I was really clear: I said, okay, five paragraphs, let's target this writing toward higher education information technology professionals. And I noticed that in the first iteration, the AI left out the work he's doing as co-PI of the institute — a major contribution — and also left out the research and work focused on faculty artificial intelligence partners, which I think would be really interesting to higher ed IT professionals. We can talk more about that later. But I had to ask follow-up questions, and as a human partner, I need to know to ask those questions. So we can't let AI do our thinking for us. And as generative AI matures and learns, we'll need to keep that in mind.

19:08
Kelly: I feel like crafting the right prompt for ChatGPT requires thinking in a particular way that's not necessarily intuitive. Do you have any advice on how to get what you want?

19:23
Round: Yeah, you know, in this way I think faculty and learning experience designers and assessment developers may actually have a leg up, because they're already used to spelling out learning objectives or competency descriptions — what the final product really needs to look like. In my previous example of querying ChatGPT this morning, I noticed I needed to be clear and concise, use very specific examples — you know, "I want this written at this level" — and also understand that I was going to need to give it multiple iterations of feedback to help the AI learn. So I think it's a really different skill set than people are used to. We need to be really detailed in those requests to get close to what we want for output. And we may actually see that new profession emerge — that prompt engineer position, or prompt reverse engineering position. Again, I do think that our faculty and our learning experience designers and assessment developers may actually have a little bit of a leg up in this space, and have a lot to teach us.
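To illustrate the pattern Round describes — spell out length, audience, and subject up front, then iterate with corrective feedback — here is a minimal sketch assuming the OpenAI Python client; the model name, prompt wording, and follow-up request are hypothetical examples, not a prescribed workflow:

```python
# Illustrative sketch of detailed prompting plus iterative feedback,
# using the OpenAI Python client (pip install openai). Model name and
# prompt text are assumptions for illustration only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Spell out the ask the way you would a learning objective:
# length, audience, and subject are all explicit.
messages = [{
    "role": "user",
    "content": (
        "In five paragraphs, summarize Dr. Chris Dede's major contributions "
        "to AI-in-education research. Target the writing toward higher "
        "education information technology professionals."
    ),
}]
draft = client.chat.completions.create(model="gpt-4", messages=messages)
print(draft.choices[0].message.content)

# Iterate: keep the model's answer in context and redirect it,
# much as you would give feedback on a student draft.
messages.append({"role": "assistant",
                 "content": draft.choices[0].message.content})
messages.append({
    "role": "user",
    "content": (
        "Good start, but you left out his role with the NSF AI Institute "
        "for Adult Learning and Online Education and the work on AI-enabled "
        "faculty assistants. Please revise to include both."
    ),
})
revised = client.chat.completions.create(model="gpt-4", messages=messages)
print(revised.choices[0].message.content)
```

The point is less the specific API than the habit: a precise initial request, followed by explicit corrective iterations, tends to converge on usable output far faster than a vague one-shot question.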

20:47
Kelly: So speaking of learning design, I wonder if you've come across some interesting ways to tap into the benefits of generative AI in a class scenario — like for students?

20:59
Round: Yeah, you know, I really like the broader potential of learners engaging with a generative AI as a thought partner — again, that whole partnering aspect. And learners will need to know where the AI begins and ends, and where they begin and end, in that partnership. But for some more practical kinds of things, we're seeing some interesting things emerge around tutoring for writing, giving feedback on drafts, brainstorming and generating ideas, helping with citations — a lot of my grad students would probably appreciate that quite a bit; I know I would have — and guidance on research methods.

It's interesting, because I saw an article a few months ago asking, is this the end of the college essay? Why do we need the college essay anymore if AI can actually write it for you? I think essays are not going away, but they're pivoting, and writing assignments can leverage the AI partner really thoughtfully. What's the point of that assignment? It's to build critical thinking skills about a discipline; it's to build those communication skills. Some sample assignments might be that you review the accuracy of AI-generated text, or you track down the sources behind that AI-generated text. And AI doesn't do a great job with context quite yet — so can the learner provide that context? How does this work in these other areas? I think this really blends into the future of work, because we're going to be asked to upskill to do the things that AI does not do well, and this is an example of infusing that approach into the assessment.

There are other things around math practice problems and evidence-based modeling. I was actually playing with that a little bit over the last week or so, asking the AI questions that require synthesis from multiple sources. I asked it, how might climate change impact coyotes in New Hampshire in 2030? And I asked it to model that, because I live in New Hampshire in the woods, and there are a lot of coyotes that I get to see. And how does this differ from other areas in the country? Some of that information was a little vague right now, but you can see how that could mature over time.

Just to get a little bit deeper on what it can also do for faculty, because I come from a faculty mindset too: I taught for a lot of years at different higher ed institutions, seven of them at Harvard University's Extension School and Brandeis University. And I was always seemingly drawn into tasks that were repetitive, or I wasn't available when a learner needed me, and I would feel bad about that. An AI assistant can really handle those lower-level tasks while I focus on the strategic direction and design of the learning experience. The institute that Dr. Dede is helping to lead is doing some work on AI-enabled faculty assistants, tutors, social agents that connect students with study partners, virtual teaching assistants, and so on. So that's an exciting evolution for faculty.

To follow on to that Star Trek connection: I really did think about Data when I started to look at AI, and I saw that Chris Dede's institute actually uses this parallel as well. He talks about the idea that there's a thought process called reckoning, which is focused on knowledge, and then there's human judgment. Data is the reckoning, and human judgment is Picard, taking that information and figuring out the context. So AI might find ways for a professor to generate meaningful feedback, but it's the professor who will deliver it in a way that really respects the student's context — AI doesn't really understand that. So hopefully those are some ideas.

25:58
Kelly: Yeah. Yeah. Or doesn't understand that yet, I suppose.

26:03
Round: It doesn't understand context yet. It's still learning. And I wonder if it will fully understand context as time goes on. But that is a challenge now.

26:20
Kelly: If AI becomes integrated into a lot of different technology tools that are part of the student experience, how will students need to think about when and when not to use AI? How do you discern when AI is not the right tool for a particular task?

26:40
Round: Well, they'll need to build their muscle — they'll definitely need to build their AI literacy muscle. I think, Rhea, you and I should just coin that term on the podcast: AI literacy. But they'll need to discern when AI is not the right tool for the task.

You know, I keep going back to a quote that I think is really appropriate here. As I've navigated my career in higher ed, worked with colleagues and students, even raised my family, I think about a quote about the difference between knowledge and wisdom that I've always appreciated: "Knowledge is knowing what to say. Wisdom is knowing when and how to say it." I think that's the difference in how people hear the information and may be willing to partner with you — that social piece — toward the greater good. And when it comes down to which competencies require knowledge and which learning challenges require wisdom, that's where students will need to really discriminate between the two. AI is not a great fit for skills like strategy, social skills, creativity, and perhaps empathy right now. Learners will need to increase that literacy; they'll also need to be able to determine how knowledge potentially generated by AI can be one data point that helps with the wisdom pieces — it's not the only input.

So I have a lot of faith in this generation and the generations to come. Just as millennials learned digital literacy skill sets around the web, I think this generation of learners will become very astute in the differences between knowledge and wisdom. And they'll probably have a lot to teach us as well.

28:48
Kelly: Yeah. So you mentioned Chris Dede's work. Could you share any other resources you find useful about generative AI or teaching with AI?

29:01
Round: You know, there's some great academic thought leadership from the folks at the Global Research Alliance for AI in Learning and Education, and also the National AI Institute for Adult Learning and Online Education — that's Chris Dede's work. At WGU, we have our learning community, where we're talking about AI. We also have a Master's in Learning Experience Design that we just launched back in the summer, and that was really designed to help people navigate these kinds of disruptive technological events and create those transformational learning experiences. It's really part of our DNA. So feel free to check that out, and we'll look forward to what's next.

29:58
Kelly: Thank you so much for coming on. I really enjoyed talking through all these things with you.

30:03
Round: Oh, thanks, Rhea. It's always a pleasure, and there's lots to look forward to — and lots to really think critically through, too, as we consider the responsible use of AI, and how we don't let the tool lead the design or the learning experience, but leverage the tool in the right way to create those transformational learning experiences.

30:35
Kelly: Thank you for joining us. I'm Rhea Kelly, and this was the Campus Technology Insider podcast. You can find us on the major podcast platforms or visit us online at campustechnology.com/podcast. Let us know what you think of this episode and what you'd like to hear in the future. Until next time.
