7 Questions on Generative AI in Learning Design

Open LMS Adoption and Education Specialist Michael Vaughn on the challenges and possibilities of using artificial intelligence to move teaching and learning forward.

The potential for artificial intelligence tools to speed up course design could be an attractive prospect for overworked faculty and spread-thin instructional designers. Generative AI can shine, for example, in tasks such as reworking assessment question sets, writing course outlines and learning objectives, and generating subtitles for audio and video clips. The key, says Michael Vaughn, adoption and education specialist at learning platform Open LMS, is treating AI like an intern who can be guided and molded along the way, and whose work is then vetted by a human expert.

We spoke with Vaughn about how best to utilize generative AI in learning design, ethical issues to consider, and how to formulate an institution-wide policy that can guide AI use today and in the future.  

The following interview has been edited for length and clarity.

Campus Technology: Could you tell us a little bit about yourself, your role at Open LMS, and your background in higher ed?

Michael Vaughn: As an adoption and education specialist at Open LMS, I work with our clients in training and development capacities. Sometimes that's onboarding, just bringing folks into the system; sometimes that's courses in our academy site; sometimes that is generating new training materials. I have a lot of leeway and freedom in what I work on, which I really appreciate. And so that's where I've been able to dedicate some time and energy toward learning a little bit more about AI, some of the tools and platforms out there, and how we might communicate responsible use of those tools to our clients.

Prior to that, I worked in instructional technology and educational technology for over 15 years. I got my start at Cuyahoga Community College in Cleveland, OH, back in the Blackboard 6 days — I think it was WebCT Vista around that time — doing support and training for folks there with the LMS. I moved over to Kent State University for a spell, where I worked on really small teams to build fully online courses alongside faculty who were serving as subject-matter experts. After that I joined Elon University, where I was an instructional technologist for the better part of a decade and co-founded the university's first makerspace. I also served on the advisory board for the REALISE grant at Radford University, an initiative funded by the Howard Hughes Medical Institute to promote diversity, equity, and inclusion within the sciences.

CT: Where do you see the biggest potential for the use of technologies like generative AI in learning design?

Vaughn: Where AI really thrives is in automating tasks that can typically be cumbersome, as well as finding patterns in large sets of data that would be difficult for us to uncover quickly. The metaphor that I tend to use with folks is: If you hired someone to come to your house and build a deck, and they showed up with a screwdriver and started putting in all the screws by hand, you would probably be a little annoyed — especially if you're paying them by the hour. It's going to get the job done; it's just going to take a while. No one would look at that carpenter and say that using a drill is somehow inappropriate — it makes perfect sense that you would use a tool that does the exact same thing much faster. It's more efficient, it's a better use of time, and it allows them to focus on other things. You get your deck faster, and they get to move on to another job faster.

That's where I see AI fitting in within the world of higher education: If we're looking at tasks that are historically very time consuming, we can start to use AI and generative AI platforms to dramatically speed up how we do those things. To give a specific example, if I am teaching two sections of the same course, I don't want to have the students in one section take a quiz on Monday and then use the exact same questions for the other section on Tuesday. With generative AI, I could take that question set and use a tool like ChatGPT to reword the questions so that they require a different response, even though they're still testing the same concept or idea. Now I've taken something that historically would have taken me a really long time to do — rework an entire assessment — and achieved it in a matter of minutes through the use of a generative AI tool. And since I am the expert in the subject matter, I can assess the results that the AI is outputting to determine whether or not they're actually accurate and worthwhile.
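To make that workflow concrete, here is a minimal sketch of how the rewording step might be scripted, assuming the OpenAI Python SDK and an API key in the environment; the model name, prompt wording, and sample questions are illustrative only, and, as Vaughn notes, the output still needs review by the subject-matter expert.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative question set for the second section of the course
questions = [
    "Define operant conditioning and give one everyday example.",
    "List the three stages of memory in the order they occur.",
]

prompt = (
    "Reword each quiz question below so it tests the same concept "
    "but requires a different response. Return one question per line.\n\n"
    + "\n".join(questions)
)

response = client.chat.completions.create(
    model="gpt-4",  # any capable chat model; the name here is an assumption
    messages=[{"role": "user", "content": prompt}],
)

# A human expert reviews these before they go into the LMS
print(response.choices[0].message.content)
```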

Another example: I've used ChatGPT to generate course and training outlines, and to generate learning objectives that clearly communicate to learners what they're meant to be learning within a specific module. And the interesting thing about it is that you essentially have a co-writer who is never upset with your feedback. When I asked ChatGPT to generate some learning objectives, I had to tell it, "You're on the right track, but those objectives are not measurable in any way, shape, or form. I'm looking for an action verb that will indicate that learners have actually demonstrated that they've learned something." I gave it some examples of what those verbs might look like, and it rewrote the objectives in a way that was very competent. Having an AI companion that knows how to write in multiple formats means I don't necessarily have to spend my time trying to remember: What are the elements of a measurable objective? Are we framing these like SMART goals?
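The feedback loop Vaughn describes maps onto a simple multi-turn exchange. The sketch below, again assuming the OpenAI Python SDK, feeds the first draft back along with a correction asking for measurable, action-verb objectives; the prompts and model name are placeholders, not a record of his actual sessions.

```python
from openai import OpenAI

client = OpenAI()
messages = [{"role": "user", "content":
    "Write three learning objectives for a module on research ethics."}]

draft = client.chat.completions.create(model="gpt-4", messages=messages)
draft_text = draft.choices[0].message.content

# Keep the conversation going: return the draft with specific feedback,
# the same way Vaughn corrected objectives that weren't measurable.
messages += [
    {"role": "assistant", "content": draft_text},
    {"role": "user", "content":
        "These objectives are not measurable. Rewrite each one to start "
        "with an action verb such as 'identify', 'compare', or 'critique'."},
]

revised = client.chat.completions.create(model="gpt-4", messages=messages)
print(revised.choices[0].message.content)
```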

CT: How can AI tools be used to help improve accessibility when building course content?

Vaughn: AI transcription and subtitle generation have gotten consistently better. I love to use the transcription feature in a little-known app called Microsoft Word: I can upload an audio or video file into Word, and then it comes up with its best approximation of a transcript. It recognizes multiple speakers — this is pretty common for most tools like this now — and I can copy the transcript right into Word and edit it in an environment that I'm familiar with. Now I have a full-blown transcript, I have the audio or video file as part of the document, and I can copy and paste that transcript into a video streaming service, where nearly every service will automatically turn it into subtitles or captions for my video. It saves so much time at this point that it's almost like, why wouldn't you do this?
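For those who want to script the same transcript-to-captions step outside of Word, here is a minimal sketch assuming the OpenAI Whisper API; the file names are placeholders, and the generated captions should still be proofread before publishing.

```python
from openai import OpenAI

client = OpenAI()

# Transcribe a lecture recording; response_format="srt" returns caption
# text with timestamps that most video streaming services accept as-is.
with open("lecture.mp4", "rb") as audio:
    captions = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio,
        response_format="srt",
    )

with open("lecture.srt", "w") as srt_file:
    srt_file.write(captions)
```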

I've also seen some excellent applications using image recognition software, if I'm having trouble coming up with an alt text description or a caption for an image. It's hard for me, as someone who's always been a sighted person, to think of how to describe an image to someone who cannot see. And having that extra bit of AI — being able to leverage those tools to dramatically reduce the amount of time that it takes to create accessible content by default — it's a wonderful gift. I don't think nearly enough people look at it that way. It's incredible that we're able to do this.

I will end with the caveat that we should not just let the AI do everything. While it's a competent writer, it's not a great writer. I've seen plenty of issues with generated captions and subtitles being accidentally inappropriate or misleading, and that would be very confusing to someone who can't also hear the audio. But you can certainly use it to do the bulk of the work.

CT: What's the best way to engage faculty in using these tools?

Vaughn: It feels like we have this conversation every year with whatever new thing emerges. But AI is one of those ed tech developments that I don't think is a fad. I do think we'll still be talking about this in a year — we'll be talking about this for a long, long time. I don't see it going anywhere.
 
The example I love to give is that Instagram, one of the largest social networks on the planet, took two and a half years to reach 100 million monthly active users. ChatGPT did that in two months. It's one of the most rapidly adopted technologies in history: Two months to get to 100 million monthly active users is incredible growth. I don't see students dropping AI anytime soon. So there is an incentive to learn about these tools, or at least how they work, over the longer term, even if you're not going to use them directly.

I do have a lot of empathy for instructors, though, because as instructional designers and educational technologists, we seem to come to them every year and say, "Here's another thing you have to learn to use, on top of everything else going on in your discipline and in the field of teaching and learning." You're always going to have some folks who are understandably very resistant to any sort of change. The most realistic approach is to reassure faculty that a) you have tools in place to help address some of their fears about AI, and b) you are present and available to help onboard them into those technologies if and when they're ready.

CT: What are some of the ethical considerations around using generative AI in learning?

Vaughn: With generative AI, the big question is: Did learners actually create this on their own? There are also some larger ethical considerations to take into account if we're going to choose to engage with generative AI tools. Are they accurate? Are they actually giving you responses that are factually correct? There have been many well-documented instances of chatbots — I think Google Bard got in some hot water over this — getting even basic math calculations wrong and then doubling down on being wrong. If you have a learner who doesn't know how to critically analyze a response from an AI, the way an instructor who is an expert in their field or discipline would, then you are in a spot where the AI is doing more harm than good.

These tools are also trained on very, very large data sets. And when you engage with these tools, you are training them to become better; you are feeding more data into the system. Very often you are doing that for free, or you may even be paying the service for access while also providing free labor to improve a product that is inevitably going to be turned around and sold somewhere else. That is certainly a concern, because we do care a lot about intellectual property and copyright, right? You would be really upset if someone took one of your articles and republished it with their name on the byline, with no attribution, and just pretended that they wrote it. The same is true for these generative AI tools.

One prominent example from last year is the mobile app Lensa. The way Lensa works is that you upload a bunch of photos of your face, and it generates these cool AI avatars. I got caught up in it — I thought it was really cool. My face is out there on social media, so I had no problem uploading 20, 25 pictures my face for this application to analyze and generate some avatars for me. A week or two after I did that, it came out that Lensa was being accused — very credibly, by multiple artists — of having stolen their artistic style. And Lensa was not clear about how they trained their AI model and what art they used to do that.

The last thing that I would mention about generative AI tools in particular is that you can usually access them for free, but if you want quality results, you have to pay for access. For example, I subscribe to ChatGPT Plus for $20 a month. In exchange for that $20 I get access to the newest model of GPT, GPT-4, which has passed the bar exam, two of the three GRE exams, multiple AP exams, the written exams to become a sommelier — which is a little ridiculous. It's even met the passing thresholds for most of the USMLE, which is the licensing exam for doctors in the United States. This is an incredibly powerful model I get access to for $20 a month. And not only that, if ChatGPT is overwhelmed by all the free users, the folks who have free accounts lose access to the service temporarily, but I still get access because I'm a Plus subscriber. When I look at this from an equity perspective, we are in a spot where folks who can afford high-quality access to high-quality tools are going to be capable of doing really impressive things — and folks who cannot afford that will not. Equity and access are going to be long-term issues with some of these generative AI tools as they become more and more popular.

CT: What kinds of policies do institutions need to put in place around the use of generative AI?

Vaughn: For the past two months or so I've been working on a generative AI use policy. The idea behind the policy is to clearly communicate to folks what is and is not appropriate, and what is and is not safe. These tools hold incredible promise, and you would be losing a huge competitive advantage if you were to just outright ban everyone from using a tool like ChatGPT. If you put some guidelines in place for your folks to follow, they'll have a much better idea of: How is this used? How can we use it responsibly? That's why I believe in having a policy like this, so that it clearly communicates to folks what is expected.

In terms of overall structure, the policy begins with a scope and some basic definitions of generative AI, along with definitions of what is proprietary information and what should and should not go into an AI model. We're also trying to provide a lot of guidance for what is responsible usage. If I am a staff member at a college or a university, I should not be putting confidential or FERPA-protected information into any generative AI tool, period. There are certainly some exceptions; for example, if you're working with a vendor, and you have protections in place, and you know how that data is going to be used. But otherwise, treat the AI like a human being who doesn't work there. Would you give this information to them? If not, don't give it to the AI either.

It's important to give clear examples of what appropriate applications of AI would be — and also what the inappropriate applications are. I like to include the idea of a transparency clause: When should you be telling people that you're using AI? And also enforcement: What are you going to do when AI is misused? What happens if I break the rules?

The policy should include information on how to obtain training and support. If you're going to let people use these tools, if you're going to provide a policy for appropriate use and inappropriate use, then I think you need to provide a baseline amount of training for folks so that they understand these technologies at a very basic level. How do they work? When you're using them, what does that look like? Alongside that, there are multiple hard conversations that need to be had around the drawbacks of these technologies. The ethics of AI is very complicated.

And finally, we have a review and update section, noting that the policy will be regularly reassessed to make sure that it's still accurate.

CT: With this technology evolving so rapidly, as soon as you lay out a policy it could become out of date within minutes. How can institutions design these policies to be future-ready?

Vaughn: It may be that some of these technology-related policies need to be reviewed more frequently. I know a lot of organizations will look at their policies on an annual basis; I think an AI policy should probably be reviewed every six months, and it might not hurt to look at it every three months or so. I would also recommend that folks take a broader view when developing a policy like this. For example, a provision that says, "Don't use these tools to harass or bully someone else," covers an awful lot of use cases, including some things that might come down the road that we just can't possibly think of yet. And then when you're updating along the way, it might just be the definitions that need revisiting as the policy grows and evolves to meet new changes in technology.
