
Transcript

Campus Technology Insider Podcast August 2024

Listen: Streamlining Instructional Design with Generative AI

Rhea Kelly  00:07
Hello and welcome to the Campus Technology Insider podcast. I'm Rhea Kelly, editor in chief of Campus Technology, and your host.

As both instructional designers and adjunct faculty members at Northeast Iowa Community College, Shannon Brenner and Jordan O'Connell have a close-up view of generative AI from both perspectives. Gen AI tools have both transformed the course-building process and changed the game for interacting with students and assessing their coursework. For this episode of the podcast, we talked about generative AI's impact on instructional design and teaching, navigating acceptable use of AI, strategies for helping faculty make the most of AI, and more. Here's our chat.

Hi Shannon and Jordan. Welcome to the podcast.

Shannon Brenner  00:58
Thanks for having us.

Jordan O'Connell  01:00
Hi. Thank you.

Rhea Kelly  01:02
So to start off, I think it's always great to have you introduce yourself and kind of talk about your role. I understand you're both instructional designers but also adjunct professors, so I'm very curious about what that's like.

Shannon Brenner  01:17
All right, I'll go first. I'm Shannon Brenner. I am an instructional designer for Northeast Iowa Community College, and as you said, an adjunct faculty member. So in my full-time role as an instructional designer, I help faculty build courses and design courses. We also help them with instructional support, and we do a lot of trainings on instructional technology, AI, things like that, and we help the college with other initiatives as well — assessment and all sorts of other things. As an adjunct instructor, I teach courses in the communications area, primarily composition and also our first-year college experience course.

Jordan O'Connell  02:00
And I work closely with Shannon as an instructional designer at Northeast Iowa Community College as well. I started a year or two before her, kind of in the throes of COVID, and so that's how I was sort of introduced to the work and the importance of the work of instructional design. Longtime instructor, longtime community college instructor. I teach in different disciplines from Shannon: I teach in humanities, history, political science, yeah, in three, three different community colleges over the past 12 or 13 years. I've grown a lot, I like taking on new roles. And the instructional design role is neat because it allows me to sort of see behind the curtain, both of learning management systems like D2L Brightspace, but then I start to apply what I learned and see what others are doing too in their courses, within my own courses. And so it's challenged me to grow as an instructor as well.

Rhea Kelly  02:44
So we're here to talk about generative AI. In a broad way, how has generative AI changed your approach to instructional design?

Shannon Brenner  02:54
How much time do you have? It has changed everything. As far as instructional design goes, it's made a lot of what we do a lot more efficient, right? So before, when we were building questions, maybe from an instructor, into our LMS, which is D2L Brightspace, it was one question at a time. Now we are able to do that by having an AI tool create a CSV file that's already formatted correctly for us to import directly into our LMS, or by using tools like D2L's Lumi, which allows us to generate quiz questions based on content within the course. Both of those things make it much, much faster and easier to build content in a course, and it's also kind of expanded what we can do. So, for example, sometimes we hire adjuncts at our college, and it might take up until shortly before the semester starts to get them on board. And in the background, even without a subject-matter expert readily available, we can start to get some basic content together in a course, even in an area that we may not know much about. With the help of open educational resources and AI tools like Lumi, we can get quizzes built and maybe some basic discussion prompts and assignment ideas and things like that ready to go. And then once the instructor gets in, they can customize and tailor to their needs.
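[Editor's note: as a rough sketch of the CSV workflow Shannon describes, here is a minimal Python example that writes AI-generated multiple-choice questions into an import file. The row layout only approximates Brightspace's bulk question import, and the question data is hypothetical; check your institution's D2L import template before relying on the exact field names.]

```python
import csv

# Hypothetical AI-generated questions; in practice these would come back
# from a chatbot prompt that asks for output in this exact structure.
questions = [
    {
        "title": "Photosynthesis basics",
        "text": "Which molecule captures light energy in plants?",
        "options": [("Chlorophyll", 100), ("Cellulose", 0), ("Glucose", 0)],
    },
]

with open("quiz_import.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for q in questions:
        # Each question is a small block of rows; field names approximate
        # D2L Brightspace's question-import CSV and should be verified.
        writer.writerow(["NewQuestion", "MC"])
        writer.writerow(["Title", q["title"]])
        writer.writerow(["QuestionText", q["text"]])
        writer.writerow(["Points", 1])
        for text, weight in q["options"]:
            # Weight 100 marks the correct answer, 0 the distractors.
            writer.writerow(["Option", weight, text])
        writer.writerow([])  # blank row separates questions
```

The resulting `quiz_import.csv` could then be bulk-imported into the LMS question library instead of entering each question by hand.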

Jordan O'Connell  04:22
And we saw it very early, I think when ChatGPT was released to the public, I don't know if it was version 3 or 3.5, we sort of recognized the potential here, particularly for our group of faculty. So Shannon was talking about adjuncts. But there are also hundreds, I believe, of concurrent faculty teaching high school classes that are connected and teach the same course, or the same course objectives, as the full-time faculty we see every day, but they're kind of on an island, and the way that we deliver classes is decentralized in that way. These AI tools provide us a means of kind of pulling everyone together, pulling their best ideas together, and building courses that will work for those late hires that we're currently dealing with here in late August, right, at our community college. Shannon and I probably answered the phone 100 times this morning with different questions and issues with some of those courses. So the technology is such that we can make good on long-term plans we've had to create courses that can serve a broader group of people, courses that people can just step into, but we haven't really had the technology, tools, or time to do that before. Because tools like D2L Lumi expedite this, suddenly things are possible. Our time goes further. We can do a lot more with the time that we have with different groups of faculty.

Shannon Brenner  05:33
Yeah, and part of our role is actually to provide LMS support to students and faculty. And AI has allowed us to sort of offload some of that to chatbots and things like that. So some of the easily answered questions we can have available on a chatbot, and then that frees up more of our time to do some more in-depth work.

Rhea Kelly  05:56
When you say, "This allows us to do more," what does "more" mean? Like, what are you spending your newly freed-up time on?

Shannon Brenner  06:07
Well, I think some of that time is spent being able to kind of dig in a little bit more with some of our faculty. So I'll go back to the adjuncts, well, not necessarily even the adjuncts recently hired, but some of our disciplines: because we're a community college, there are fields like agriculture or welding or other career and technology areas. Those instructors come from industry, and so they are great at what they do in industry, but they don't necessarily have any training in teaching. And so it really frees up our time to help them become better teachers and help them create materials like slideshows and lecture notes and, you know, key questions and things like that that they can put into their courses. We don't have to teach them how to do that stuff. We can just do it for them. So I think that's one way that we have been able to kind of better utilize our time.

Jordan O'Connell  06:57
Right, and to carry that example forward: before, we would spend a month working kind of offline, asynchronously, with an adjunct or even a full-time instructor who's trying to do this at night, exchanging e-mails, trying to build simple question libraries that will be secure and that will help them achieve their assessment aims. But instead of spending that month designing the perfect final exam, we create the perfect final exam in a tenth of the time, right, in partnership with the instructor, and then we start to talk about ways in which they can tackle other things they'd like to do in the classroom but don't know how: active learning, flipped classroom, recording lectures. We get to move past these introductory technical stages, where we're just basically teaching people how to use the LMS over and over again, or hand-holding them through it.

Rhea Kelly  07:44
And how has the response from faculty been? When you hand them a list of quiz questions created by AI, are they impressed by it? Or, you know, how much back and forth is necessary to vet the output of the AI?

Jordan O'Connell  08:00
That's been interesting. Since late 2022 we've been actively ringing the alarm bell, talking to faculty about this, talking to college leadership, saying a change is coming, right? We can try to stay ahead of this and figure this out, or we can let this happen to us. And so we very much tried to stay at that forefront, and we introduced these tools as quickly as we could to faculty. Some of the faculty who are still very much trying to find their way through what this means for their teaching, their style of instruction, have been the ones who are very grateful that this tool is there when they need it, in a pinch. And so they're trying to reconcile two realities: What does this mean for the way I've always taught my course for 15 or 20 years, and how am I going to have to change eventually, because now students can just use a chatbot to answer my questions? But also: this tool is incredible, and it saved me a ton of time. So they're squaring those two realities. To answer your question more directly, Rhea, they like it when it's there. I have a group of CNA faculty right now going question by question through what was generated. But it was incredible. The CNA course I was developing would not have gotten done in time, for a variety of reasons, due to some, you know, instructor health issues and things like that, but for D2L Lumi, which was available to help me generate those questions. And now faculty are reviewing those questions before we pilot the course. So they're very receptive. And it's obvious to those instructors how valuable it is, and they're still in control. I think that's a critical piece, right? They are still the ones going through the question library and vetting all those questions. They're still the decider. They're still the instructor of that course. But content is made available.
Instead of being provided the same group of 40 exam questions directly from a publisher, I can generate 200 exam questions with D2L Lumi that are just as good, just as aligned to the curriculum, literally aligned to the Bloom's verb. In fact, I think it's a better set of questions that I can generate and that they can review.

Rhea Kelly  09:51
Do you have any pointers on kind of getting the most out of AI when you're using it for these instructional design purposes? You know, what's the best way to engineer your prompts?

Shannon Brenner  10:02
That's a good question. Let me answer that a little bit more broadly before we get into prompt engineering. I think that when using AI tools, it works best if we have open educational resources to work with, because then we don't have copyright concerns. So that's one tip, I guess, for instructional designers: really advocate for OER. We'd already been advocating for using OER, right, for the student benefits, but being able to use them really increases your opportunities and the capabilities of AI tools. I also think using multiple tools, and knowing the strengths and limitations of each one, is really helpful. So for example, I had a faculty member e-mail the other day asking if I could help him with a slideshow. He's an agriculture instructor, and he just doesn't have time to work with the technology. So he sent me two articles that he'd been basing his slideshow on, and I was able to put those into ChatGPT and have it make a series of slides with summary bullet points. Then I took that and put it into another tool called Brisk, which was able to create a Google slideshow with images based on that content. From there we could put those slideshows into D2L Lumi and generate quiz questions or discussion prompts or assignment prompts from that lecture slideshow. So being able to leverage all of those tools together is really how you get the most out of them.

Jordan O'Connell  11:29
And that's the kind of thing you can do, but an instructor won't necessarily know how to do, or would take a long time to learn. So that's where we can really help. I think not being overwhelmed by the number of AI tools out there is somewhat of a challenge. One of the first lists we put together, I know, had like 50 vetted tools that we had sort of done deep reviews on, and now there's obviously tens of thousands of these tools. So go with a couple tried-and-true tools, and instead of trying to find other options that might do something slightly different, just get comfortable with something like a ChatGPT, a Claude, and, if your school has it, Lumi. That can be really, really powerful. And it does expedite question building in the question library, assignment creation, discussion creation. Those tools are just fast enough to not be annoying as an instructional designer; they're just fast enough to keep pace with us, which is pretty great.

Shannon Brenner  12:12
So in terms of engineering prompts, I think it comes down to being very direct with the AI tool. I almost like to liken it to talking to a five-year-old. You know, you have to be very direct and very concrete with what you want and what you say, and just be willing to keep refining until you get what you need. And then when you do get what you need, save those prompts, because the next time you have a conversation, even with the same AI tool, it may not answer in the right way, or may not produce the same content, or it may make mistakes. So being able to go back to those prompts that worked previously is really helpful.
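[Editor's note: one lightweight way to act on Shannon's "save those prompts" advice is a small reusable prompt library. The sketch below is illustrative only; the prompt text, names, and fields are hypothetical, not the college's actual prompts.]

```python
from string import Template

# A tiny saved-prompt library: prompts that worked once are kept verbatim
# so they can be reused and refined later. All text here is hypothetical.
SAVED_PROMPTS = {
    "quiz_from_notes": Template(
        "You are helping build a $level college quiz. From the lecture "
        "notes below, write $count multiple-choice questions, each with "
        "4 options and exactly one correct answer. Label the correct "
        "answer. Notes:\n$notes"
    ),
}

def render_prompt(name: str, **fields: str) -> str:
    """Fill in a saved prompt template with concrete values."""
    return SAVED_PROMPTS[name].substitute(**fields)

prompt = render_prompt(
    "quiz_from_notes", level="introductory", count="5",
    notes="Photosynthesis converts light energy into chemical energy.",
)
```

Keeping the instructions fixed and substituting only the variable parts makes it easier to get consistent output from the same tool across sessions.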

Rhea Kelly  12:52
So we've been talking from an instructional design perspective. Are there ways that generative AI has changed your teaching from, like, you know, the faculty perspective?

Jordan O'Connell  13:02
I'll jump in on this right away. I instantly realized that my students had access to it, and that I could potentially take advantage of this. Longer term, I'm very much thinking about the ways in which the course objectives that I write, that I'm actively involved in, that as a humanities group we're always sort of working on and wanting to improve, how those Bloom's verbs need to change, because students have access to information so readily, information that's mostly pretty darn good, from these AI chatbots. In some way, shape, or form, I have to reconcile with that as an instructor, and I realized that pretty early. So, number one, and Shannon's done the same, we've started to ask students to record in the online space, and so we're using different assessment methods to account for this new world we find ourselves in. Students are looking at their camera and just riffing about US history or government or the Electoral College with me, and it's fun. Like, my class reviews are way up. Students like my class more. I'm leaving them video feedback, which is awesome. So it sparked this movement for me toward a sort of alternative version of oral exams in the online space that students have gotten comfortable with. And I've dabbled with video. I've long had my own lectures in my classes, and since 2020 I've asked students to record an intro video and kind of a stump speech at the end of my government class. But I've seen the ways in which, partly because of, I think, COVID lockdowns and things like that, students have gotten more and more comfortable jumping onto a video tool, recording a thing, and submitting it. That's the reality, at least for our community college learners here in Iowa. So yeah, it's changed the way I assess, students are having a better experience, and I'm leaving all this video feedback, which students are really liking too.
I put chatbots in my own classes, which are super simple to keep updated, so I'll get these messages from students, sometimes at 11 p.m., saying, hey, what's the late work policy, or what's going on? And then five minutes later: oh, never mind, I asked the chatbot. I'm using Dante AI for that, and there are a number of instructors doing this; you go to any conference, tons of instructors are doing this. It's really, really neat. So I think it can really empower better instruction, and our students can have access to information. It's a research tool, right? It's another way they can learn information, and so it has to be accounted for. It can expedite their learning, it can enhance their learning, it can expand their learning. We should try to take advantage of it to some extent. And we're having lots of internal debates at the college; it's forcing us to ask really philosophical questions about how people learn best. How do our students learn best in this program? We're asking questions we've always made assumptions about, at least in my time in higher ed, questions that are long overdue. I don't exactly know what the answer will be in the end to all these questions, but I'm really glad we're asking them and exploring.

Shannon Brenner  15:36
For me, I'll echo a lot of the same things. Particularly in the college experience course that I teach, I've moved to a lot of audio/video in terms of discussions and assignments, asking students to kind of reflect. So even if there's a written assignment, I'll ask them to record something to go along with it, just to talk about the assignment process, or how it connects back to their own experiences. And I found that to be really useful. Since I've moved to audio/video, kind of forcing students to defend their choices on camera, I have not really had any issues with students using AI in that particular course. I'm also trying to get an AI literacy unit added to all of our college experience courses, because they reach almost all of our students, and I think it's really important for our students to understand what AI is, how it works, what its limitations are, and how it might apply to their industries in the future. We're going to have to move to a point where we teach students in our courses how to use AI in a way that will help them when they get into their fields, right? So that's kind of in the back of my mind always as well. In terms of composition, it's a little bit more of a challenge, right? Because students have to write, and we're supposed to be teaching them these skills. But I've used AI to generate sample pieces for students to critique, which has been really useful. I have students really dig in and find all the flaws in a piece, but I don't tell them where it's from until after they've done the exercise. And they're always surprised to hear that it's from AI. That's really useful, because when students generate something with AI, they think it's good. They tend to think that because it's got big words in it, and the sentence structure looks really good, it's going to meet the needs of the assignment.
And when they really dig in, they realize that there's no substance, that it doesn't actually connect back to our learning materials, and that it doesn't do what I've asked in the assignment prompt. So that's been a learning experience for them. I also like the fact that I can generate sample content for them to critique without them feeling like they are hurting someone's feelings. So sometimes I do tell them it's AI-generated early on, and it works better than, say, a peer review, or it's a good way to introduce them to peer review before they actually look at what a peer has written, right? They learn how to provide feedback in a way that's constructive and won't hurt someone's feelings. So I've used it in both of those ways. And I use it sometimes to help provide feedback on student writing. I tell them this, because I think transparency is important, but it's another time-saving tool. It allows me to spend less time providing inline feedback on things like sentence structure, so I can focus more on recording feedback and talking about the content of the piece, and whether it's clear that they're meeting learning objectives, which I think is more important and a better use of my time, and a better use of their time as well.

Rhea Kelly  18:30
Are there any challenges that you've run into along the way, in incorporating generative AI into your work?

Jordan O'Connell  18:38
I feel like the world's our oyster now with these tools, right? And they're low-cost, or free, and incredible, as long as you're paying attention and using them responsibly. Actually, one of the first things Shannon and I did: at Northeast Iowa Community College we have a quality course design site. It's a Google site; anyone can go to it. We wrote a lot about AI initially, because we needed to really wrap our heads around it. We wrote about the ethical considerations. We actually tried to consider the student perspective, staff perspective, and faculty perspective, because we needed to understand these dynamics on some basic level. So we really had to wrestle with those things first. But once we did that work, I think our fear of what the technology was or would become was really muted, because we realized that we were still in control. We were the ones making the choices. One challenge I'm seeing is that obviously students are trying to use it, just like they've tried to use Wikipedia, and like I was probably trying to use Google 20 years ago, right? But it was drilled into me not to do that by a lot of faculty. And, to echo Shannon's point, I think we have a huge responsibility to talk to students about responsible use of these tools at every opportunity we can, in the college experience course and elsewhere too, everywhere we can. But a lot of students will make this argument that because they crafted the prompt, you know, they can…. They haven't done that work that I described of thinking through the ethical ramifications of what they're doing, and if they feel like they did contribute to part of it, sometimes they can feel like they contributed to all of it. Or if they put a little bit of themselves into what they did with AI, it's all of them.
And to critique that, or to suggest that that's not the outcome you're looking for, students don't have a framework for how to deal with that, because they feel like they were involved, right? It feels like googling to them, which is already sort of a part of the research process, to some extent. So I think we are going to have to make that case to students about why and how, again, to use these tools responsibly. It won't come easily or naturally to them, and so I think we have to be leaders and, again, talk to them constantly about it. That's the number one challenge I see: students trying to use it, even reading what I'm telling them to do, but not really thinking through, philosophically, what's me, what's the computer? And where's the bright line between those two? It's super gray, even from my perspective. But for them, it's really tempting to say, "This is me, you know, I worked on this." That's my number one challenge, I would say.

Shannon Brenner  20:58
I'll speak from the instructional design standpoint. You know, we still have some faculty who are, I think, more afraid of AI than anything else at this point, and who really just want us to give them a tool that will detect, with 100% certainty, whether someone in their courses has used AI, so that they can give them a zero and move on. And so it's a challenge sometimes to convince some of those faculty that that tool doesn't exist and probably never will exist, because as the detection tools get better, the AI generators get better too, right? So it's just probably not in the cards. We are trying to get some of our faculty to think differently about what they're doing, and to understand that if they have students using AI on particular assignments, maybe those assignments need to be the thing that's reimagined, right, not the detection tools. So we're trying to work toward more authentic assessment and more audio and video, and, you know, just getting to the root of: what is the learning objective that you're trying to assess here? And if that prompt is so easy to replicate through AI or any other means, is it really doing the thing that it set out to do anyway?

Jordan O'Connell  22:13
Right. And we knew it, we knew it 10 years ago, when, when students could Google their way to some of these answers. But now it's real. And so faculty really have to make a choice. Do I change or do I try to force my students to change? And that's where a lot of our faculty are currently.

Rhea Kelly  22:27
So you mentioned trying to make sure that every student can get some AI literacy training. And I'm curious, has Northeast Iowa established, like, official policies around things like that, or around, you know, acceptable use of AI, or is it more about providing the resources and information?

Jordan O'Connell  22:48
We actually just launched, sort of at the direction of our interim president, Dave Dahms, an Artificial Intelligence Alliance, which I think is a neat approach. We're kind of waiting for potential guidance from the state and the community colleges across Iowa, but in the interim, we have a small group that reports to the president's cabinet and has kind of entertained questions like this. So we have not created an AI policy, but we went back and looked at things like our academic freedom policy, academic integrity policy and processes, and a couple of others too, to make sure that we're prepared for what's coming. Shannon and I worked real hard last semester to update that academic integrity piece, with a ton of faculty in that group too, through multiple drafts, to lay out very cleanly what was still in their control and what they could do in their syllabi and through course policies to prepare, to take advantage of AI, or to help students learn without it, would kind of be the approach. So no top-down policy, but a nice framework in place, and a group in place that's doing constant professional development, right, reaching out to faculty, triaging when we do get those tickets coming in saying, "Help, I have a student who's using AI and won't stop," right? So we have a ready group of people, kind of ready to talk to those faculty, and even staff too, when these things come up, to help guide them through that process. It's course specific, context specific, and discipline specific, right?

Shannon Brenner  24:08
Yeah, I don't think I have anything else to add to that, other than that we did add, I think, one line to our academic integrity language that goes out in all syllabi, which just indicates that unauthorized use of AI without citation is considered plagiarism. So at least there's that line to protect faculty if students are using it in a way that is unethical or not allowed in the course. But then faculty are strongly encouraged to create their own AI policies for their specific courses; there's a specific section in our syllabi for course policies, and they can put them there.

Rhea Kelly  24:44
So in terms of faculty training, I know you've talked about what sounds like one-on-one help for faculty, but are you doing any other sort of more formal training, you know, events or things like that, for faculty too?

Shannon Brenner  25:00
Yes, absolutely. So we just finished up our convocation week, where we had our first training of the year on AI, which was mandatory for all of our full-time faculty. So we at least got in front of everybody there and talked about some of these basics. But we have scheduled sessions on individual AI topics all throughout the academic year. We do a variety of sessions: Sometimes they're live over Zoom, sometimes they're recorded sessions that we put on our YouTube channel. Sometimes they are, you know, hands-on in-person support, maybe a sort of workshop where faculty can come in and start building their AI policy, or start playing with a tool, and we'll be there to help them through it. So we offer trainings all the time, and, you know, faculty are busy, so they come when they are able to. But we also have a professional development space where we can post recorded trainings, and then our faculty can go watch them on their own time and get continuing education credits for those. So it's a number of different avenues, but we put a lot of trainings out there for them. We also do a monthly newsletter where we post tips and new tools and suggested resources and things like that on AI and other technology.

Jordan O'Connell  26:14
And right now, we're trying to roll out Google Gemini to all of our faculty, to everyone at the college, and so that's a thing we'll continue to roll out and make available through newsletters and updates. I think, you know, if anyone's listening in an instructional design capacity, or deans, anyone working with faculty who wants to make sure they get introduced to or understand the gravity of these tools, the smartest thing we did was getting on that calendar last fall, and we've done it now three semesters in a row, where we have taken at least an hour with everyone there and said: this is the reality. This is what your students can do now, right? You've got to think through these questions. Who has good ideas? Roundtables, small-group breakout rooms, but we forced everyone to look directly at the bright sun, right, and wrestle with what that meant. And so we're going to keep doing that. We'll keep building on it. But it's funny, I will sometimes encounter students, sometimes faculty, who just haven't encountered these tools or used them yet. So that's going to keep happening, and we still have to keep going back to those basics and reminding everyone of what this means. Even if you haven't seen it personally, you've got to account for it as an instructor.

Rhea Kelly  27:19
Those people must be living under a rock at this point, I think.

Jordan O'Connell  27:22
I hear it every couple weeks. Like, oh, okay, you might wanna check that out. But okay.

Rhea Kelly  27:28
So one last question. What do you wish more faculty or administrators would understand about generative AI?

Shannon Brenner  27:38
Okay, so I think it would be useful if everyone understood that in a lot of ways generative AI is an equity tool and an accessibility tool. So for example, in a composition course, allowing students to use AI to help them with their sentence structure and grammar and the mechanics of writing levels the playing field, right? Students who may have weaker skills, or who may have a learning disability like dyslexia, can produce writing that has consistent mechanics, the same as anyone else, and then we can dig more into the content and their critical thinking skills. And also in terms of accessibility, students can do things like have an AI tool take notes and a transcript of a lecture, or summarize lecture notes into practice questions to make a study guide. You know, there are all kinds of ways that students can use AI tools to help level the playing field, and it's not just for students with accessibility needs, of course, but they are one of the groups that I think can really benefit from being able to use these tools. So it may be scary for some instructors in terms of student use, but the possibilities and the benefits, I think, far outweigh any of the concerns.

Jordan O'Connell  28:59
I think we tend to rest on our laurels, right, when we're in any role for too long. And I think this really lights a fire under all of us working in higher ed specifically; that's the only world I know. It's a research tool, and it might be as significant as the printing press, that significant to the learning, research, and communication process. I would echo Shannon's points about accessibility and access: incredibly powerful. If we choose to ignore it, that might become students' teacher, right? That's kind of the future, potentially, unless we can be additive, and I believe that we really can, right? I believe humans can do things that these AI tools can't do. So we should try to use them, bring all the benefits that these tools bring, and bring that to our classrooms to the best of our ability. And we have to stay in it even when it's uncomfortable. Keep talking to people. I think that's the important part. Keep talking, keep thinking, talk to your colleagues about what they're doing, seek out others. One thing we did that made a lot of sense early on, and continues to: I sign up for every AI-related session I possibly can and just listen, just to hear those ideas, right? Where is this going? So stay plugged in. I'm super optimistic about what this means for student learning outcomes and the ways in which we can teach our students better, and our students can learn better, with these tools.

Rhea Kelly  30:11
Thank you for joining us. I'm Rhea Kelly, and this was the Campus Technology Insider podcast. You can find us on the major podcast platforms or visit us online at campustechnology.com/podcast. Let us know what you think of this episode and what you'd like to hear in the future. Until next time.
