Campus Technology Insider Podcast January 2024

Listen: The State of AI in Education

Rhea Kelly  00:08
Hello and welcome to the Campus Technology Insider podcast. I'm Rhea Kelly, editor in chief of Campus Technology, and your host.

Last November I had the privilege of moderating a panel on the state of AI in education at Tech Tactics in Education 2023, a new conference for education IT leaders produced by Campus Technology and our sister publication THE Journal. We had a wonderful conversation ranging from basic definitions to concerns, limitations, and opportunities, as well as leadership and culture change. I'm excited to share a recording of the discussion here — it's full of insights for anyone grappling with the impact of AI and generative AI on campus.

Tech Tactics in Education took place November 7-9 in Orlando, Florida. For more information, visit techtacticsineducation.com.

So we are here to talk everything AI in education. And I know it's impossible to do the topic justice in 50 minutes, but we are going to try. So first we have Noble Ackerson, CTO of American Board of Design and Research; Dave Weil, VP and CIO for IT and analytics at Ithaca College; and Howard Holton, CTO and analyst at GigaOm. So Noble, maybe we'll start with you. Tell us where your interest in AI comes from.

Noble Ackerson  01:44
My interest in AI comes from my product background. For about 25 years, I've been in skunkworks labs and worked in the emergent technology space. And one thing that I've seen over and over is solutions looking for problems to solve. With my product background, I try to correct the ship in any opportunity that I can. In the case of machine learning, it was mostly because, as a product leader, quantitative and qualitative data were my superpowers: they drove decisions for the business as to what we build, why we should build it, what happens if we didn't build it now, or why we should wait, that kind of stuff. And using advanced analytics became a tool that I used over time. Rather than just leveraging lagging data insights to tell me what to do, I honed my craft and learned a little bit of traditional machine learning to better understand leading data, predicting, classifying, or using recommender systems to help me do my job. One thing led to another, and now my why, my focus, is to help organizations deliver some of these tools responsibly, with all of the complexity that comes with that.

Rhea Kelly  03:12
And Dave, how about you?

David Weil  03:15
Thank you. So I oversee IT and analytics for the entire institution. We're a very centralized organization. And even though I do have a technical background, I'm really interested in how our technology helps our institutions. How does it help our students succeed? And just as important, how can it help our institutions function more effectively and more efficiently? That's always been the driver for everything that I've done in my entire career. And I see AI as an accelerant. It's an opportunity to really enhance and re-energize the work that we do. So I'm really interested in how we can leverage this to help our institutions work better.

Rhea Kelly  04:03
And Howard?

Howard Holton  04:05
So I come from the private sector, and my focus is kind of two pieces, right? One, I'm a lifelong learner. So I'm constantly looking for things that will make me better, that will make me faster, that will make me more efficient, that will help continue my education. And one of the things that intrigued me about AI and machine learning really early on is, at first it was, "This is fantastic. This is really fast. This has the ability to analyze tremendous amounts of data and support a lot of the decision-making inside of organizations, and I can be a little bit more authoritative." And then I realized it's just another form of statistics, and you know, there are white lies and big black lies, and then there are statistics. And what I realized was the real trick is asking the question, not the model or the technology. So helping businesses figure out how to phrase a question properly, so that we get an answer we can actually take action on, one that will lead us somewhere, has become the interesting piece for me. And I mean, this is a force multiplier. And I like the accelerant analogy, right? Because on one hand, properly utilized, an accelerant carries us to space. Improperly utilized, it creates an enormous fire. Right? So that's really kind of where I'm at with it.

Rhea Kelly  05:20
I like that analogy. So, our panel thought we should define some terms to start with, and maybe I'll throw this to you, Howard. Can you give a basic definition of AI, generative AI, LLMs? What is all that? Where should we set a baseline for this?

Howard Holton  05:41
Yes. So artificial intelligence is using computers and math to analyze data and make decisions. And that is truly what it is. It's incredibly complex, but that's basically what we're trying to do, right? Generative AI is a specific type of artificial intelligence that understands language, can determine intent, and uses that intent to generate a response. Large language models are the ones where those responses are language. So most of them, as we think of them, are really good at English, as an example. And we're now adding more; there are about six languages that they're very good at. But generative AI can do things like 3D modeling, right? Design a component that does these things, and you will get a reasonable CAD file that you can start with for 3D printing and rapid prototyping. We've barely scratched the surface of the things we can do with generative AI, right? Today we see things like ChatGPT that are large language models, we see DALL-E, right, that generates images, we see Copilot that generates software and code, but the sky is the limit as we get better and better at understanding intent, and then using that intent to feed into other systems that can create and generate a response. And really, they still require humans. They are not human replacement systems; they don't work that way, and they're not really designed that way. They're designed to make us faster and more efficient, not replace us. Assuming we're, you know, reasoning, logical, thinking humans. Some people are sheep. So I don't know what to do about that.

Rhea Kelly  07:29
One of the terms I like is being the human partner to AI. The AI needs the human partner to really be, I guess, the most effective.

Howard Holton  07:42
Yeah, it's a great personal assistant. Right? If there are things that you do that would be better done by an assistant, that's a great way to think of AI. But like I said earlier, right, think of it as a conversation. It's not a search engine. Right? If you just tell your assistant, "Go book me a flight to the UK," they will book you a great flight to the UK. It won't be on the airline that you want, it may not take off from the airport that you want, it may not be priced the way you want, it may not leave at the time you want, and you're likely to be at a bulkhead, by the bathroom. Right? But if you have a conversation, you're going to get the result that you want. And AI is similar that way.
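
To make the "conversation, not a search engine" point concrete, here is a minimal sketch of a multi-turn exchange using OpenAI's Python SDK. The model name, prompts, and travel details are illustrative assumptions, not something discussed on the panel:

    # A sketch of treating generative AI as a conversation rather than a
    # one-shot query; model name and prompts are hypothetical.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    messages = [
        {"role": "system", "content": "You are a travel-planning assistant."},
        {"role": "user", "content": "Book me a flight to the UK."},
    ]
    first = client.chat.completions.create(model="gpt-4", messages=messages)
    print(first.choices[0].message.content)

    # The follow-up turn carries the constraints a one-shot query leaves out;
    # re-sending the prior turns is what makes this a conversation.
    messages.append({"role": "assistant",
                     "content": first.choices[0].message.content})
    messages.append({"role": "user", "content":
                     "Direct from Dulles, aisle seat, arriving before 9am, under $900."})
    second = client.chat.completions.create(model="gpt-4", messages=messages)
    print(second.choices[0].message.content)

Each turn refines the intent, the same way the assistant analogy refines "book me a flight" into the flight you actually want.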

David Weil  08:21
Building off of what you said, just to restate it slightly differently: if we're defining AI, I think it's really important that AI does not equal ChatGPT and Bing. AI is much greater than that. And I think that when we have these conversations back at our institutions with leadership or others, a lot of people are thinking AI equals ChatGPT. And it's not. In fact, I would venture to say the way that most people will be interacting with AI is not through one of those tools, but actually through AI embedded within the applications we're using to do subtasks. And I think that's one way that we need to make sure we're framing this conversation.

Howard Holton  09:03
Yeah, I mean, at this point, everyone has interacted with AI. Everyone. Everyone in the room, everyone in the world has interacted with AI in some way. Right? The fact is, most of it has been passive to most of us. If you don't work in the field, if you don't work in the industry, if you're not actively developing these things and managing these tools, AI has still impacted your life in a million ways, many of which are bad. Target was caught using artificial intelligence years ago to identify patterns of people who were pregnant, who didn't know they were pregnant, and advertise pregnancy-related things to them. There have been really awful, unethical things done with AI all the way up to now, right? Banks approving loans based on AI selection that had horrible, evil, awful biases built in. Right? We can use AI to predict whether you'll default on loans. Anyone want to guess what the number one indicator is that you're going to default on a loan? You've paid a bail bondsman recently.

Rhea Kelly  10:10
Noble, how did those definitions resonate with you?

Noble Ackerson  10:13
I look at AI, so AI to me, for the longest time — I lost this battle — was a marketing term. It was a way for me, then, to explain how we transformed data. And the way I explained it was: as humans, we learn from past experience. You know, a baby's walking, falls down, hits their head; they know not to shuffle their feet past that rock. AI is essentially the same thing. Instead of past experience, it's data. It's past data. Or real-time data, which is essentially a snapshot of the past, depending on how you look at it. And so when we start abstracting what's possible, how you can use this data transformation, we start looking at different ways to teach a machine based on our past experience. So AI is us, in its way: it repeats what we do, it repeats our biases, as you rightly put it. AI is us. My daughter tries to fight Midjourney, a diffusion-based generative AI, by asking for an anime or Pokemon character that looks like her. She does not look Asian, she does not look white. However, I observe silently that when she types in "beautiful lady standing by a pole," it returns a list of options, and she'll click on the option that looks closest to her definition of beautiful at a given point in time. Sometimes it doesn't look like her. But what she doesn't realize is that by selecting that thing, she's fed back something confirming that this is a beautiful lady, and then she gets upset when she's trying to make…. Actually, a funny story on that. She's trying to make a Pokemon that looks like her, so she wants a black Eevee. If you know Pokemon, you know what an Eevee is. And we got a warning: you know, offensive, da da da da. And so we said BIPOC, and then it worked. Right? So the long and short of it is that AI is just a data transformation. You get data, you swirl it around, and data scientists ask, "Did it generalize? Does it look like the real world? No." So if the data is dirty, if the data is biased, unfortunately the model is biased. That's how AI works.
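
Ackerson's point that biased data yields a biased model is easy to see in a few lines. Here is a minimal sketch with entirely synthetic, hypothetical data, using scikit-learn (the names and numbers are invented for illustration): a classifier trained on historical decisions reproduces whatever penalty those decisions contained, and a good held-out score, "did it generalize?", won't catch it.

    # Synthetic illustration: a model trained on biased past decisions
    # learns the bias. All data and names here are hypothetical.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 5000
    qualification = rng.normal(size=n)   # the signal we want the model to use
    group = rng.integers(0, 2, size=n)   # an attribute it should ignore

    # Historical labels: mostly driven by qualification, but past
    # decision-makers penalized group 1, so the bias is in the data itself.
    approved = (qualification - 0.8 * group + rng.normal(scale=0.5, size=n)) > 0

    X = np.column_stack([qualification, group])
    X_tr, X_te, y_tr, y_te = train_test_split(X, approved, random_state=0)
    model = LogisticRegression().fit(X_tr, y_tr)
    print("held-out accuracy:", model.score(X_te, y_te))  # it generalizes fine

    # Two identical applicants who differ only in group membership:
    probe = np.array([[0.0, 0], [0.0, 1]])
    print("approval probabilities:", model.predict_proba(probe)[:, 1])
    # The group-1 applicant scores lower: the model faithfully learned
    # the old bias, exactly the "AI is us" problem described above.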

Rhea Kelly  13:03
So this kind of goes to what Dave was saying about people conflating the concept of generative AI and AI. Because for the past year, we've heard a lot about AI in teaching and learning, and you know, questions about plagiarism or things like that. So can you all talk about other ways that AI is going to impact education, and kind of think beyond teaching and learning — administrative, IT? Dave, I thought I'd throw that one to you first.

David Weil  13:32
Sure. Thank you. So I think, as institutional leadership, when we think about AI, a lot of our attention for the past year has been on its impact in teaching and learning. That's been all the headlines. It's been, you know, the anxiety on campus from faculty and students. But I think that's just the tip of the iceberg. In some respects, that's interesting, and equally interesting is its impact on all the administrative functions at the institution. We're already seeing companies that are marketing to admissions teams AI that will help them assess applications: it will read the applications, it will score the applications. There are other companies marketing to philanthropy and engagement teams, or to marketing communication teams, customized messaging for outreach to alumni or prospective students. I also think that if you look at the announcements from Microsoft for Copilot, those will have a big impact on a number of the functions that people are doing in areas like financials. So really, there's an opportunity to look at the impact of this across the institution. One thing we're doing at Ithaca College: starting in December, my deputy CIO and I are meeting with every vice president. And we are going to have a customized presentation for them to show some examples from their space that we've read about or heard about, just to start planting seeds, to say, "These are some things we think you should be thinking about." We'll also talk about definitions, to give people a sense of that. And then we're going to walk them through thinking about it in three categories: culture, workforce, and technology. Culture: What's the impact on their organization in terms of how they do their work and their norms? Workforce: What are the skills their staff will need? A year from now, two years from now, what will this technology free up that will allow them to focus on other value-added propositions? And technology: What are the applications they should be looking for? What's the data that those applications will need access to? And things like that. I really think that's the next wave of conversations that needs to be happening on our campuses.

Noble Ackerson  15:51
Yeah, just to double-tap on that. You talked about the technology and the people; on the process side, what I'm seeing a lot of are these two domains. One is things an organization can do to automate certain decisions. So we're talking about certain repetitive tasks that may be low risk, that don't particularly need a human in the loop, but may need a human on the loop to help self-correct and further improve the models behind the scenes. And then there's the augmentative side, which is where I actually see a lot more solutions. A lot of Microsoft's offerings with Copilot are augmentative, in that you have solutions that assist. Actually, another way to answer it, and bind this to the first question, is: tools for thought, right? That's literally generative AI. I just thought of that; we should put that on a t-shirt or something. It's just a tool for thought, right? So from a process standpoint, what are the existing tools to augment within our organization? Rather than building a whole new thing with generative AI, say we've got a bunch of feedback from students: how do we use some of these tools to either fast-track, like you said, our path to decision, or at least help us decision-optimize our way there? And all that said, generative AI isn't always the answer. Sometimes a traditional model, linear regression, just simple math or statistics, could solve the problem. But it all starts with leadership looking through: what are the core problems that we need to look at? Which ones do we perhaps have to wait until some of our vendors incorporate? And what are some things that we know would cause a little more complexity if we were to customize a vendor's offering, where we should just augment by building ourselves? That's a hard decision for any leader to make, because there are going to be tech debt, sunk costs, and hard mistakes made along the way.
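
As a concrete illustration of "sometimes simple statistics could solve the problem," here is a tiny hypothetical sketch: an ordinary least-squares fit with NumPy on an invented leading indicator, no generative AI involved. The scenario and numbers are assumptions made up for illustration:

    import numpy as np

    # Hypothetical data: weekly LMS logins vs. end-of-term grade.
    logins = np.array([2, 5, 1, 7, 4, 6, 3, 8], dtype=float)
    grades = np.array([68, 81, 62, 90, 75, 85, 70, 93], dtype=float)

    # Ordinary least squares; np.polyfit returns slope, then intercept.
    slope, intercept = np.polyfit(logins, grades, deg=1)
    print(f"predicted grade at 5 logins/week: {slope * 5 + intercept:.1f}")

If a two-parameter line answers the question, reaching for a large language model only adds cost and opacity.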

Rhea Kelly  18:24
Howard, what do you think is involved in that decision, of whether generative AI is the right solution or something else?

Howard Holton  18:34
I'm glad to kind of move on, because they did a really good job of covering everything, but it brought up a few thoughts. The first is: always use the simplest thing that solves the problem, but no simpler. If you can make a decision with four input points, you don't need AI. If you have 400, you need AI. Right? Human beings can't take in 400 inputs to make a decision, but AI can. It's even very hard to do that with standard statistical modeling. But if it's four, let's not overcomplicate it. AI is still a black box, right? I can audit the decisions that are made from four. Also, generative AI is not actually an application. ChatGPT is an application, because they have taken generative AI and given it all the things that make it an application. Right? Generative AI is effectively you interacting directly with AI for the first time. It's why it's part of the culture and our conversation right now. But what it is, more than anything else, is an intent engine, in a way that nothing has been successful at being an intent engine up to this point. There are all kinds of things that do a really good job of that second stage. But humans communicate in ways that only make sense to humans, really, and generative AI is the first time we can say we've actually made a computer understand the intent in what a human is saying. The ability to use that as a tool in a long line of tools, to get to an output that's simply better than we could get otherwise, is the way to think about it. It is just a tool; it is just a component by itself. Other than having something to bounce ideas off of and, like I was saying earlier, something to get outlines from, you really need something around it, right? You need information, you need systems, you need other pieces of automation, whether those are AI or not, to really take advantage of what generative AI can do. And generative AI is just going to try to understand the intent behind standard English, right? So you don't have to think like a programmer or developer to get anywhere.

David Weil  20:57
Can I build on that? And I don't know whether you all agree with this statement; I'm sort of playing around with it. Generative AI, or AI, will not make a good programmer great. However, it will allow a great programmer to focus on the skills and the value that they add as a great programmer, because it can supplement and take care of a lot of the rote stuff.

Howard Holton  21:25
No, that's very true. Not only will it not make a good programmer great, but it will make everyone a programmer.

David Weil  21:31
Yes.

Howard Holton  21:32
Right?

David Weil  21:32
Yeah.

Howard Holton  21:33
Like, if you think about it as a democratization tool, it's spectacular as a democratization tool. Making you great is never going to come from a tool.

David Weil  21:45
That's right.

Howard Holton  21:46
It's going to come from your approach, it's going to come from your knowledge, it's going to come from your inquisitive nature, it's going to come from your critical thinking skills, right? AI is never going to tell you, "Hey, I get that you told me to make this program, but have you actually thought about how anyone's going to use it, or why they would, or if it achieves the purpose?" Those are the things that start to make great programmers, right: the ability to understand why I would do something and why I would do it that way, and to be iterative enough not to be attached to the fact that you had an idea that sounded good to you, but when you released it into the world, the world went, "Ha ha ha ha, that sucks. We'd rather do this." AI is not going to help with that. But that's what makes a good programmer.

David Weil  22:25
So if you buy into that, then the corollary is, I think, that AI actually builds the case for higher education. Because we have to develop these critical thinking skills in people. When all this first started and came out, everyone was like, "Oh my gosh, what does that mean for higher education?" I think it actually makes the case for it.

Howard Holton  22:47
It makes the case that higher education needs to change so it stays relevant, and then it will be very relevant. But I would argue higher education does not do a fantastic job of creating lifelong learners; it does not do a fantastic job of creating critical thinkers. I'm not saying that's a universal statement. There are professors who do a good job at that, there are programs that do a good job of that. But higher education as a whole, I would argue, has failed to do that for students today. And let's keep in mind, we have the most disenfranchised generation that's ever existed in our history, one that does not share the value system of prior generations, and that is looking at higher education going, "This is no longer a guarantee of a job or a career, and it costs $140,000." Creating lifelong learners creates lifelong employed people. It creates people who are in demand. But you have to have teachers and professors and faculty who 100% buy in, where that is the mission. Not: can you turn in a paper that has this many words and checks these boxes.

David Weil  23:49
It sets the path for what higher education needs to accomplish.

Rhea Kelly  23:55
So Dave, I like how you were mentioning the shifts in culture, workforce, and technology. I wanted to dive into the culture piece, because higher ed is notoriously slow in terms of culture change, or change in general. So I'm wondering, whose responsibility is it on a campus to drive the culture change that is needed to embrace AI, or use AI to its best potential?

David Weil  24:27
Don't you have another question on there?

Howard Holton  24:29
I was gonna say, not it.

David Weil  24:32
No, it's a great question. And I think that each institution is going to have its own personality. So in some respects, it's the person willing to step up and see that there's a need for that culture to change, and then you lead from where you are. I do believe that to change culture, it has to be an intentional act. You have to be really thinking about it, and you have to work at it through repeated conversations, showing examples, messaging. The example I mentioned earlier, that my deputy CIO and I are meeting with every vice president and going over things, that's an attempt to adjust or shift culture. It's messaging from the top, from the president, in our case. At the same time that we're meeting with the vice presidents, other members of our staff are running demos, and we're creating a lab where people can go in and play around with the technology. So you're doing it from all different ends. And then you look at trying to have some successes that you can point to. But in terms of whose responsibility it is, it's a good question, because this all happened fast, right? Even a year ago, we wouldn't have been having this conversation. I mean, AI existed, but it's been really almost 12 months to the day.

Howard Holton  25:48
A year ago, it would have been, "Hey, I heard this thing that might happen sometime."

David Weil  25:54
That's right. So, you know, I was sitting around and thinking, my vice president colleagues are not necessarily embracing this the way that I think they should be. So to your point, that's where I was like, okay, I was waiting for them. They didn't. So now I am. And so again, it's sort of leading from where you are.

Rhea Kelly  26:12
So it sounds like it's falling to IT by default, or?

Howard Holton  26:16
It doesn't work when that happens. That's all I want to say. When we look at culture change, forget about AI, just any culture change: it absolutely must be, and is, the responsibility of top leadership. If the CEO doesn't buy in, it's going to fail. It's going to fail. Everybody else can buy in; all the CEO, or the president, has to do is go out and make one statement in opposition to the culture change, and it falls apart like that. So I want to say the kind of pithy, "It's everyone's responsibility to change." But if you don't have buy-in at the absolute top, reinforced, committed buy-in, not just the kind of "I've sent an e-mail, isn't that good enough," but buy-in where they firmly believe it and they're the advocate for it, it will unquestionably fail. And I think that's part of the problem. I will say the other part of the problem is, it's really hard to change a culture when you are highly democratized. Right? Because what ends up happening is everyone kind of has a want, and an ask, and a complaint. And before long, you end up settling on, well, this is all we could agree on and get done. And that's also a challenge. I'm not saying voices don't need to be heard. Everyone needs to be heard. But ultimately, it can't be everyone's responsibility to make a decision, or you end up driving back to the mean again. And that's not how we proceed as organizations; that's not how we grow. We're still tribal; we still need leaders to lead. Leaders at some point also have to accept the responsibility of failure and just get over themselves.

Noble Ackerson  27:54
Absolutely. It falls to leadership. And when I meet with boards on the other side of the world, and leadership is looking at me like, what do we do? How do we know what decision is the right one? That's the missing part, right? It's so much pressure. What our team comes in to say is: you do not have to be a soothsayer. You go back down to the different parts of the organization, survey, audit, collect as much data as possible, and try to gather as many insights as you can to make good, informed decisions. And guess what, sometimes a machine learning model can help with that too.

Howard Holton  28:42
Well, the worst decision is no decision.

Noble Ackerson  28:44
Right.

Howard Holton  28:45
That's the hard thing to get over. It's like, how do we know we're going to make the right decision? Okay, cool. I don't, but I can tell you 100% you're making the wrong one right now. Right? Okay. So we need information to make a good decision. Great, well, here are seven different ways we can get better information so you can make a decision. I would rather you walked out into traffic and read tea leaves and made a decision than continue to not make a decision.

David Weil  29:09
I do want to challenge what you said, though, about it having to come from the top. I don't necessarily agree with that. I think it depends on what your objective is. If it's to turn the whole institution in a different direction, yes, that has to come from the top leadership. But you can make a big impact other ways. I'm not at all advocating that our entire institution suddenly becomes artificial intelligence driven in everything it does. No. It's a tool, just like all these other tools out there. And I think those tools can be adopted in various places throughout the organization. You'll have some areas that will be further along than others. They'll be able to show how it's changing how they do work, or how it's adding efficiency. Some will fail as well. So I think it's a collection of things, and what is the overall objective that we're trying to achieve at our institutions? Sure, from the top there has to be at least an embrace or acceptance of change. But I think you can actually create significant culture change throughout the organization other ways too.

Howard Holton  30:12
But if the president came out two weeks later and said, "I'm opposed to AI, this is evil, this is wrong, no institution should adopt it," you've now created that rift again. Right? So sure, they don't have to be the one teaching AI. But they have to be supportive of all of that change, and an advocate for that change: "We don't know what's right, we don't know what's perfect, right? But look to David, he and his team have a really good plan; here are seven places it's been successful." You still have to have that support. All too often, they're not actually supportive, and even unwittingly, they make a statement that's in opposition to the change, and it just crumbles and falls away. Right? So even when it's a small change, you still have to have buy-in at that level, and everybody's kind of got to be on the same team.

David Weil  31:04
Depends on the institution.

Rhea Kelly  31:05
I think, like anything else in change management in higher education, you're going to need your champions. And those can come at a lot of different levels, is what I'm hearing from you all. I wanted to make sure we talk about the product side, because another thing I've noticed over the past year is that every single ed tech company out there is trying to market the fact that they have some sort of AI. So I was wondering if you could all talk about how this is changing how higher education should evaluate their product choices, and what considerations to keep in mind. And Noble, I feel like this is right up your alley, so I'm going to throw that one to you.

Noble Ackerson  31:45
I just did a whole talk on that. But if the objective function of your organization is to, say, improve the efficiency of x, or help increase student success by y, then your evaluation tactics should be anchored on that objective function, right? You have your metrics that are going to guide that as well. Earlier, there was a question, can't remember who asked it, that made me enumerate a few archetypes of customers: where they are, and how they deal with this madness, this deluge of "We need AI." The first is the enterprising companies or organizations that have gone out, tried it, realized really quick that their fancy demo is brittle in production, and they say, "This is moving too fast. We cannot keep up with this. No, we're not doing this. We're just going to wait." Number two are the ones that experience number one, and then go, "Wait a second, why did we go through all of this? AWS Bedrock, Google Cloud Platform, Microsoft Azure, OpenAI, Salesforce: they do all this for us at the application layer. So let's build on top of what's already there." Stand on the shoulders of giants, as it were. And their evaluation tactics, again, bind to their objective function. So if it is, "We have a lot of data, and we want to make decisions," then step one is, "Do we have the right data? Let's fix that first." And there's a third group that just doesn't learn. Right? It's just a patchwork of tech debt, a model of something that obviously shouldn't be modeled. And their evaluations don't really bind to any set of goals, be it student success, be it operational efficiency, whatever that is. It's all over the place. So just to sum up, break it up into: Do we have the right data? Do we have the confidence to deliver? Do we understand the core problems and needs that require us to invest in this thing now, or is it better to hold and wait for Microsoft Copilot to help solve that problem for us? That may be the best thing to do.

Howard Holton  34:44
Yeah, I would say, start with: did they ever ask if they should, rather than just if they could? Right now, VCs will just back up a truck and unload money if you say "I've got a new generative AI," insert adjective noun adjective noun, right? So you've got a ton of companies that had a product, weren't really getting much funding, and tacked on generative AI so they could go through another round of funding. So it kind of goes back to what I said at the beginning: what I found out with AI was that the trick's not in the data. Data is a massive requirement. The trick's not in whether it gets an insight. The trick is actually in the question. So when you're looking at these things and they say, "We have generative AI built in," cool, why? What do you think a better understanding of the English language is going to solve that you couldn't solve otherwise? And oftentimes, what they'll do is a little bit of song and dance, they'll throw some marketecture up on a slide, and they'll try to baffle you with buzzwords. Just walk out of the room. They don't know either. They've just captured some VC funding, right? But then you're going to have companies that say, "Well, what we found was in cybersecurity, everyone bubbles up alerts, and it's hard to tell the noise. We'd say, hey, this is a 99% flaw, it's related to CVE-9703-23, and that didn't mean anything. So what we did instead was we changed it so that you get a generative AI output that's much easier for teams to read and understand exactly where they need to target." Oh, that seems like a good use of it. Or, "We're a low-code platform, and there's a learning curve to a low-code platform. So what we did was we fed generative AI our documentation and let you ask how to do things. And then what we found was we could actually turn that around, turn your prompt and the response into code samples, and then automatically integrate the code samples." Oh, that actually seems like a good use. You thought about where the challenges are for your users, and how to address those challenges so they can get to what they should be buying the platform for in the first place. Those are reasonable uses. Also, keep in mind we're at the absolute bare-bottom infancy of how we're going to see this technology used in the future. AI as a whole, generative AI, has opened the door for conversations that people were closed to before. Just the fact that we are now interacting with AI at a massive scale for the first time has opened the door for kind of a reinvigoration of the meta topic that is AI, right? And we're really at the infancy. So we're really going to see some strong movement in both directions: toward marketecture, as well as toward really good use cases that are, again, kind of beyond the horizon of what we can see today.
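
The documentation use case Holton describes is essentially grounded prompting: put the relevant docs in the prompt and answer only from them. Here is a minimal sketch under the same illustrative assumptions as the earlier example (OpenAI's Python SDK, a hypothetical doc excerpt, an invented product):

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Hypothetical documentation excerpt; a real system would retrieve the
    # most relevant passages instead of hard-coding one.
    docs = """To create a workflow, open Builder, choose 'New Flow',
    and drag a Trigger block onto the canvas."""

    answer = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Answer only from the documentation below. "
                        "If it isn't covered, say you don't know.\n\n" + docs},
            {"role": "user", "content": "How do I create a workflow?"},
        ],
    )
    print(answer.choices[0].message.content)

Grounding the model in the vendor's own docs is what turns a generic chatbot into the kind of targeted, defensible feature Holton contrasts with buzzword demos.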

David Weil  37:47
Really, I agree with everything that's been said. I just want to add two things. One, we are at just the beginning, which also means: be careful where you place your bets. There are a lot of companies that want our money. And that goes to my second point: the costs for these things are all over the map. We're seeing examples where some companies are jacking up prices 10, 20, 40, 50, 100%, based on the fact that, well, we now have generative AI, or we have this capability. This is a gold rush. And the market has not settled at all in terms of what the true value and cost are going to be for these. Proceed with caution. That doesn't mean don't proceed, but I would be cautious about long-term investment.

Howard Holton  38:34
I was going to say, short-term contracts are your friend right now. Short-term investments will cost you more in the short term, but in the long term, they give you the ability to pivot to something that's a little smarter. Additionally, OpenAI just changed ChatGPT's pricing, and it's effectively a 2.3x blended reduction in cost. The vendors that had integrated ChatGPT have not passed that on; I mean, it's one day old, they're not going to correct that cost for quite a while. Right? Short-term gives you the ability to take advantage of the change in economics; long-term does not. You tie into a contract, you're in the contract. So give yourself the out, give yourself the flexibility. We're going to see a lot of disruption, we're going to see a lot of change. If you can look at shorter than 12 months, and I know adoption cycles are hard and everything, maybe think about month to month, maybe think about quarter to quarter, right? Make them work for it.
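
For what a "blended reduction" means in practice, here is the arithmetic as a quick sketch. The per-token prices roughly match OpenAI's November 2023 GPT-4 Turbo announcement, and the 50/50 input/output token mix is an assumption for illustration; neither figure comes from the panel:

    # Assumed per-1K-token prices (hypothetical, for illustration only):
    old_in, old_out = 0.03, 0.06   # e.g. GPT-4 at launch
    new_in, new_out = 0.01, 0.03   # e.g. GPT-4 Turbo

    mix = 0.5  # assumed share of input tokens in a typical workload
    old_blended = mix * old_in + (1 - mix) * old_out   # 0.045 per 1K tokens
    new_blended = mix * new_in + (1 - mix) * new_out   # 0.020 per 1K tokens
    print(f"blended reduction: {old_blended / new_blended:.2f}x")  # ~2.25x

A different token mix shifts the multiple, which is exactly why a blended figure like "2.3x" depends on your workload, and why short contracts let you reprice as the economics move.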

Rhea Kelly  39:30
Thank you for joining us. I'm Rhea Kelly, and this was the Campus Technology Insider podcast. You can find us on the major podcast platforms or visit us online at campustechnology.com/podcast. Let us know what you think of this episode and what you'd like to hear in the future. Until next time.
