Campus Technology Insider Podcast December 2025
Episode: Making It Easy: How Agentic AI Enables Seamless Digital Experiences at USF
Host: Rhea Kelly, editor in chief, Campus Technology
Guest: Sidney Fernandes, CIO and VP of digital experiences, University of South Florida
Episode Overview
In this episode, Rhea Kelly and Sidney Fernandes discuss the impact of agentic AI in higher education, focusing on USF's initiatives to make technology and workflows user-friendly. Topics include the formation of the Bellini College of Artificial Intelligence, Cybersecurity and Computing, strategies for integrating AI to improve student and faculty experiences, and the importance of security, compliance, and data governance in AI deployment. In addition, they explore how USF leverages student ambassadors to scale AI literacy and solutions across its campuses.
Key Questions & Takeaways
What does "making IT easy" mean at the University of South Florida?
USF has shifted its IT strategy from managing systems to deliberately designing end-to-end digital experiences for students, faculty, and staff across the entire lifecycle — from pre-arrival through graduation and beyond.
Why did USF embrace AI early, and what principles guided that decision?
USF chose to be AI-forward, embracing generative AI while being transparent about its limitations, risks, and trade-offs, and focusing on education and safe use rather than restriction or enforcement-first approaches.
How is agentic AI changing USF's approach to digital services?
Agentic AI is being positioned as an assistive layer within existing workflows, helping people do their work more efficiently rather than replacing roles or operating outside established systems.
How is USF scaling AI adoption and enablement across campus?
USF uses a mix of broad training efforts and student-led enablement, pairing trained student ambassadors with departments to help scale AI literacy and practical use at low cost.
What lessons has USF learned about governance and risk with agentic AI?
Strong data governance, security, and compliance are critical, along with clear communication that agentic AI is probabilistic and fallible, requiring the right use cases and safeguards to maintain trust and credibility.
Topic Index
00:00 Introduction and Guest Welcome
00:30 Sidney Fernandes' Background and Role at USF
01:10 USF's Technology Strategy: Making It Easy
02:44 Digital Experiences at USF
05:33 AI Initiatives and Early Goals
13:32 Scaling AI with Student Ambassadors
23:44 Challenges and Lessons Learned
26:08 Future of AI at USF
29:19 Conclusion and Farewell
Transcript
Rhea Kelly 00:00
Hello and welcome to the Campus Technology Insider podcast. I'm Rhea Kelly, editor in chief of Campus Technology, and your host. And I'm here with Sidney Fernandes, CIO and VP of Digital Experiences at the University of South Florida, to talk about the impact of agentic AI in higher education. Sidney, welcome to the podcast!
Sidney Fernandes 00:27
Thank you, Rhea. It's great to be here, and thank you for having me.
Rhea Kelly 00:30
First off, just tell me a little bit about yourself, your background and your role at USF.
Sidney Fernandes 00:36
Sure, thank you. So I'm Sidney Fernandes. I'm the Chief Information Officer and Vice President for Digital Experiences here at the University of South Florida. USF, for those that don't know, is located in the Tampa Bay area. We have multiple campuses in Tampa, St. Pete, and Sarasota. We also have a medical school downtown. So we're fairly large, covering quite a bit of the Tampa Bay region. My role as CIO is overseeing technology across USF. We call it OneUSF. And what we've done recently is looked at, how can we make technology easy? So our moniker is "Make IT Easy." And everything we do in our strategic planning, or in our releases, is focused on making things easy for our clients. And we deliberately picked the word "client" because we wanted the folks that we interact with to think of us as someone they can consult with for solutions. We hated the word "users," "customers" wasn't something we were as thrilled about, and "clients" seemed to be the appropriate thing to say. So if I use the word clients as I talk about the folks that we serve, it's a deliberate decision. And just a little more about USF: we are a member of the AAU, very recently, one of the youngest members of the AAU, and we have a significant research portfolio, as well as, as I mentioned, the medical school. So lots going on at USF. We're in the middle of building a new stadium, and relevant to this podcast, we're going to have one of the first colleges dedicated to AI and cybersecurity, which is a named college. We will have the Bellini College of Cybersecurity and AI, which is brand new, and hopefully we'll have a building soon.
Rhea Kelly 02:32
I like the department of making it easy.
Sidney Fernandes 02:35
Making it easy, yeah.
Rhea Kelly 02:36
I've heard make, you know, having IT be the department of yes.
Sidney Fernandes 02:39
Yeah.
Rhea Kelly 02:40
But I haven't heard anyone say making it easy. Yeah. So what about the digital experiences part of your title? Like, what is that all about?
Sidney Fernandes 02:48
So one of the things we have thought about in terms of the CIO position is that, when you look at the CIO before, the CIO was really focused on what I'll call the blocking and tackling at a university. And as that position has evolved, particularly in the age of AI, but even before that, during COVID, it's become more about enabling the experience of the faculty, the students, and the staff at the university, versus just making sure things are working from a bits and bytes perspective. That experience goes across the entire life cycle of our clients. So a student's experience is not just, are the websites working, or is everything working in the classroom. It's the entire experience from the time they set foot on campus to the time they graduate. In fact, it begins way before they set foot on campus. And so what we want to focus on at USF is, what is their digital experience for the entire life cycle, and how can we make that as easy as possible? You're going to hear a lot about ease of use, and we've taken deliberate action in terms of understanding the client experience from an ease-of-use perspective. One of the things we asked our students, for example, is, what are some of the things that you find most difficult? And no surprises, and I'm sure most CIOs will agree, they said wireless. Because wireless is one of those things that new students, and we are focused on new students, struggle with. And so we looked at how we could make that easier. We also looked at the journey at a specific point in time. For example, the start of the semester is always a difficult time for our faculty, because there are new classrooms, their experience is not what they would consider optimal, and you can see it in our tickets.
And so we decided that we were going to focus on the first two weeks of class experience for our faculty, and how we could make it easy for them when they first came in. We then took several deliberate steps. Some were technical, where you can make the technology easier: we made sure that all of the classrooms had the exact same panel in them, and there was a help button they could click so a student would show up at the classroom in five minutes. But some were non-technical: we send faculty surveys in the summer before the start of the semester, asking, what are you planning to teach? Which classrooms? Do you need any special software? All of this is a very deliberate strategy to make things easier for our faculty. That's where this experience perspective comes in. It's not looking at them as individual projects, but broadly: what is the experience of a student? What is the experience of faculty? What does the experience of the teams that serve our faculty and students look like? So that's where it comes from.
Rhea Kelly 05:33
When and why did USF embark on your AI journey? I'm wondering if making things easy was a part of that.
Sidney Fernandes 05:43
One hundred percent. So it was interesting, right? And again, I don't think USF is unique, or my role is unique. The role of the Chief Information Officer changed dramatically post-COVID. I think that was a pivot point in which institutions found that the partnership in technology between the IT team, however they structure it, the academic side, and the business side had to be really, really tight. AI just exercised those muscles. In fact, the first presentation I gave about AI, I think, was in February, soon after ChatGPT came out, right? And I think December was when the first ChatGPT came out. I was invited to the cabinet meeting, and they said, tell us about it. And I said, well, think of this as just as disruptive as COVID, just not immediate. It's going to be several cycles of disruption, but it's going to be bigger, because COVID ended, and this will not end. And so when the president heard that, she said, we need to form a generative AI strategy group. So I partnered with the provost, and what he said at the time was, Sidney, let's not create this large group. Let's create a group of geeks and collaborators, he used those words, and we can quickly react. And so in August 2023 we formed this gen AI strategy group that had a lot of faculty in there that were really interested. We had some that were interested in the ethics. There were some that were interested in just the technology, and then there were some that were interested in the applicability in the classroom. And the head of the Faculty Senate at the time was very interested; she was an educator. And so that's when we got a group together and said, okay, what do we need to do really, really fast?
And what we did really, really fast is create this website called genai.usf.edu, and that was just a vehicle for us to put the information out there, as we saw it, in terms of guiding our faculty on how they could use generative AI in the classroom, while at the same time balancing it out with very real fears that they had around plagiarism and cheating and students using AI for the wrong purposes. So we tried to balance that out and really put tools in front of the faculty and our teams that could essentially ease their lives overall, and also maybe help our students learn better. So that was the ethos around how we began.
Rhea Kelly 08:22
I love the group of geeks. I think that needs to be on a t-shirt. How did you define your early goals with AI? It sounds like one of your early priorities was just getting information out there, but how did you figure out where you wanted to go?
Sidney Fernandes 08:41
So there were two things that we looked at. One was what our stance on AI was going to be university-wide, and the other was what toolsets and enablers we were going to provide to the teams. And the outcome, the early guidance from the provost and president, was that we were going to be AI-forward. What that meant is we were going to embrace it, but at the same time, embracing it meant also embracing the warts that came with the early development of AI. So we were going to make sure we were listening to the people who were concerned about it, because our early feeling was that we should not get into, you know, a gunfight, basically, in terms of saying we're going to have plagiarism detectors, we're going to have all of these tools that were going to stifle AI. Rather, we wanted to make sure that the community was educated on the fact that AI can hallucinate, on the fact that AI is probabilistic, and on the fact that if you used the tools within our tenant, for example, you could use AI in a way that was safe and secure. Not to say that you couldn't use the commercial offerings outside. But one of the early things we said is, if you're putting your data in a commercially available bot, then assume you're posting on social media. That made those easy connections for people, whereas if you were using our secure AI that was in house within USF, you were going to be more secure. But we also wanted people to understand the trade-offs: with security and all of these checks and balances we had put in, it may not be as capable as commercial offerings like ChatGPT or Gemini or even Microsoft's Copilots. So we tried to make sure we balanced that out.
And the other decision that we made was that we would cover the entire spectrum of AI at USF. The way we looked at generative AI overall was: there was going to be AI within commercially available applications, right? Things like Copilot that would evolve, because in those early days we were just starting with Microsoft Copilot. There was going to be AI embedded in other applications used every day. Then there would be ways to extend generative AI models, and in some cases, we might build our own. And so at USF, we said, we're not going to just pick one horse and focus all of our attention there. We would focus most of our attention on commercially available products, but we also wanted to make space for faculty members that wanted to either extend models or build new models, for them to be able to do that. So what we did early on was say, we're embracing all of these, but everyone needs to understand the cost of each. For operations, we said, focus on commercially available. For research, we're going to give you infrastructure that will allow you to build your own models or extend models that you have. So that was the strategy we had early on.
Rhea Kelly 11:57
Has your approach changed as the technology has changed, particularly with the advent of agentic AI?
Sidney Fernandes 12:05
I think the approach has evolved. If you look at our original intent, which was, how can we make things easier for faculty, staff, and students, we've been laser-focused on that. What has changed now is, as we've moved into this agentic space, we have even more opportunities to make things easy, but it also comes with understanding that these agents should not be looked at as replacements for the people doing the work. Rather, they should be looked at as assisting with the work, within the flow of work. And I use "within the flow of work" very specifically, because what we've been in the process of doing is explaining to people that the agents they are being provided, whether within the ecosystem or ones they build themselves, should be used within their flow of work, not instead of it. And I think that has been something we've really focused on. What has also changed, because the technology has evolved so rapidly, is our approach to enablement of our clients. We've now started to realize that scaling this type of AI and agentic adoption requires really deliberate thought around the enablement process. I'll give an example. We recently began to use students. We have this partnership with Microsoft, where they train student ambassadors at USF. It's kind of like a matchmaking situation, where different departments say, I would like help with using the agents in Microsoft, as well as the overall Microsoft toolsets and the Copilots. And then we have students who tell us they're really interested in learning more about it. So the students go through rigorous training on how to use it, including understanding the ethics of AI and how it can hallucinate.
And then they are paired up with offices that have requested these students, and those students work with these offices and create solutions for them, whether it's using agents or just showing them how to more effectively use Copilot. And that program has been super successful, so much so that these alumni of the program decided to stay with us for the next semester and mentor the new incoming students. So we now, at very low cost, have figured out a way to scale the enablement of our faculty, our students, and our staff, right? One example where students were enabling students: we had one of these student ambassadors train our first-year medical students on how they could get the most out of AI during medical school. So if you think about scale and how you enable it, this was a program we put together that we are seeing a lot of benefits from, and now other schools are actually talking to us about how they can do something similar, because it's actually not that high of an investment to get off the ground. So that's been a change in approach; thinking about scaling is really top of mind.
Rhea Kelly 15:18
Yeah, that's funny. I was just going to ask, you know, what the impact is on IT when all of a sudden you're having to set up more AI agents, you know, for the …. So the answer is students. Get students to do it.
Sidney Fernandes 15:34
The answer is students. And the other thing, I think, that's really important is that you shouldn't look at AI as a new program in IT. That, I think, is a strategic mistake. If you say, okay, we're going to have a new AI group that's going to be creating AI solutions, there are two problems with that. One, AI now exists in almost every solution and every team working within IT and outside, and so everyone needs to be AI literate and understand that agentic AI can help them in their day-to-day work. And the second is, you don't want to create a class system where you have one group that's the legacy technology group, and another group that's doing this cool new AI work and looking to replace some of the legacy because, quote, "it was so slow." Instead, if the folks that are working on low-code applications, the folks that are working on the network, the folks that are working on service desk solutions all understand the basics of agentic AI, then they can add agentic solutions within their flow of work. And they already have. We have now asked all of our IT employees to take courses that we prescribed for them that make them literate in everything to do with generative AI, and then prove to us that they're using it in their day-to-day activities. And everyone's doing that, across the board, from the folks on the help desk to the high-end folks doing work like creating RAG databases, etc. We said everyone has to do it. And that has really helped with people coming up with AI solutions where we were, like, surprised, right? Because they now understand that they can use it in their flow of work. The other thing that's happening, quite frankly, is the big box sellers of software now have agentic AI built into their solutions. We use Oracle; Oracle has agentic AI built in. We use Atlassian for our help desk; there's an agent there that you can configure.
We use Appian for our low-code development; there are agents there. And of course, we use Microsoft because we're a big Microsoft shop, and Microsoft has a lot of agents built into their Copilot platform, and we heavily utilize their Azure AI Foundry system. So it's everywhere. Saying that only a few IT groups should be doing it, I think, is a mistake. Instead, we just provide solutions across the board and really encourage our teams to use it whenever possible.
Rhea Kelly 18:10
How do you encourage users or clients to think outside the box about how AI might help provide solutions for them? Because that's kind of a roadblock for me: not realizing what I could be doing with AI.
Sidney Fernandes 18:28
This is where our students have been so helpful. We do the usual things that I'm sure a lot of universities are doing; I'm pretty sure we're not unique. So we have affinity groups, like this affinity group for Copilot users, this Coffee and Copilot that we've now been doing for almost two years. We have seminars and webinars that we provide to our teams. And for those webinars, we partnered with Microsoft, where they send their experts to do a webinar on all the toolsets that are available, and they do the art of the possible: here are some of the things that you can do. All of those I call wholesale efforts, okay? It's trying to reach a lot of people at once. We have found that the retail efforts, using these students, have been just as effective. So we use the wholesale effort to pitch the idea, right? And people start thinking about it. And then the Coffee and Copilots are a little more retail, because then you have people sharing their own stories of how they can do better. And then we send these students out into the community, essentially, to show people how they can multiply their own everyday productivity. Like, we had somebody doing inventory that had to combine three spreadsheets. They were able to do it with Copilot, and they sent us a note that said, oh, what used to take hours and hours now takes minutes. We show the administrators in our executive offices how they can use the Copilot within Teams to summarize, make meeting notes, all of those things, and it changes their lives.
We had students show our UCM group how, once they created an article, they could use Copilot not to create these articles, but to create formats suitable for many different media postings from a single piece of copy that somebody had written, and accelerate their ability to go across various platforms. And they created an agent for that. So the big thing at universities is making sure that the community understands that IT is there to enable and support, versus saying we're here to tell you how to do things. And who better to understand the pain that people feel every day than students who say, okay, for a month, we're just going to observe your pain and do things about it. So we've done that. That's what I would call the retail level. And then we have also gone and tried to solve big problems, in the sense that we've gone to shared services offices like Finance, where we created travel bots. We went to Athletics and created bots for them, for the students, when they have questions about athletics. Similarly for Student Health Services. And when we do that, we use a we'll-do-it-for-you, we'll-work-with-you, or we'll-teach-you-how-to-do-it kind of process. So that helps with the scaling as well. We have a very robust client technologist program where our goal is to enable client technology. We don't call them shadow IT. We prefer to refer to them as client technologists, because if they partner with us, they can really use the technology platforms that we have to accelerate their own transformation. So that's how we've looked at it and tried to scale and have people use it more.
Rhea Kelly 21:46
I have never heard anyone embrace shadow IT in that way.
Sidney Fernandes 21:50
It is really important. So this all started, actually, before AI. What we felt at USF was, we're fairly centralized, right? But fairly centralized doesn't mean there aren't technology groups doing things particularly for the business. And what we found is that it worked if we embraced them and provided them with the platforms. We started with our analytics. We specifically decided that we were not going to be in the business of creating dashboards and reports anymore. Instead, we would be in the business of creating data sets and enabling people to use those data sets. We actually encouraged departments to hire people that could write reports in Power BI and use our data sets. And once people got into that flow, they suddenly started realizing, oh, I can now do things much, much faster, and IT is not the bottleneck. And we could really focus on creating good data sets. We used that same approach for solution development, where we have an enablement team that enables you to create some of your Power Apps applications and put them out there, as long as they don't impact large numbers of people. And it's the same approach with agentic AI, right? You provide people with platforms that are safe, and we spend most of our time on the safety and compliance and the widget creation work, so that folks outside of IT can, you know, build solutions themselves. We're also now starting to think about whether we can create public-facing APIs for folks to do even more sophisticated things with agents. Those APIs would sit within our API management module, so we still get observability, we still get the security, but people are able to create. So that's the approach to embracing this new world where things are changing very fast.
Rhea Kelly 23:44
Have there been any big challenges that you have some lessons learned from that you could share?
Sidney Fernandes 23:51
The big lesson learned, I think, is: don't underestimate security, compliance, and safety. We have spent a fair amount of time talking about that. One part is cybersecurity, but the other is the security of your data. Your AI is only as good as the data that you're feeding it, so you have to make sure that your clients understand that the data going into the AI is just as critical. So this whole aspect I talked about before, of having data sets, is even more important. You then have to layer on products that can make sure that the data being used is the right data that somebody has access to, right? So data loss prevention is a big deal when you're talking about AI. You want to make sure you have platforms that assess that. A good AI governance strategy is critical, and it's actually very akin to a good data governance strategy, because the two are very close, and making sure that you have your data labeled through a good data governance approach, I think, is critical. And then the other thing I would advise people is to make sure you embrace the fact that AI is fallible. Acknowledging the hallucinations and the fallibility of AI, especially for the proponents of AI, is actually even more important than saying it's going to solve all of your problems, because you lose credibility very quickly when you pretend AI can do something that it can't. There's a difference between creating a solution using an agentic framework and creating a solution using a regular programming framework. The programming framework is deterministic: you know that if this happens, then that will happen. An agentic framework is probabilistic. While it's powerful, making sure folks understand that difference, and making sure that you're picking the right solution for the right job, is something that everyone has to critically understand.
And you can mitigate a lot of that by, you know, creating purpose-built small language model agents and things like that, but you have to take that mindset when you're creating those solutions.
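[Editor's note: The deterministic-versus-probabilistic distinction Fernandes draws can be sketched in code. The following Python sketch is illustrative only, not USF's implementation; the `mock_agent` stand-in, the category names, and the validate-then-fall-back pattern are assumptions for demonstration.]

```python
import random

ALLOWED_CATEGORIES = {"billing", "enrollment", "it_support"}

def route_ticket_deterministic(subject: str) -> str:
    """Deterministic routing: the same input always yields the same output."""
    if "invoice" in subject.lower():
        return "billing"
    if "register" in subject.lower():
        return "enrollment"
    return "it_support"

def mock_agent(subject: str) -> str:
    """Stand-in for an LLM-based agent. Its output is probabilistic, so it
    may occasionally return something outside the allowed category set."""
    guesses = ["billing", "enrollment", "it_support", "I think this is billing?"]
    return random.choice(guesses)

def route_ticket_agentic(subject: str, retries: int = 3) -> str:
    """Guardrail pattern: validate the probabilistic output against the
    allowed set, and fall back to the deterministic rule if the agent
    never produces a valid category within the retry budget."""
    for _ in range(retries):
        answer = mock_agent(subject)
        if answer in ALLOWED_CATEGORIES:
            return answer
    return route_ticket_deterministic(subject)

print(route_ticket_deterministic("Invoice question"))  # always "billing"
print(route_ticket_agentic("Invoice question"))        # always a valid category
```

The point of the sketch is the one Fernandes makes: the agentic path can be powerful, but because it is probabilistic, it needs validation and a fallback before you can trust it in a workflow.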
Rhea Kelly 26:08
So what's on the horizon, you know, in terms of AI at USF?
Sidney Fernandes 26:13
So I want to mention this. USF, from an academic perspective, is embracing AI and cybersecurity through the creation of this new Bellini College of Cybersecurity and AI, which is already up and running. And this college is really unique, because it's going to be a hub and spoke model, where this college will be the academic hub for all of the other colleges to embrace AI. So that's the next big thing, which is already happening at USF, and we hope that it's going to really push how USF embraces AI from an academic standpoint, whether it's research or teaching. That's going to be a big center of gravity. From an IT perspective, our goal and vision is, how can we make the client experience as easy as possible as USF goes for these big things, right? Can we build purpose-built agents? Can we partner with our clients to say, here are some of the things you don't have to do now, because agents can take care of them for you? That's where our focus will be. And then the other big focus is, in the solutions that we have right now, whether it's the ERP or our low-code platforms, how can we accelerate speed to value by using agents to reduce the amount of time it takes for us to deliver solutions? The big thing is not to look at specific projects, so to speak, but to look at speed to value from a delivery perspective, and the overall experience becoming easier for the clients. That's what we'll be focused on. If you want specific examples, our plan is really to focus much more on agentic frameworks within our low-code platforms.
We're going to push that real hard, because we can now orchestrate agents using these platforms, and then provide solutions where end users can have specific agents that are chained together and start to talk to each other, so that they are aware of each other, right? We've created standalone agents. Now, can we make them aware of each other, so we can have orchestration across multiple agents? That's really what we're looking forward to doing in the next year or two.
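[Editor's note: The chaining Fernandes describes, where agents become "aware of each other" through orchestration, can be illustrated with a minimal sketch. This is not USF's architecture; the agent names and the shared-context design are hypothetical, and plain functions stand in for LLM-backed agents so the example stays runnable.]

```python
from typing import Callable

# Each "agent" is a function from the shared context to an update.
# In a real low-code or agent platform, these would be LLM-backed agents;
# plain functions keep the sketch self-contained.
Agent = Callable[[dict], dict]

def summarizer_agent(ctx: dict) -> dict:
    # Stand-in for an LLM summarization step.
    return {"summary": ctx["request"][:40]}

def triage_agent(ctx: dict) -> dict:
    # Reads the previous agent's output from the shared context,
    # which is what makes the agents "aware" of each other.
    topic = "billing" if "invoice" in ctx["summary"].lower() else "general"
    return {"topic": topic}

def responder_agent(ctx: dict) -> dict:
    return {"reply": f"Routed to {ctx['topic']} team."}

def orchestrate(agents: list[Agent], request: str) -> dict:
    """Chain agents sequentially: each one sees the accumulated context,
    so later agents can build on what earlier agents produced."""
    ctx = {"request": request}
    for agent in agents:
        ctx.update(agent(ctx))
    return ctx

result = orchestrate([summarizer_agent, triage_agent, responder_agent],
                     "Question about my invoice for fall tuition")
print(result["reply"])  # Routed to billing team.
```

The design choice worth noticing is the shared context dictionary: rather than each agent calling the next directly, an orchestrator passes accumulated state through the chain, which is the basic shape of the multi-agent orchestration Fernandes anticipates.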
Rhea Kelly 28:28
It sounds like exciting, exciting stuff. That new college should provide you a nice pipeline of student workers as well.
Sidney Fernandes 28:35
Right, exactly. And I think that's what's exciting for us at a university, right? We have students that are way smarter than us when they come in, and way smarter about using technology. And having them advise us, in some cases, on what we should do next has been critical. That's one of the things we've also started thinking about: should we have a strategy group of students that can help us with how AI can be used at the university? And they wouldn't be just advisory. They would actually be advising on strategy, because they come up with new ideas. The Student Ambassador Program has taught us so much, to be honest.
Rhea Kelly 29:12
I think that those students would love the geek t-shirts.
Sidney Fernandes 29:16
Yeah, we should do that.
Rhea Kelly 29:19
All right. Well, thank you so much for coming on.
Sidney Fernandes 29:21
Thank you. It was a pleasure talking.
Rhea Kelly 29:27
Thank you for joining us. I'm Rhea Kelly, and this was the Campus Technology Insider podcast. You can find us on the major podcast platforms or visit us online at campustechnology.com/podcast. Let us know what you think of this episode and what you'd like to hear in the future. Until next time.