Creating Guidelines for the Use of Gen AI Across Campus

The University of Kentucky has taken a transdisciplinary approach to developing guidelines and recommendations around generative AI, incorporating input from stakeholders across all areas of the institution. Here, the director of UK's Center for the Enhancement of Learning and Teaching breaks down the structure and thinking behind that process.

Last year, the University of Kentucky announced the formation of a new task force to study generative AI and make recommendations for its responsible use. Dubbed UK ADVANCE (Advancing Data utilization for Value in Academia for National and Campuswide Excellence), the committee brings together experts from all over campus to provide ongoing guidance on use of the technology in teaching and learning, research, and more. The group has published guidelines for faculty and researchers, with plans to update the recommendations as the technology evolves. We sat down with Trey Conatser, director of the Center for the Enhancement of Learning and Teaching and co-chair of UK ADVANCE, to find out more.

Campus Technology: How did the UK ADVANCE committee come about?

Trey Conatser: When ChatGPT was first publicly released in November 2022, one of the things we at the Center for the Enhancement of Learning and Teaching noticed over that winter break was a sudden rise in chatter about this new technology and what it might mean for teaching and learning. Some of those early concerns were of course around academic integrity, but we also saw in it something that could profoundly change the behaviors of writing and learning. So in January 2023 we started having town halls and workshops and different kinds of trainings around generative AI in the classroom space. We were thinking about it broadly, across all the different areas of study, the professions and disciplines, from the more liberal arts side of education to the professional schools and to the STEM classes, because this is a phenomenon that manifests in many different ways across our work life, our education life, even our personal life.

The president asked our provost to form a university-level task force to address the rise of generative AI, because it was clear that this wasn't going to be one of those passing fads or trends or flashes in the pan. The idea behind the task force was that it would be transdisciplinary in nature. We involved all stakeholders from across campus — including students, staff, faculty, and administrators from a wide range of areas of expertise — to address the problem at hand. We have folks from informatics and computer science but also philosophy, communication, writing, and leadership studies, as well as representation from IT, Legal, PR and Communications — all these areas that are touched by AI.

The charge was to make recommendations and provide guidance and advice around what we should be doing as an institution with respect to this technology, knowing that there would be some rapid developments that we might have to respond to with a sense of alacrity. Something new comes out, and then all of a sudden, we might have to rethink things and respond to those moments as well. There were lots of conversations around navigating the difficulties of making global statements at an institution as big and diverse as UK — you can't have a simple rule about generative AI that equitably serves all areas of the university. So we were navigating this need for flexibility, but also the need to give some concrete and actionable guidance around the technology.

That resulted in a set of instructional guidelines that we released in August of 2023 and updated in December of 2023. We're also looking at guidelines for researchers at UK, and we're currently in the process of working with our colleagues in the healthcare enterprise, UK Healthcare, to comb through the additional complexities of this technology in clinical care and to offer guidance and recommendations around those issues.

CT: How did you drill down to exactly who needed to be on the task force?

Conatser: That's a good question, because we're a big institution. There are a couple of layers to it. Number one is working with the leadership across the university in a way that embraces our shared governance. We worked with our faculty senate to make sure that we had leadership from that body represented on UK ADVANCE; we worked with our leadership in the colleges to identify the experts on this, whether they're a computer scientist who does work in large language models in our College of Engineering, or somebody who works in digital writing studies who thinks about automated writing and electronic environments. We also combed through our own database of publications and research through our research office to make sure that we have a full sense of who's doing work that touches artificial intelligence. And when we first launched the ADVANCE team, we very publicly shared an e-mail address so we could be in dialogue with folks who would write in and say, "I think so-and-so would be a great addition to the team," or "I'm actually really interested in being a part of this." As a team, we would discuss every e-mail from anyone who contacted us and say, "We think this person would really represent a critical area of stakeholders at the university that maybe isn't on the team yet."

The size of the ADVANCE team reflects this inclusivity: We are unusually large for a university-level task force, with over 30 people associated with the team. And that means our meetings are really interesting, because we're getting input from people with a lot of different experiences. Our meetings happen every two weeks over Zoom, and the spirit of the group is that it's an open forum. We have certain agenda items, but we always have time for people to bring up issues that we might need to think about, whether it's resources or training or new developments in AI.

CT: How does the work get done in such a large group? What is the structure like?

Conatser: Depending on the project, we'll identify a small number of people to serve as the leads. That tends to fall along the lines of both academic and institutional expertise. For example, for writing the instructional guidelines, I was one of the primaries on that one because I direct our teaching and learning center. For the research guidelines, I worked with the executive director of our Office of Research Integrity, because that's her purview here at UK. And we're currently working with a couple of the chief information officers in our UK Healthcare system to develop the clinical care guidelines.

The leads will take ideas and input as we iterate on the drafts of these guidelines, and will keep circulating new drafts among the whole team, get more feedback in that loop, and then iterate some more. Once we get to a point as a team where we're comfortable shopping it around, we'll send a draft of the guidelines to different areas. We don't publish it yet — we want to make sure that all our stakeholders have a chance to give input. So, for example, we'll send it to our college leadership, to our faculty senate, to our Legal office; we'll send it anywhere that would have some useful feedback. We want to be transparent and inclusive, so it's not a top-down kind of thing, but rather the product of a large, representative, transdisciplinary body that's gone through a lot of iterations already.

CT: How did you break down the key areas that the guidelines should cover?

Conatser: One of the first conversations that we had as the ADVANCE team, and one that we have on a regular basis, is that these guidelines are specifically addressing generative AI. Whenever we start sliding into other language like AI and machine learning, we start to talk about a broader category of technologies that researchers, healthcare systems, healthcare providers, etc. have been using for a long time to do the work that they do. Trying to come in and write guidelines around that becomes a very different scenario. So we've been very specific that ADVANCE is focusing on generative AI guidance, and we are careful to define what that is in our guidelines so that everyone knows exactly what we're talking about.

Generative AI is manifesting in lots of different ways; the tools proliferate, and it seems that the issues proliferate as well. One of the ways that we have enumerated the areas that we need to address is to get stakeholder input. For the research guidelines, for example, we drew heavily from the questions that office was already receiving from campus stakeholders. For the instructional guidelines, we had roughly five or six months' worth of interactions with faculty that our teaching center had done already, in holding workshops or one-on-one consultations or just communicating about the technology. We drew from those to determine the most urgent questions and needs that our faculty were articulating. And we broke those down into three areas of guidance. First, course policies for generative AI. Second, what faculty need to know about the technology, because it's a firehose — there's a lot you could learn. How can we filter all of that to the critical things that faculty need to know around concerns and opportunities and what it can do for teaching and learning? Third, how to incorporate it into the curriculum with assignments. Whether you want those assignments to encourage AI or discourage AI, how do we design them in a way that's student-centered, that's still good practice in terms of the learning that's supposed to happen in the class? What kinds of principles should we follow there? So those are the three main areas: course policies, things to be aware of, and how to incorporate it into our classes.

CT: So by listening to the questions that faculty and researchers have, they're telling you what guidelines they need, and that gives you a great place to start. But is it possible to develop guidelines fast enough to keep up with changes in the technology?

Conatser: It does change pretty quickly, doesn't it? No matter what a given set of guidelines is about, we write in that we plan to continuously review the state of the field and make adjustments as necessary. The guidelines represent our best understanding at the time, and we are committed to refreshing them on a regular basis.

For the instructional guidelines, our refresh cycle has been per semester. We look at all of our recommendations and the statements that we're making about generative AI, compare them against newer or more emergent understanding, experience, and research, and ask: Does this still reflect our understanding? Is this still accurate? Do we still think this is a good recommendation? Between the August and December guidelines, we didn't do an about-face on any of the recommendations, but we did find a need to clarify or review the research on a few things. For example, our initial recommendation on AI detectors was that we don't think they're very reliable. We wanted to look at the newest research about them and see: Are they any better now? Is there better research now that might complicate our previous statements? We ended up finding research that led us to use even firmer language around detectors the second time around.

One thing that I think is unique about our recommendations is how evidence-based our work has been. We're being transparent about our references for the guidelines and what's informed our decision-making. For this to be a persuasive endeavor, it needs to be as rigorous as any kind of scholarly activity that you would expect at a university like UK.

CT: How do you balance creating guidelines that are comprehensive yet concise enough that people will actually read them?

Conatser: We've kept the guidelines relatively short, while making sure to provide enough elaboration that they have context and actionable specificity. But what we've also learned is that the guidelines represent one step among many at our institution. Rather than just saying, "Here are the guidelines, go forth and use them," it's important to have an institutional mechanism to socialize those guidelines in different areas of the university. For the instructional guidelines, for example, that's become part of the culture of our teaching center's workshops, consultations, and programming with faculty. When you have groups that can take the guidelines and use them actively in whatever kinds of trainings or professional activities they do, that keeps socializing them and making them accessible to faculty.

Because generative AI affects each person differently, and each field, discipline, and area of study differently, we've found we need to be in dialogue with people. The guidelines are meant to be used by people from a wide range of areas, and then, in dialogue, we can work with those people. We can talk about what constitutes a good policy for generative AI in their specific course. How does that manifest for you? Let's talk about your area of study. What does learning look like in your class? What are the goals that you have for student learning? How do we set you up to succeed in a world where this technology now exists?

CT: What's your approach to faculty training on generative AI?

Conatser: Our teaching center enjoys a unique level of connection and collaboration across our entire campus. There's not a single college or unit that we don't work with here at UK, despite the large size of our university. And our teaching center is a voluntary unit: It's not mandatory for anyone to work with us, and that's a critical part of our success, our citizenship at the university, and our ability to be colleagues with faculty, both intellectually and organizationally. We make sure to be clear with people that our teaching center and the ADVANCE team are ongoing spaces where they can find community, advice, assistance, and camaraderie.

Our trainings on generative AI have by far been our most well-attended offerings over the last year. We've done in-person trainings and we've done online sessions. We do some sessions that are more of an introduction to generative AI: What is it? How does it work? What does this mean for higher education? We'll have sessions that focus specifically on writing assignments and generative AI, or sessions that focus on assignments that aren't writing-based. We'll have sessions that focus more on course policy and academic honesty. And then we'll have other formats: We've had play sessions, for example, where the objective is not to learn a great deal of conceptual information, but rather to play around with the technology with some guided help and start to get more of a sense of efficacy at using the tools. We've found that once people use the tools and gain first-hand knowledge of how they work and what they can do, they feel a lot more comfortable addressing with students what these technologies are and what usage is going to be appropriate for that course.

One of our campus events that involves students, community members, etc., is called the Curiosity Fair. There are a bunch of different stations on interesting things in different disciplines, just to get people enthusiastic about learning — and we had a station on generative AI. We had different computers set up with some big monitors, and we had an activity for students, faculty, staff, and some community members to play with image-based generative AI. We started with one image, and across the four hours of the event, the point was to iterate upon that image as much as possible to make it the best possible image by the end of the night. People would look at the image, type in a prompt to try to make it better, and reflect on the output. What changed and why? Was it surprising?

This got into some really deep conversations about prompting and the kinds of data that the generator was trained on. And it got at that idea of developing critical AI literacies. Regardless of what discipline you're teaching in and regardless of what your course policy is around generative AI, the overriding goal for students, and for all of us really, is the development of critical AI literacies. In other words, an increased understanding of how the technology works, but also how it's being deployed, who has developed it, and what the issues, challenges, or concerns are. How does that impact our sense of this technology and the way that we use it? Can responsible use mitigate those risks or not? The development of those critical AI literacies is the undercurrent of all our trainings, workshops, documents, and recommendations, because generative AI is akin to the rise of the internet in the '90s: It's a new technology that disrupts our notion of what knowledge is, where it is, and how we develop it. And our overarching goal, particularly as an institution of higher education, is to home in on how we can build those skills around critical literacies and uses, so that we can adapt ourselves over time as the technology changes — and still be capable of engaging with it as lifelong learners in ways that are responsible, appropriate, and effective.
