Who's on First: Defining Institutional Roles in the Age of AI

Higher education is no stranger to disruptive technologies, and artificial intelligence is the latest to transform teaching, learning, and research. In order to thrive in this new age, institutions must plan, collaborate, and communicate an AI strategy for stakeholders across the campus.

The artificial intelligence powering OpenAI's now-famous ChatGPT natural language processing tool is not the first technology to reshape higher education, and, if we're being honest, it won't be the last. Most recently, digital transformation evoked its own brand of trepidation and chaos. Ultimately, however, digitalization gave rise to innovative student-centered distance learning practices, creative modalities for demonstrating mastery, and progressive hybrid instructional models. These new technologies have forever transformed the way institutions teach, research, and prepare students for life beyond education. Technology is disruptive, yet institutions survive — even thrive. Why? Because we are relators, critical thinkers, communicators, and eternal optimists. These are precisely the superpowers we must call forth now, in this AI age.

GPT Has Entered the Chat

While the disruptive magnitude of artificial intelligence in higher education cannot be overstated, neither can it be avoided. Artificial intelligence, or "AI," describes a computer's ability to mimic human thinking by drawing insightful conclusions from massive data sets. Generative AI is a type of artificial intelligence used to create unique content, like responsive essays, programming code, musical compositions, or artistic images — the same products instructors often ask students to generate as evidence of learning. Fortunately, higher education has a knack for reimagining itself in the face of breathtaking change. From 2020 forward, digital transformation swept through every facet of higher education. Slowly but surely, faculty and students mastered distance learning models. Then, leaning into the disruption, they capitalized on it by embracing innovative methods of instruction, learning, and collaboration. We are, after all, in the business of transformation, and while global change starts with education, educational leadership begins with a plan. To cultivate an environment in which effective teaching, authentic learning, and visionary research remain priorities, institutions should consider (1) identifying key stakeholders affected by AI and their respective roles and responsibilities, (2) emphasizing strategic collaboration, and (3) working together to disseminate cohesive messaging related to AI technologies.

Plan Diversely

Because the ramifications of AI are so widespread, it's challenging to determine which segments of an institution should contribute to drafting cohesive university policy, formulating strategic plans, and identifying communication goals. The departments and divisions involved in these decisions may vary from institution to institution; however, key stakeholders consistently represent specific areas of expertise. Institutional administration, faculty leadership, academic affairs, university communications, student government, and information technology services can each play a significant role in supporting the university community. With so many divisions, departments, and teams involved, defining roles and responsibilities is paramount to effective strategic collaboration. Dynamic partnerships between the professionals who guide curriculum and instruction, oversee academic integrity standards, communicate on behalf of the university, and supply information technology services can lead to the creation of trusted resources and foster stability for instructional faculty, academic staff, and students. 

Collaborate Inclusively

With key stakeholder roles and responsibilities established, collaboration becomes the next priority. Strategic approaches to expanding AI technologies may range from monitoring new technological developments and researching innovative teaching practices to drafting syllabus statements and revising academic integrity policies. Successful collaboration requires a shared understanding of each group's capabilities and limitations. Information technology services, for instance, may not be in the best position to provide instructional guidance, but a diversely experienced IT team can offer holistic approaches and help address some of the academic challenges that generative AI poses for designing authentic learning experiences. A common vision for acceptable uses of AI technologies provides students with a greater level of consistency, and the emphasis remains on authentic learning rather than on the inevitable cat-and-mouse game of generative AI detection and evasion. Collaborative strategic planning also leads to consistent, cohesive, and reliable messaging.

Communicate Cohesively  

The sheer volume of digital information published about AI technology in higher education is staggering. From articles and discussion boards to videos and online courses, predictions about the current and future educational impacts of AI and its progeny abound. For faculty, support staff, and students involved in the day-to-day business of teaching and learning, discerning fact from fiction in online sources can feel like an insurmountable task. Nevertheless, it is natural to seek out a foundational understanding of artificial intelligence, explore reasonable predictions about its next iterations, and draw one's own conclusions. A cohesive communication strategy, formulated to further institutional goals, therefore includes a curated list of industry-standard resources and frequently asked questions. With these references, all stakeholders have access to reliable information, reinforcing the institution's commitment to learning through original thought and critical analysis. Unified and coordinated messaging supports clarity and negates confusion. Collaborative decision-making about the tone, content, objectives, and timelines for all messaging sustains common understandings and expectations across the institution.

Transform Together 

It makes sense for higher education to provide the center stage on which this very visible struggle between human intelligence and artificial intelligence will play out. Our institutions have always served as microcosms of change. Like the disruptions spawned by graphing calculators, the internet, spell checkers, open content sources, and distance learning, generative AI is the latest in a long line of growth opportunities for higher education. Fortunately for us, change is part of who we are, and embracing transformation is what we do.  

About the Author

Dr. Kimberly Conner is a publications writer for Texas State University’s Division of Information Technology. Conner reports on how the IT Division’s initiatives both support the success of university stakeholders and advance innovation in higher education.
