4 AI Imperatives for Higher Education in 2024

How will artificial intelligence impact colleges and universities this year? We asked AI and higher education leaders for their predictions and thoughts on the most important issues to consider as the technology evolves and adoption expands. Here's what they told us.

1) Responsible AI Will Be Critical as Complex Issues Persist

In 2024, AI in education will continue evolving with new architectures and transformer models like GPT-5 and Gemini 2 leading the way. However, the complexity of developing robust GenAI solutions might slow the adoption of open source models in ed tech. Institutions will navigate between adopting AI assistants within existing applications, leveraging GUI-based low-code platforms, utilizing API connections to proprietary models, and building custom stacks for enhanced privacy.

The most crucial considerations for education institutions will be the accessibility and ethical implementation of these technologies. Adoption will vary, with AI assistants and low-code platforms likely gaining traction for their ease of integration and user-friendliness.

However, the need for data privacy and custom solutions might drive some toward locally hosted models or API-based connections to sophisticated services. Regardless of the approach, integrating responsible AI practices — like improving source attribution, debiasing datasets, and ensuring privacy — will be vital. These measures are not just ethical imperatives but also crucial for maintaining trust and efficacy in educational environments.

As we move into 2024, education institutions must weigh the promise of AI against these practical and ethical considerations, ensuring that the technology they adopt not only enhances learning but also aligns with the core values of education.

— Noble Ackerson, CTO, American Board of Design and Research

Higher education will continue to engage some very complex and unresolved questions that generative AI raises. There are legal questions pertaining to intellectual property, not only in terms of how generative AI models were trained but also, and perhaps more importantly for research and other creative activities, how the use of generative AI may impact the ownership of the products of our work. It will also be paramount for educational institutions to think deeply about the issues around both bias and inaccuracies that emerge from how generative AI's models are developed and how they work to produce output. In all of our disciplines and professions, we are still coming to terms with what responsible use of generative AI is, especially in ways that empower people as decision-making agents.

— Trey Conatser, Ph.D., director, Center for the Enhancement of Learning and Teaching, University of Kentucky

2) Institutions Must Take AI Skills Training Seriously

There's a crying need for faculty and staff professional development about generative AI. The topic is complicated and fast moving. Already the people I know who are seriously offering such support are massively overscheduled. Digital materials are popular. Books are lagging but will gradually surface. I hope we see more academics lead more professional development offerings.

For an academic institution to take emerging AI seriously it might have to set up a new body. Present organizational nodes are not necessarily a good fit. For example, a computer science department can be of great help in explaining the technology, but might not have a lot of experience in supporting non-CS teaching with AI. Campus IT will probably be overwhelmed already, and might not have the academic clout needed to win the attention of some faculty and staff. Perhaps a committee or team is a better idea, with members drawn from a heterogeneous mix of the community. Not to be too alarmist, but we might learn from how some institutions set up emergency committees to handle COVID in 2020, bringing together diverse subject matter experts, stakeholders, and operational leaders. If a campus population comes to see AI as a serious threat, this might be a useful model.

This is the heroic age of generative AI, as it were, with major developments under way and many changes happening quickly. Things will settle down in a bit, most likely, as new technologies become production-level services and as the big money and governments start corralling AI for their ends, at least until the next wave hits. By this I mean colleges, universities, and individual academics have the opportunity to exert influence on the field while it's still fluid. As customers, as partners, as intellectuals we can engage with the AI efforts. The engagement can take various forms, including creating open source projects, negotiating better service contracts with providers, lobbying for regulations, and issuing public scholarship. I hope campuses can grasp and support such work.

— Bryan Alexander, futurist and convener of the Future Trends Forum (excerpted with permission from "From 2023 to 2024 in AI, part 2")

3) Advancing AI Literacy Will Empower Innovation in Teaching and Learning

Within the next year, higher education will undergo a pivotal shift from enhancing digital fluency to advancing AI literacy and empowerment. This change is vital to align with the increasing presence of AI, transforming it from a mere tool into an integral part of academic and creative work.

The first phase is fostering AI literacy, where institutions will enrich curricula with AI principles, applications, and ethical considerations. This ensures the campus community is not only proficient in using AI but also in critically evaluating its impact and implications. Educators will recalibrate teaching to emphasize human insights and skills not replicable by AI, while preserving intellectual autonomy.

The progression toward AI empowerment will see institutions enabling innovative uses of AI in personalizing learning, advancing research, and enhancing administrative efficiency. This broader incorporation will transition AI from a complex computational entity to a partner in academia's collaborative fabric.

Realizing this vision necessitates ethical guidelines, strategic educational approaches, and fortified secure digital infrastructures. The goal for the forthcoming year is comprehensive AI integration that enriches human capability and reflects academic integrity.

— Kim Round, Ph.D., founding partner, Instructioneers LxD

4) AI Will Expand Across Teaching and Learning and Beyond

Generative AI will appear in curricula across the disciplines. While the first year of generative AI saw many educators experiment with this new technology within existing curricular frameworks, we will now see more formalized curricula focusing on AI broadly, and generative AI specifically. And this won't be confined to computer science departments; we'll see it in all disciplines and professions given the wide-ranging implications of this technology. This will reflect how generative AI is a transdisciplinary issue par excellence, and how one of the most important goals for higher education broadly will be the development of critical literacies and transferable skills around AI.

Colleges and universities will also begin to integrate generative AI technologies into their everyday operations and administration. We didn't see this happen immediately when Gen AI tools became publicly available because they aren't usually compliant with the data privacy and data use requirements in higher education — FERPA, HIPAA, research integrity, and so on. But as both institutions and vendors work to establish ways to license and use Gen AI technologies securely, we'll see them take hold in two ways. First, Gen AI will be integrated into specific aspects of operations such as student support and communications. Second, colleges and universities will work to establish secure access to their own foundation models for general use by all institutional stakeholders including students, staff, and faculty. To do all of this, institutions may need to evaluate how they can sustainably make these transformations, especially in terms of budget and data/computational infrastructure.

— Trey Conatser, Ph.D., director, Center for the Enhancement of Learning and Teaching, University of Kentucky

In 2024, higher education needs to take a holistic look at the positive impacts AI can have across the entire campus. This year will see a broadening of conversations about the impact of AI beyond teaching and learning to look at how AI can enhance the student experience, provide operational efficiencies, improve research, and support other aspects of our missions. Careful thought will need to be given to how our curricula will evolve to prepare students to enter a workplace where AI is pervasive. We also need to take a deeper look at the skills and roles needed from our own employees, as well as the changes to policies and governance necessary to provide guardrails and structures for the adoption and use of AI.

AI holds great promise for transforming the way students obtain services, leveraging AI's ability to understand natural language and provide smart automated responses and actions based upon a wide variety of information we know about each student. Schools that spend 2024 developing a unified strategy for using AI to reduce the friction students encounter in obtaining services, and to provide actionable insights and interventions, will have an advantage in adopting and benefiting from these tools as they mature.

Institutions will also need to develop ways to understand the ROI and total cost involved in AI solutions, while navigating a rapidly evolving set of AI tools being developed by a wide variety of companies attempting to gain a piece of this marketspace.

The service improvements, insights, and efficiencies that this new generation of AI tools will provide will set a new bar for how students expect to interact with and receive services from their schools. 2024 is the year to be planning and organizing to take full advantage of them.

— David Weil, vice president, Information Technology & Analytics, Ithaca College
