Ditch the DIY Approach to AI on Campus

Although it's starting to pick up steam, artificial intelligence (AI) adoption by educational institutions still lags behind other industries. According to one recent survey, this can be largely attributed to a lack of expertise and strategy among faculty and administrators. The technology can be daunting, and when combined with concerns about academic integrity, security gaps, and the tendency of AI to "hallucinate," or give faulty answers, AI faces significant headwinds at universities and colleges.

However, just as in any other industry, institutions that do not adopt AI will quickly fall behind. Large language models (LLMs) are here to stay, and they are rapidly becoming part of the toolkit for STEM work such as coding, data analysis, engineering, and biomedical research. On the administrative side, AI can help reduce costs and improve the student experience with IT help desk issues, registration and class schedules, and other parts of day-to-day operations. It's vital that educational institutions seize this opportunity to provide a safe and secure AI experience for students, faculty, and staff.

The question is, how can educational institutions do this systematically, securely, cost-effectively, and efficiently?

Building a Secure Yet Scalable AI Solution Fit for Your Institution

Many institutions are hesitant to use some of the more prevalent AI tools, such as ChatGPT, because of how their training models operate. These tools pose significant security challenges to universities and researchers alike, since input data becomes part of the dataset from which the AI can pull. This presents a risk that proprietary or private data will be inadvertently made public through the LLM, creating challenges from both a compliance and an IP standpoint.

However, building your own LLM from scratch is prohibitively expensive for all but the wealthiest corporations or research labs, as it requires vast GPU server farms and millions of dollars of compute time to train the model. It's a non-starter for most colleges or universities, particularly those under budget and staffing constraints.

Utilizing open source LLMs, such as Granite, Llama, or Mistral, is a viable third way for universities and colleges to build a scalable AI solution tailored to an institution's needs. While training an LLM from scratch is expensive, the models themselves are lightweight once deployed and can run on a single server or endpoint on the network. These solutions can then be customized to a university's needs, with varying degrees of guardrails governing how external information can be accessed, who has access, and how information can be stored and made available.
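
To give a sense of how little is involved, here is a minimal sketch of serving an open source model on a single internal machine using the Hugging Face transformers library. The model name, prompt, and hardware assumptions are illustrative, not a recommendation; any comparably sized open source checkpoint would work the same way.

```python
# A minimal sketch of running an open source LLM on one internal server.
# Assumes the Hugging Face transformers library and a locally downloaded
# checkpoint; the model name and prompt below are illustrative only.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",  # locally cached checkpoint
    device_map="auto",                           # use a GPU if one is present
)

result = generator(
    "How do I reset my campus Wi-Fi password?",
    max_new_tokens=200,
)
print(result[0]["generated_text"])
```

Because the model runs entirely on hardware the institution controls, prompts and outputs never have to leave the campus network.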

It's not enough to keep information secure. Colleges and universities must also be assured that the information they're receiving from the AI is trustworthy and accurate. Retrieval augmented generation (RAG), an approach that uses your own curated data to constrain the output of open source models, can help. It can be customized to alleviate concerns about data being leaked or misused, and it can create new user experiences based on per-app or per-user profile prompting. Implementing an open source model on an internal server with limited or no external access, built out with RAG entry points and other augmentations, can be done in just a few weeks.
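
To make the RAG idea concrete, here is a minimal sketch of the retrieval step, assuming the sentence-transformers library for embeddings. The documents, embedding model, and prompt template are all illustrative, not any particular institution's setup.

```python
# A minimal RAG sketch: embed institutional documents, retrieve the most
# relevant ones for a question, and confine the LLM's answer to them.
# Assumes the sentence-transformers library; all content is illustrative.
from sentence_transformers import SentenceTransformer, util

# Institutional documents the model is allowed to answer from.
docs = [
    "Fall registration opens August 1 through the student portal.",
    "The IT help desk is open weekdays 8am-6pm in the library basement.",
    "Thesis submissions are due to the graduate office by May 15.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vectors = embedder.encode(docs, convert_to_tensor=True)

def build_prompt(question: str, top_k: int = 2) -> str:
    """Retrieve the most relevant documents and confine the model to them."""
    q_vec = embedder.encode(question, convert_to_tensor=True)
    hits = util.semantic_search(q_vec, doc_vectors, top_k=top_k)[0]
    context = "\n".join(docs[hit["corpus_id"]] for hit in hits)
    return (
        "Answer using only the context below. If the answer is not in "
        f"the context, say you don't know.\n\nContext:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

print(build_prompt("When does registration open?"))
```

The resulting prompt is then handed to the locally hosted model, which keeps answers grounded in vetted institutional data rather than whatever the model absorbed during training.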

Single-Point or Platform AI?

Choosing an open source AI solution will likely come down to either a single-point or platform approach. The choice depends on your priorities.

A single-point deployment is useful if you're just getting started on your AI journey. It consists of a solitary LLM running on an internal server, where it can help with tasks such as IT help desk support, student registration, or student support services. It's efficient and allows IT teams to dip their toes into the world of AI, get comfortable, and scale from there.

A platform approach, which makes the full power of AI available to your students, staff, and faculty, is a longer-term solution. Here, multiple AI models or RAG entry points support various use cases across the institution's operations. The models can be tuned to different datasets and the needs of different departments, teams, or classes, and a variety of AI-enabled applications can be deployed, such as chatbots, educational tools, and content recommendation engines whose behavior is based on the end user's profile.

A platform approach also provides far more flexibility for integrating with the other apps and software a department might be using: the LLMs are not only pre-engineered to support different functions, but can also be tuned to be more creative or conservative in their answers, depending on your organization's needs.
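
As a sketch of what that per-department tuning might look like, consider models served through Ollama's local REST API, one common way to host open source models on premises. The profile names, model choices, internal hostname, and temperature values below are all illustrative assumptions, not a prescribed configuration.

```python
# A sketch of per-department routing and tuning on a platform deployment.
# Assumes models are served via Ollama's local REST API on an internal
# host; profiles, models, hostname, and temperatures are illustrative.
import requests

PROFILES = {
    "help_desk": {"model": "llama3", "temperature": 0.2},   # conservative, factual
    "marketing": {"model": "mistral", "temperature": 0.9},  # more creative phrasing
}

def ask(prompt: str, department: str) -> str:
    """Route a prompt to the model and sampling settings for a department."""
    profile = PROFILES[department]
    resp = requests.post(
        "http://ai.internal.example.edu:11434/api/generate",  # internal host only
        json={
            "model": profile["model"],
            "prompt": prompt,
            "stream": False,
            "options": {"temperature": profile["temperature"]},
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

print(ask("Summarize the course withdrawal policy.", "help_desk"))
```

A lower temperature pushes the model toward deterministic, conservative answers suited to a help desk, while a higher one allows the more varied output a communications team might want, all from the same internal platform.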

The University of Michigan is a great example of how to employ a full-platform approach successfully. Its deployment is grounded in four principles: security (all data that belongs to the university stays there, with no potential exposure to outside parties); privacy (all data is private and black-boxed); accessibility (all models are accessibility compliant); and equity (information is available to staff, students, and professors for any purpose).

Making AI Work for Your Organization

AI is here to stay, it's readily available, and it's likely most of your students, faculty, and staff are already using it in some capacity. If your institution does not provide it to students and faculty, they're going to get it somewhere else. The best way to ensure it's being used appropriately is to cost-effectively create your own sanctioned AI systems that are accurate, secure, and trusted.

Fortunately, there are ways to do that without breaking your school's budget or hiring teams of AI experts. Open source has made AI less daunting to implement and customize for every educational institution. It's just a matter of choosing the right option to fit your organization's current and future needs.
