Anthology Offers Framework for AI Policy and Implementation

Anthology has created a new resource for institutions developing policies around the ethical use of AI. The AI Policy Framework offers guidance on identifying stakeholders, defining institutional priorities, establishing a governance model, driving policy adoption, and more.

The document drills down into governance, teaching and learning, operational and administrative aspects, copyright and intellectual property, research, academic dishonesty, policy updates, and consequences of non-compliance, the company explained in a news announcement. Resources include questions to guide stakeholder discussions and help define policy positions, suggested elements to address in any AI program, and key points for implementation.

The document is part of Anthology's Trustworthy AI program, which has established seven core principles aligned with the NIST AI Risk Management Framework, the EU Artificial Intelligence Act, and the OECD AI Principles:

  • Fairness: Minimizing harmful bias in AI systems.
  • Reliability: Taking measures to ensure the output of AI systems is valid and reliable.
  • Humans in Control: Ensuring humans ultimately make decisions that have legal or otherwise significant impact.
  • Transparency and Explainability: Explaining to users when AI systems are used and how they work, and helping users interpret and appropriately use the output of those systems.
  • Privacy, Security and Safety: AI systems should be secure, safe, and privacy-friendly.
  • Value Alignment: AI systems should be aligned to human values, in particular those of our clients and users.
  • Accountability: Ensuring there is clear accountability regarding the trustworthy use of AI systems within Anthology as well as between Anthology, its clients, and its providers of AI systems.

The AI Policy Framework is built on these principles, the company said, to provide a "good starting point for higher education institutions who are interested in developing and adopting specific policies and programs on the ethical use of AI within their institution."

"Higher education faced a transformative moment as generative AI exploded on the scene with ChatGPT. As a result, many institutions raced to create policies largely focused on how to control its use without giving much consideration to how to harness its power, " commented Bruce Dahlgren, CEO of Anthology, in a statement. "We believe that once you put the right guardrails in place, attention will quickly shift to how to leverage AI to drive student success, support operational excellence, and gain institutional efficiencies. As the leader in this space, we have a responsibility to help our customers balance the risks and rewards."

The AI Policy Framework is openly available on the Anthology site.

About the Author

Rhea Kelly is editor in chief for Campus Technology, THE Journal, and Spaces4Learning. She can be reached at [email protected].
