Anthology Offers Framework for AI Policy and Implementation

Anthology has created a new resource for institutions developing policies around the ethical use of AI. The AI Policy Framework offers guidance on identifying stakeholders, defining institutional priorities, establishing a governance model, driving policy adoption, and more.

The document drills down into governance, teaching and learning, operational and administrative aspects, copyright and intellectual property, research, academic dishonesty, policy updates, and consequences of non-compliance, the company explained in a news announcement. Resources include questions to guide stakeholder discussions and help define policy positions, suggested elements to address in any AI program, and key points for implementation.

The document is part of Anthology's Trustworthy AI program, which has established seven core principles aligned with the NIST AI Risk Management Framework, the EU Artificial Intelligence Act, and the OECD AI Principles:

  • Fairness: Minimizing harmful bias in AI systems.
  • Reliability: Taking measures to ensure the output of AI systems is valid and reliable.
  • Humans in Control: Ensuring humans ultimately make decisions that have legal or otherwise significant impact.
  • Transparency and Explainability: Explaining to users when AI systems are used and how they work, and helping users interpret and appropriately use the systems' output.
  • Privacy, Security and Safety: Ensuring AI systems are secure, safe, and privacy-friendly.
  • Value Alignment: Aligning AI systems with human values, in particular those of our clients and users.
  • Accountability: Ensuring there is clear accountability regarding the trustworthy use of AI systems within Anthology as well as between Anthology, its clients, and its providers of AI systems.

The AI Policy Framework is built on these principles, the company said, to provide a "good starting point for higher education institutions who are interested in developing and adopting specific policies and programs on the ethical use of AI within their institution."

"Higher education faced a transformative moment as generative AI exploded on the scene with ChatGPT. As a result, many institutions raced to create policies largely focused on how to control its use without giving much consideration to how to harness its power," commented Bruce Dahlgren, CEO of Anthology, in a statement. "We believe that once you put the right guardrails in place, attention will quickly shift to how to leverage AI to drive student success, support operational excellence, and gain institutional efficiencies. As the leader in this space, we have a responsibility to help our customers balance the risks and rewards."

The AI Policy Framework is openly available on the Anthology site.

About the Author

Rhea Kelly is editor in chief for Campus Technology, THE Journal, and Spaces4Learning. She can be reached at [email protected].
