Anthology Offers Framework for AI Policy and Implementation

Anthology has created a new resource for institutions developing policies around the ethical use of AI. The AI Policy Framework offers guidance on identifying stakeholders, defining institutional priorities, establishing a governance model, driving policy adoption, and more.

The document drills down into governance, teaching and learning, operational and administrative aspects, copyright and intellectual property, research, academic dishonesty, policy updates, and consequences of non-compliance, the company explained in a news announcement. Resources include questions to guide stakeholder discussions and help define policy positions, suggested elements to address in any AI program, and key points for implementation.

The document is part of Anthology's Trustworthy AI program, which has established seven core principles aligned with the NIST AI Risk Management Framework, the EU Artificial Intelligence Act, and the OECD AI Principles:

  • Fairness: Minimizing harmful bias in AI systems.
  • Reliability: Taking measures to ensure the output of AI systems is valid and reliable.
  • Humans in Control: Ensuring humans ultimately make decisions that have legal or otherwise significant impact.
  • Transparency and Explainability: Explaining to users when AI systems are used and how they work, and helping users interpret and appropriately use their output.
  • Privacy, Security and Safety: AI systems should be secure, safe, and privacy-friendly.
  • Value Alignment: AI systems should be aligned to human values, in particular those of our clients and users.
  • Accountability: Ensuring there is clear accountability regarding the trustworthy use of AI systems within Anthology as well as between Anthology, its clients, and its providers of AI systems.

The AI Policy Framework is built on these principles, the company said, to provide a "good starting point for higher education institutions who are interested in developing and adopting specific policies and programs on the ethical use of AI within their institution."

"Higher education faced a transformative moment as generative AI exploded on the scene with ChatGPT. As a result, many institutions raced to create policies largely focused on how to control its use without giving much consideration to how to harness its power, " commented Bruce Dahlgren, CEO of Anthology, in a statement. "We believe that once you put the right guardrails in place, attention will quickly shift to how to leverage AI to drive student success, support operational excellence, and gain institutional efficiencies. As the leader in this space, we have a responsibility to help our customers balance the risks and rewards."

The AI Policy Framework is openly available on the Anthology site.

About the Author

Rhea Kelly is editor in chief for Campus Technology, THE Journal, and Spaces4Learning. She can be reached at [email protected].
