Anthology Offers Framework for AI Policy and Implementation

Anthology has created a new resource for institutions developing policies around the ethical use of AI. The AI Policy Framework offers guidance on identifying stakeholders, defining institutional priorities, establishing a governance model, driving policy adoption, and more.

The document drills down into governance, teaching and learning, operational and administrative aspects, copyright and intellectual property, research, academic dishonesty, policy updates, and consequences of non-compliance, the company explained in a news announcement. Resources include questions to guide stakeholder discussions and help define policy positions, suggested elements to address in any AI program, and key points for implementation.

The document is part of Anthology's Trustworthy AI program, which has established seven core principles aligned with the NIST AI Risk Management Framework, the EU Artificial Intelligence Act, and the OECD AI Principles:

  • Fairness: Minimizing harmful bias in AI systems.
  • Reliability: Taking measures to ensure the output of AI systems is valid and reliable.
  • Humans in Control: Ensuring humans ultimately make decisions that have legal or otherwise significant impact.
  • Transparency and Explainability: Explaining to users when AI systems are used and how they work, and helping users interpret and appropriately use the output of those systems.
  • Privacy, Security and Safety: AI systems should be secure, safe, and privacy friendly.
  • Value Alignment: AI systems should be aligned to human values, in particular those of our clients and users.
  • Accountability: Ensuring there is clear accountability regarding the trustworthy use of AI systems within Anthology as well as between Anthology, its clients, and its providers of AI systems.

The AI Policy Framework is built on these principles, the company said, to provide a "good starting point for higher education institutions who are interested in developing and adopting specific policies and programs on the ethical use of AI within their institution."

"Higher education faced a transformative moment as generative AI exploded on the scene with ChatGPT. As a result, many institutions raced to create policies largely focused on how to control its use without giving much consideration to how to harness its power," commented Bruce Dahlgren, CEO of Anthology, in a statement. "We believe that once you put the right guardrails in place, attention will quickly shift to how to leverage AI to drive student success, support operational excellence, and gain institutional efficiencies. As the leader in this space, we have a responsibility to help our customers balance the risks and rewards."

The AI Policy Framework is openly available on the Anthology site.

About the Author

Rhea Kelly is editor in chief for Campus Technology, THE Journal, and Spaces4Learning. She can be reached at [email protected].
