Anthology Offers Framework for AI Policy and Implementation

Anthology has created a new resource for institutions developing policies around the ethical use of AI. The AI Policy Framework offers guidance on identifying stakeholders, defining institutional priorities, establishing a governance model, driving policy adoption, and more.

According to the company's news announcement, the document drills down into governance, teaching and learning, operational and administrative aspects, copyright and intellectual property, research, academic dishonesty, policy updates, and consequences of non-compliance. Resources include questions to guide stakeholder discussions and help define policy positions, suggested elements to address in any AI program, and key points for implementation.

The document is part of Anthology's Trustworthy AI program, which has established seven core principles aligned with the NIST AI Risk Management Framework, the EU Artificial Intelligence Act, and the OECD AI Principles:

  • Fairness: Minimizing harmful bias in AI systems.
  • Reliability: Taking measures to ensure the output of AI systems is valid and reliable.
  • Humans in Control: Ensuring humans ultimately make decisions that have legal or otherwise significant impact.
  • Transparency and Explainability: Explaining to users when AI systems are used and how they work, and helping users interpret and appropriately use their output.
  • Privacy, Security and Safety: Ensuring AI systems are secure, safe, and privacy-friendly.
  • Value Alignment: Aligning AI systems with human values, in particular those of its clients and users.
  • Accountability: Ensuring there is clear accountability regarding the trustworthy use of AI systems within Anthology as well as between Anthology, its clients, and its providers of AI systems.

The AI Policy Framework is built on these principles, the company said, to provide a "good starting point for higher education institutions who are interested in developing and adopting specific policies and programs on the ethical use of AI within their institution."

"Higher education faced a transformative moment as generative AI exploded on the scene with ChatGPT. As a result, many institutions raced to create policies largely focused on how to control its use without giving much consideration to how to harness its power, " commented Bruce Dahlgren, CEO of Anthology, in a statement. "We believe that once you put the right guardrails in place, attention will quickly shift to how to leverage AI to drive student success, support operational excellence, and gain institutional efficiencies. As the leader in this space, we have a responsibility to help our customers balance the risks and rewards."

The AI Policy Framework is openly available on the Anthology site.

About the Author

Rhea Kelly is editor in chief for Campus Technology, THE Journal, and Spaces4Learning. She can be reached at rkelly@1105media.com.
