Microsoft Announces New Generative AI Copyright Commitment

In response to concerns from customers using its generative AI tools, Microsoft has announced it is extending its commitment to assume responsibility for the legal risks of copyright challenges, as long as customers use the built-in "guardrails" designed to prevent copyright infringement.

The new commitment extends Microsoft's "existing intellectual property indemnity support to commercial Copilot services and builds on our previous AI Customer Commitments," the company said, elaborating that "if a third party sues a commercial customer for copyright infringement for using Microsoft's Copilots or the output they generate, we will defend the customer and pay the amount of any adverse judgments or settlements that result from the lawsuit, as long as the customer used the guardrails and content filters we have built into our products."

Such guardrails consist of tech that will "reduce the likelihood that Copilots return infringing content," Microsoft said. The tech includes "classifiers, metaprompts, content filtering, and operational monitoring and abuse detection, including that which potentially infringes third-party content."

Microsoft said it believes in standing behind users of its products, but is also "sensitive to the concerns of authors" over copyright infringement. It calls itself "bullish on the benefits of AI," but also "clear-eyed about the challenges and risks associated with it, including protecting creative works."

Commercial services affected include Bing Chat Enterprise; Microsoft 365 Copilot in apps such as Word, Excel, and PowerPoint; GitHub Copilot; and other commercial offerings that use generative AI.

There are caveats, however. Customers "must not attempt to generate infringing materials, including not providing input to a Copilot service that the customer does not have appropriate rights to use," and the company notes that its stance has not changed: it "does not claim any intellectual property rights in the outputs of its Copilot services."

Microsoft said it expects that other issues will arise as the use of AI continues to expand, and that new legal questions and challenges will need to be addressed going forward. To that end, it reaffirmed its commitment to "help manage these risks by listening to and working with others in the tech sector, authors and artists and their representatives, government officials, the academic community, and civil society."

To read more about the Copilot Copyright Commitment, visit this Microsoft blog page.

About the Author

Kate Lucariello is a former newspaper editor, EAST Lab high school teacher and college English teacher.
