D2L Expands Generative AI Beta Program

D2L has expanded its generative AI program with the ability to generate practice and quiz questions based on current course content. The ongoing beta program is being tested by select D2L Brightspace users.

The beta testing program uses content created in Brightspace and allows teachers to review the AI-generated practice and quiz questions before making them available to students, giving instructors greater safety oversight and control, D2L said. North American testing began in September 2023 and will continue through summer 2024.

The beta program is guided by D2L's "Responsible AI Principles" document, which sets out guidelines for using AI safely and responsibly to generate course materials in the following areas:

  • Privacy: customers' control of their own personal data and automated decisions based on it;
  • Bias and non-discrimination: AI design, development, and use that is fair and avoids harm to users;
  • Security and robustness: practices that test AI systems for reliability, security, and avoidance of harm;
  • Transparency: easy-to-understand AI outputs and disclosures about when, where, and how AI is used; and
  • Accountability: practices that are answerable to all stakeholders and promote safe and responsible use of AI.

"Over the past decade, D2L has been a leader in successfully integrating authentic AI and machine learning capabilities into our products," said Stephen Laster, D2L president. "This automated question generation capability can make it easier for instructors to assess learners in the moment. It is the initial step in expanding our product roadmap with cutting-edge generative AI to help change the way the world learns."

For more information and to try the practice questions beta program, visit the Brightspace Creator+ page.

About the Author

Kate Lucariello is a former newspaper editor, EAST Lab high school teacher, and college English teacher.
