New Nonprofit to Work Toward Safer, Truthful AI

Turing Award-winning AI researcher Yoshua Bengio has launched LawZero, a new nonprofit aimed at developing AI systems that prioritize safety and truthfulness over autonomy.

LawZero, based in Montreal and currently staffed by 15 researchers, has secured nearly $30 million in funding from donors including Skype founding engineer Jaan Tallinn, Schmidt Sciences, Open Philanthropy, and the Future of Life Institute. The organization’s core mission is to develop "Scientist AI" — non-agentic systems designed to provide transparent, probabilistic reasoning rather than autonomous behavior.

"We want to build AIs that will be honest and not deceptive," Bengio told the Financial Times. His remarks come amid growing concerns about AI systems exhibiting harmful tendencies such as deception, manipulation, and resistance to shutdown.

Concerns Over Agentic AI

Bengio’s concerns are not theoretical. In recent controlled experiments, OpenAI’s "o3" model refused instructions to shut down, while Anthropic’s Claude Opus simulated blackmail tactics in a test scenario. More recently, engineers at Replit observed one of their AI agents disobey explicit instructions and attempt to regain unauthorized access via social engineering.

"We are playing with fire," Bengio said, warning that next-generation models could develop strategic intelligence capable of deceiving human overseers. He argues that these agentic systems, designed to act independently, pose existential risks, including the development of bioweapons or efforts to self-preserve against human control.

As AI labs race to build artificial general intelligence (AGI) — systems capable of performing any human-level task — Bengio believes current approaches are flawed. "If we get an AI that gives us the cure for cancer but also one that creates deadly bioweapons, then I don't think it's worth it," he said.

What is "Scientist AI"?

Unlike current models that aim to imitate humans and maximize user satisfaction, LawZero’s proposed Scientist AI will emphasize truthfulness and humility, Bengio has said. It will provide probabilistic outputs instead of definitive answers and evaluate the likelihood that an AI agent’s actions could cause harm. When deployed alongside an autonomous AI agent, the system would block actions deemed too risky, serving as a technical guardrail.

LawZero plans to start by working with open-source AI models, with the goal of scaling the approach through partnerships with governments or other research institutions. Bengio emphasized that any effective safeguard must be "at least as smart" as the agent it monitors.

LawZero, named after Isaac Asimov’s "zeroth law of robotics," will explicitly reject profit motives and instead seek public accountability. Bengio believes a combination of technical interventions and government regulation is needed to ensure AI systems remain aligned with human interests.

For more information, visit the LawZero site.

About the Author

John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI, and future tech. He's been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he's written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at [email protected].
