Murati testifies under oath that Altman misled her about OpenAI safety
OpenAI's former CTO testified in Musk v. Altman that the CEO falsely assured her the legal department had validated an AI model's safety standards.
The Musk v. Altman lawsuit produced one of its most awkward moments for OpenAI this week. In a video deposition played in court on May 6, Mira Murati, who served as the company's CTO until her departure in September 2024, testified under oath that Sam Altman misled her about whether the legal department had reviewed and validated safety standards for a new AI model. According to The Verge, Murati explicitly stated she could not trust Altman's word.
This is not an informal complaint in an interview or office gossip, but recorded testimony presented in a federal court proceeding, with all the legal implications that entails.
What exactly Murati said
According to reporting by The Verge, Murati maintained that Altman falsely told her that OpenAI's legal team had determined that a specific model did not require an additional level of safety review. Murati, who also briefly served as interim CEO during the board crisis of November 2023, occupied a privileged position from which to understand the company's internal evaluation processes.
The material detail is not merely that she disagreed with Altman on a technical decision (that happens in any company), but that she claims she was deliberately given incorrect information about a compliance process. This moves the problem from the realm of differing opinions into questions of institutional trust.
Why it matters beyond the courtroom
The Musk v. Altman case hinges on whether OpenAI has betrayed its founding mission as a nonprofit organisation oriented toward human benefit. Elon Musk argues that the shift toward a for-profit structure violates the original commitments. It is litigation that draws significant media attention, and the plaintiff's motivations are not strictly altruistic.
However, Murati's testimony introduces a different element: it does not come from an external adversary with obvious financial interests, but from someone who was in the operational core for years and now runs her own AI company, Thinking Machines Lab. Her statement does not directly benefit Musk, which gives it different weight.
For the broader AI ecosystem, the testimony fuels a debate that already existed: to what extent are internal safety processes at major laboratories rigorous and transparent, or do they partly function as institutional cover? The answer is not straightforward, and this case will not resolve it, but it adds concrete evidence to the record.
Who should care about this
For anyone working on or investing in technology built on OpenAI's models, the news raises legitimate governance questions. If the company's most senior technical executive could not trust the information the CEO conveyed to her about safety validations, what assurances do external customers or business partners have?
For regulators in Europe and the United States who have spent months trying to establish audit frameworks for high-risk AI systems, this kind of testimony reinforces the argument that voluntary self-regulation has obvious structural limits.
For teams working with alternatives, including Anthropic's Claude ecosystem, which has built much of its positioning precisely on the premise of safety as an operational priority, the lawsuit offers external context worth paying attention to.
What remains to be seen
The lawsuit continues, and more depositions from former employees or internal documents are likely to further complicate OpenAI's public narrative. The company, for its part, has not offered a detailed response to Murati's claims so far.
What is already on the table is significant enough: the credibility of the leadership of one of the world's most influential AI labs is being questioned, on the record, before a federal court. That is not undone by a press release.
---
Editor's note: AI safety processes should not depend on whether the CEO decides to communicate them correctly to the CTO. The fact that these kinds of questions are being decided in a courtroom rather than in a functioning board of directors says quite a lot about the state of governance at major AI labs.