Courtroom Revelations Complicate Musk’s OpenAI Case

A bruising week in court yields new questions about what Musk knew — and what xAI used

The courtroom fight between Elon Musk and OpenAI has quickly become more than a dispute over whether one of Silicon Valley’s most important labs betrayed its founding ideals. In a series of sharp exchanges this week, the trial produced fresh disclosures that cut to the center of Musk’s credibility, his relationship with OpenAI after his formal departure, and the competitive stakes behind his lawsuit.

Under cross-examination in federal court in Oakland, lawyers for OpenAI pressed Musk on two newly prominent threads: whether he remained closely informed about the company’s direction through Shivon Zilis, a longtime associate who served as a conduit between him and OpenAI executives, and whether his own artificial intelligence company, xAI, had used OpenAI’s models to help train its systems.

Musk, in testimony that was often argumentative and at times cut short by the bench, appeared to acknowledge that xAI had at least partly used outputs from OpenAI models in a process he suggested was common across the industry. That answer, together with messages showing Zilis relaying information between Musk and OpenAI leadership over a period of years, gave OpenAI’s lawyers a clearer way to argue that Musk was neither shut out from the company’s evolution nor uninterested in its technology.

The disclosures matter because Musk’s case depends heavily on a narrative of betrayal: that OpenAI’s leaders, including Sam Altman and Greg Brockman, abandoned the organization’s original nonprofit mission and transformed it into a profit-driven enterprise in violation of commitments made at its founding. OpenAI has countered that no binding promise of perpetual nonprofit control existed and that Musk understood, and at times even supported, structures designed to raise far more capital than a traditional charity could.

Zilis emerges as a crucial backchannel

Among the more revealing evidence presented were messages indicating that Zilis acted as an intermediary between Musk and OpenAI long after he had stepped away from the board. The communications offered a behind-the-scenes picture of how information moved between the billionaire and the company he now accuses of straying from its purpose.

For OpenAI, that evidence serves an important legal and rhetorical function. If Musk remained well informed through trusted channels, it becomes harder for him to claim he was blindsided by OpenAI’s strategic direction or by the gradual move toward commercial structures intended to finance increasingly expensive AI development.

How much legal weight that evidence will carry remains uncertain. Informal updates passed through a confidante are not necessarily the same as formal notice, and the trial may ultimately turn less on who knew what than on whether early conversations, donations and organizational documents created enforceable obligations. But the Zilis messages strengthened OpenAI’s attempt to recast Musk not as an aggrieved founder left in the dark, but as a participant who stayed close enough to understand where the company was headed.

A striking answer about xAI

If the Zilis evidence sharpened the historical dispute, Musk’s testimony about xAI introduced a more immediate competitive angle.

Asked under oath about whether xAI had used OpenAI models, Musk indicated that it had done so at least in part, referring to practices akin to distillation or validation and suggesting that such methods were standard in the AI industry. Distillation generally refers to using outputs from a more capable model to help train or improve another one, a technique that has become increasingly sensitive as AI companies battle over data, model access and intellectual property.

That answer could prove significant beyond a single line of testimony. OpenAI has argued that Musk’s lawsuit is not simply about principle but also about rivalry, filed after he founded xAI in 2023 and entered direct competition with the company he helped start. An acknowledgment that xAI benefited in some fashion from OpenAI model outputs could reinforce that framing: Musk, OpenAI can now argue, is attacking the company in court while his own lab has drawn on its technology.

Whether that becomes a substantive side dispute or remains mainly a point about credibility is still unclear. The trial is centered on OpenAI’s governance and founding commitments, not on a standalone claim over xAI’s training methods. But in an industry already consumed by fights over scraping, licensing and the reuse of model outputs, the testimony lands in a particularly sensitive place.

A combative cross-examination

The revelations arrived during a tense stretch of testimony in which Musk repeatedly clashed with OpenAI’s lawyers and drew interventions from Judge Yvonne Gonzalez Rogers. Musk accused opposing counsel of asking misleading questions and at times resisted direct answers, while the judge repeatedly steered proceedings back toward narrower corporate and legal issues.

That was especially evident when Musk tried to broaden his answers into warnings about existential AI danger — a theme that has long animated his public criticism of OpenAI and Altman. Judge Gonzalez Rogers made clear that the case before her was not a referendum on AI doomsday scenarios but a dispute over whether OpenAI’s restructuring violated legal commitments.

The distinction matters. Musk has long cast his break with OpenAI in civilizational terms, arguing that the company’s pursuit of profit risks concentrating dangerous technology in the wrong hands. But the court’s task is more prosaic and more exacting: to determine whether the facts of OpenAI’s founding, funding and later reorganization support Musk’s claims for relief.

The case Musk is trying to make

Musk’s suit seeks sweeping remedies. He has argued that OpenAI’s leaders diverted the organization from its original public-benefit mission, and he is seeking to unwind the for-profit conversion, remove Altman and Brockman from leadership roles, and direct any damages to OpenAI’s nonprofit arm.

The company has rejected that account, saying its hybrid structure was developed to solve a basic problem that has come to define the modern AI race: building frontier systems requires enormous amounts of capital, computing power and engineering talent. OpenAI maintains that Musk knew early on that some for-profit mechanism was under discussion and that his current lawsuit reflects frustration with a rival that surged ahead after his departure.

That broader context has only grown more important. Since the arrival of ChatGPT transformed OpenAI into one of the most influential companies in technology, the contest over who controls advanced AI — and under what legal and ethical constraints — has moved from industry debate to courtroom battle. Musk’s xAI, founded in 2023, is now one of several ambitious challengers trying to compete with OpenAI, Anthropic, Google and others in a field where access to chips, data and talent can determine survival.

Why this week’s testimony matters

The first week of trial has not answered the central legal question: whether Musk can prove that OpenAI’s founding discussions and his financial support created obligations the company later broke. That issue is likely to remain the heart of the case as additional high-profile witnesses, including Altman, take the stand in the weeks ahead.

But this week gave the case sharper edges.

The evidence involving Zilis suggests that the feud between Musk and OpenAI may be less a story of sudden betrayal than one of prolonged, messy entanglement. And Musk’s testimony about xAI’s use of OpenAI model outputs offers OpenAI a potent new line of argument — that the plaintiff portraying himself as the defender of OpenAI’s founding mission is also a competitor whose own company may have learned from the very systems he is denouncing.

For a lawsuit already rich in Silicon Valley intrigue, those disclosures do not resolve the case. But they make clear that the trial is no longer only about the ideals on which OpenAI was founded. It is also about who knew what, who benefited from what, and how the race to dominate artificial intelligence has complicated every account of principle.
