A courtroom drama over OpenAI's transformation raises urgent questions about who should set the rules for AI and how society protects the public interest.
Speaking in New York, Elon Musk voiced an opinion many may share: Microsoft should not control the future of artificial intelligence. But he stressed that the issue goes beyond any single company; what matters is who sets the industry's rules, and how.
The core dispute, in Musk's view, is over who should govern this promising field. That disagreement sits at the heart of his lawsuit against OpenAI and its leadership over the organization's shift from a predominantly nonprofit laboratory to a for-profit company operating under the aegis of a nonprofit foundation. Musk contends that OpenAI's leaders deceived him and betrayed the charitable mission of developing AI transparently and safely in pursuit of financial gain. OpenAI's leaders dispute this, arguing that Musk, who cofounded the organization and left in 2018, is now raising concerns largely because of its rapid market growth and the competition it poses to his own AI company.
Who should be at the helm of AI's future?
The court hearings revolve around three main players: Elon Musk, OpenAI (led by Sam Altman), and Microsoft, with rivals such as Google, Meta, and Anthropic in the background. The dispute raises questions about concentrating authority over a technology that could radically alter the course of human civilization, and about whether transparent governance and accountability to public institutions are needed to prevent risks and abuses.
Judge Yvonne Gonzalez Rogers noted that the case is not merely about whether AI poses risks, but about whether the governance structures created to manage those risks actually serve society's interests. She emphasized that the debate will continue beyond this case, since the stakes lie less in today's legal conclusions than in how regulation and responsible use of the technology move forward.
As the proceedings continue, the question of who controls such systems remains open. And although attention centers on the names and ambitions of the wealthy, the logic of the case points to a deeper question: how should society decide who gets the keys to the future of artificial intelligence, and what mechanisms can guarantee safety, transparency, and accountability in the use of such powerful technologies?
The case underscores the importance of an open, thoughtful conversation about AI governance: who should make decisions, what rules should be enshrined in law, and how to balance innovation against the public interest. Ultimately, its resolution could shape the path of AI development for years to come and set standards for the responsible governance of global technologies.






