![](https://i0.wp.com/mathewingram.com/work/wp-content/uploads/2025/02/GettyImages-2147787483.webp?resize=525%2C295&ssl=1)
If you’ve been reading the news at all, you probably know that Elon Musk — and/or a group of twentysomething programmer/hackers with nicknames like “Big Balls” (no, I am not making this up) — have taken control of significant parts of the functional machinery of the US government, including the Departments of Energy, Education, Housing and Urban Development, and the Federal Emergency Management Agency. They are installing external servers and shutting down billions of dollars’ worth of payments, as well as firing tens of thousands of federal employees. Do they have the authority to do this? Donald Trump has issued an executive order saying that they do. The courts disagree, but it remains to be seen whether Trump will obey the courts or simply ignore them. Can he do that? Perhaps. Is there any way for the states or Congress to stop him? Not really. Is this the beginning of a constitutional crisis? Probably.
This isn’t a political newsletter, so all of that is outside my purview at the moment, although I have to say it is troubling in the extreme. From a technological point of view, however, what is interesting to me is that even with all of that going on — running and/or dismantling the entire federal government — Elon Musk still managed to find the time to put together a $97-billion hostile takeover offer for OpenAI. Here’s how the Wall Street Journal describes it:
A consortium of investors led by Elon Musk is offering $97.4 billion to buy the nonprofit that controls OpenAI, upping the stakes in his battle with Sam Altman over the company behind ChatGPT. Musk’s attorney, Marc Toberoff, said he submitted a bid for all the nonprofit’s assets to OpenAI’s board of directors Monday. The unsolicited offer adds a major complication to Altman’s carefully laid plans for OpenAI’s future, including converting it to a for-profit company and spending up to $500 billion on AI infrastructure through a joint venture called Stargate. He and Musk are already fighting in court over the direction of OpenAI. “It’s time for OpenAI to return to the open-source, safety-focused force for good it once was,” Musk said in a statement. “We will make sure that happens.”
Note: This is a version of my Torment Nexus newsletter, which I send out via Ghost, the open-source publishing platform. You can see other issues and sign up here.
Altman says Musk is “not a happy guy”
![](https://i0.wp.com/torment-nexus.mathewingram.com/content/images/2025/02/107418094-1716321352537-gettyimages-2153474140-AFP_34TH9TC.jpeg?w=525&ssl=1)
Sam Altman told Musk the company wasn’t interested in his offer, but that he would “buy X for $9.7 billion if you want.” He also told Bloomberg TV that Musk was probably just trying to slow the company down (which likely has some truth to it), that Musk’s whole life has been lived “from a position of insecurity,” and that he doesn’t think Musk is “a happy guy.”
Those of you who read The Torment Nexus regularly — as I’m sure you all do! — are of course well aware that Musk has been attacking OpenAI for some time now over its continued failure to be, well… as open as its name implies. According to Musk, who was one of the cofounders of OpenAI back in 2015, the company was supposed to make some or all of its research and technology available to the public, but instead it chose to become mostly a for-profit entity. In my previous piece, I argued that Musk had a good point, in the sense that being open was one of the core principles OpenAI was founded on, and yet it has done the exact opposite. It’s also noteworthy that DeepSeek, the Chinese AI engine that actually is open, has made some interesting leaps beyond its competitors, and at a lower cost (how much lower is subject to debate).
Note: In case you are a first-time reader, or you forgot that you signed up for this newsletter, this is The Torment Nexus. You can find out more about me and this newsletter in this post. This newsletter survives solely on your contributions, so please sign up for a paying subscription or visit my Patreon, which you can find here. I also publish a daily email newsletter of odd or interesting links called When The Going Gets Weird, which is here.
Musk wanted OpenAI to be for-profit
Was Musk ever really committed to being open? In its response to the lawsuit that Musk filed against OpenAI (and then withdrew and re-filed last July), the company said that in the early discussions about structure, it was Musk who argued that OpenAI should become a for-profit entity, since being a for-profit was the only way to raise the massive amounts of funding required to build human-like artificial intelligence. In 2017, the company said in a statement on its website, Musk not only discussed making OpenAI a for-profit company, he actually created a for-profit corporate entity that could take over the nonprofit’s assets.
![](https://i0.wp.com/mathewingram.com/work/wp-content/uploads/2025/02/Frame_214722491-2.jpg?resize=525%2C455&ssl=1)
But there was one catch, according to OpenAI: Musk demanded that he be given a majority of the equity, as well as control of the board of directors, and he also wanted to be the CEO. According to the statement on the OpenAI website:
Elon not only wanted, but actually created, a for-profit as OpenAI’s proposed new structure. When he didn’t get majority equity and full control, he walked away and told us we would fail. Now that OpenAI is the leading AI research lab and Elon runs a competing AI company, he’s asking the court to stop us from effectively pursuing our mission.
After Elon left the startup in a huff, as I mentioned in my previous post, OpenAI came up with a byzantine corporate structure as a way of trying to be both nonprofit and for-profit at the same time: in effect, it created a for-profit entity that could raise the billions of dollars required to build an artificial intelligence engine, but that entity has a nonprofit at its core, which has theoretically been in charge of the entire structure — like a tiny alien pulling the levers in the body of a much larger humanoid robot (anyone who has seen the original Men In Black movie will know what I am describing).
OpenAI’s arcane structure
![](https://i0.wp.com/mathewingram.com/work/wp-content/uploads/2025/02/image-10-1024x945-1.png?w=525&ssl=1)
So does Musk want to buy that entire structure? No. In fact, he may not want to buy any of it, but I’ll get to that. In December, OpenAI chief executive Sam Altman announced that the company plans to “transition” into a for-profit company — one known as a Public Benefit Corporation, a structure whose stated goals explicitly include operating for the public good as well as for profit. The nonprofit part of the company would continue to exist, OpenAI said in a blog post, but it would no longer be in control of the fate of the company. As part of this restructuring, the nonprofit arm has to be given shares in the parent for-profit entity, and for a variety of regulatory reasons, this has to be done at fair market value.
So what is fair market value for the ownership stake that the nonprofit will have in OpenAI the parent company? Therein lies the rub, as Shakespeare liked to say. “The issue is that there are probably six to 10 different ways to value a company, depending on who you ask,” Columbia University Business School professor Angela Lee told a Yahoo Finance reporter, adding that her guess is that “depending on which model you use, you could be off by a factor of like 3x to 5x.” OpenAI’s overall market value is estimated at upwards of $157 billion, which is the post-money valuation implied by the funding round it just closed in October. But that value is expected to almost double to $300 billion in an upcoming funding round that SoftBank is said to be leading.
Despite these massive numbers, however, OpenAI is expected to lose $5 billion this year. That makes valuing it in the traditional way, based on its profits, impossible, which means that analysts and investors have to make forecasts about future revenues and potential future profits. How much would human-like intelligence be worth if OpenAI actually achieves it? Then we have to take those estimates and come to some conclusion about how much the nonprofit that currently controls OpenAI is worth, based on the ownership stake that it will have once the transition to for-profit is complete. That stake is expected to be about 25 percent, which some estimated back in October would be worth about $30 billion. Based on the increase in OpenAI’s theoretical market value, some argue that the nonprofit stake is now worth about $65 billion.
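The back-of-the-envelope arithmetic here can be sketched in a few lines. All of the numbers below are the estimates quoted above — the roughly 25 percent stake and the $157 billion and $300 billion valuations — not official figures, and the gap between the straight percentage math and the $30 billion / $65 billion figures people actually cite is a neat illustration of Angela Lee’s point that different models give you very different answers:

```python
# Illustrative only: the nonprofit's implied stake value at different
# assumed company valuations. Figures are the estimates quoted in the
# text, not confirmed numbers.

def stake_value(company_valuation_bn: float, stake_fraction: float) -> float:
    """Dollar value (in billions) of a fractional stake at a given valuation."""
    return company_valuation_bn * stake_fraction

STAKE = 0.25  # estimated ~25% nonprofit stake after the conversion

# At the ~$157B post-money valuation from the October round:
at_october = stake_value(157, STAKE)   # straight math gives ~$39B,
                                       # though some pegged it nearer $30B

# At the rumored ~$300B valuation in the SoftBank-led round:
at_upcoming = stake_value(300, STAKE)  # straight math gives $75B,
                                       # vs. the ~$65B some now argue

print(f"Implied stake at $157B valuation: ${at_october:.2f}B")
print(f"Implied stake at $300B valuation: ${at_upcoming:.2f}B")
```

Musk’s $97.4 billion bid sits well above either of those figures, which is exactly why it complicates the board’s job of settling on a “fair market value.”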
A turn towards the bizarre
![](https://i0.wp.com/torment-nexus.mathewingram.com/content/images/2025/02/elon-musk-ai-isaacson-01.webp?w=525&ssl=1)
This is where Musk’s takeover offer comes in. By offering $97 billion for the nonprofit operations of the company, he has arguably helped set a market value for those assets, one significantly higher than what some believed they were worth. Is this just a stalking horse, as traders like to say? Or just a feint designed to cause trouble for Sam Altman (who Musk has called a “swindler” and “Scam Altman”)? Not according to a statement from Musk’s lawyer, which states:
“If Sam Altman and the present OpenAI Board of Directors are intent on becoming a fully for-profit corporation, it is vital that the charity be fairly compensated for what its leadership is taking away from it: control over the most transformative technology of our time. As we understand the OpenAI Board’s present intentions, they will give up majority ownership and control over OpenAI’s entire for-profit business in exchange for some minority share of a new, consolidated for-profit entity. If the Board is determined to relinquish OpenAI’s assets, it is in the public’s interest to ensure that OpenAI is compensated at fair market value. That value cannot be determined by insiders negotiating on both sides of the same table. The public is OpenAI’s beneficiary, and a sweetheart deal between insiders does not serve the public interest.”
As Bloomberg finance columnist Matt Levine noted, this leaves us with an “absolutely bizarre circumstance” in which a nonprofit plausibly might have a fiduciary obligation to sell to the highest bidder, even if it finds that highest bidder unsavory and uncharitable. “What if it got a topping bid from the Chinese Communist Party?” Levine writes. “What if a robot wearing a fake mustache came in and said ‘I will pay $150 billion for your company and will not use it to take over the world and enslave humanity, what even gave you that idea’? Would the charity’s obligation — its obligation not to give assets away to a for-profit company, but to be paid fair value for them — require it to sell to the highest bidder?” Recognizing these bizarre circumstances, Levine says, Musk decided to lob in a bid. “I cannot fault it!” Levine wrote. “It is top-tier M&A trolling.”
So does OpenAI have some kind of fiduciary obligation to consider this offer? Levine says even he doesn’t really know:
If you are the board of directors of a nonprofit organization and a consortium led by Elon Musk comes to you with a $97.4 billion hostile takeover offer, are you obligated to get the highest possible price, or are you allowed to consider other factors in deciding whether or not to accept his offer? Man: I do not know! It seems very unlikely that any nonprofit board has ever faced this problem before, and when I put it like that, it sounds completely incoherent. Nonprofits do not get hostile takeover offers, nonprofits do not get acquired by investor consortiums for $97 billion, and surely nonprofit boards do not have an obligation to maximize their valuation?
Is this all just an elaborate troll?
![](https://i0.wp.com/torment-nexus.mathewingram.com/content/images/2025/02/elon_musk_robot.png?w=525&ssl=1)
Does Musk really want to acquire OpenAI? Levine says probably not. The more likely rationale, he says, is that making the offer is just throwing a spanner in the works: driving the price of the nonprofit stake up, and thereby causing problems for Altman and for OpenAI (which of course competes with Grok, Musk’s own AI, built into X). One reason things could get complex is not just that OpenAI is in the process of trying to raise $40 billion at a $300 billion valuation; it also has to think about, and negotiate with, its major backer, Microsoft, which owns the rights to a 49-percent share of any future profits that OpenAI makes, and has given the company more than $13 billion in funding since its inception. If the nonprofit entity’s stake in OpenAI is worth more, does that mean that Microsoft’s stake has to be reduced?
It’s worth noting that Musk isn’t the only one who opposes OpenAI’s conversion to a for-profit company. Encode, a nonprofit advocacy group that co-sponsored California’s AI safety bill SB 1047, filed an amicus brief with the court in December supporting Musk’s claim. In its brief, the organization argues that:
If the world truly is at the cusp of a new age of artificial general intelligence (AGI), then the public has a profound interest in having that technology controlled by a public charity legally bound to prioritize safety and the public benefit… OpenAI plans to transfer control of its operations to a Delaware public benefit corporation. That would do more than shift control from one kind of “inc.” to another, leaving the organization’s mission in place. It would convert an organization bound by law to ensure the safety of advanced AI into one bound by law to “balance” its consideration of any public benefit against “the pecuniary interests of [its] stockholders.”
Sam Altman wants us to believe that he and OpenAI are on the verge of creating human-level artificial intelligence, something that a number of experienced scientists (although not Yann LeCun of Meta, one of the pioneers of deep learning) say could be extremely dangerous. But Altman also wants this to become a for-profit company motivated primarily by generating profits; it will be a public benefit corporation, but that status doesn’t impose many actual restrictions on how a company operates. Does Musk’s epic troll of a takeover offer help any of this? Of course not. But it does help to highlight some of the problems and conflicting interests of Altman and OpenAI, and perhaps it will get more people thinking about what the future of AI looks like.
Got any thoughts or comments? Feel free to either leave them here, or post them on Substack or on my website, or you can also reach me on Twitter, Threads, BlueSky or Mastodon. And thanks for being a reader.