The OpenAI Coup Is Great for Microsoft. What Does It Mean for Us?
There was the coup that hit the headlines: the OpenAI board’s abrupt ousting of its co-founder and chief executive, Sam Altman. Now we are on the verge of a second, even more consequential coup, one that consolidates control of one of the most powerful and promising technologies on the planet in the hands of one of this country’s tech titans.
Monday, it was announced that Microsoft was hiring Mr. Altman and another OpenAI co-founder, Greg Brockman. Microsoft had already invested more than $13 billion in OpenAI; its absorption of OpenAI leadership — and the likely hiring of hundreds of OpenAI staff members who signed a letter saying they would leave to join Microsoft unless the board resigned — effectively completes its takeover. OpenAI may find some way out of this self-induced disaster, but any solution would probably require satisfying its infuriated investors by making its board more accountable to their interests.
There’s no small irony that OpenAI’s board, which reportedly was worried about the safety of its hugely popular product, triggered events that will probably shift it to leaders more beholden to market pressures for fast growth. The likely outcome of this fracas is a nail in the coffin of the most prominent effort to build a noncommercial version of artificial intelligence that would serve the public at least as much as it sought profits.
OpenAI was founded in 2015 with the explicit mission of building an alternative to the for-profit A.I. models being developed elsewhere. It was established as a nonprofit, and its stated mission is to “ensure that artificial general intelligence benefits all of humanity.”
But ultimately, building A.I. proved too expensive for a nonprofit to finance on its own. The cost of training just one of OpenAI’s chatbots, GPT-4, is estimated to be $100 million. So in 2019, OpenAI adopted a hybrid model: It remained a nonprofit but created a commercial arm that it called a “hybrid of a for-profit and a nonprofit” and said would “increase our ability to raise capital.”
The hybrid model allowed Microsoft to invest billions of dollars and acquire a 49 percent ownership stake in the for-profit arm of OpenAI. (The nonprofit parent owns just 2 percent.) In other words, Microsoft had already acquired a large interest in the promising start-up. Poaching its employees would be barely more than a formality, albeit one made much easier by the clueless actions of OpenAI’s board.
Generative A.I. (the large language models that are being hailed for their ability to create plausibly humanlike writing, speech and images) has been ceded to the for-profit sector. All of the leading A.I. companies — Microsoft, Alphabet, Meta, Anthropic, Hugging Face — are for-profit companies seeking to reward their investors.
If A.I. has anywhere near the power that its makers claim it has, should its future rest solely in the hands of the commercial sector, particularly businesses whose models have largely involved hoovering up our data and using it to manipulate us?
“The risks of placing A.I. development entirely in the hands of demonstrably untrustworthy Silicon Valley companies are too high,” argues the technologist Bruce Schneier. He and others are pushing for the United States to support an A.I. public option — technology built through a government-directed program “that could support democracy rather than undermining it.”
A public option could look like Europe’s attempt to set up Gaia-X, a European cloud service. Or it could be more like that of China, which has invested heavily in building A.I. capacity through public-private partnerships.
In the United States, A.I. is largely a commercial enterprise. In 2023 the U.S. government is expected to invest $1.8 billion in core A.I. research, while venture capital invested almost $18 billion in A.I. start-ups in the third quarter alone. Access to A.I. is largely controlled by three companies — Amazon, Alphabet and Microsoft — which together hold two-thirds of the global cloud computing market, a market essential to building powerful A.I. models.
An American public investment in cloud computing would help Fei-Fei Li, an A.I. pioneer and a co-director of Stanford’s Institute for Human-Centered Artificial Intelligence. She has been working to build better A.I. to help keep hospital patients safe by automatically analyzing video footage from inside the hospital. But she said she couldn’t afford the enormous amounts of computing power she needed to analyze huge troves of video data.
“I cannot build the kind of model I wish I could to help keep patients safer,” she told me. “The public sector is very underresourced.”
Some have argued that the best approach is to break up Big Tech’s control over the cloud computing market — increasingly an A.I. choke point, given the massive amounts of computational power needed to collect and analyze the huge data sets that power artificial intelligence.
We have already seen what those companies have done when they control multiple parts of a market. Amazon is being sued by the Federal Trade Commission, which accuses it of using its power as a commerce marketplace to give its own products an unbeatable advantage. And Google is being sued by the Department of Justice for paying Apple and other companies to make Google the default search engine on their devices, shutting out competing search engines. Similarly, the big three cloud providers also offer consumer-facing services and could easily use their market power, for instance, to deny competitors access to their services.
British regulators have already started an investigation into whether Amazon and Microsoft are abusing their market power in cloud computing by making it too hard for customers to switch cloud providers.
Barry Lynn, the executive director of the Open Markets Institute, argued that cloud computing has become too important to be left to for-profits. Not only is it the backbone of artificial intelligence, but it is also the backbone of nearly all computing, from office productivity apps to games and social media.
“This is foundational infrastructure for our entire online economy,” he said. “The fact that there are only three corporations that do this gives them all sorts of power, including the power to exclude competitors or set pricing in a discriminatory way, and it also leads to them not paying enough attention to stability and resiliency.”
Mr. Lynn said that cloud infrastructure should be separated from Big Tech’s other businesses and regulated as an essential utility.
Imagine for a moment that cloud computing were a public resource that anyone could use for a modest fee, like public libraries. Innovation would blossom. Dr. Li could use as much computing power as she needed to train her patient safety models. OpenAI would never have had to go to Microsoft for computing capacity in the first place, and it could have been one of many nonprofits building large A.I. models.
That would be the ultimate coup: taking back the power of computation on behalf of the public.