The Paradox at the Heart of Elon Musk’s OpenAI Lawsuit

It would be easy to dismiss Elon Musk’s lawsuit against OpenAI as a case of sour grapes.

Mr. Musk sued OpenAI this week, accusing the company of breaching the terms of its founding agreement and violating its founding principles. In his telling, OpenAI was established as a nonprofit that would build powerful A.I. systems for the good of humanity and give its research away freely to the public. But Mr. Musk argues that OpenAI broke that promise by starting a for-profit subsidiary that took on billions of dollars in investments from Microsoft.

An OpenAI spokeswoman declined to comment on the suit. In a memo sent to employees on Friday, Jason Kwon, the company’s chief strategy officer, denied Mr. Musk’s claims and said, “We believe the claims in this suit may stem from Elon’s regrets about not being involved with the company today,” according to a copy of the memo I viewed.

On one level, the lawsuit reeks of personal beef. Mr. Musk founded OpenAI in 2015 along with a group of other tech heavyweights and provided much of its initial funding, but he left in 2018 over disputes with leadership, and he resents being sidelined in the conversations about A.I. His own A.I. projects haven’t gotten nearly as much traction as ChatGPT, OpenAI’s flagship chatbot. And Mr. Musk’s falling out with Sam Altman, OpenAI’s chief executive, has been well documented.

[Photo caption: Elon Musk’s lawsuit takes personal aim at OpenAI’s chief executive, Sam Altman. Credit: Jim Wilson/The New York Times]

But amid all of the animus, there’s a point that is worth drawing out, because it illustrates a paradox that is at the heart of much of today’s A.I. conversation — and a place where OpenAI really has been talking out of both sides of its mouth, insisting both that its A.I. systems are incredibly powerful and that they are nowhere near matching human intelligence.

The claim centers on a term known as A.G.I., or “artificial general intelligence.” Defining what constitutes A.G.I. is notoriously tricky, although most people would agree that it means an A.I. system that can do most or all things that the human brain can do. Mr. Altman has defined A.G.I. as “the equivalent of a median human that you could hire as a co-worker,” while OpenAI itself defines A.G.I. as “a highly autonomous system that outperforms humans at most economically valuable work.”
