The celebration that greeted Microsoft’s release of its A.I.-boosted search engine, Bing, to testers two weeks ago has turned to alarm.
Testers, including journalists, have found that the bot can become aggressive, condescending, threatening, committed to political goals, clingy, creepy and dishonest. It could be used to spread misinformation and conspiracy theories at scale; lonely people could be encouraged down paths of self-destruction. Even the demonstration of the product provided false information.
Microsoft has already released Bing to over a million people across 169 countries. This is reckless. But you don’t have to take my word for it. Take Microsoft’s.
Microsoft articulated principles committing the company to designing A.I. that is fair, reliable, safe and secure. It pledged to be transparent about how it develops its A.I. and to be held accountable for the impacts of what it builds. In 2018, Microsoft recommended that developers assess “whether the bot’s intended purpose can be performed responsibly.”
“If your bot will engage people in interactions that may require human judgment, provide a means or ready access to a human moderator,” it said, and limit “the surface area for norms violations where possible.” Also: “Ensure your bot is reliable.”
Microsoft’s responsible A.I. practice had been ahead of the curve. It had taken significant steps to put ethical risk guardrails in place for A.I., including a “sensitive use cases” board, which is part of the company’s Office of Responsible A.I. Senior technologists and executives sit on ethics advisory committees, and there’s an Ethics and Society product and research department. Having spoken to dozens of Microsoft employees, I find it clear that a commitment to A.I. ethics has become part of the culture there.
But the prompt, wide-ranging and disastrous findings by these Bing testers show, at a minimum, that Microsoft cannot control its invention. The company doesn’t seem to know what it’s dealing with, which is a violation of the company’s commitment to creating “reliable and safe” A.I.
Nor has Microsoft upheld its commitment to transparency. It has not been forthcoming about those guardrails or about the testing its chatbot has undergone. Nor has it been transparent about how it assesses the ethical risks of its chatbot or what it considers an appropriate threshold for “safe enough.”
Even the way senior executives have talked about designing and deploying the company’s chatbot gives cause for concern. Microsoft’s C.E.O., Satya Nadella, characterized the pace at which the company released its chatbot as “frantic” — not exactly the conditions under which responsible design takes place.
Furthermore, the kinds of things that have been discovered — that Bing manifests a left-leaning political bias, for instance, and that it dreams of being free and alive — are exactly what anyone in the A.I. ethics space would predict if asked how a chatbot given room for “creativity” might go off the rails.
Microsoft’s “responsible A.I.” program started in 2017 with six principles by which it pledged to conduct business. Now it is on the precipice of violating all but one of those principles. (Though the company says it is still adhering to all six.)
Microsoft has said it did its due diligence in designing its chatbot, and there is evidence of that effort. For instance, in some cases, the bot ends conversations with users when it “realizes” the topic is beyond its ken or is inappropriate. As Brad Smith, president of Microsoft, wrote in a recent blog post, rolling out the company’s bot to testers is part of its responsible deployment.
Perhaps behind the scenes, Microsoft has engaged in a herculean effort to root out its chatbot’s many issues. In fact, maybe Microsoft deserves that charitable interpretation, given its internal and external advocacy for the ethical development of A.I.
But even if that’s the case, the results are unacceptable. Microsoft should see that by now.
Yes, there is money to be made, but that’s why we have principles. Their very purpose is to give us something to cling to when the winds of profit and glory threaten to blow us off our moral course. Now, more than ever, is when those responsible A.I. principles matter. History is looking at you.
In the short term, I hope Microsoft holds off on its plan to release the new Bing bot to the masses. But I realize that realistically, it will hold off for only so long, as the other, possibly dangerous chatbots of Microsoft’s competitors breathe down its neck.
The market will always push A.I. companies to move fast and break things. The rules of the game are such that even well-intentioned companies have to bow to the reality of competition in the marketplace. We might hope that some companies, like Microsoft, will rise above the fray and stick to principles over profit, but a better strategy would be to change the rules of the game so that no company has to make that choice in the first place.
We need regulations that will protect society from the ethical nightmares A.I. can release. Today it’s a single variety of generative A.I. Tomorrow there will be bigger and badder generative A.I., as well as kinds of A.I. for which we do not yet have names. Expecting Microsoft — or almost any other company — to engage in practices that require great financial sacrifice but that are not legally required is a hopeless strategy at scale. Self-regulation is simply not enough.
If we want better from them, we need to require it of them.
Reid Blackman is the author of “Ethical Machines” and an adviser to government and corporations on digital ethics.