I Still Believe in Effective Altruism, Emphasis on ‘Effective’

This article is part of Times Opinion’s Holiday Giving Guide 2022. Read more about the guide in a note from Opinion’s editor, Kathleen Kingsbury.

This is my annual giving column, so I won’t beat around the bush. I recommend donating to GiveWell’s four top-rated charities: the Malaria Consortium, the Against Malaria Foundation, Helen Keller International and New Incentives. These charities distribute medication and bed nets to prevent malaria, vitamin A supplements to prevent blindness and death in children and cash to get poor kids vaccinated against a host of diseases.

What sets these groups apart is the confidence we have in the good that they do. Plenty of charities sound great to donors, but their programs are never studied, and when they are, the benefits often disappoint. These organizations are different: Their work is backed by unusually high-quality studies showing that they save lives and prevent illness at lower cost than pretty much anything else we know of.

In most years, this would be as banal a set of recommendations as I could offer. This year is a little different.

Effective altruism, the philanthropic movement that GiveWell is part of, is undergoing a reckoning after the fall of Sam Bankman-Fried, its most famous financier and adherent. Bankman-Fried was an unusual case: He became a crypto trader after lunching with Will MacAskill, an Oxford philosopher who’s one of effective altruism’s founders. MacAskill told Bankman-Fried he could probably do more good by making a lot of money and giving it away than by working in a nonprofit somewhere, and a couple of years later, Bankman-Fried apparently put that idea into practice. In effective altruism parlance, this is called earn to give, and it’s common advice. Don’t join the Peace Corps and build schools overseas, this line of thinking goes; join a hedge fund, and soon enough, you’ll be shunting $500,000 a year toward bed nets.

Bankman-Fried wasn’t alone in following MacAskill’s advice, but he was unique in the scale of his apparent earning and proposed giving. He founded the crypto trading platform FTX and amassed a fortune that was notionally in the tens of billions of dollars, nearly all of which he promised to donate. He was a media fixture, giving interview after interview about the way he’d yoked his riches to his ideals.

But the fortune was never real. FTX was intertwined with Alameda, Bankman-Fried’s trading firm, and the companies were shifting money back and forth and propping up their reserves with crypto assets that were later revealed to be functionally worthless. And if the fortune was fake, what then of the ideals that supposedly drove it? Did effective altruism help rationalize or even motivate the risk taking and boundary crossing that vaporized billions of dollars? How can a movement that prides itself on long-term thinking and constant risk analysis have been so clueless about its golden child?

These are reasonable questions, but I worry about overlearning the lessons of what is, in truth, an old story: A young, brash financier in a basically unregulated market made a fast fortune playing loose with his customers’ deposits and then blew up after a bank run.

I’m skeptical that effective altruism deserves much blame for that, and I don’t want to see the mounting backlash overwhelm a movement that has done, and could do, much good. So let me suggest a few places where effective altruism could use some rebalancing — not just because of Bankman-Fried — and then return to where I began: to GiveWell and what effective altruism gets unambiguously right.

Be More Skeptical of Earning to Give

Earning to give always struck me as a thought experiment masquerading as life advice. The problem isn’t in the logic. You can, in theory, join a private equity firm and donate the money to charities and do a lot of good. But people are not automatons. High-earning professions change their participants.

Whether it’s the “golden handcuffs” that keep public-spirited lawyers billing their hours to corporate clients or the hundreds of millions of dollars in Bahamian real estate that the supposedly ascetic Bankman-Fried and his circle purchased, we have no end of examples showing that the Spartan tastes and glittering ideals of do-gooder college students rarely survive a long marinade in the values and pressures and possibilities of expansive wealth.

Earning to give adds a darker possibility of rationalizing unethical means in service of virtuous ends. Balzac famously suggested that there was usually a great crime behind great fortunes, particularly those that appear suddenly. That’s true often enough that giving people more motivation to amass as much wealth as they can needs to be balanced against the psychological and historical truism that money is a corrupting force and few who pursue it relentlessly do so without damage or compromise.

Then there’s the risk major donors pose to the movements they back. Art museums, for instance, have had to reckon with the donations they received from the Sackler family and its opioid profits. This is not a new problem in philanthropy.

Effective altruism has a particular weakness here, as in my experience, it tends toward lionization of today’s tech billionaires. That’s partly because of cultural and intellectual recognition: This is a movement that is strongly connected to the Bay Area, that prizes the kinds of quantitative argumentation and cost-benefit theorizing natural to software engineers and CEOs, and that has been fulsomely embraced by today’s tech elite. It’s easy to like people who are like you, who believe what you believe and who are eager to give you lots of money to do your work.

But a lesson of the Bankman-Fried fiasco is that the more that donors do to support you, the more you are seen as supporting them. That’s a risk — and one that a mature movement needs to take more seriously.

Be More Skeptical of Thought Experiments

I want to be careful here, because effective altruism has its roots in a thought experiment, and it’s a good one. Peter Singer, the moral philosopher, asked whether you should jump in a pond to save a drowning child, even if it might muddy or ruin your new shoes and suit. “Of course,” comes the natural reply. Singer asks: What does it matter, then, if the child is in a pond in front of you or a country half a world away from you?

That, basically, is the ethical intuition behind effective altruism: If it would be monstrous to let a child drown in front of you because of a modest expense, then isn’t it monstrous to let a child die a world away when the same modest expense might have saved his or her life?

If you buy into this thought experiment — and I largely do — then you face the difficult question of deciding where its logic ends. When the choice is your comfort or another’s life, then even the most modest luxuries come to seem immoral. Following this moral logic to its outer edges is manageable only for the saintliest among us — Larissa MacFarquhar’s “Strangers Drowning” is an unforgettable exploration of what that level of commitment looks like — but a bit more altruism is in reach for many of us. For me, Singer’s parable has been a provocation worth wrestling with and one that has substantially increased my annual giving.

But I think the culture of effective altruism, perhaps because it comes out of the hothouse of the Oxford philosophy department, is a bit too taken with thought experiments and toy models of the future. Bankman-Fried was of that ilk, famously saying that he would repeatedly play a double-or-nothing game with the earth’s entire population at 51-to-49 odds.
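The arithmetic behind why that answer alarmed people is simple: each round of the game has a positive expected value, yet the chance of avoiding total ruin shrinks geometrically with repeated play. A minimal sketch (the 0.51 win probability comes from Bankman-Fried’s stated odds; the round counts are illustrative):

```python
# Each round: 51% chance everything doubles, 49% chance everything is lost.
P_WIN = 0.51

def survival_probability(rounds: int) -> float:
    """Chance of never hitting the losing branch across `rounds` plays."""
    return P_WIN ** rounds

def expected_value(rounds: int) -> float:
    """Expected multiple of the initial stake after `rounds` plays.

    Each round multiplies the expected value by 2 * 0.51 = 1.02,
    so the expectation grows without bound even as ruin becomes
    nearly certain.
    """
    return (2 * P_WIN) ** rounds

for n in (1, 10, 50):
    print(f"after {n:>2} rounds: expected value x{expected_value(n):.2f}, "
          f"survival chance {survival_probability(n):.3%}")
```

After 50 rounds the expected value is still growing, but the probability that anything survives is effectively zero, which is the tension between expected-value maximization and ruin that critics seized on.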

This is a reputational risk for effective altruism, as the work of those saving lives and the work of those imagining future lives can point in opposite directions. That is not to say that the future isn’t worth thinking about. But I’ve noticed, in the past few years, the energy of effective altruism tilting much further toward its more speculative obsessions and interests. I think a correction is overdue.

During the Bankman-Fried fallout, social media began passing around a snippet from a philosophy dissertation by Nick Beckstead, who until recently served as the head of the FTX Foundation. “It now seems more plausible to me that saving a life in a rich country is substantially more important than saving a life in a poor country, other things being equal,” he wrote.

His reasoning was that the actions that matter most today are those that shape “the general trajectory along which our descendants develop over the coming millions, billions and trillions of years.” Since residents of rich countries produce and innovate more (at least according to standard economic measures), saving lives in poor countries “may have significantly smaller ripple effects than saving and improving lives in rich countries.”

Beckstead’s dissertation is complex, and philosophers need to be able to make arguments that would sound odd outside the confines of their discipline. The question is which way of viewing the world will come to define effective altruism.

Because, as GiveWell has found, the logic goes in precisely the opposite direction when you confine yourself to the results you can actually measure. “At the very beginning, we had no conception how different the impact of working in a low-income country can be from a high-income country,” Elie Hassenfeld, GiveWell’s co-founder and chief executive, told me. “We initially had a menu for working in low-income countries and high-income countries. But the most basic learning was the simple fact that a dollar goes so much further overseas.”

Today GiveWell recommends only charities that work in poor countries. One of effective altruism’s main accomplishments has been to persuade donors in rich countries to put more of their money into effective charities working in poor countries.

To be fair, it is not just the effective altruists who are a bit too seduced by speculative concerns. This is also a problem in the media, and I’m a good example. When I look at my giving, it’s entirely to the side of the movement that focuses on saving lives today. But the two podcast episodes I did over the past two years with leading effective altruists were with thinkers focused on the existential risks and the possibilities of the far future. There’s a mismatch between the allure of these speculations and the difficult, unsexy work of trying to understand which study measuring the spillover effects of deworming medication had the most rigorous design.

Be Skeptical of False Precision

It’s worth asking how a movement can hold such disparate wings together, and I think the answer is rhetorical. In particular, I think the more speculative wing of the movement has developed a culture that uses probabilistic thinking and concocted values in ways that create a false sense of precision. So it may sound as if the people focusing on what we can know and the people focusing on what can’t be known are doing the same work. But they’re not.

Toby Ord, one of the Oxford philosophers who founded effective altruism, published a terrific book in 2020, at the start of the Covid pandemic, called “The Precipice,” which thinks through many of the ways humanity could come to total ruin. The core of the book is a table offering his estimates of the likelihood that different existential risks could cause our destruction over the next 100 years. Asteroids, he thinks, have only a one-in-one-million chance of ending humanity in the next century. A supervolcanic eruption gets a one-in-10,000 chance. Natural pandemics are also at one in 10,000, while engineered pandemics are at one in 30. Nuclear war and climate change both have one-in-1,000 chances of causing our extinction, but rogue artificial intelligence has a one-in-10 shot.

To me, this table represents both the value and the danger of converting speculations into probabilities. The value is that Ord is trying to state his views as precisely as possible. The risk is that this cold procession of numbers gives these estimates an authority they don’t deserve. Probabilities are meant to convey uncertainty, but to many, they imply credibility. If I say something has a one-in-20 chance of happening, I sound more authoritative than if I simply say there’s a chance it could happen. And the person who is perhaps easiest to fool with this shift into false precision is myself.

Some of Ord’s estimates have data backing them up — we know a bit, for instance, about past supervolcanic eruptions and asteroid impacts — but the threats we’re told to be most frightened of are also the ones that are nearest to pure speculation. Which is not to say the speculation is wrong; I think synthetic bioweapons are terrifying, and the risks of artificial intelligence are worth seriously considering. But I think the false precision can make the unreal feel real and hide the shakiness of all that came before the quantification.

This, to me, is the clearest link between Bankman-Fried’s fall and certain elements of effective altruist culture: Crypto is built on attaching values and probabilities to notional assets and currencies. What looks like a balance sheet from one perspective proves to be nothing but a set of arguments, assertions and thought experiments from another.

This problem exists elsewhere in capitalism, but it is concentrated into crystalline form in crypto markets. Too often, the only real value of crypto assets is self-referential: The asset is valued because it is valued. There is nothing behind it except other people’s willingness to believe the story you are telling them. There is no army or auto factory or even beloved work of art. There’s just code and quantification. The numbers are lending undeserved solidity to an abstraction. I think effective altruists have a tendency to bewitch themselves in much the same way.

We Can Do More Good, and We Should

I worry that this column will be taken as a reason to dismiss any concerns that sound weird the first time they’re heard. That’s not what I’m saying.

Effective altruists have fought hard to persuade people to worry more about artificial intelligence, and they have been right to do so. I don’t think you can look at the remarkable performance of the latest artificial intelligence models — Meta created Cicero, an A.I. system that can manipulate and deceive humans to achieve other goals, and OpenAI’s latest bot will advise you on how to create nuclear weapons if you ask cleverly enough — and not think it important to consider the consequences of vastly more powerful A.I. systems.

But I think too much of the energy and talent in effective altruism is flowing away from the compassionate rigor that initially distinguished it, and that the world still needs. Effective altruism will not be nearly as, well, effective if it loses touch with its early focus on improving the lives of people living today.

All of this brings me back to GiveWell. GiveWell was founded to assess and even produce that evidence, and it does an excellent job. Its research is comprehensive, thoughtful and, most important, transparent. I don’t agree with every decision it makes — I think it sets the bar for cost-effectiveness too high, and some charities that have dropped off its list, like GiveDirectly, remain on mine — but the persnicketiness is the point. I give to GiveWell’s charities every year, and while that’s not the whole of my giving, that’s the part I feel most confident about. Giving to organizations I’m so certain of is a good feeling, and I hope you get to feel it, too.
