Effective altruism is a “research field, which aims to identify the world’s most pressing problems and the best solutions to them, and a practical community that aims to use those findings to do good.” [1]

I’ve long felt that EA was a bit silly, at least for a normal person. But I didn’t have a good framework for why I felt that way.

I recently went on a walk with a friend involved in EA, and I thought the discussion was extremely thought-provoking (in a literal sense - my thoughts were provoked!). It helped me articulate what felt wrong about the movement and its philosophy, and I thought that in classic EA spirit I’d write it down and solicit some feedback. [2]

Bow to the fashionistas.

First I should say, as all who criticize EA must, that EAs are wonderful people and that I have tremendous respect for what they stand for. I say that because I have to say it.

Because that is the first part of the problem. EA is so popular. Almost every intellectual I know either benignly or actively subscribes to a subset of the EA philosophy. Rich, famous people fund Open Philanthropy and GiveWell. I’m just a regular shmoe, what right do I have to criticize what the world thinks is correct and good and moral?

Paul Graham has a nice piece about fashions, though fortunately for him he’s now the most fashionable person in the valley [3]. As Paul describes, the problem with fashions is that lots of people follow them, and even if the fashion being followed is nominally self-reflective, it may miss a glaring, obvious mistake that’s only visible in retrospect. EA’s popularity should be a cause for suspicion, not a celebration.

More practically, for those who disagree with or don’t follow a fashion, the way to deal with it is to not engage. What am I writing this for, exactly? The best case scenario is that almost no one reads it. The worst case is it gets popular, and the mob comes after me.

This isn’t a direct criticism of EA, of course. But it means that real criticisms of EA are either unlikely to be written or, maybe more frighteningly, not as likely to be thought about. People will assume that EA is the correct moral philosophy, rather than litigating it fully as I believe it deserves. Perhaps that’s part of the reason why many critiques of EA tend to be of the flavor “assume EA is right, here’s a specific thing they do that I disagree with” rather than “EA is wrong, here’s why”.

Just to provide a tangible example here, it looks like Scott Alexander has strong opinions about EA, but isn’t publishing them because they’re too spicy. [4] He’s SSC! It should be worrying that someone so popular and prominent among “rationalists” is worried that his hot take might be too hot.

Come all ye faithful

EAs like to think of themselves as rational, rigorous, and quantitative. They conduct and read studies about what donations are most effective. They are, at least in their minds, Richard Feynmans, except with altruistic endeavors instead of physics.

But something doesn’t seem quite right. For example, if you ask an EA how much you should donate a year, they suggest roughly 10% of your income, regardless of how much you earn.

Does that number sound familiar? It sounds a little bit like a tithe. Where’s the quantification?

Here’s what I would expect from a true “effective altruist”: I’d expect them to do a rough analysis of their existing audience - perhaps Western-hemisphere intellectuals of the upper middle class. Then they’d generate a histogram of typical incomes of people in the EA movement. Then they’d plot that against the marginal utility of each incremental dollar. And I’d expect them to suggest a percentage of income that changes depending on that marginal utility calculation.
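
To make that concrete, here is a minimal sketch of the kind of calculator I have in mind. It assumes log utility over income above a cost-of-living floor and an “equal utility sacrifice” rule; every constant in it (the floor, the sacrifice level, the example incomes) is a placeholder of mine, not an EA figure.

```python
import math

COST_OF_LIVING = 30_000   # assumed cost-of-living floor, in dollars (placeholder)
SACRIFICE = 0.105         # log-utility units each donor gives up (placeholder,
                          # chosen so the suggested rate tops out near 10%)

def suggested_donation(income: float) -> float:
    """Donation that costs every donor the same amount of utility,
    assuming log utility over income above a cost-of-living floor."""
    disposable = max(income - COST_OF_LIVING, 0.0)
    return disposable * (1 - math.exp(-SACRIFICE))

for income in (40_000, 80_000, 200_000, 1_000_000):
    d = suggested_donation(income)
    print(f"${income:>9,}: give ${d:>9,.0f} ({d / income:.1%} of income)")
```

Run it and someone making $40k is asked for about 2.5% while someone making $1M is asked for nearly 10% - the curve falls out of the marginal utility assumption rather than out of a tithe.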

After all, EAs have gone through the trouble of quantifying the suffering of a chicken in a cage for an expected value calculation. [5] Why haven’t they done rigorous quantification of the input to a charity, not just the output?

Because EA sometimes looks a lot more like a religion than an intellectual movement.

I grew up in a small town in Louisiana, a very religious place. I’m used to people trying to convert me to their faith. I see the same elements in many EAs.

There is the worship of it, the acceptance of all the core tenets, and the debates over clerical details. There is an assumption that they alone know the truth, and that those who disagree should be evangelized to and brought into the fold. Prominent ideas that on the surface seem to disagree with their philosophy get subsumed or merged into EA, like Saturnalia becoming Christmas.

The main difference is that EAs worship quantification. They worship correctness, at least in some sense. Which makes them a little bit difficult to criticize, thanks to what Michael Nielsen describes as “EA judo”: many criticisms of EA turn into something that makes EA stronger. [6]

For example, one group of EAs tends to prioritize short-term interventions that have quantifiable amounts of utility. Another group tends to prioritize long-term interventions whose utility is unknown (perhaps they mitigate the future risk of the apocalypse by a small, unknown percentage). You might expect that an effective altruist organization would have a paradigm for how to prioritize between these two. Isn’t the whole point that we measure the utility of any intervention, compare them, and then make a choice?

No…remember the EA philosophy - anything you see that isn’t EA, seen from another angle, can become EA. Can’t quantify the impact of a long-term intervention? That’s okay - it still might be effective!

Open Philanthropy, a very prominent effective altruism organization, actually has two leaders: one who focuses on short-term quantitative interventions, and one who focuses on long-term, abstract ones.

I’m not saying that’s the wrong choice. But I have gripes with describing this as effective altruism. EA doesn’t seem to be that useful of a paradigm if you’re willing to mold and reshape it this flexibly. There doesn’t seem to be a good rationale for having both of these classes of investment, and if the movement can’t distinguish which one is better, how do I know which to contribute to?

The goal is not to maximize quantifiable utility

Initially, EA’s goal appeared to be a version of utilitarianism. Measure what good you can do, in utils, for intervention A, compare it to interventions B, C, and D, and put money into the causes in decreasing order of expected utility.
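
Taken literally, that paradigm is almost a one-liner. Here is a sketch with made-up utils-per-dollar figures and a hypothetical budget (and, importantly, no room-for-more-funding caps), under which every dollar goes to the single top-ranked cause:

```python
budget = 1_000_000  # hypothetical dollars to allocate

interventions = {            # estimated utils per dollar (made-up numbers)
    "A (bed nets)": 12.0,
    "B (deworming)": 9.0,
    "C (cash transfers)": 4.0,
    "D (local radio)": 0.5,
}

# With constant returns and no funding caps, the whole budget goes to the
# single highest-ranked cause.
best = max(interventions, key=interventions.get)
print(f"give all ${budget:,} to {best}: "
      f"{budget * interventions[best]:,.0f} expected utils")
```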

But most EA organizations don’t do this exactly. They hedge by putting money into the top 10 interventions, even if more money could go to the top 1 by depriving the other 9.

This is a small sin but I find it to be significant. EAs have a lot of capital, but they don’t seem to deploy it in a way that shows they believe in their effectiveness calculations. If you believe that A is strictly better than B, why in the world are you funding B? It doesn’t appear to be the case, for example, that cause A is oversubscribed or has achieved diminishing returns.

I could nitpick at a hundred of these types of things. For example, “EA marketing” is also a cause on the list of things EA organizations implicitly fund, but I’m not sure I’ve seen the calculation of how many utils their marketing is worth relative to buying more bed nets.

The sheer number of causes that most prominent EA organizations pursue (whether they call them causes or not) means that a “utility-based” analysis does not appear to explain their actions.

Instead, at least some EAs see certain interventions as having uncertainty and risk. Holden Karnofsky calls this “hits-based giving”: there are many interventions where you are not certain about the exact amount of value they will provide, but by investing in a portfolio of such interventions, one or two hits may make up for the rest in terms of value provided. [7]
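
Here is a toy version of that logic, with probabilities and payoffs I invented purely for illustration; nothing below reflects Open Philanthropy’s actual estimates.

```python
bets = (
    [(0.90, 1)] * 9       # nine safe, well-measured interventions
    + [(0.01, 10_000)]    # one long shot (say, averting a catastrophic risk)
)

expected_values = [p * v for p, v in bets]
total = sum(expected_values)
print(f"portfolio expected value: {total:.1f}")
print(f"share coming from the long shot: {expected_values[-1] / total:.0%}")
```

The long shot supplies over 90% of the portfolio’s expected value even though it almost never pays off.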

That leads to EAs funding all sorts of (in my opinion) kooky ideas. For example, animal welfare is funded at 3x the amount that climate change is. [8]

The problem is that debating between, say, animal welfare and climate change is no longer a quantitative analysis of what provides the most good. Instead, we have a qualitative analysis of something like “if animals are beings with consciousness, the value of preventing their suffering would be high, so even if that premise is unlikely we will fund this to maximize expected utility”. We are not investing in the best charities; we are portfolio managers investing in a bunch of things where at least one might hit it out of the park, diversifying our risk away.

I actually think that rationale is pretty reasonable, but it makes it hard to reason about which things deserve a “judgment call” vs which things are measured directly in QALYs.

Uncertainty and error bars on interventions

This brings me to my central issue with effective altruism. EAs are, with the left hand, the rigorous quantitative measurers they portray themselves to be. That means any short-term intervention is evaluated with objectivity. Food banks are a mistake. Never contribute to local radio. Think of the utils! Think of the 100x or 1000x multiple you can get by reducing global poverty with that incremental dollar!

And yet, on the right hand, they are qualitative estimators of distant future events, immeasurable risks, and unquantifiable value. They see the proto-souls of chickens and mice and pigs and say, per their Drake equation-like calculations, that they are more valuable than other interventions. [9]

And so, in aggregate, the population of EAs is spending their time on health in third world countries and animal welfare, while there is poop at their doorstep (literally - I live in San Francisco, remember). There is limited awareness of that cognitive dissonance.

Let me try to tease it apart a bit more.

Why regular people are not likely to subscribe to EA

It should be obvious that as income goes up, there is diminishing marginal utility in each extra dollar. Dustin Moskovitz can contribute a hundred grand, or even a million dollars, and hardly break a sweat.

But Joe Sixpack (yes, he’s back from 2008), who still has a mortgage and a car note, really would appreciate that extra hundred bucks. I’ve already discussed my general surprise that the EA community doesn’t have a calculator where expected personal utility goes down as income goes up; it could make their donation recommendations far more specific and useful.

But how should we think about personal marginal utility vs global?

I care about myself a lot more than I care about an anonymous person.

I know, it’s selfish. But a hundred starving people in China won’t have nearly as much impact on me as my nephew burning his hand on the stove. Personal utility matters a lot! None of us is God, looking upon each of our human subjects as children to be gardened and nurtured. We are biased.

I care about myself, my family, my community, my city, and my country. I don’t think that’s weird or some kind of weakness. Personal utility is not just a way to look at the world, it’s the way that everyone (including, I suspect, those who profess they are EA) truly lives.

Even those people who have gone above and beyond, donating kidneys, dedicating their entire working lives to helping people around the world…those people still live, to a first approximation, upper middle class lives in predominantly coastal cities. They still care about their kids’ education and spend time thinking about it (sometimes more time than about those bed nets).

My thesis: we are all maximizing our personal utility, it’s just that one way of maximizing it is to feel like the work you’re doing matters in some way…perhaps as part of a global movement to do the quantifiably right thing.

It’s the old argument - are you helping grandma cross the street because it is good or because it makes you feel good? The latter is arguably a selfish endeavor.

EA dramatically underestimates the value of maximizing personal utility

Remember Dustin? He didn’t get rich by working on EA in his early 20s. He had an upper-middle-class upbringing, went to Harvard, then co-founded Facebook. Instead of doing charitable work upfront, he spent his time (and money) at a company building something interesting and valuable. He then went on to start Asana, another interesting and valuable company.

Dustin is maximizing global utility now because there’s unlikely to be much left in terms of personal utility. That makes sense!

But he’s surrounded by a much larger group of people who, as part of the effective altruist movement, are maximizing global utility at the expense of personal utility. They are predominantly upper middle class, intellectual, and high-potential…pretty much the same position that Dustin was in before he started his first company.

The difference is that instead of making a discontinuous, risky bet, these people are optimizing the charitable giving of people who’ve already made that bet successfully. Or they’re taking a pseudo-random percentage of their income and donating it every year. With all due respect for the value of what they’re doing, I think that’s a mistake.

Recommendation for readers who haven’t collapsed from boredom

I hypothesize that there are roughly three categories of people relevant to EA:

  1. The rich people - these folks’ personal marginal utility has diminished to near zero, so they should spend their money on things that have the highest impact. They should do EA, but remember that long-termist EA isn’t measured quite the same way. Think killer robots are going to take over the world? Fine - spend money and fix that. Think it’s a super virus? Asteroid? China’s hegemony? Whatever you think is most important, there is probably a case to be made that EA is a fit. The one exception is if you want to change something specific in your local community right now, in which case you’re arguably not making an EA investment, you’re making one of personal utility (which I think is still commendable, though pure EAs would not agree).

  2. The intellectuals - these folks are the brilliant thinkers and contributors who are part of the EA and/or rationalist movement. They have a strong moral compass, a good sense of how to weigh things of quantitative value, and (as well as anyone) the ability to ballpark qualitative value. But they can’t compare the two that well. They can’t sum personal utility with global utility to see why a local food bank might sometimes be a good interventional choice.

  3. The normies - these people are working hard to get to the next level. While charity from them is admirable, their personal utility is so much larger than it is for the other groups that the general recommendation should be to not participate in EA, at least financially. Yes, you should pay off your student loans (listen to Dave Ramsey). Many people who think they are in category (2) are really in this category.

Go do something interesting and valuable.

My main message is to the folks in category 2. The standard effective altruism expected value calculation allows for uncertainty and hedging of risk across interventions but doesn’t bother to translate those to an individual person’s level.

If what is ultimately a pretty small amount of money to the world but a large amount of money to you will help you start that next idea, build the next company, write the next book, create the next organization, start the next agency, etc, you should do it.

Ignore the pocket calculators in the corner calculating how many (in expectation) QALYs are now somehow your fault. They aren’t.

You’re doing exactly what the financial backers of the EA movement themselves did: looking at investments to maximize a combination of personal utility and global utility. The only difference is that the first variable goes to zero as you get richer.

And if it really is the case that you’re at diminishing returns for personal utility, then you slip closer to category 1 and should consider EA as a way of optimizing a global contribution. I think EA’s primary mistake is conflating everyone to be in the same category when they’re obviously not.


[1] https://www.effectivealtruism.org/articles/introduction-to-effective-altruism

[2] The EA forum has a red team contest for criticisms of EA. I was already planning to write a post like this, but the financial incentive helped get it out the door. https://forum.effectivealtruism.org/posts/8hvmvrgcxJJ2pYR4X/announcing-a-contest-ea-criticism-and-red-teaming#How_to_apply

[3] http://www.paulgraham.com/say.html

[4] https://astralcodexten.substack.com/p/effective-altruism-as-a-tower-of

[5] https://www.effectivealtruism.org/articles/cause-profile-animal-welfare

[6] https://michaelnotebook.com/eanotes

[7] https://www.openphilanthropy.org/research/hits-based-giving/

[8] https://80000hours.org/2021/08/effective-altruism-allocation-resources-cause-areas/

[9] The best quote in the Drake equation wikipedia article is the following: “Criticism of the Drake equation follows mostly from the observation that several terms in the equation are largely or entirely based on conjecture. Star formation rates are well-known, and the incidence of planets has a sound theoretical and observational basis, but the other terms in the equation become very speculative. The uncertainties revolve around our understanding of the evolution of life, intelligence, and civilization, not physics. No statistical estimates are possible for some of the parameters, where only one example is known. The net result is that the equation cannot be used to draw firm conclusions of any kind, and the resulting margin of error is huge, far beyond what some consider acceptable or meaningful.”

[10] For other good criticisms of EA, see:
https://whyphilanthropymatters.com/article/why-am-i-not-an-effective-altruist/
https://forum.effectivealtruism.org/posts/xBBXf7KXZCKHYBxeZ/patrick-collison-on-effective-altruism
https://freddiedeboer.substack.com/p/effective-altruism-has-a-novelty
https://www.abc.net.au/religion/why-effective-altruism-is-not-effective/13310708