There is a totally different approach to personal life policy, potentially also valid, held by Guerrilla Foundation and possibly Extinction Rebellion and others.
See guerrillafoundation.org/some-thoughts-on-effective-altruism/
and guerrillafoundation.org/additional-thoughts-on-effective-altruism/
Some things held in common with EA:
- Yes, traditional philanthropy may do more harm than good (for reflections on why, Guerrilla mentions the book Winners Take All by Anand Giridharadas)
- Yes, good to evaluate actions by their consequences rather than by their effect on the agent (no "warm-glow giving")
- Yes, good to seek out neglected causes
- Yes, donors (of either time or money) should try to seek out where they can make the most difference.
- Caveat: they argue that the places where you'll make the most difference will rarely be certain or measurable, and it's still worth using gut feelings combined with an EA-like mindset.
Some pain points they see in EA:
- Does not invite the philanthropist to reflect on the systemic causes of their wealth
- In addition, there's a worry that once someone has the EA framework, they may never reflect on the system because they've gotten a way to feel ethically superior and hold a delusion of impartiality. When you think you know something, it blocks you from learning.
- But so what? They still have wealth to give… is the alternative not to give? Is it fine if they give nothing, or waste it on a corrupt organization? Is it so important that the philanthropist knows where they got the wealth, so long as they give away a large chunk of their wealth anyway? Perhaps this sort of "introspecting on your privileges" is more important for people who weren't naturally inclined to give anything.
- What does it mean to "acknowledge privilege"? Giving/sharing the decision of donation with the historically underprivileged?
- At least with the values held by most philanthropists who currently buy into EA (highly educated white technologists from Silicon Valley), it's not clear that this effective giving will lead to any improvement in the social system as a whole. These people may be fixing symptoms caused by a broken system instead of going for the root causes, and in the long term this could mean having done zero net good, if there's any risk that the philanthropy is part of what allows the system to persist.
- This anti-good could take many forms:
- If EA became mainstream, perhaps it winds up enforcing a socially "correct" way to contribute, thus invalidating forms of activism that are harder to justify with numbers.
- But it seems to me that with EA going mainstream, there will be much more activism overall than there is now (see subpoint 1). So even if most of the activism became EA-guided, the fraction that's not EA-guided would likely still involve even more people than it does currently, so there is nothing lost.
- I don't know why I think this, but I have the feeling that EA empowers people and turns activism into a "real option" in the minds of people who would never otherwise have gotten into it. Measurable and tangible outcomes energize people to cause those exact outcomes, and starting to consider the idea of donating 80% of your income goes hand-in-hand with adopting other life policies related to doing good. That's how my own process went, anyway, so maybe I'm overestimating the number of people who would be likewise affected by discovering EA.
- While it can make sense to have an aversion to "cold numbers" (because number-guided policies and recommendations can be subverted as tools of the powerful), all forms of activism can ultimately be described with numbers, so the problem cannot, strictly speaking, be that the numbers won't work – it just takes probabilistic numbers instead of concrete numbers for the hard-to-measure things. In recent years, effective altruists have increasingly talked about "long shots": fighting for things with huge margins of uncertainty or things that can't be measured. (Proxy measures can often be Fermi-calculated to give you a rough idea of an action's relative impact, and if not, you can "use your gut" to elicit prior probabilities; a minimal sketch of what such a probabilistic estimate could look like appears a bit further below.) Effecting radical systemic change looks to me like something that can absolutely fit within this framework; the two are not opposed. I don't know what wouldn't fit within it.
- It's a "valve for releasing the pressure from systemic injustice". Slowly drain wealth from Haiti over decades, but when disaster hits Haiti, donate $10 and suddenly you're a "white savior" even if in some sense you're just giving Haiti's resources back to them. But I feel this hasn't anything to do with EA specifically, just with the idea of philanthropy in general. EA aims only to improve how the money is given, and maybe empowers more people to start giving in the first place. Of course people can exploit the fact of giving as a moral license to be bad people, but they could do that before EA too.
- In the linked article, I don't understand this paragraph: "[…] preventing empathy and solidarity for those who aren't as well off as you"
- EA donors favor things that have already been proven to work, so may fail to experimentally fund "startup charities" that haven't yet shown their worth. Guerrilla Foundation cites how they funded Extinction Rebellion when it was new, without any sort of guarantees. Quote: "More philanthropic funding, about half of it we would argue, should go to initiatives that are still small, unproven and/or academically ‘unprovable’, that tackle the system rather than the symptoms, and adopt a grassroots, participatory bottom-up approach to finding alternative solutions, which might bear more plentiful fruit in the long run."
- The scale only goes up to "global", but they'd prefer to go up one more level, to the "system"
… the founder of the Chorus Foundation, which started out as a traditional single-issue climate funder. A couple of years into his plan to spend down the foundation (!), he shares one of their main lessons learned, arguing that their work is not about “identifying the best policy or the most promising technology or the scariest science” (which is what EA would focus on) but about “generating the political will to enact the best policy, adopt the most promising technology and heed the scariest science”. This means a more radical, root-cause-oriented approach to philanthropy, oriented towards a just transition. It involves building political and cultural power to change the goals of the system (e.g. from maximum wealth generation for a few to wellbeing for all), opposing and breaking power where it is unchallenged and concentrated, building grassroots power, and providing the funding for the creation of bold alternatives to the current system.
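To make the "probabilistic numbers" point above concrete, here is a minimal sketch of such a Fermi-style estimate, written in Python. Every figure in it (the distributions, the 5% "success" chance, the impact units) is a made-up placeholder for illustration, not a real estimate of anything:

```python
import random

# A minimal Monte Carlo / Fermi-style comparison of two interventions.
# Every number here is a made-up placeholder, not a real estimate:
# the point is only that "hard to measure" can still be expressed as a
# distribution over outcomes instead of a single concrete figure.

def sample_proven_intervention():
    # Well-measured intervention: fairly certain, modest impact per dollar.
    return 100 * random.lognormvariate(0, 0.3)    # "impact units" per $

def sample_systemic_long_shot():
    # Long-shot systemic campaign: usually little effect, occasionally huge.
    if random.random() < 0.05:                    # assumed 5% success chance
        return 5_000 * random.lognormvariate(0, 1.0)
    return 5 * random.lognormvariate(0, 1.0)

N = 100_000
proven = sum(sample_proven_intervention() for _ in range(N)) / N
long_shot = sum(sample_systemic_long_shot() for _ in range(N)) / N
print(f"proven intervention: ~{proven:7.1f} impact units per $ (expected value)")
print(f"systemic long shot:  ~{long_shot:7.1f} impact units per $ (expected value)")
```

The point is only that a hard-to-measure long shot still yields a number you can compare against a well-measured intervention, just with far wider uncertainty around it.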
I agree with Guerrilla that "the end goal of philanthropy must be its own abolition".
Within any specific cause, there's only so much you can give until the cause is "done" and the problem is solved. The same seems to apply to the concept of giving overall, if it's effective and the problems targeted are true social problems. EA is not something you can do forever. Giving is always dependent on how much others have given so far – and on how many people in the past caused the social problem in the first place through some form of exploitation, and exploiting is fundamentally just "negative giving", right? Giving helps correct the balance.
To put another spin on what I said above, take so-called "offset donation" for the climate, where you give money to permanently prevent the release of some X kilograms of CO2-equivalents into the atmosphere, to make up for an action of yours, such as flying, that released X into the atmosphere. When we hear about this strategy, we may have any of several immediate reactions: that it looks like moral licensing, that someone is trying to worm their way out of responsibility with "mere money", that it's cheating, or that it's using your privilege to avoid living by example.
But the thing is: only airlines, burger chains and similar consumer services offer "offset" donations, and they only offset X, or maybe double X. Their offset scheme may also be ineffective enough that you don't even manage to offset X.
By donating directly to an effective organization, on a Western income, you may prevent the release of not just X but a thousand times X (a rough back-of-the-envelope version of this comparison is sketched below). It's no longer talk of mere "offsetting"; it's a real attack on CO2 levels. And this is possible precisely because so few people do it. Of course, if everyone did it, it would no longer be as effective an approach, and they'd then have to look at their lifestyle; but as it stands, looking at lifestyle may even be harmful due to wasting time and money on something that makes a small difference.
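Here is that comparison as a back-of-the-envelope sketch. Every figure is a hypothetical placeholder (the flight's emissions, the donation size, and the kg-per-dollar effectiveness are all assumptions for illustration, not real data):

```python
# Back-of-the-envelope comparison of "offsetting X" vs. a direct donation.
# Every figure below is a hypothetical placeholder, chosen only to illustrate
# the shape of the argument, not taken from any real offset scheme or charity.

flight_emissions_kg = 1_000                 # X: assumed CO2e released by the flight
airline_offset_kg   = flight_emissions_kg   # a typical scheme offsets roughly X

donation_usd        = 1_000                 # assumed size of a direct donation
assumed_kg_per_usd  = 1_000                 # assumed effectiveness of the recipient org

direct_donation_kg = donation_usd * assumed_kg_per_usd

print(f"offset scheme:   {airline_offset_kg:>12,} kg CO2e avoided (about 1x X)")
print(f"direct donation: {direct_donation_kg:>12,} kg CO2e avoided "
      f"(about {direct_donation_kg // flight_emissions_kg}x X)")
```

With those assumed figures the direct donation lands around a thousand times X; plug in your own estimates to see how the ratio moves.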
As long as EA remains unknown and uncommon, it tends to be a neglected way to spend resources for good. Paradoxically, we want more people to do EA so that EA becomes less necessary. To engage in EA is to live by example, since it's what we'd want the wealthy to do if we were not ourselves wealthy, and it's what we will do more of if and when we become more wealthy.
Not everyone in EA thinks this way. At least near the beginning of EA, people did not discuss what to do in the medium to long term, after all the low-hanging fruit has been picked. Guerrilla argues EA must consider more complex, politically involved actions, even if they're harder to measure.
In Doing Good Better, MacAskill mentions a book, Dead Aid: Why Aid Is Not Working and How There Is a Better Way for Africa by Dambisa Moyo. Guerrilla suspects that Moyo probably had more to say that MacAskill neglected to respond to.
Quote: "Shouldn’t wealth owners first, at the very least, be sure to have engaged in reparations for past wrongs (an issue that increasingly receives attention right now), act consciously to produce wealth for the many and not the few, through a more just and regenerative economy, and then and only then, think about how to maximize the impact of their philanthropic giving?"
How are these three not one and the same?
I guess it's easy to take too narrow a perspective when you engage in EA and not consider what systemic effects you may be able to bring about.
But it still seems like it's just guidance for which EA causes to pick, not stepping outside the EA toolkit.