Effective Altruism, a caveat.

So, there’s this thing called “effective altruism” that’s been gaining steam for the last decade or so, in large part thanks to Peter Singer’s best-selling book, The Life You Can Save. The basic idea is that, instead of blindly donating money to whichever charity first comes to mind, you ought to investigate more thoroughly which charity will be able to leverage your donation to effect the most significant change. Very often, this will not be one of the bigger non-profit organizations, since their size comes with significant organizational overhead and inertia.

Sounds pretty good so far, right? Why would you ever not donate to the charity that will get you the best bang for your buck? It’s an appealing idea, and I was a pretty strong proponent when I first encountered it. It carries a lot of weight with the Rationalist community (Rationalism here not referring to the epistemological position). I feel like it stems from an increasingly prevalent tendency to optimize and quantify every single problem that comes our way. One big Hammer of Rationalization for every nail.

Don’t get me wrong, I don’t necessarily disagree with this idea. I do think that, in the vast majority of cases, it’s probably a good idea to donate to an organization you know will be able to put that money to the best use. In fact, the company I work for is firmly rooted in this space of helping organizations, a large portion of which are humanitarian non-profits, more effectively tackle the complex problems they face.

The thing that rubs me the wrong way about this approach is not so much tied to effective altruism per se as it is to this modern, global tendency to optimize, rationalize and quantify. We just want to “science the shit out of this”, as Mark Watney put it so eloquently in The Martian.

I suppose, rather than denying that we should quantify as much as we possibly can and act on those quantitative assessments, I want to keep the notion on the table that some things cannot be quantified, and that’s fine.

I recently read a convincing argument on someone’s personal webpage (unfortunately, I can’t seem to find the link. Will put it here if I come across it again) that, as privileged, highly educated Westerners, our efforts are better spent focusing on earning more money, which comes relatively easily to us, and donating it to charity, rather than acting in a more direct, personal manner.

That sounds great in theory, but the question on my mind is this:

Say I were to do the “most effective thing” and donate to a charity that keeps refugees off the streets and provides them with shelter, one that might be able to help significantly more people thanks to my donation. How does that stack up against my directly interacting with those refugees in the streets, perhaps letting them know that people actually care about them, rather than leaving them to feel alone and isolated in a country full of people who don’t seem to care about their fate, save a couple of humanitarian non-profits?

There is such a large human aspect to altruism that, I feel, gets optimized away when we try to reduce the simple act of helping others to a math problem.

Deep down, it feels like a cover we made up so we can tell ourselves we’ll be able to fix the world’s problems the way we do best: by consuming more. The commodities we’re consuming are the charities we pay to get over our white guilt, without having to actually go out and confront any of the issues we helped create.

And, as ever-conscious consumers, we like to do our research and look for the most interesting deal out there. After all, don’t we all want the best bang for our buck?