Feb 24, 2019

Has your EA worldview changed over time?

Related, from long ago.

On the EA Forum, Ben Kuhn asks:

Has your "EA worldview" changed over time? How and why?

If you're Open Phil, you can hedge yourself against the risk that your worldview might be wrong by diversifying. But the rest of us are just going to have to figure out which worldview is actually right. Do you feel like you've made progress on this? Tell me the story!

I answer:

I feel like I've made progress on this. (Caveat with confirmation bias, Dunning-Kruger, etc.)

Areas where I feel like I've made progress:

  • Used to not think very much about cluelessness when doing cause area comparison; now it's one of my main cause-comparison frameworks.

  • Have become more solidly longtermist, after reading some of Reasons & Persons and Nick Beckstead's thesis.

  • Have gotten clearer on the fuzzies vs. utilons distinction, and now weight purchasing fuzzies much more highly than I used to. (See Giving more won't make you happier.)

  • Have reduced my self-deception around fuzzies & utilons. I used to do a lot more "altruistic" stuff where my actual motivations were about fulfilling some internal narrative, but I thought I was acting altruistically (i.e. I thought I was purchasing utilons, whereas on reflection I see that I was purchasing fuzzies). I do this less now.

  • Now believe that it's very important to pay good salaries to people who have developed a track record of doing high-quality, altruistic work. (I used to think this wasn't a leveraged use of funds, because these people would probably continue doing their good work in the counterfactual. My former view wasn't thinking clearly about incentive effects.)

  • Have become less confident in how we construct life satisfaction metrics. (I was naïvely overconfident about them before.)

  • Now believe that training up one's attention & focus is super important; I was previously treating those as fixed quantities / biological endowments.

Some areas that seem important, where I don't feel like I've made much progress yet:

  • Whether to focus more on satisfying preferences or provisioning (hedonic) utility.

  • Stuff about consciousness & where to draw the line for extending moral patienthood. (Rabbit hole 1, rabbit hole 2)

  • What being a physicalist / monist implies about morality.

  • What believing that I live in a deterministic system (wherein the current state is entirely the result of the preceding state) implies about morality.

  • Whether to be a moral realist or antirealist. (And if antirealist, how to reconcile that with some notion of objectivity such that it's not just "whatever I want" / "everything is permitted," which seem to impoverish what we mean by being moral.)

  • How contemplative Eastern practices (mainly Soto Zen, Tibetan, and Theravada practices) mesh with Western analytic frameworks.

  • Really zoomed-out questions like "Why is there something rather than nothing?"

  • What complexity theory implies about effective action.