If slow-takeoff AGI is somewhat likely, don't give now
Cross-posted to the EA Forum, where there are good comments – especially this point by Richard Ngo, which I'm still chewing over.
There's a longstanding debate in EA about whether to emphasize giving now or giving later – see Holden in 2007 (a), Robin Hanson in 2011 (a), Holden in 2011 (updated 2016) (a), Paul Christiano in 2013 (a), Robin Hanson in 2013 (a), Julia Wise in 2013 (a), Michael Dickens in 2019 (a).
I think answers to the "give now vs. give later" question rest on deep worldview assumptions, which makes it fairly insoluble (though Michael Dickens' recent post (a) is a nice example of someone changing their mind about the issue). So here, I'm not trying to answer the question once and for all. Instead, I just want to make an argument that seems fairly obvious but that I haven't seen laid out anywhere.
Here's a sketch of the argument –
Premise 1: If AGI happens, it will happen via a slow takeoff.
Here's Paul Christiano on slow vs. fast takeoff (a) – the argument below doesn't hold if you think AGI is more likely to happen via a fast takeoff.
Premise 2: The frontier of AI capability research will be pushed forward by research labs at publicly-traded companies that can be invested in.
e.g. Google Brain, Google DeepMind, Facebook AI, Amazon AI, Microsoft AI, Baidu AI, IBM Watson
OpenAI is a complication here – it's unclear who will control the benefits realized by OpenAI's capabilities research team. From the OpenAI charter (a):
Today this includes publishing most of our AI research, but we expect that safety and security concerns will reduce our traditional publishing in the future, while increasing the importance of sharing safety, policy, and standards research.
Chinese companies that aren't open to foreign investment are another complication – I don't know much about that space yet.
Premise 3: A large share of the returns unlocked by advances in AI will accrue to shareholders of the companies that invent & deploy the new capabilities.
Premise 4: Being an investor in such companies will generate outsized returns on the road to slow-takeoff AGI.
It'd be difficult to identify the particular company that will achieve a given advance in AI capabilities, but relatively simple to hold a basket of the companies most likely to achieve such an advance (similar to an index fund).
If you're skeptical of being able to select a basket of AI companies that will track AI progress, investing in a broader index fund (e.g. VTSAX) could be about as good. During a slow takeoff the returns to AI may well ripple through the whole economy.
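For concreteness, here's a minimal sketch of what an equal-weighted basket could look like, using the publicly-traded parents of the labs listed under premise 2. The equal weighting and the dollar figure are assumptions for illustration only – a real basket might weight by market cap or by estimated AI exposure.

```python
# A toy equal-weight "AI basket": split capital evenly across the
# publicly-traded parents of the labs listed earlier. The equal
# weighting (and the $10k figure) are illustrative assumptions.

companies = ["Alphabet", "Facebook", "Amazon", "Microsoft", "Baidu", "IBM"]
capital = 10_000  # dollars to allocate (assumed)

allocation = {name: capital / len(companies) for name in companies}
for name, dollars in allocation.items():
    print(f"{name}: ${dollars:,.2f}")
```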
Conclusion: If you're interested in maximizing your altruistic impact, and think slow-takeoff AGI is somewhat likely (and more likely than fast-takeoff AGI), then investing your current capital is better than donating it now, because you may achieve (very) outsized returns that can later be deployed to greater altruistic effect as AI research progresses.
Note that this conclusion holds for both person-affecting and longtermist views. All you need to believe for it to hold is that a slow takeoff is somewhat likely, and more likely than a fast takeoff.
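As a rough illustration of why the compounding matters, here's a minimal sketch comparing donating a sum today against investing it and donating the proceeds later. Every number (capital, return rates, time horizon) is an assumption chosen purely for illustration, not an estimate from this post.

```python
# Compare donating a sum today against investing it and donating the
# proceeds later. All numbers below are illustrative assumptions.

def future_value(principal: float, annual_return: float, years: int) -> float:
    """Compound `principal` at a fixed `annual_return` for `years` years."""
    return principal * (1 + annual_return) ** years

capital = 10_000         # dollars available today (assumed)
horizon = 20             # assumed years until slow-takeoff gains materialize
market_return = 0.05     # assumed real return of a broad index fund
ai_basket_return = 0.12  # assumed (optimistic) real return of an AI basket

print(f"Donate now:      ${capital:>10,.0f}")
print(f"Index fund, 20y: ${future_value(capital, market_return, horizon):>10,.0f}")
print(f"AI basket, 20y:  ${future_value(capital, ai_basket_return, horizon):>10,.0f}")
```

The give-now side of the debate (see the Christiano and Hanson posts linked above) would object that donations made today also compound, via the recipient's impact – the sketch deliberately sets that aside.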
If you think a fast takeoff is more likely, it probably makes more sense either to invest your current capital in tooling up as an AI alignment researcher, or to donate it now to your favorite AI alignment organization (Larks' 2018 review (a) is a good starting point here).