I'm often critical of Effective Altruism (EA) and I'm sure I'll get more pushback for this, but I've been thinking a lot lately about the discourse on AI doomerism, extinction risk, etc., and here's my big take on what's going on and why. Buckle up, friends, it gets spicy.🧵
The fundamental calculus of EA, at least the version embracing longtermism, necessitates a focus on things that could kill us all, even if exceedingly unlikely, rather than plausible--or even current--things that merely harm lots of people in lots of ways.
Indeed, EA is essentially a point system: you weigh the impact on human lives by the likelihood of the event. The basic trap of doing this is that a human extinction event, even at one-in-a-million odds, is worth infinite points, so it dominates even high-likelihood, large-impact harms.
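To make that trap concrete, here's a minimal sketch of the expected-value arithmetic in the naive "points" framing above. The probabilities and harm magnitudes are hypothetical illustrations, not anyone's actual estimates:

```python
# A minimal sketch of the "point system" described above:
# expected points = likelihood of the event times its impact.
# All numbers below are made up for illustration.

def expected_points(probability: float, harm: float) -> float:
    """Expected 'points' for an event: probability times impact."""
    return probability * harm

# A plausible, ongoing harm: high probability, large but finite impact.
ongoing = expected_points(probability=0.9, harm=1e9)

# An extinction event: tiny probability, but impact treated as infinite.
extinction = expected_points(probability=1e-6, harm=float("inf"))

print(ongoing)     # 900000000.0 -- big, but finite
print(extinction)  # inf -- any nonzero probability times infinity is infinity,
                   # so extinction dominates every finite-harm comparison
```

Under this framing, no estimate of the extinction probability matters: as long as it's nonzero, the infinite term wins.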
This raises the question: what *could* kill us all? One obvious answer is nuclear war, but nobody would be impressed by a sexy new philosophical movement whose main conclusion is that... nuclear war is bad. So they had to dig deeper and find a less obvious x-risk. They chose AI.
AI is mysterious and often scary, and there already were communities claiming AI could wipe us all out--and these communities (centered around "rationalism") use faux-mathy arguments compatible with the probabilistic and logical methods EA embraces. It was the perfect choice.
The problem: it wasn't enough to document current AI harms (as others do) or predict likely near-term ones; those are just finite-point harms like many others we face in life. To justify their ideology, they really needed to convince people AI could eliminate all of humanity.
With EA reeling from the reputational damage SBF caused, this became something of an existential risk to the EA movement itself: nukes are too obvious, mosquito nets are too small. Putting AI x-risk on the map was the path to show the world the enormous value EA offers society.
So EA-affiliated people and orgs funded a number of "AI safety" related efforts. But the delicate balancing act they faced is they couldn't just focus on reducing the chance of x-risk from AI--they had to also focus on convincing the non-EA world that AI really poses this risk.
Hence the very public statements we keep seeing about AI extinction risk--not AI harms like disinformation, election interference, discrimination, massive job loss, concentration of economic power, etc.--the message HAD to be that AI could kill us all, for that's what EA needed.
Why did the message stick and spread? (1) Leaders at some of the top AI companies have EA ties, or at least some public embrace of EA ideology, so they can use that and AI x-risk to provide cover to rush out whatever harmful-yet-profitable AI they want--again, by the EA calculus, any finite harm is worthwhile if it prevents the infinite harm of extinction.
(2) Leaders at other tech companies, ones without any EA ties, quickly saw and jumped on the opportunity to use x-risk as convenient cover to do whatever they wanted to do--so Google and others happily embraced the message as a potent decoy and distraction from real AI concerns.
(3) AI is moving VERY quickly, and it is legitimately scary--and often unpredictable! This has generated a lot of public fear, often quite valid, so the natural anxiety around this new tech tied perfectly into existing sci-fi tropes of killer robots and AI gone awry.
(4) Famous AI researchers faced a choice: say their lifelong work contributed to the possible end of humanity but position themselves as heroes fighting it now (the @geoffreyhinton route--see @sharongoldman's excellent interview with @kchonyc: https://venturebeat.com/ai/top...)
or deny the x-risk of AI (the @ylecun route). In reality, many simply avoided the topic or declared the x-risk too speculative, but the media strongly highlights those making bold public statements either for or against x-risk. Which brings up the final point of amplification:
(5) The media LOVES stories like: Computers could kill us all! Evil tech company building bots that will wipe out humanity! Famous AI leader warns of extinction risk! So they jump at the chance to cover every statement these AI safety orgs put out, every letter with signatories...
In sum, the rationalists convinced themselves AI could spell the end of humanity, EA saw this as the perfect cause to champion but needed to spread the message that it's real, the tech industry leveraged this message--amplifying it in the process--as cover to race ahead in AI,
and the media ate up all the juicy fear and debates and public doomsday declarations and hero scientist narratives. None of this says AI isn't an extinction risk--in my view we really don't know; it's far too early to say much about it. What I do strongly believe is that
(1) we don't yet have credible science underlying this kind of AI risk the way we do with, say, the harms caused by climate change, and (2) it is a mistake to overlook the socioeconomic context of the loud voices in the AI doomsday debates.
Tech companies need to profit, AI leaders need to protect their reputation, EA needs to champion an overlooked cause that could wipe out humanity, and newspapers need articles that get clicks and hence generate ad revenue.
Also, some have pointed to OpenAI's unusual capped-profit structure, or the fact that Sam Altman holds no equity and won't personally profit, to argue he's genuinely working for humanity. But he may be drawn more to power than money--and both AI progress and AI x-risk give him enormous power.
That would in part explain why he seems somewhat contradictory, saying AI could kill us all yet racing ahead with AI anyway. Similarly with EA: they can't simply oppose AI, because AI will have many benefits. Their ideology says to weigh harms against benefits, but this is impossible--
these harms and benefits are far too unpredictable, as are the probabilities they're supposed to be weighted by. So the calculus falls apart--EXCEPT when it comes to extinction, where the probability is irrelevant. That's why EA is conspicuously quiet about AI in non-x-risk matters.
The bottom line of this thread is that incentives matter. A lot. (Be they money or pride.) This was my effort at untangling some of the history and incentives that I think shed light on the current debate and media storm over AI posing an extinction risk to humanity.
CLARIFICATION: EA did not *start* championing AI x-risk & funding AI safety efforts in the aftermath of SBF. I'm arguing that putting AI x-risk on the map (a cause they long advocated) was a path for EA to salvage its reputation after SBF, and AI going big recently provided that chance.
Also: I'm not arguing EA doesn't work on nukes; I'm arguing it would look silly if that were the only x-risk they worked on. MacAskill's book would not have been a bestseller if its main conclusion were that nuclear war is bad. He needed AI x-risk to be real, and he needed to convince the public of it.
