Published: August 13, 2023

Remember the signatories of the @FLIxrisk open letter, which called for a pause on advanced AI development? According to a new paper, "Why They're Worried," their motivation to sign had nothing to do with X-risk. Their concerns were NOT centered on "Human Extinction" at all. 1

Image in tweet by Nirit Weiss-Blatt, PhD

Despite its limitations (small sample size, not peer-reviewed), the "Why They're Worried" paper is well-organized and includes valuable quotes about the signatories' actual concerns… https://datascienceincontext.c... 2

In response to the Future of Life Institute's letter, there was intense media coverage focused on "X-risk." The paper's authors (@imstruckman & Sofie Kupiec) "sought to understand signatories' personal perspectives, and how their beliefs relate to the letter's stated goals." 3


It was unclear "whether the signatories are truly as aligned with the letter as it is easy to assume." 4


According to the early signatories, most of them did not "envision the apocalyptic scenario that some parts of the document warn about." "While A FEW aligned with the letter's existential focus, MANY were far more preoccupied with problems relevant to today." 5


Moshe Vardi (@vardi) "disagreed with almost every line." Ricardo Baeza-Yates (@PolarBearby) "thought that the request was not the right one and also that the reasons were the wrong ones." An anonymous signatory "didn’t read it all and [doesn't] buy into it all." 6


@giano: "I don't want to end up with the entire world consuming an AI inside Microsoft Office that is being shaped after, no offense, a 20-year-old white and entitled Caucasian guy that works in the top university." (This description applies to some signatories...) 7


AI is "a tool for addressing global challenges, promoting equality, enabling creative expression & furthering scientific knowledge. It elicits a sense of wonder & anticipation among these signatories. Whatever their worries, most of them are technologists & enthusiasts first." 8


Among the "Open Letter" interviewees (1): Benjamin Kuipers, @giano, Arturo Giraldez, @jimmykoppel, Andrew Barto, @Peterbart, Steve Petersen, Samuel Tenka, Joe Kwon, Camilo Rojas, @alan_winfield, @BenjaminRosman, Christopher Markou, @vardi, @phillip_isola, Alejandro Bernardin 9

Among the "Open Letter" interviewees (2): Yu-Ting Kuo, @MendelsohnSimon, @PolarBearby, @bbentley_1, John Edwards, @ron_kuper, @benbendc, Alessandro Saffiotti, Chuck Anderson, @Tiwary_ar, and Rahi Patel. Others were quoted anonymously. - Arxiv: https://arxiv.org/abs/2306.008... 10

Final note: While none of the signatories mentioned it, "Moloch" appears 28 times in the paper to frame their quotes. (@Tegmark made the analogy on @lexfridman's podcast.) When "Moloch-driven industry/forces" became "systematic complexities at Moloch's command," it was just too much. 11


@DrTechlash @ylecun @FLIxrisk This is yellow journalism at its worst. This study is a few quotes from 37 out of 30,000+ signers. You're implying that the 30,000 signers' concerns were "NOT centered on 'Human Extinction' at all" and that this is a dastardly plot ...while ignoring the unambiguous letter!


@DrTechlash @FLIxrisk Incredibly disingenuous OP. The study surveyed 37 out of 30,000 people, and even in that small group they did not collect any quantitative data on how worried they are about x-risk. I'm glad this statement leaves no room for misinterpretation: https://www.safe.ai/statement-...

@DrTechlash @FLIxrisk Fascinating to see you burn all of your credibility so publicly and dishonestly. Incredibly bad faith journalism.

@DrTechlash @FLIxrisk This is why I prefer the CAIS open letter to the FLI one

@DrTechlash @ylecun @FLIxrisk This is a confusing and misleading post. The @FLIxrisk open letter actually makes no mention of human extinction, and opponents of AI safety are trying to make it sound like any reasonable call for prioritizing safety amounts to full doomerism, which absolutely does not reflect

@DrTechlash @aart_eacc @FLIxrisk Look, someone’s using their brain on the internet!

@DrTechlash @FLIxrisk It absolutely does acknowledge the worry about existential risk, quoting Rosman on page 8 (right after the introduction): "there is a potential branch of the future where you only have one shot at getting something right." In addition to that, it mentions a range of other, more

@DrTechlash @FLIxrisk Great thread. Thank you😃

@DrTechlash @Plinz @FLIxrisk At 37/50000 I think each of these observations should include a phrase like “some of the people surveyed” or something acknowledging that we really don’t know much about “the signatories” based on this work. Categories and examples useful, but tells us nothing about the signers.

@DrTechlash @FLIxrisk Why are you lying? You pretend as if all the people who signed the letter have revoked their signatures. Of the 30,000+ people they asked, fewer than 40 answered.

@DrTechlash @Plinz @FLIxrisk Do not forget that most in the tech industry are nerds that innovate computers, not their own character; we need leaders with noble vision and will to implement it, not just intellectuals doing verbal battles as a word play over their perceptive view of the future.

@DrTechlash @FLIxrisk Right or wrong, everyone agrees that there are risks and that we need to both think about and do something about them. The thing about the letter is that it has kick-started some focus and action.
