In early July, Epoch AI estimated that 33 models had been trained with at least 10^25 FLOP. So, here's your REMINDER that the doomers asked to ban/restrict models above 10^21 FLOP (Conjecture), 10^23 (CAIS, ControlAI), 10^24 (GovAI), then 10^25 (FLI, PauseAI, ICFG). They viewed these models as an existential threat to humanity.
@DrTechlash > They viewed these past models as an existential threat to humanity. No, nobody said that (or at least no one credible who would be a meaningful representative). Please stop making things up. The thing that people said is that either at this threshold reporting requirements
@DrTechlash 🔴 Doomers: 33 predictions of doom → 33 failures. 🟢 Open Source: 1,600,000 models on @huggingface → 0 catastrophes.
@DrTechlash so basically they just hate fun
@DrTechlash I would have so much more time for you if I could trust your summaries. For instance, if I recall correctly, Wiener's current bill focuses on orgs, not models, in the most recent version and doesn't ban/restrict models.
@DrTechlash We now have 8B parameter models this powerful. I’m running one in my house on a $500 GPU. https://x.com/jacksonatkinsx/s...
@DrTechlash @DrTechlash, the goalposts keep moving faster than anyone can keep up with, tbh. Maybe we should focus more on actually building WITH this tech instead of just throwing up barriers?
@DrTechlash What’s a good number, genuine ask
@DrTechlash 100% wrong. How do you predict the end of something without a beginning or end? We focus too much on that "Headed, bodied, tailed snake"... it's just a snake.
@DrTechlash This is where Yann LeCun is correct. LLMs cannot become AGI due to fundamental limitations in their architecture. So we are not in danger until the tech paradigm changes
@DrTechlash @DavidSacks Experts aren't reliable because they know everything about what was relevant yesterday. Someone who has a burning curiosity keeps up with trends. I'd trust a hobbyist's opinion more.
@DrTechlash So disingenuous to present these orgs as viewing these models as an existential threat. Orgs working really hard to try to figure out and advocate for safe working practices in a highly uncertain landscape - why are you so keen to misrepresent & throw them under a bus?
@DrTechlash Interesting perspective on the escalating computational thresholds for AI models. The evolution is certainly thought-provoking.
@DrTechlash Impressive evolution in AI models! The thresholds keep rising, an intriguing trend.
@DrTechlash :P strawman. what they always said was that we don't know at what point the next OOM gives us dangerous capability and we should pause until we have better understanding of how they work and more reliable alignment methods
@DrTechlash @jeremyphoward Let’s pick much more interesting numbers like Mersenne primes. The 10th such prime (2^89 - 1) has 27 digits and should thus find its way into new legislation. We then have the flexibility to jump to the 11th Mersenne prime with 33 digits. None of this boring powers of 10 stuff.
@DrTechlash Give it time. A sniper rifle isn't harmless because it takes the bullet a few seconds to get to you.
@DrTechlash Great point about rising FLOP thresholds. The moving target shows why static bans struggle to keep pace with AI progress. What criteria should really define "systemic risk" beyond just compute power? FenzAI, Bloodtest Your AI Agents.
@DrTechlash Is it reasonable to assume that Grok4 may have been RonnaFLOPs (10^27)?
@DrTechlash The threat to humanity is the big market crash that's coming when we finally realise that $$$ trillions have been burnt for nothing.
@DrTechlash Pangu Ultra is fake. Even if it is real, it has the exact same config as DS and similar FLOP (because it is an upcycled DS)
@DrTechlash I remember the days when doomers claimed the danger of GPT-2, which was trained well below 10^25 FLOP. Today, you can freely download open-source models much more powerful than GPT-2. If the doomers' claim were true, then it would have already been too dangerous to live in this world.
@DrTechlash Please update this chart for models equivalent to GPT-5. How many models reached 10^26?