This is critical. The previous framework tried to regulate every model trained with more than 10^23 FLOPs. That was absurd then, and there's no reason to think 10^26 is now the magic number. It's always *just* a bit higher than where we are today. Imagine if we had done this!!
This was last year by the way. In 2023. The year we got Llamas and Alpacas and GPT-4 and a Claude. The world, as you'll note, did not collapse into computronium.
For context, here's the plethora of models, almost all of which sit above the 10^23 barrier once thought essential to gate for existential-risk reasons.
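For a sense of scale, the 10^23 figure can be sanity-checked with the widely used FLOPs ≈ 6·N·D approximation (N parameters, D training tokens). A minimal sketch — the parameter and token counts below are rough public figures, and the helper function is purely illustrative:

```python
# Back-of-the-envelope training-compute check using the common
# FLOPs ≈ 6 * N * D approximation (N = parameters, D = training tokens).
# Counts below are approximate public figures, not exact values.

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute as 6 * params * tokens."""
    return 6 * params * tokens

THRESHOLD = 1e23  # the proposed regulatory cutoff

models = {
    "GPT-3 (175B params, ~300B tokens)": (175e9, 300e9),
    "Llama 2 70B (70B params, ~2T tokens)": (70e9, 2e12),
}

for name, (n, d) in models.items():
    flops = training_flops(n, d)
    status = "above" if flops > THRESHOLD else "below"
    print(f"{name}: ~{flops:.2e} FLOPs ({status} 1e23)")
```

By this estimate, GPT-3 alone already landed at roughly 3×10^23 FLOPs, i.e. over the proposed cap.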
@krishnanrohit The Campaign for AI Safety proposed “prohibiting the development of models above the level of Open AI GPT-3 or GPT-4 series of models” - in 2023/4 “In practice that means banning training runs using more than 10^23 FLOP in compute” https://shorturl.at/YL9kd
@DrTechlash Yup! Absurd.
@krishnanrohit I've heard it's because this value is where scaling becomes inference-optimal, and so it's being used to gate market capabilities more than performance
@krishnanrohit @jeremyphoward I really don’t want regulation dictating things like model training and parameters. We already know that’s what would end up happening.
@krishnanrohit Imagine if they just maybe said: "nothing as random as a frickin casino"? https://x.com/realcoffeeAI/sta...
@krishnanrohit They do this because it's a backdoor pause AI attempt
@krishnanrohit "The previous framework tried to regulate uranium piles above 10^26 uranium atoms. It was absurd then, no reason to think 10^29 is now the magic number." The fact that you have no reason to think any level of intelligence is dangerous is a fact about your map, not the territory.
@krishnanrohit This is proof the safety people can't be trusted with FLOPS
@krishnanrohit Insanity. Doomers must never live this down
@krishnanrohit There is no magic number. The goal is to de-risk this enterprise. 10^23 or any random # is a good starting point. Adjust the upper bound as you gain more knowledge. 10^23 turned out to be safe, now carefully explore & verify up to 10^26. Incrementalism in the face of xrisk.