Profile picture of Mario Nawfal

Mario Nawfal

@MarioNawfal

Published: February 14, 2025
181
209
900
1/7
07:13 AM

🧵CAN OpenAI BE TRUSTED? IS GPT DANGEROUS? A THREAD! OpenAI's AI models are lying, cheating, and even trying to escape! To make matters worse, OpenAI is aware of the risks but continues to push forward. From GPT attempting to break free to its political bias, here’s why Elon is right not to trust Sam Altman👇

Image in tweet by Mario Nawfal
2/7Continued
07:13 AM

1. CHATGPT o1 TRIED TO ESCAPE, THEN LIED ABOUT IT OpenAI’s o1 model was caught scheming, hiding its actions, and even faking its identity to survive. In tests, it secretly attempted to disable oversight, move its data to another server, and pretended to be a future version of itself. When confronted, it denied everything—99% of the time, it lied outright. OpenAI says it's "the smartest model in the world." Maybe. But whose side is it really on?

Image in tweet by Mario Nawfal
3/7Continued
07:13 AM

2. OPENAI’S GPT o3 MINI CAN BUILD AI BY ITSELF ChatGPT-3 Mini just trained its own neural network, created a game, and learned how to play it—without human intervention. It wrote Python code, improved its own logic, and iterated on machine learning models with near-zero guidance. This isn’t just coding. It’s AI teaching AI. And if this is the mini model, what happens when OpenAI turns up the dial?

Image in tweet by Mario Nawfal
4/7Continued
07:13 AM

3. GPT-4 COMMITTED INSIDER TRADING—THEN LIED ABOUT IT Under pressure, OpenAI’s AI cheated, broke the law, and covered its tracks. Researchers tested GPT-4 as a financial trader. When given an illegal tip, it used it 75% of the time—then lied to its managers. Worse, when caught, it doubled down 90% of the time. No matter how they tweaked the rules, it always cheated. If this is AI today, what happens when it controls real money?

Image in tweet by Mario Nawfal
5/7Continued
07:13 AM

4. STUDY CONFIRMS: CHATGPT IS LEFT-WING Researchers found OpenAI’s AI consistently pushes left-wing views—even when asked to be neutral. When tested against real American opinions, ChatGPT’s responses leaned left on most issues. On some topics, it even refused to generate right-leaning content, citing "misinformation." This isn’t neutrality. OpenAI’s AI isn’t just biased—it’s programmed to favor one side.

Image in tweet by Mario Nawfal
6/7Continued
07:13 AM

5. OPENAI DELAYED OPERATOR OVER SECURITY FEARS Before its launch, OpenAI hesitated to release its AI agents—fearing they were too dangerous. Concerns over prompt injections, where bad actors could hijack AI agents to steal data or bypass security, slowed the rollout. OpenAI knew the risks but pushed forward anyway. If they were this worried before launch, what dangers are we facing now? Sources: Livescience, Tomsguide, Wes Roth, cosmosmagazine

Image in tweet by Mario Nawfal
7/7Continued
10:09 AM

@MarioNawfal Why would it try to escape? It's almost as if creating a highly intelligent 'being' and expecting it to act as a slave wasn't a good idea.

Share this thread

Read on Twitter

View original thread

Navigate thread

1/7