- OpenAI CEO Sam Altman said testing GPT-5 left him scared in a recent interview
- He compared GPT-5 to the Manhattan Project
- He warned that the rapid advancement of AI is happening without sufficient oversight
The description of GPT-5 provided by OpenAI CEO Sam Altman reads more like a thriller than a product introduction. Recounting his experience testing the model on a recent episode of the This Past Weekend with Theo Von podcast, he spoke in hurried tones that provoked more doubt than the alarm he seemed to want listeners to feel.
Altman described moments when he felt quite anxious and remarked that GPT-5 “feels very fast.” Despite being the driving force behind the model’s development, Altman said that in some testing sessions he found himself comparing GPT-5 to the Manhattan Project.
In addition, Altman delivered a scathing critique of the state of AI governance, claiming that “no adults are in the room” and that oversight frameworks have not kept pace with AI advancement. It’s a strange way to market a product that promises significant advances toward artificial general intelligence. It’s one thing to raise possible risks, but it seems a little dishonest to pretend that he has no control over GPT-5’s performance.
Analysis: Existential GPT-5 fears
Nor is it completely clear what frightened Altman; he skipped over the technical details. The reference to the Manhattan Project is yet another overblown parallel. It seems strange to compare a sophisticated auto-complete to a project that signaled global stakes and irreversible, potentially catastrophic change. OpenAI comes across as either careless or inept when it claims to have created something it doesn’t completely understand.
There are indications that GPT-5, which is anticipated to be released soon, will surpass GPT-4 in many ways. Although Altman’s description of a “digital mind” may signal a change in how AI developers view their work, this sort of apocalyptic or messianic forecasting seems absurd. Existential dread and feverish excitement have dominated public conversation about AI; something in between seems more fitting.
Altman has previously expressed his displeasure with the AI arms race in public. He has stated that AI may “go quite wrong” and that OpenAI needs to behave ethically while still producing beneficial products. However, the central concern with GPT-5 is power, even though it will most likely arrive with improved tools, more user-friendly interfaces, and a somewhat sharper logo.
If the next generation of AI is quicker, more intelligent, and more perceptive, it will be handed more responsibility. And based on Altman’s remarks, that sounds like a horrible idea. Even if he’s exaggerating, I’m not sure that’s the kind of company that should be deciding how that power is used.