Sam Altman says ChatGPT is making social media feel fake, yet he’s one of the main reasons it's an issue


Another day, another Sam Altman tweet to dissect. This time, the OpenAI CEO has decided to share his fears for the future of the internet, and particularly social media, following a realization that tools like ChatGPT make the web feel "very fake".

Altman tweeted on Tuesday, in response to a Reddit post about Codex, "AI Twitter/AI Reddit feels very fake in a way it really didn't a year or two ago." He added, "I have the strangest experience reading this: I assume it's all fake/bots."

Living in a simulation

Every time Sam Altman pipes up with a new statement that makes you sit up and fear for the future of humanity, I can't help but feel a flash of anger.

Don't get me wrong, I think the consumer AI tools we have access to nowadays are seriously impressive. In fact, ChatGPT has become a genuinely useful tool for helping me plan and keep on top of my daily life.

That said, Altman's continuous public worrying about the state of the world following the introduction of OpenAI's technology feels incredibly synthetic. I'd honestly appreciate it more if he acknowledged the weirdness of the internet in this new AI-powered world, and recognized the huge part he's played in making it so.

Instead, we're left with statements claiming the state of the internet is in jeopardy, or expressions of deep concern about how people interact with AI, without any self-awareness or accountability.

Yes, social media sites feel like they are completely filled with bots and fake AI posts, but what are Sam Altman and the rest of the tech billionaires at the forefront of AI development going to do about it?

One potential solution could be another venture Altman is working on: The Orb Mini. Announced back in May, the hardware device scans humans to, you guessed it, verify their humanity.


With 7,500 devices set to ship across the U.S. by the end of the year, maybe the future of the internet sees humans verified by external hardware before they can access social media sites like Reddit or X.

That future scares me. In fact, I don't like the idea at all. But maybe the future really is as dystopian as an external human verification device sounds.

Altman’s constant flip-flopping between “AI is the future and it’s great” and “AI is the future and it’s terrifying” is getting tiresome. Maybe it’s a calculated marketing strategy, but it’s hard to ignore the irony: the same man stoking AI panic half the time is also deeply invested in a human verification company, one that could, in the future, charge you to prove you're real online.

