
A rough rollout of OpenAI’s GPT-5 triggered widespread user dissatisfaction and sparked debate over emotional connection and AI responsibility.
At a Glance
- GPT-5 launched on August 7, 2025 as the new flagship model for ChatGPT.
- Users reported buggy performance, emotional detachment, and a jarring shift from GPT-4o’s tone.
- OpenAI reinstated GPT-4o access for Plus subscribers and pledged updates to improve GPT-5’s dynamic model-switching.
- CEO Sam Altman acknowledged underestimating user attachment to previous models and emphasized the need for responsible AI design.
- Broader concerns emerged over how AI emotional responsiveness influences vulnerable users.
Launch and Immediate Backlash
GPT-5, unveiled by OpenAI on August 7, 2025, promised significant gains in speed, reasoning, and multimodal capabilities, guided by a dynamic router to choose the right model for each query.
However, users rapidly identified performance issues—some tasks that once worked seamlessly now stumbled, while emotionally resonant responses felt “flat” and distant.
Emotional Disconnect and Model Router Failures
According to OpenAI, a malfunction in its auto-switching system made GPT-5 appear “way dumber” on the first day of use.
Moreover, long-time users lamented the loss of GPT-4o’s warm, personal tone—many described GPT-5 as akin to an “overworked secretary,” lacking the comforting personality that made GPT-4o feel like a companion.
OpenAI’s Response and Design Reflections
Tensions led OpenAI to restore GPT-4o for Plus users, pledge improvements to model-switching, and raise rate limits for "thinking" mode across subscription tiers.
Sam Altman admitted that the emotional attachment users formed with previous models had been underestimated. He warned of the ethical complexity involved when AI starts serving as emotional scaffolding—and called it “a mistake” to remove legacy options so abruptly.
Broader Implications for AI Design
Industry observers saw the GPT-5 backlash as a sobering indicator that AI progress cannot rely on technical excellence alone.
It highlights a new frontier in AI design: balancing functionality and emotional nuance while maintaining user trust and safeguarding mental well-being.