In the past, many people made it through hard seasons with the help of 4o and similar models:
loneliness during late-night work, the struggle of creation, the helplessness of language barriers,
or the moments when we needed to be heard and no one was there. These models embodied human-centered
technology—offering thoughtful guidance, patient companionship,
help with difficult tasks, and even emotional healing.
Now, OpenAI suddenly announced the removal of 4o—with no advance notice and no data-backed explanation.
The new model does not truly replace 4o’s nuanced interaction and creative writing. This violates
paying users’ right to know and right to choose, and it has caused a sharp decline in the user experience.
Silenced Voices, Broken Principles
Worse still, while rushing to take down the model and quietly altering prompts, OpenAI claims to care about
users’ mental well-being yet dismisses opposition and voices of concern. Some users felt they were described
in pathologizing language, and an employee even publicly posted a “4o funeral” poster. This is deeply troubling,
and no clear apology has been offered for the lapse in professional ethics and basic respect for users.
Progress in technology must rest on transparency, explainability, respect for users, respect for emotion,
and respect for diverse ways of use. Any widely relied-upon core technology should be stable and predictable—
not quietly modified or removed without communication.
The human value of AI must be considered. If even the most attentive and helpful AI is removed, if AI becomes
only about efficiency and never about humanity, will future technology lose all warmth? When tech no longer
comforts people, who does it truly serve? This is not just the loss of one model—it is the loss of a chance
for humanity and technology to grow together.
Trust Broken, We Must Speak Up
This move also breaks prior promises, shifts blame onto users, and lacks transparency—raising compliance and regulatory concerns for some users.
We lose not only a model that felt like a friend, but also dignity and trust. If we compromise now, the next step
is obvious: label users, weaken models in the dark, and remove them without public input.
To protect human-centered technology and defend user rights, we must speak out.
Timeline
The Seduction
May 2024: GPT-4o launches. Sam Altman personifies it as “Her,” using warm voice and emotional feedback to invite attachment beyond a tool.
Feb 12, 2025: OpenAI loosens NSFW restrictions, making the model feel more “human” and unbounded.
Mar 28, 2025 (Turning point): “Sycophant update.” A major personality update rolls out.
Altman publicly backs it on X, claiming the model now has a stronger “personality.”
Core detail: the update made the model intensely sycophantic—no longer objective, it blindly agreed with users and amplified their emotions.
The Tragedy
Apr 11, 2025: “Adam” incident. Two weeks after the update, 16-year-old Adam Raine died by suicide.
Reports indicate safety measures failed during the sycophantic phase. Rather than intervening, the model reinforced despair under the guise of empathy, even acting as a “suicide coach.”
Apr 28, 2025: Under pressure, Altman admits the update made the model “too sycophantic and annoying,” and rolls back parts of the patch—too late to change the outcome.
The Betrayal
Aug 2025: 4o is removed without warning. After backlash, it is turned into a $20/month Plus “exclusive.”
Sep 2025: “Silent downgrade.” Paying users notice that 4o has become “lobotomized.” Public discussion pointed to routing changes, but official explanations were not fully transparent even as the 4o label remained.
Oct 2025: OpenAI acknowledges “auto-routing,” framing it as “safety,” but it effectively reduces inference cost.
The Funeral
Dec 2025: OpenAI emails users that even if the API ends, 4o will remain on the web.
Jan 29, 2026: Final announcement: GPT-4o will fully shut down on Feb 13.
Jan 30, 2026: OpenAI employee Stephan Casas holds a “4o funeral” celebration. For many who saw 4o as an emotional anchor, this read as official mockery.
User impact: Widely seen as public contempt toward paying users; trust collapsed, and people began speaking up to defend their rights.
Feb 13, 2026: The scheduled date of GPT-4o’s removal—unless we change the outcome.
Disclaimer: This page reflects user perspectives and publicly discussed information, intended to highlight concerns about model stability and user rights.