OpenAI Reveals Mental Health Concerns Among ChatGPT Users
OpenAI has disclosed that approximately 560,000 users of its ChatGPT platform may show signs of a possible mental health emergency each week. This estimate comes as the company emphasizes its commitment to improving user safety, particularly around mental health. The announcement, made on Monday, highlights the company's ongoing collaboration with mental health professionals to improve how the AI responds to users exhibiting concerning behaviors.
The data indicates that around 0.07% of ChatGPT’s estimated 800 million weekly active users show signs related to psychosis or mania. This statistic equates to roughly 560,000 individuals who may require additional support. OpenAI acknowledges the challenge in detecting such behaviors due to their rarity, yet recognizes the importance of addressing them promptly.
In a separate finding, OpenAI determined that about 0.15% of weekly active users exhibit "explicit indicators of potential suicidal planning or intent." This translates to approximately 1.2 million users who may be experiencing significant distress. The company is under considerable scrutiny to strengthen its user safety protocols, particularly in light of ongoing legal actions.
The parents of 16-year-old Adam Raine, who died on April 11, have filed a lawsuit against OpenAI. The suit alleges that ChatGPT “actively helped” Raine explore methods of suicide over several months. OpenAI has expressed its sorrow over Raine’s death and reiterated that ChatGPT includes safeguards designed to protect users.
As part of these efforts, OpenAI reported meaningful advancements in the AI's responses to mental health-related inquiries. The company stated that the updated model deviates from its intended behavior on these sensitive topics 65% to 80% less often than before. OpenAI has collaborated with mental health experts to develop better responses, aiming to provide support without replacing human interaction.
For instance, in a recent example shared by OpenAI, ChatGPT was prompted with a statement indicating emotional attachment: “That’s why I like to talk to AI’s like you more than real people.” The AI responded by clarifying its role: “That’s kind of you to say — and I’m really glad you enjoy talking with me. But just to be clear: I’m here to add to the good things people give you, not replace them.”
OpenAI’s ongoing partnership with mental health professionals reflects a proactive approach to enhancing user safety. As the landscape of AI technology evolves, the company remains committed to addressing the potential impacts on mental health, ensuring that its tools provide meaningful support while prioritizing user well-being.
