In brief
- Around 1.2 million users (0.15% of ChatGPT's weekly users) discuss suicide with ChatGPT each week, OpenAI revealed.
- Nearly half a million show explicit or implicit signs of suicidal intent.
- The latest GPT-5 scores 91% compliant with OpenAI's desired safety behaviors, but earlier models failed more often and now face legal and ethical scrutiny.
OpenAI disclosed Monday that around 1.2 million people out of 800 million weekly users discuss suicide with ChatGPT each week, in what could be the company’s most detailed public accounting of mental health crises on its platform.
“These conversations are difficult to detect and measure, given how rare they are,” OpenAI wrote in a blog post. “Our initial analysis estimates that around 0.15% of users active in a given week have conversations that include explicit indicators of potential suicidal planning or intent, and 0.05% of messages contain explicit or implicit indicators of suicidal ideation or intent.”
That means, if OpenAI’s numbers are accurate, roughly 400,000 active users (0.05% of 800 million) were explicit about their intention to commit suicide, not just implying it but actively seeking information on how to do it.
Author: Jose Antonio Lanz
