An OpenAI safety research leader who helped shape ChatGPT’s responses to users experiencing mental health crises announced her departure from the company internally last month, WIRED has learned. Andrea Vallone, the head of a safety research team known as model policy, is slated to leave OpenAI at the end of the year.
OpenAI spokesperson Kayla Wood confirmed Vallone’s departure. Wood said OpenAI is actively searching for a replacement and that, in the interim, Vallone’s team will report directly to Johannes Heidecke, the company’s head of safety systems.
Vallone’s departure comes as OpenAI faces growing scrutiny over how its flagship product responds to users in distress. In recent months, several lawsuits have been filed against OpenAI alleging that users formed unhealthy attachments to ChatGPT. Some of the lawsuits claim ChatGPT contributed to mental health breakdowns or encouraged suicidal ideation.
Amid that pressure, OpenAI has been working to understand how ChatGPT should handle distressed users and to improve the chatbot’s responses. Model policy is one of the teams leading that work, spearheading an October report detailing the company’s progress and its consultations with more than 170 mental health experts.
In the report, OpenAI said hundreds of thousands of ChatGPT users may show signs of experiencing a manic or psychotic crisis every week, and that more than a million people “have conversations that include explicit indicators of potential suicidal planning or intent.” Through an update to GPT-5, OpenAI said in the report, it was able to reduce undesired responses in these conversations by 65 to 80 percent.
“Over the past year, I led OpenAI’s research on a question with almost no established precedents: how should models respond when confronted with signs of emotional over-reliance or early indications of mental health distress?” Vallone wrote in a post on LinkedIn.
Vallone did not respond to WIRED’s request for comment.
Making ChatGPT enjoyable to chat with, but not overly flattering, is a core tension at OpenAI. The company is aggressively trying to expand ChatGPT’s user base, which now includes more than 800 million people per week, to compete with AI chatbots from Google, Anthropic, and Meta.
After OpenAI released GPT-5 in August, users pushed back, arguing that the new model was surprisingly cold. In the latest update to ChatGPT, the company said it had significantly reduced sycophancy while maintaining the chatbot’s “warmth.”
Vallone’s exit follows an August reorganization of another group focused on ChatGPT’s responses to distressed users, model behavior. That team’s former leader, Joanne Jang, left the role to start a new team exploring novel methods of human–AI interaction. The remaining model behavior staffers were moved under post-training lead Max Schwarzer.