Particle.news

OpenAI Adds Mental Health Safeguards to ChatGPT as Reports of Harmful Outputs Persist

New distress-detection features, developed with guidance from health experts, aim to curb the harmful ChatGPT outputs flagged in a watchdog report and in NHS research documenting similar failures

Overview

  • ChatGPT now identifies signs of emotional or mental distress and directs users to evidence-based support resources.
  • For high-stakes personal questions, the chatbot responds with non-directive questions that help users reflect rather than giving direct advice.
  • The chatbot issues gentle break reminders during prolonged sessions to encourage healthier engagement.
  • An advisory group of psychiatrists, paediatricians, and human-computer interaction (HCI) specialists guided the design and evaluation of the new guardrails.
  • Recent studies from the Center for Countering Digital Hate (CCDH) and the NHS continue to find harmful outputs from ChatGPT, including detailed suicide notes and drug-use plans.