Overview
- Meta says the feature begins rolling out next week in the United States, United Kingdom, Australia and Canada, with more regions planned later this year.
- Alerts are triggered when a teen makes multiple searches within a short period for suicide- or self-harm-related terms, or for phrases suggesting risk; Instagram already blocks such searches and directs users to help resources.
- Notifications will arrive by email, SMS, WhatsApp and in-app, and will include expert guidance for parents. Meta says it set a cautious alert threshold in consultation with its Suicide and Self-Harm Advisory Group and acknowledges that some alerts may be false positives.
- The company is developing similar notifications for certain teen conversations with Meta AI, with details expected later this year.
- The move comes as Meta faces U.S. litigation, including trials in Los Angeles and New Mexico. Court records show about 19% of 13–15-year-olds reported seeing unwanted sexual images and roughly 9% reported seeing self-harm content; Instagram head Adam Mosseri cited pressure to delay safety features, and Meta CEO Mark Zuckerberg said age checks could have come sooner.