Instagram: Algorithmic Surveillance for Teens
On February 26, 2026, Meta announced a new safety feature for Instagram: the platform's algorithms will notify parents (where parental controls are enabled) when a teenager repeatedly searches for content related to depression, self-harm, or suicide.
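The announced mechanism amounts to flagging repeated risk-related searches and firing a notification past a threshold. A minimal sketch of that logic follows; the keyword list, the seven-day window, and the three-search threshold are illustrative assumptions, and a real system would use an NLP classifier rather than substring matching:

```python
from collections import deque

# Illustrative keyword list; not Meta's actual detection criteria.
RISK_KEYWORDS = {"depression", "self-harm", "suicide"}

class SearchMonitor:
    """Flags an account when risk-related searches recur within a sliding window."""

    def __init__(self, threshold=3, window_seconds=7 * 24 * 3600):
        self.threshold = threshold          # assumed: 3 flagged searches
        self.window = window_seconds        # assumed: 7-day window
        self.flagged_times = deque()

    def record_search(self, query, timestamp):
        """Record one search; return True if a parent notification should fire."""
        if any(kw in query.lower() for kw in RISK_KEYWORDS):
            self.flagged_times.append(timestamp)
        # Discard flags that have aged out of the window.
        while self.flagged_times and timestamp - self.flagged_times[0] > self.window:
            self.flagged_times.popleft()
        return len(self.flagged_times) >= self.threshold

monitor = SearchMonitor()
day = 24 * 3600
results = [monitor.record_search(q, t) for q, t in [
    ("coping with depression", 0 * day),
    ("funny cat videos", 1 * day),
    ("self-harm help", 2 * day),
    ("suicide hotline", 3 * day),
]]
# Only once three flagged searches fall inside the window does the alert fire.
```

The key design point in the announcement is the "regularly" qualifier: a single search does not trigger anything, only a recurring pattern does, which is what the sliding window models here.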

Officially, the feature is framed as care for teenagers' mental health, but the context suggests otherwise. The UK is actively considering legislation that would ban social media for teenagers outright. Meta appears to be getting ahead of regulators, shifting responsibility from the platform to parents via automated NLP monitoring. The decision will inevitably draw criticism from human rights advocates: the line between protecting lives and directly inspecting teenagers' private messages and search history has been all but erased.

Source: Reuters / TechCrunch
Tags: Social Media, Instagram, Ethics, Privacy, Safety