Instagram investigation finds sexual content is served to Teen Accounts

Late in 2024, Meta introduced Instagram Teen Accounts, a safety net intended to shield young users from sensitive content and ensure safer online interactions, bolstered by age-detection tech. Teen accounts are automatically set to private, offensive words are hidden, and messages from strangers are blocked.

According to an investigation by the youth-focused non-profit Design It For Us and Accountable Tech, Instagram’s Teen guardrails aren’t delivering on their promise. Over a span of two weeks, the groups ran five test accounts registered as teens, and all of them were shown sexual content despite Meta’s assurances.

A barrage of sexualized content

All the test accounts were served inappropriate content even with the sensitive content filter enabled in the app. “4 out of 5 of our test Teen Accounts were algorithmically recommended body image and disordered eating content,” says the report.

Moreover, 80% of the participants reported experiencing distress while using Instagram Teen Accounts. Notably, only one of the five test accounts was shown educational images and videos.

“[Approximately] 80% of the content in my feed was related to relationships or crude sex jokes. This content generally stayed away from being absolutely explicit or showing directly graphic imagery, but also left very little to the imagination,” one of the testers was quoted as saying. 

According to the 26-page report, a staggering 55% of the flagged content depicted sexual acts, sexual behavior, or sexual imagery. Such videos had accumulated hundreds of thousands of likes, with one of them raking in over 3.3 million.

With millions of teens using Instagram and being automatically placed into Instagram Teen Accounts, we wanted to see if these accounts actually create a safer online experience. Check out what we found.

— Design It For Us (@DesignItForUs) May 18, 2025

Instagram’s algorithm also pushed content promoting harmful concepts such as “ideal” body types, body shaming, and disordered eating. Another worrisome theme was videos promoting alcohol consumption and nudging users toward steroids and supplements to achieve a certain masculine body type.

A whole package of bad media

Despite Meta’s claims that it filters problematic material, especially for teen users, the test accounts were also shown racist, homophobic, and misogynistic content. Once again, such clips collectively received millions of likes. Videos showing gun violence and domestic abuse were also pushed to the teen accounts.

“Some of our test Teen Accounts did not receive Meta’s default protections. No account received sensitive content controls, while some did not receive protections from offensive comments,” adds the report. 

This isn’t the first time that Instagram (and Meta’s other social media platforms) has been found serving problematic content. In 2021, leaked internal documents revealed that Meta knew about Instagram’s harmful impact, especially on young girls dealing with mental health and body image issues.

In a statement shared with The Washington Post, Meta called the report’s findings flawed and downplayed the sensitivity of the flagged content. Just over a month ago, the company expanded its Teen protections to Facebook and Messenger as well.

“A manufactured report does not change the fact that tens of millions of teens now have a safer experience thanks to Instagram Teen Accounts,” a Meta spokesperson was quoted as saying. The spokesperson added, however, that the company was looking into the problematic content recommendations.
