Instagram, operated by Facebook parent Meta, is to apply restrictions similar to the US PG-13 cinema rating system to teenagers’ accounts, as the company faces pressure to show that it is not prioritising its profitable advertising business over protecting young people.
The restrictions will be automatically applied to the accounts of those under 18, and users will not be able to opt out without a parent’s consent.
Under the restrictions, Instagram will not recommend posts containing strong language, certain risky stunts, or content that could encourage potentially harmful behaviour, such as posts showing marijuana paraphernalia, Meta said.

Restrictions
The update, which follows the rollout of mandatory teen accounts last year, is to first be applied in Australia, the UK, Canada and the US, with plans to expand it to the rest of Europe and elsewhere early next year.
Teens will no longer be able to follow accounts that regularly share age-inappropriate content, or whose name or biography contains inappropriate material, such as a link to an OnlyFans account, Meta said.
If teens already follow such accounts, they will no longer be able to see or interact with their content.
The update will also apply to AI chats and other experiences targeted to teens.
A stricter “limited content” setting can be applied by parents to block more content and remove teens’ ability to see, leave or receive comments under posts.
Meta has long said that it has restrictions in place to protect young people from dangerous content, but a recent study led by a former Meta whistleblower found that teen accounts researchers created were regularly recommended age-inappropriate sexual content.

Sexual content
This included “graphic sexual descriptions, the use of cartoons to describe demeaning sexual acts, and brief displays of nudity”, said the study, which was led by Arturo Béjar, a former senior Meta engineer, with academics from New York University, Northeastern University, the UK’s Molly Rose Foundation and others.
Instagram also recommended a “range of self-harm, self-injury, and body image content” to teen accounts that the researchers said “would be reasonably likely to result in adverse impacts for young people, including teenagers experiencing poor mental health, or self-harm and suicidal ideation and behaviours”.
Meta rejected the findings and said it protects young people on Instagram.
Rowan Ferguson, policy manager at the Molly Rose Foundation, said Meta’s PR announcements often “do not result in meaningful safety updates for teens” and further updates must be tested on their effectiveness.
A number of countries have passed, or are considering, bans on social media use by young people in response to persistent reports of harm to children.
Australia has passed a landmark social media ban for youths aged under 16, which comes into force in December, while France urged similar measures last month after a parliamentary committee found that TikTok used an addictive design that had been copied by other social media platforms, including Meta’s properties.
Social media ban
The French committee was set up in March initially to examine the psychological effects of TikTok on minors after seven families filed suit against the company last year, accusing it of exposing their children to content that pushed them toward taking their own lives.
In June of last year the US surgeon general called for warning labels to be included on social media platforms, similar to those attached to cigarettes and other tobacco products, as a step toward addressing a mental health “emergency” amongst young people.
Facebook, Instagram and TikTok face dozens of legal challenges in the US alleging they entice millions of children onto their platforms and use addictive techniques to ensure they stay there.
New York City this month filed a federal lawsuit against Snapchat, Facebook and Instagram parent Meta Platforms, TikTok and YouTube for allegedly contributing to a mental health crisis amongst youths by developing intentionally addictive platforms.
The lawsuit says the platforms are not doing enough to block under-age users, accusing them of gross negligence and creating a public nuisance.

EU investigation
The EU opened an investigation into multiple social media platforms over child protection concerns earlier this month.