Social Media Algorithms ‘Prioritised Engagement Over Safety’

Social media companies including Facebook parent Meta Platforms and ByteDance-owned TikTok tailored algorithms to allow more harmful content into people’s feeds in order to boost engagement, the BBC cited whistleblowers as saying in a documentary.

The algorithm experiments intensified after 2020, allowing more “borderline” harmful content, as TikTok exploded in popularity during the Covid-19 pandemic and sharpened competition for Meta, according to the documentary, called Inside the Rage Machine.

An engineer at Meta said he was told by senior management to allow more “borderline” harmful content, including misogyny and conspiracy theories, into users’ feeds in order to better compete with TikTok.

Image: a young user looking at a laptop. Credit: Unsplash

Competition

“They sort of told us that it’s because the stock price is down,” the engineer reportedly said.

A TikTok employee gave reporters access to the company’s internal dashboards, and said staff had been instructed to prioritise several cases involving politicians over others that involved potential harm to children.

Decisions were allegedly made to maintain a “strong relationship” with political figures in order to avoid threats of regulation or bans, the staff member said.

When Meta launched its TikTok competitor, Reels, in 2020, it lacked sufficient safeguards and included more hate speech, bullying and harassment, and violence and incitement than elsewhere on Instagram, said senior Meta researcher Matt Motyl.

‘Maximises profits’

According to an internal study provided by Motyl, Meta’s Facebook was aware that its algorithm offered content creators a “path that maximises profits at the expense of their audience’s wellbeing”.

Meta said any suggestion it deliberately amplified harmful content for financial gain was “wrong” and that it had invested in safety, while TikTok said the claims were “fabricated” and it invested in technology that prevented harmful content from being viewed.
