Is demystifying the algorithm the key to safer social media?

Under fire for promoting harmful content to vulnerable teens, TikTok has announced plans to make its recommendations more transparent

The algorithms that decide what we see when we log onto any given social media platform are notoriously opaque, and for the most part we let them guide our scrolling habits with little to no thought or intervention. Why is Twitter so insistent that I follow art history accounts with Roman bust profile pictures and dubious intentions? Why is Instagram’s Explore page so out of touch with my actual interests? Why does TikTok exclusively feed me videos of dogs making friends with unlikely animals? OK, that last one makes sense – as for the rest, it’s a mystery.

It’s become clear, however, that there are some dark side effects to the algorithms that Big Tech uses to “manufacture serendipity” in our online lives. Last week (December 14), for instance, the Center for Countering Digital Hate published a report titled “Deadly By Design”, which revealed that TikTok’s “thermonuclear algorithm” directs content related to self-harm and eating disorders to vulnerable users.

Researchers from the CCDH created TikTok accounts for fictional 13-year-olds across four separate countries (the US, UK, Australia, and Canada) to test how they were targeted by the app’s algorithms. Each new account watched 30 minutes of algorithmically-recommended content from its ‘For You’ page, liking any videos related to body image, mental health, or eating disorders. Reportedly, eating disorder and self-harm content was recommended to the fictional teens within minutes of creating an account. Accounts considered “vulnerable” were also targeted with 12 times as many self-harm videos as “standard” teen accounts.

It isn’t just TikTok. The new research echoes a lawsuit against Meta launched in June this year, which claims that Instagram’s “addictive” algorithm caused a preteen girl to develop an eating disorder, self-harm behaviours, and suicidal ideation, partly by pushing “thinspo” or “thin-spiration” content to her Explore page. This follows Instagram admitting that it promoted pro-eating disorder content to teens back in 2021.

Of course, it’s no secret that social media companies’ platforms are inherently addictive, and push polarising content in order to boost engagement and, in turn, ad revenue. But the harmful real-world impact, especially on young users, is now seemingly undeniable. So what are we – or, more importantly, the tech companies themselves – going to do?

In the CCDH report, the organisation lays out recommendations to help cultivate a safer online environment, including “proactive, informed enforcement” against eating disorder content and the coded hashtags that are used to share it, and laws to hold social media companies accountable for the content their algorithms promote. Notably, algorithmic transparency comes top of the list, with the CCDH recommending: “TikTok must provide full transparency of its algorithms and rules enforcement, or regulators should step in and compel the platform to do so.”

Shortly after the report was published, TikTok actually announced a new feature that aims to shed some light on its algorithm, which began rolling out on Tuesday (December 20). The feature – which appears as a question mark icon on your FYP – tells users why they were recommended a certain video, citing factors such as previous interactions, content the user has recently posted, or content that’s popular in the user’s region.

“This feature is one of many ways we’re working to bring meaningful transparency to the people who use our platform,” says TikTok in a statement. “Looking ahead, we’ll continue to expand this feature to bring more granularity and transparency to content recommendations.”

Could transparency alone be the key to making the platform safer? Perhaps not – users trapped in harmful content cycles probably aren’t going to care as much about how they got there as about what they’re being shown. Nonetheless, demystifying the algorithm may make it easier to diagnose how harmful content comes to be recommended, and to develop fixes and laws as a result.

Speaking of toxic feedback loops on social media, Elon Musk has spoken out against the opaque algorithms that determine our content intake on several occasions, warning: “You are being manipulated by the algorithm in ways you don’t realise.” Months before he bought Twitter for $44 billion at the end of October, he also floated the idea of making the platform’s recommendation algorithm completely open source, which would be a big step toward unpicking its effect on our brains.

Shortly after the deal was finalised, Musk appeared to double down on the plan. In his inaugural statement, he said that he wants “to make Twitter better than ever” by, among other things, “making the algorithms open source to increase trust”. Unfortunately, the open source idea appears to have been lost in the ensuing chaos, although Elon has had time to ban his impersonators, introduce a disgusting new colour scheme, and run polls about whether he’s fit to lead the company (answer: no lol). Hopefully he can fulfil his promise and make some strides for transparency before he finds a CEO “foolish enough” to take his place.
