This Is Most Definitely the Way to Prevent Cyberbullying on Your App

Image: Lionel Bonaventure (AFP via Getty Images)

TikTok has finally found a way to prevent cyberbullying experienced by marginalized people online: Just don’t let people see their videos in the first place!

On Tuesday, the mega-popular app admitted to quashing videos made by queer people, disabled people, and other “special users” who were deemed a “bully risk by default,” according to German news site Netzpolitik. Moderators were instructed to watch the videos and decide whether their creators were “susceptible to harassment or cyberbullying based on their physical or mental condition,” according to moderation documents the site obtained.

If the creator was deemed to be such a person, the video was flagged to limit its reach, ostensibly to protect the creator from cyberbullying. From Netzpolitik:

TikTok uses its moderation toolbox to limit the visibility of such users. Moderators were instructed to mark people with disabilities as “Risk 4.” This means that a video is only visible in the country where it was uploaded. For people with an actual or assumed disability, this means that instead of reaching a global audience of one billion, their videos reached a maximum of 5.5 million people.

But the rules limited the reach of creators deemed disabled by moderators even further. And, because it’s content moderation on a tech platform, there was basically no time to make these judgments: Moderators had about 30 seconds to decide whether someone on screen was disabled and therefore needed to be saved from themselves.

This happened to videos trying to make it into TikTok’s For You feed.

Again, per Netzpolitik:

Many users tag their videos with hashtags like #foryou or #fyp. From this stage, TikTok’s instructions to the moderators said, people with disabilities were to be kept away.
The guidelines also give examples of users to whom this applies: “facial disfigurement,” “autism,” and “Down syndrome.” Moderators were supposed to judge whether someone had these characteristics and mark the video accordingly in the review process. On average, they had about half a minute to do this, as our source at TikTok reports.
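
To make the mechanics concrete: what follows is not TikTok’s actual code, just a minimal Python sketch, with hypothetical names, of the rule Netzpolitik describes: a moderator-set “Risk 4” flag that pins a video to its upload country and keeps it out of the global For You pool.

```python
# Hypothetical sketch of the visibility gating described in the leaked
# moderation documents. All names here are illustrative, not TikTok's.
from dataclasses import dataclass

RISK_4 = 4  # per Netzpolitik: "Risk 4" means country-only visibility

@dataclass
class Video:
    uploader_country: str  # country where the video was uploaded
    risk_level: int        # flag a moderator sets during the ~30-second review

def visible_to(video: Video, viewer_country: str) -> bool:
    """A Risk 4 video is shown only in its upload country; everything
    else remains eligible for the global audience."""
    if video.risk_level >= RISK_4:
        return viewer_country == video.uploader_country
    return True

def for_you_candidates(videos: list[Video], viewer_country: str) -> list[Video]:
    """Filter the recommendation pool: flagged videos never leave home."""
    return [v for v in videos if visible_to(v, viewer_country)]
```

The point of sketching it out is the scale involved: one boolean check, set in half a minute of human review, is the difference between a potential audience of a billion users and a ceiling of 5.5 million.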

TikTok told Netzpolitik: “This approach was never intended to be a long-term solution and although we had a good intention, we realised that it was not the right approach.”

There are two big questions this practice raises: First, how the hell are you supposed to judge whether someone is sufficiently disabled, queer, fat, whatever, to warrant flagging for suppression in under 30 seconds? But second, and arguably more important: Why is this the way you choose to deal with cyberbullying on your platform?! Why privilege cyberbullies over the people just making content (and therefore MONEY) for you?

I can’t believe this has to be spelled out for a company worth literal billions: A better way to deal with bullying on your platform might be to actually ban the people being any shade of terrible to other users, instead of suppressing the users just trying to have fun.
