Instagram's Robot Moderators Are Probably Just as Prejudiced Against Plus-Sized Bodies as Humans


The Turing test deems a machine truly intelligent when a human can’t tell it apart from an actual person. But even as AI becomes as efficient as humans at jobs like removing flagged posts from social media, it’s likely ingrained with all the prejudices of its human creators.

For example, many plus-sized influencers on Instagram report that their pictures are removed far more frequently than those of others who post similar content. Katana Fatale told BuzzFeed that a photo she took on vacation in Hawaii, a side view of her naked body, was removed for violating Instagram’s terms, even though it didn’t actually break any of Instagram’s policies and closely resembled images that influencers like Emily Ratajkowski post to millions of likes:

“She had followed Instagram’s community guidelines, which ban female nipples, sexual acts, genitals, and close-ups of nude butts…So Fatale was confused as to why she was now being told that further violations could see her account taken away, especially when other women seemed to be able to post similar images with zero issues.”

According to experts, the reason for this targeting most likely lies in a mix of human and AI biases. Most social media companies rely on a blend of human moderators and artificial intelligence to determine which flagged images get removed. Over time, the AI “learns” which content to remove by scanning millions of images, training itself to recognize and filter potentially offensive content like nipples or pornography. But if that training data includes relatively few images of plus-sized bodies, the system may never “learn” which of those bodies break the rules and which don’t.
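To see how that kind of imbalance can produce lopsided enforcement, here’s a minimal, hypothetical sketch in Python using scikit-learn. Everything in it is invented for illustration: the synthetic “image features,” the two body-type groups, and the numbers have nothing to do with Instagram’s actual system, which is vastly more complex. It simply shows that a classifier trained mostly on one group’s data can end up wrongly flagging the underrepresented group’s rule-abiding posts far more often.

```python
# Hypothetical sketch: underrepresentation in training data skewing an
# automated moderation model. All features, groups, and numbers are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Generate synthetic 'image features' for one body-type group.

    Half the examples actually violate policy (label 1), half comply
    (label 0). `shift` moves the group's feature distribution, standing in
    for how differently the model 'sees' different bodies.
    """
    violating = rng.normal(loc=1.0 + shift, scale=1.0, size=(n // 2, 5))
    compliant = rng.normal(loc=-1.0 + shift, scale=1.0, size=(n // 2, 5))
    X = np.vstack([violating, compliant])
    y = np.array([1] * (n // 2) + [0] * (n // 2))
    return X, y

# Group A dominates the training set; group B is barely represented
# and its feature distribution is shifted relative to group A.
X_a, y_a = make_group(10_000, shift=0.0)
X_b, y_b = make_group(200, shift=1.5)

model = LogisticRegression().fit(
    np.vstack([X_a, X_b]), np.concatenate([y_a, y_b])
)

# Compare the false-flag rate (compliant posts wrongly removed) per group.
for name, shift in [("well-represented group", 0.0),
                    ("underrepresented group", 1.5)]:
    X_test, y_test = make_group(2_000, shift=shift)
    preds = model.predict(X_test)
    compliant = y_test == 0
    print(f"{name}: {preds[compliant].mean():.1%} of compliant posts flagged")
```

Run as-is, the toy model flags a much larger share of the underrepresented group’s compliant posts, purely because the training set never taught it what that group’s rule-abiding images look like.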

In March, Lizzo reported a similar problem on TikTok, asking why a video of her in a bikini was removed from the platform when videos of people in bikinis are all over the app. The problem could be fixed, but experts say doing so would be expensive and difficult, which is likely why social media companies would prefer to just let the robots keep their biases. Users across many platforms say their bodies have been singled out for censorship, though there’s no data that specifically proves AI has learned to filter out larger bodies.

However, if Microsoft’s experiment in machine learning, a chatbot called Tay that Twitter users taught to be vilely racist within hours, proved anything, it’s that our machines are only as good as the deeply flawed humans who create and teach them.
