Instagram's Robot Moderators Are Probably Just as Prejudiced Against Plus-Sized Bodies as Humans

Image: AP

The Turing test declares artificial intelligence to be truly intelligent when a human can’t tell a machine from an actual person. But even as AI becomes as efficient as humans at jobs like removing flagged posts from social media, it most likely carries all the prejudices of its human creators.


For example, on Instagram, many plus-sized influencers report that their pictures are removed far more frequently than those of others who post similar content. Katana Fatale told BuzzFeed that a picture she took on vacation in Hawaii, a side view of her naked body, was removed for violating Instagram’s terms even though it did not actually breach any of Instagram’s policies and was quite similar to images frequently posted by influencers like Emily Ratajkowski that earn millions of likes:

“She had followed Instagram’s community guidelines, which ban female nipples, sexual acts, genitals, and close-ups of nude butts...So Fatale was confused as to why she was now being told that further violations could see her account taken away, especially when other women seemed to be able to post similar images with zero issues.”


The reason for this targeting, according to experts, most likely lies in a mix of human and AI biases. Most social media companies rely on a blend of human and automated moderation to decide which flagged images get removed, and over time the AI “learns” which content to remove by scanning millions of images to recognize and filter potentially offensive content, like nipples or pornography. But if the AI isn’t shown much content featuring plus-sized bodies, it may never “learn” which of those bodies break the rules and which do not.
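Here is a minimal sketch of how that can happen, using entirely synthetic data and a simple scikit-learn classifier; none of this reflects Instagram’s actual pipeline, and the feature names are hypothetical. The point is only that if the human removal decisions a model learns from over-remove one group, the model reproduces that pattern.

```python
# Hypothetical sketch: a moderation model inheriting bias from its training labels.
# All data here is synthetic; this is not any real platform's system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

violates = rng.integers(0, 2, n)        # does the post actually break policy? (0/1)
plus_size = rng.random(n) < 0.05        # deliberately under-represented attribute

# Biased "human moderator" labels: real violations get removed, but
# non-violating posts showing plus-sized bodies are also removed 30% of the time.
removed = (violates == 1) | (plus_size & (rng.random(n) < 0.3))

X = np.column_stack([violates, plus_size])
model = LogisticRegression().fit(X, removed)

# Same non-violating content, with and without the plus-size attribute:
test = np.array([[0, 0], [0, 1]])
print(model.predict_proba(test)[:, 1])  # predicted removal probability per group
```

Run it and the second probability comes out far higher than the first, even though neither example violates the (made-up) policy: the model has simply memorized the skew in its labels.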

In March, Lizzo reported a similar problem on TikTok, asking why a video in which she wore a bikini was removed from the platform when videos of people in bikinis are prevalent on the app. Experts say the problem could be fixed, but remedying it would be expensive and difficult, which is likely why social media companies would prefer to just let the robots keep their biases. Users on many platforms say their bodies have been singled out for censorship, though there is no data that specifically proves AI has learned to filter out larger bodies.

However, if Microsoft’s experiment in machine learning, a chatbot called Tay that Twitter users taught to be vilely racist in a matter of hours, proved anything, it’s that our machines are only as good as the deeply flawed humans who create and teach them.


DISCUSSION

As an AI researcher, I have pretty strong opinions about this, but tl;dr: the models are almost certainly biased and I don’t think this problem is going to go away.

First of all, all neural networks do is learn EXACTLY what they are trained on. If you give them biased data, they are going to learn that bias. That’s how they work: they have mechanisms for finding millions of tiny trends in the training data and exploiting them to make predictions. If there’s a large trend in the data, say a bias against plus-sized bodies, the model will absolutely pick it up. That’s what they’re designed to be good at.
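A toy illustration of that point, with made-up data: give a small network a feature that merely co-occurs with the label and it will happily lean on it, because a correlation in the training set is all it ever sees.

```python
# Toy example: a network exploiting a spurious correlation in synthetic data.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
n = 5_000

signal = rng.integers(0, 2, n)                # the thing we'd like it to learn
spurious = np.where(rng.random(n) < 0.9,      # a feature that merely co-occurs
                    signal, 1 - signal)       # with the label 90% of the time
noise = rng.random(n)

X = np.column_stack([spurious, noise])        # note: `signal` itself is NOT a feature
y = signal

net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=1000, random_state=0).fit(X, y)

# Scores ~90% by leaning entirely on the proxy feature -- it has learned
# the correlation in the data, not any underlying truth.
print(net.score(X, y))
```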

Secondly, there are a lot of people out there doing machine learning who really don’t understand what they’re doing. It’s a really hot field right now and doesn’t require a ton of education to try out some of these techniques. As a result, a significant portion of the AI work being done is by people who don’t understand my first point. You see this in a lot of tech news, where people interpret a network learning something as proof that the thing it learned is meaningful. They believe the network is finding “universal truths” about whatever they’re using it for, or “understanding” the subject, when in reality it is just exploiting biases in the training data.

Third, honestly, my experience is that there’s a surprisingly large number of right-wing people doing AI research. For all the criticism academia gets for being the ivory-tower liberal elite, there’s an appreciable number of researchers who are aware of all of the above yet believe it’s a good thing, exposing inherent truths about the world.

Also, in every seminar I’ve sat through on dealing with biases in machine learning, everyone has a lot to say, but it largely all goes out the window when people return to their day-to-day work.