Unsurprisingly, Robots Are Just as Racist as Their Human Creators


Less than two weeks ago, Microsoft laid off its human journalists in favor of AI-powered algorithms, which almost immediately proved to be racist, though not quite as quickly as Microsoft’s 2016 chatbot, Tay, which began spouting racial epithets on Twitter within 24 hours of its launch. (You’d think they’d have learned this lesson the first time, but no.) Microsoft’s robot journalists fared no better than its robot social media user: they confused Little Mix member Jade Thirlwall with her bandmate Leigh-Anne Pinnock, running a photo of Pinnock as the header image on a story that was, ironically, about Thirlwall’s experiences with racism.

On the podcast No Country for Young Women, Thirlwall spoke about being assaulted by classmates over her Yemeni and Egyptian heritage, saying she’d been “pinned down in the toilets” by racists at school. MSN.com’s news-sourcing algorithm quickly picked up the story and attached a photograph of Pinnock, who is of Barbadian and Jamaican descent. Thirlwall was quick to point out that the mix-up wasn’t just a robot glitch: it mirrors the exact same racism she and Pinnock constantly face from human journalists:

“This shit happens to @leighannepinnock and I ALL THE TIME that it’s become a running joke,” Thirlwall tweeted. “It offends me that you couldn’t differentiate the two women of colour out of four members of a group … DO BETTER!”

MSN.com doesn’t produce its own journalism; instead, it curates and reposts news from other sources on Microsoft’s website, then splits the ad revenue with the original outlets. On May 30, the company announced its decision to lay off the hundreds of humans doing this work around the world and replace them with artificial intelligence. Asked whether the company now regrets that decision and what plans Microsoft has to course-correct and avoid future technological racism, the company said basically nothing:

“As soon as we became aware of this issue, we immediately took action to resolve it and have replaced the incorrect image,” a spokesperson said in a statement.

The news of Microsoft’s terrible technology comes on the heels of IBM announcing, in a letter to Congress, that the company will no longer offer or develop facial recognition or analysis software, as studies have shown the technologies are not only biased but also potential tools for “violations of basic human rights and freedoms” in the hands of law enforcement. Meanwhile, plus-sized influencers say their content is frequently and unfairly flagged and removed by Instagram’s AI-powered moderators.

This is probably completely unconnected to the white male overlords of Silicon Valley, who have populated their AI research labs with overwhelmingly white scientists and fed their algorithms data scraped from crowd-sourced sites like Wikipedia, whose editors are predominantly white. But it is awfully odd that their little Frankenstein algorithms suck in such specific ways!
