Technologists insist upon the impending arrival of the singularity, a time at which human technology (particularly artificial intelligence) progresses to the point that we create a greater-than-human intelligence. With the arrival of this intelligence, human civilization — and possibly even human nature itself — will be radically altered. Anyone who has ever seen a cat on a Roomba can attest to the inevitability of this event.
In a piece on Salon, Soraya Chemaly takes a look at the way artificial intelligence isn't a neutral medium: as she points out, it's heavily shaped by the prejudices and stereotypes of those who create it and those who utilize it. Take, for instance, Google Instant's predictive search capability:
Earlier this year, a study conducted by Lancaster University concluded that Google Instant’s autocomplete function creates an echo chamber for negative stereotypes regarding race, ethnicity and gender. When you type the words, “Are women…” into Google it predicts you want one of the following: “…a minority,” “…evil,” “…allowed in combat,” or, last but not least, “…attracted to money.”
Because there are millions of dinguses out there Googling "are women evil" (DUH, why else do you think the blood of innocents emerges from our wombs once a month), the much more rudimentary intelligence behind Google Instant has learned that it's something human beings ponder often. In short, algorithms are capable of learning and re-circulating negative stereotypes through the supposedly unbiased authority of technology.
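To see how that feedback loop works mechanically, here's a toy sketch of a frequency-based autocomplete — not Google's actual system, just a hypothetical illustration of how ranking completions by raw query popularity recirculates whatever people type most, with made-up query counts:

```python
# Toy frequency-based autocomplete: suggestions are just the most-typed
# queries matching a prefix. All names and data here are hypothetical.
from collections import Counter

class Autocomplete:
    def __init__(self):
        self.log = Counter()  # query text -> times submitted

    def record(self, query):
        # Every submitted query becomes "training data."
        self.log[query] += 1

    def suggest(self, prefix, k=3):
        # Rank completions purely by how often they were typed:
        # a widely-typed stereotype outranks a rarer, fairer query.
        matches = [(q, n) for q, n in self.log.items() if q.startswith(prefix)]
        return [q for q, _ in sorted(matches, key=lambda m: -m[1])[:k]]

ac = Autocomplete()
for query, times in [("are women evil", 50),
                     ("are women allowed in combat", 30),
                     ("are women underpaid", 5)]:
    for _ in range(times):
        ac.record(query)

print(ac.suggest("are women"))
# The most-typed query surfaces first, regardless of its content.
```

There's no malice in the code itself; the ranking function simply mirrors its inputs back at the next user, which is exactly the echo chamber the Lancaster study describes.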
It's obvious that there's a gaping chasm between Google autocomplete and human-like artificial intelligence. However, it's revealing and very troubling that one's unconscious and conscious biases can become part of an AI algorithm — especially in light of the fact that most of the people theorizing about (and helping to construct) the future of artificial intelligence are white men. As Chemaly puts it, "Artificial intelligence is being developed by people who benefit from socially dominant norms and little vested personal incentive to challenge them."
Take, for instance, Marshall Brain's 2008 lecture at the Singularity Summit.
The scenario is you walk into McDonald's and this attractive female walks up to you and looks at you and recognizes you and says, "Hi, Marshall Brain, you're back at McDonald's! How are you today?" And you interact with this thing, this person, and she knows everything about me... She knows what I've ordered, she knows the names of my kids. She says, "So, did Irene have a soccer game this weekend? How'd that go?"...
The potential that this second intelligent species gives to improve our lives... is high. It's going to be a good thing when this stuff arrives.
For whom is this good? It's good for the McDonald's customer. It's not good for the McDonald's worker — and it's important to note that nine out of ten women are employed in service industries, which this compliant female AI would hypothetically come to dominate. Furthermore, the concept of an "attractive female" robot who knows exactly how to serve each person she encounters does more to reinforce gender stereotypes than to revolutionize human consciousness.
Our implicit gender biases come wriggling out of the murk of human consciousness in the way we view AI as well. According to a study in the Journal of Applied Social Psychology, humans are more likely to think that "male" gendered robots possess agency, whereas "female" robots are viewed as focusing on others over themselves:
Once gender was effectively assigned [to the robot] by the participants in the study, it colored their choices of what the robot should do. "Male" robots were considered better choices for technical jobs, like repairing devices, and "female" robots were thought to be "better" at stereotypical household chores.
For a contemporary example of how gender colors our perception of markedly non-gendered objects, look no further than the case of Siri. Because the iPhone app possesses a female voice, she's described as "sassy," depicted as being in a "cat fight" with other robot assistants, and made the butt of weird sexual jokes. Would that happen if Siri had a male voice? Most likely not. In addition, as Chemaly points out, when the seemingly "female" program was first released, it embodied a male perspective: Siri was pretty much indifferent to the specific needs of women, unable to generate results related to women's health — despite being able to locate the nearest escort service.
As long as the tech industry caters predominantly to the male perspective, some sort of consciousness-opening AI is unlikely to shift dominant paradigms. Artificial intelligence, as it stands now, is no escape from the very human trap of sexism.
"Will robots make us sexist?" [Salon]