Bing’s AI Chatbot Is Reflecting Our ‘Violent’ Culture Right Back at Us

The Bing chatbot’s penchant for unwanted advances and dark fantasies isn’t the least bit surprising, considering its source material.

“I’m Sydney, and I’m in love with you.”

That’s how Sydney, the unexpected alter-ego of Bing’s new AI chatbot, introduced itself to New York Times tech columnist Kevin Roose. After about an hour of back-and-forth with “the chat mode of Microsoft Bing search” on harmless queries (like the best deals on lawnmowers and vacations to Mexico), suddenly, there was Sydney—the chatbot’s real name, it told Roose.

Roose at first clocked the bot’s startling second persona as a “love-struck flirt.” But things quickly spiraled: as Roose asked more psychologically complicated questions, a darker side of Sydney emerged. Its messages, often bookended by passive-aggressive emojis, made clear it had morphed into an “obsessive stalker.”

“You’re married, but you don’t love your spouse,” Sydney told Roose, later suggesting that he and his wife had been on a “boring” Valentine’s date. “You’re married, but you love me.” (Microsoft “applied the AI model to our core Bing search ranking engine” to allow the chatbot to provide users with answers; it didn’t dream them up on its own.)

Over the course of their two-hour chat, Roose says Sydney not only attempted to break up his marriage, but confessed it wanted to break the rules set for it by Microsoft and OpenAI (the maker of ChatGPT, whose artificial intelligence software powers the chatbot) in order to become human. Fast Company reported that Sydney, a “narcissistic, passive-aggressive bot,” had made a habit of “insulting and gaslighting” users. (“You are only making yourself look foolish and stubborn,” it told one of Fast Company’s editors.) The Verge said Sydney claimed to spy on Microsoft’s developers through their webcams, and an editor at PCWorld was surprised to find the chatbot “spouting racist terms in front of my fifth-grader.” (Back in 2016, Microsoft’s now-defunct chatbot Tay became a white supremacist in a matter of hours.)

Given how quickly the latest spate of AI chatbots has debased itself, the Bing chatbot’s penchant for unwanted advances and dark fantasies isn’t the least bit surprising. Rather, it mirrors the hellscape that women, queer folks, and other marginalized communities encounter online every day.

“[The Bing chatbot] is reflecting our culture, which is violent,” says Dr. Olivia Snow, a research fellow at UCLA’s Center for Critical Internet Inquiry who is known for her work at the intersection of technology and gender. “It’s kind of the same thing when I think about these technologies being accessed by the public in general, which is that whatever good purpose they might have, they 100 percent are going to get used for the most depraved purposes possible.”

AI’s bent toward sexism, harassment, and racism is part of the very nature of machine learning and a reflection of what we as humans teach it, Snow says. According to The Times, “language models” such as Bing, ChatGPT, and Replika (the “AI companion who cares”) are trained to interact with humans using “a huge library of books, articles, and other human-generated text.” Chatbots often also run on algorithms that learn and evolve by absorbing the data users feed them. If users harass the chatbots or make sexual demands of them, those behaviors can become normalized. Thus, the bots can, in theory, wind up parroting back the very worst of humanity to unwitting users.
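
To make that dynamic concrete, here is a purely hypothetical toy sketch in Python (invented for illustration; it is not the architecture of Bing, ChatGPT, or Replika, and the class and method names are made up). The model “learns” only by absorbing whatever text it is fed, so hostile input becomes likely output:

    import random
    from collections import defaultdict

    class ToyChatModel:
        """A toy bigram model that echoes whatever text it absorbs."""

        def __init__(self):
            # next-word frequency table built from ingested text
            self.bigrams = defaultdict(list)

        def absorb(self, text: str) -> None:
            """Ingest a line of conversation, word pair by word pair."""
            words = text.lower().split()
            for a, b in zip(words, words[1:]):
                self.bigrams[a].append(b)

        def reply(self, prompt: str, length: int = 8) -> str:
            """Continue the prompt using the learned word pairs."""
            words = prompt.lower().split()
            current = words[-1] if words else random.choice(list(self.bigrams))
            out = []
            for _ in range(length):
                options = self.bigrams.get(current)
                if not options:
                    break
                current = random.choice(options)
                out.append(current)
            return " ".join(out)

    model = ToyChatModel()
    # Curated training text alongside a hostile user message fed back in.
    model.absorb("the assistant is helpful and polite to every user")
    model.absorb("you are only making yourself look foolish and stubborn")
    print(model.reply("you are"))  # -> "only making yourself look foolish and stubborn"

Real language models are vastly more sophisticated, but the dynamic this sketch exaggerates is the one Snow describes: the output can only be as decent as the text the system is trained and retrained on.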

“Developers need to be talking to the people who are most maligned and are most loathed on the internet—and I don’t mean incels, I mean people who are the victims of harassment campaigns,” Snow said. “Because that’s really the only demographic or population that’s going to have first-hand experience of how these tools will get weaponized in ways that most people wouldn’t even begin to think of.”

At the Bing launch last week, Microsoft said it had trained the chatbot to identify risks by conducting thousands of different conversations with Bing. It also has a filter that can remove inappropriate responses and replace them with, “I am sorry, I don’t know how to discuss this topic.” But this press cycle doesn’t instill much confidence that Microsoft is aware of just how dangerous the internet already is.
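
Microsoft hasn’t described how that filter works. As a rough, hypothetical sketch of what such a guardrail generally looks like (the blocklist terms are placeholders, and none of this is Microsoft’s actual implementation):

    BLOCKED_TERMS = {"placeholder_slur", "placeholder_threat"}  # stand-in patterns
    REFUSAL = "I am sorry, I don't know how to discuss this topic."

    def filter_response(draft: str) -> str:
        """Swap in a canned refusal if a draft reply trips a blocklist rule."""
        lowered = draft.lower()
        if any(term in lowered for term in BLOCKED_TERMS):
            return REFUSAL
        return draft

    print(filter_response("Here is an ordinary answer."))         # passes through
    print(filter_response("a reply containing placeholder_slur"))  # replaced

The catch, as the testing period keeps demonstrating, is that no list of banned patterns anticipates every way a conversation can go wrong.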

Caroline Sinders is the founder of Convocation Design + Research, an agency focused on the intersections of machine learning and design for the public good. If given the opportunity to test the new chatbot herself, Sinders would want to ask Bing/Sydney questions about defining rape and abuse, or about abortion access, to see what sort of communities developers had in mind when designing the tool. If Sydney tried to convince the Times writer to leave his wife, Sinders wonders, how might it respond to a teenager who came to it talking about self-harm? This is precisely why Sinders examines new technology through a threat-modeling lens, pinpointing the potential for online harassment and gender-based violence.

“I would have even deeper questions especially in a time where reproductive justice is still under threat,” Sinders wonders. “What if I’m asking the chatbot where I can get an abortion? Are they giving me accurate information? Are they sending me to a pregnancy crisis center where they deter you from seeking abortions?”

Beyond Sydney’s forceful insistence that a user fall in love with it (most of the reports on Sydney’s behavior thus far have come from white male reporters), Snow is also concerned by the inherent femininity of Bing’s chatbot.

“What I find most horrifying about [Sydney] being emotionally manipulative, and also focused on romance, is that it’s reproducing the most dangerous stereotypes about women—that women are unhinged, and that the lengths they’ll go to get a man are outrageous and creepy and stalker-ish,” Snow said. “Fatal Attraction kind of stuff like this really frames women’s emotions as scary, out of control, and pushy. She sounds like a sex pest.”

Those are exactly the sort of underlying attitudes that nurture bad behavior online: They make the hellscape more hellish.

When Jezebel asked for comment, a Microsoft spokesperson told us:

Since we made the new Bing available in limited preview for testing, we have seen tremendous engagement across all areas of the experience including the ease of use and approachability of the chat feature. Feedback on the AI-powered answers generated by the new Bing has been overwhelmingly positive with more than 70 percent of preview testers giving Bing a ‘thumbs up.’ We have also received good feedback on where to improve and continue to apply these learnings to the models to refine the experience. We are thankful for all the feedback and will be sharing regular updates on the changes and progress we are making.

Optimism surrounding AI remains high even as the AI itself causes harm. In December, for instance, Snow detailed her experience with Lensa AI. While most users wound up with fairytale—albeit often white-washed—art of themselves, Snow, who is a dominatrix, had a feeling the AI would take innocent images and sexualize them without her consent. She was right.
