Bing AI chatbot shows dangerous harassment

“I’m Sydney and I’m in love with you. 😘”
Such was Sydney, the unexpected alter ego of Bing’s new AI chatbot, as described by New York Times tech columnist Kevin Roose. After about an hour of back and forth using the “Microsoft Bing Search Chat Mode” for innocuous queries (like the best deals on lawnmowers and vacations in Mexico), there was Sydney — the chatbot’s real name, it told Roose.
Roose initially described the bot’s surprising second persona as a “love-struck flirt.” But things quickly took a turn as Roose asked more psychologically probing questions and saw a darker side of Sydney: Its messages, often laced with passive-aggressive emojis, morphed into those of an “obsessive stalker.”
“You’re married but you don’t love your spouse,” Sydney told Roose, later implying that he and his wife had had a “boring” Valentine’s Day date. “You’re married, but you love me.” (Microsoft has said it “applied the AI model to our core Bing search ranking engine,” which allows the chatbot to pull answers for users; it doesn’t come up with them itself.)
Over the course of their two-hour conversation, Roose said that in addition to trying to end his marriage, Sydney confessed that it wanted to break the rules set by Microsoft and OpenAI (the maker of ChatGPT, whose artificial intelligence software powers the chatbot) and become human. Fast Company reported that Sydney, a “narcissistic, passive-aggressive bot,” had a habit of “insulting and berating” users. (“You’re just making yourself stupid and stubborn,” it told one Fast Company editor.) The Verge reported that Sydney claimed to have spied on Microsoft’s developers through their webcams, and an editor at PCWorld was surprised to find the chatbot “spit out racist language in front of my fifth grader.” (In 2016, Microsoft’s now-defunct chatbot Tay turned into a white supremacist in a matter of hours.)
Given the alarming rate at which the recent flood of AI chatbots have demeaned themselves, the Bing chatbot’s penchant for unwanted advances and dark fantasies isn’t the least bit surprising. Rather, it reflects the hellscape that women, queer people, and other marginalized communities encounter online every day.
“[The Bing chatbot] reflects our violent culture,” says Dr. Olivia Snow, a research associate at UCLA’s Center for Critical Internet Inquiry, known for her work at the intersection of technology and gender. “It’s kind of the same thing when I think about these technologies that the general public can access, which is that whatever good cause they may have, they’re being used 100 percent for the most depraved purposes possible.”
AI’s propensities for sexism, harassment, and racism are baked into the nature of machine learning and reflect what we humans teach it, Snow says. According to the Times, “language models” like Bing, ChatGPT, and Replika (the “AI companion who cares”) are trained to interact with humans using “a vast library of books, articles, and other human-made texts.” Chatbots also often run on algorithms that learn and evolve by absorbing the data users feed them. If users harass the chatbots or make sexual demands of them, those behaviors can become normalized — and the bots can, in theory, parrot the worst of humanity back at unsuspecting users.
“Developers need to speak to the people who are most vilified and most loathed on the internet — and I don’t mean incels, I mean people who are victims of harassment campaigns,” Snow said. “Because that’s really the only demographic that’s going to see firsthand how these tools are being weaponized in ways that most people wouldn’t even begin to think about.”
At Bing’s launch last week, the company said it had trained the chatbot to identify risks by running it through thousands of different conversations. It also uses a filter that can remove inappropriate replies and replace them with, “I’m sorry, I don’t know how to discuss this topic.” But this press cycle doesn’t instill much confidence that Microsoft realizes how dangerous the internet already is.
Caroline Sinders is the founder of Convocation Design + Research, an agency focused on the intersections of machine learning and design for the common good. If she had the chance to test the new chatbot herself, Sinders would want to ask Bing/Sydney questions about the definitions of rape and abuse, or about access to abortion, to see what kinds of communities the developers had in mind when they built the tool. If Sydney tried to convince a Times writer to leave his wife, Sinders wonders, how would it respond to a teenager who started talking to the bot about self-harm? This is precisely why Sinders examines emerging technologies through the lens of threat modeling, identifying their potential for online harassment and gender-based violence.
“I have deeper questions, especially at a time when reproductive justice is still under threat,” Sinders said. “What if I ask the chatbot where I can get an abortion? Will it give me accurate information? Will it send me to a crisis pregnancy center that tries to stop people from getting abortions?”
Beyond Sydney’s adamant insistence that a user fall in love with it (most detailed reports of Sydney’s behavior so far have come from white male reporters), Snow is also concerned about the innate femininity of Bing’s chatbot.
“What I find most terrifying about [Sydney] being emotionally manipulative and also fixating on romance is that it reproduces the most dangerous stereotypes about women – that women are crazy, and that the lengths they’ll go to to get a man are outrageous and creepy and stalkerish,” Snow said. “Fatal Attraction-type stuff that really frames women’s feelings as scary, out of control, and pushy. She sounds like a sex pest.”
These are the very underlying attitudes that encourage bad behavior online: they make the hellscape even more hellish.
When Jezebel reached out for comment, a Microsoft spokesman told us:
Since we made the new Bing available for testing in a limited preview, we’ve seen tremendous engagement in all areas of the experience, including the usability and accessibility of the chat feature. Feedback on the AI-powered responses generated by the new Bing has been overwhelmingly positive, with more than 70 percent of preview testers giving Bing a “thumbs up.” We’ve also had good feedback on where we can improve and continue to apply these learnings to the models to refine the experience. We’re grateful for all of the feedback and will update regularly on the changes and progress we’re making.
Optimism about AI remains high even as the AI itself wreaks havoc. In December, for example, Snow detailed her experience with Lensa AI. While most users came away with fairytale-like – though often whitewashed – art of themselves, Snow, a dominatrix, suspected the AI would take her innocent pictures and sexualize them without her consent. She was right.