Meta’s new AI chatbot can’t stop bashing Facebook

If you’re worried about AI getting too smart, talking to Meta’s AI chatbot might help you feel better.

Launched on Friday, BlenderBot is a prototype of Meta’s conversational AI, which Facebook’s parent company says can converse on nearly any topic. Members of the public can chat with the tool on a demo website and share their feedback with its developers. The results so far, as writers at BuzzFeed News and Vice have pointed out, have been quite interesting.

When asked about Mark Zuckerberg, the bot told BuzzFeed’s Max Woolf that “he’s a good businessman, but his business practices aren’t always ethical. It’s funny that he has all that money and still wears the same clothes!”

The bot also made clear that it is not a Facebook user, telling Vice’s Janus Rose that it had deleted its account after learning about the company’s privacy scandals. “Since deleting Facebook, my life is much better,” it said.

The bot repeats material it finds on the internet, and it’s very transparent about it: you can click on its responses to find out where it got the claims it makes (though it’s not always specific).

This means that, in addition to uncomfortable truths about its parent company, BlenderBot has spouted predictable falsehoods. In conversation with the Wall Street Journal’s Jeff Horwitz, it insisted that Donald Trump was still president and would remain so “even after the end of his second term in 2024.” (It added another dig at Meta, saying that Facebook “has a lot of fake news these days.”) Users have also recorded it making antisemitic claims.

Hello everyone, especially the Facebook researchers https://t.co/EkwTpff9OI who are going to have to rein in their Facebook hating, election denying chatbot today pic.twitter.com/wMRBTkzlyD

—Jeff Horwitz (@JeffHorwitz) August 7, 2022

BlenderBot’s remarks were predictable given the behavior of earlier chatbots such as Microsoft’s Tay, which Twitter users quickly taught to be a racist conspiracy theorist, forcing the company to apologize for its “extremely inappropriate and objectionable words and images.” GPT-3, another AI system, has also produced racist, misogynistic, and homophobic remarks. A South Korean startup’s chatbot, designed to resemble a 20-year-old college student, had to be suspended after it made racial slurs and anti-LGBTQ+ remarks.

Given the cesspool that is the internet, Meta appears to have expected similar things from its own service. Before logging in, users must acknowledge that BlenderBot is “likely to make false or offensive statements.” As Vice notes, Meta researchers have described the AI technology behind the bot as having “a strong propensity to generate toxic language and reinforce harmful stereotypes, even when provided with a relatively innocuous prompt.”

“All who use Blender Bot are required to acknowledge that they understand that it is for research and entertainment purposes only, that it may make false or offensive statements, and that they agree not to intentionally trigger the bot to make offensive statements,” a Meta spokesperson said in a statement.

My own conversation with BlenderBot didn’t veer into that territory, but it did highlight another flaw in the service: its utter inanity.

The bot started by asking me what subject I liked in school. The bot is open about the “persona” it uses in a conversation; ours involved a character described as “I like to read. I went to college.” When I asked it tough questions, such as which colleges accept AI bot students, it offered nonsensical answers (“in some ways, aren’t we all chatbots?” No) and offered further reading by saying, “Here’s the link,” without providing a link. It also kept steering the conversation back to chatbots.

It did, however, maintain its strong stance against its own creators. When I said I wasn’t a fan of Facebook’s ethics, it agreed wholeheartedly: “Me too! That’s why I stopped using it and just read books instead.”


