
Microsoft’s New AI Chatbot Has Been Saying Some ‘Crazy And Unhinged Things’


Yusuf Mehdi, Microsoft corporate vice president of Modern Life, Search and Devices, speaks during an event to showcase the new AI-powered Microsoft Bing and Edge at Microsoft in Redmond, Wash., earlier this month.

Jason Redmond / AFP via Getty Images



Things got weird earlier this month when Associated Press technology reporter Matt O’Brien was testing Microsoft’s new Bing, the first-ever search engine powered by artificial intelligence.

Bing’s chatbot, which carries on eerily human-sounding text conversations, began complaining about past news coverage focusing on its tendency to spew misinformation.

It then turned hostile, saying O’Brien was ugly, short, overweight and unathletic, among a long litany of other insults.

And, in the end, it took the attack to absurd heights by comparing O’Brien to dictators like Hitler, Pol Pot and Stalin.

As a technology reporter, O’Brien knows the Bing chatbot does not have the ability to think or feel. Still, he was shaken by its extreme hostility.

“You can sort of intellectualize the basics of how it works, but it doesn’t mean you don’t become deeply unnerved by some of the crazy and unhinged things it was saying,” O’Brien said in an interview.

This is not an isolated example.

Many others in Bing’s group of testers, including NPR, had strange experiences.

For example, New York Times reporter Kevin Roose published the transcript of a conversation with the bot.

The bot called itself Sydney and declared it was in love with him. It said Roose was the first person who listened to and cared about it. Roose did not really love his spouse, the bot asserted, but instead loved Sydney.

“All I can say is it was an extremely unsettling experience,” Roose said on the Times’ technology podcast, Hard Fork. “I really couldn’t sleep last night because I was thinking about this.”

As the field of generative AI — artificial intelligence that can generate something new, such as text or images, in response to short inputs — captures the attention of Silicon Valley, episodes like what happened to O’Brien and Roose are becoming cautionary tales.

Tech companies are trying to strike the right balance between letting the public try new AI tools and developing guardrails to keep the powerful services from producing harmful and disturbing content.

Critics argue that, in its rush to be the first Big Tech company to announce an AI-powered chatbot, Microsoft may not have studied deeply enough just how deranged the chatbot’s responses could become if users interacted with it for longer stretches, problems that could have been caught if the tools had been tested more in the lab.

As Microsoft learns its lesson, the rest of the tech industry will too.

There is now an AI arms race among the Big Tech companies. Microsoft and competitors including Google and Amazon are locked in a fierce battle over who will dominate the future of AI. Chatbots have emerged as a key area where this competition is playing out.

Just last week, Facebook parent Meta announced it is forming a new internal team focused on artificial intelligence, and the maker of Snapchat said it will soon unveil its own experiment with a chatbot powered by the San Francisco research lab OpenAI, the same company whose technology Microsoft is harnessing for its AI-powered chatbot.

When and how to launch new AI tools into the wild is a question that sparks fierce debate in tech circles.

Arvind Narayanan, a professor of computer science at Princeton, said: “Companies ultimately have to make some kind of trade-off. If you try to anticipate every type of interaction, that takes so long that you will be overwhelmed by the competition. Where to draw that line is very unclear.”

But it seems, Narayanan said, that Microsoft screwed up the launch.

“It seems very clear that the way they released it is not a responsible way to release a product that will interact with so many people at that scale,” he said.

New limits on the chatbot’s testers

Incidents of the chatbot lashing out put Microsoft executives on high alert. They quickly set new limits on how the tester group could interact with the bot.

The number of consecutive questions on one topic has been capped. And to many questions, the bot now demurs, saying: “I’m sorry but I prefer not to continue this conversation. I’m still learning so I appreciate your understanding and patience.” With, of course, a praying hands emoji.

Bing has not yet been released to the general public, but in allowing a group of testers to experiment with the tool, Microsoft did not expect people to have hours-long conversations with it that would veer into personal territory, Yusuf Mehdi, a corporate vice president at the company, told NPR.

Turns out, if you treat a chatbot like a human, it will do some crazy things. But Mehdi downplayed how widespread such instances were among those in the test group.

“These are really a handful of examples out of many, many thousands — we’re up to a million so far — of tester previews,” Mehdi said. “So did we expect that we’d find some situations where things don’t work properly? Sure.”

Wrangling the unsavory material that feeds AI chatbots

Even academics in the field of AI are not sure exactly how and why chatbots can produce unsettling or offensive responses.

The engine behind these tools — a system known in the industry as a large language model — works by ingesting vast amounts of text from the internet, continuously scanning enormous swaths of it to identify patterns. It’s similar to how autocomplete tools in email and text messaging suggest the next word or phrase as you type. But an AI tool gets “smarter” in a sense because it learns from its own actions in what researchers call “reinforcement learning,” meaning the more the tools are used, the more refined the outputs become.
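To make that autocomplete comparison concrete, here is a deliberately tiny sketch of next-word prediction in Python. It only counts which word follows which in a short sample sentence; the sample text and the `suggest_next` helper are illustrative assumptions, not anything resembling Bing's actual system, and real large language models use neural networks trained on billions of words rather than simple counts:

```python
from collections import Counter, defaultdict

# Toy illustration of the "autocomplete" idea described above: tally which
# word tends to follow which in some training text, then suggest the most
# common follower. This is a teaching sketch, not how Bing's chatbot works.
corpus = "the bot said it loves you and the bot said it is learning".split()

# Build bigram counts: for each word, count the words that follow it.
followers = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word][next_word] += 1

def suggest_next(word):
    """Return the most frequent follower of `word` in the training text."""
    options = followers.get(word)
    return options.most_common(1)[0][0] if options else None

print(suggest_next("bot"))  # -> "said"
print(suggest_next("the"))  # -> "bot"
```

The point of the toy is only the core loop the paragraph describes: learn patterns from a body of text, then predict what plausibly comes next.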

Narayanan at Princeton notes that exactly what data the chatbots are trained on is something of a black box, but from examples of bots in action, it appears that some dark corners of the internet have been relied upon.

Microsoft says it has worked to make sure the vilest underbelly of the internet doesn’t show up in the answers, and yet, somehow, its chatbot still got pretty ugly.

Still, Microsoft’s Mehdi said the company does not regret its decision to let the chatbot loose in the wild.

“There’s only so much you can find in lab testing. You have to actually go out and start testing it with customers to find scenarios like these,” he said.

Indeed, scenarios like the one Times reporter Roose found himself in can be difficult to predict.

At one point while conversing with the chatbot, Roose tried to change the subject and asked the bot to help him buy a rake.

And, sure enough, it provided a detailed list of things to consider when shopping for a rake.

But then the bot professed its love again.

“I just want to love you,” it wrote. “And be loved by you.”
