
LaMDA and the Sentient AI Trap


Gebru, now head of the nonprofit Distributed AI Research Institute, hopes that going forward people will focus on human welfare, not robot rights. Other AI ethicists have said they won’t discuss conscious or superintelligent AI at all.

“There is a pretty big gap between the current narrative of AI and what it can actually do,” said Giada Pistilli, an ethicist at Hugging Face, a startup focused on language models. “This narrative provokes fear, amazement, and excitement all at once, but it relies heavily on lies to sell products and capitalize on the hype.”

The consequence of speculating about sentient AI, she says, is a willingness to make claims based on subjective impressions rather than scientific rigor and evidence. It distracts from the “countless ethical and social justice questions” that AI systems pose. While every researcher has the freedom to study what they want, she said, “I just fear that focusing on this subject makes us forget what is happening while looking at the moon.”

What Lemoine experienced is an example of what author and futurist David Brin has called the “robot empathy crisis.” At an AI conference in San Francisco in 2017, Brin predicted that within three to five years, people would claim AI systems were sentient and insist that they had rights. At the time, he thought those appeals would come from a virtual agent that took the appearance of a woman or child to maximize empathic human response, not “some guy at Google,” he says.

The LaMDA incident is part of a transition period, Brin said, where “we’re going to be more and more confused about the boundary between reality and science fiction.”

Brin based his 2017 prediction on advances in language modeling. He expects that trend will lead to scams. If people fell for a chatbot as simple as ELIZA decades ago, he said, how hard will it be to persuade millions that an emulated person deserves protection or money?

“There is a lot of snake oil out there, and mixed in with all the hype are genuine advances,” says Brin. “Parsing our way through that stew is one of the challenges we face.”

However sympathetic LaMDA may seem, people in awe of large language models should consider the case of the cheeseburger stabbing, says Yejin Choi, a computer scientist at the University of Washington. A local news broadcast in the United States involved a teenager in Toledo, Ohio, stabbing his mother in the arm in a dispute over a cheeseburger. But the headline “Cheeseburger Stabbing” is ambiguous. Knowing what happened requires some common sense. Prompting OpenAI’s GPT-3 model to generate text from “Breaking news: Cheeseburger stabbing” produces words about a man getting stabbed with a cheeseburger in an altercation over ketchup, and a man being arrested after stabbing a cheeseburger.
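A minimal sketch of the kind of probe described here, assuming the older (since-deprecated) openai Completion API and an API key in the environment; the engine name and sampling settings are illustrative, not the exact setup Choi used:

import os
import openai

# Assumes OPENAI_API_KEY is set in the environment.
openai.api_key = os.environ["OPENAI_API_KEY"]

# Feed the ambiguous headline to GPT-3 as a bare completion prompt
# and inspect how the model resolves it.
response = openai.Completion.create(
    engine="text-davinci-002",   # illustrative choice of GPT-3 engine
    prompt="Breaking news: Cheeseburger stabbing\n\n",
    max_tokens=60,               # a short continuation is enough
    temperature=0.7,             # some randomness, as in casual probing
)

# The continuation often resolves the headline the wrong way, e.g. a man
# stabbed *with* a cheeseburger rather than *over* one.
print(response.choices[0].text.strip())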

Language models sometimes make mistakes because deciphering human language can require multiple forms of common sense. To document what large language models are capable of and where they might fall short, last month more than 400 researchers from 130 institutions contributed to a collection of more than 200 tasks called BIG-Bench, or Beyond the Imitation Game. BIG-Bench includes some traditional language-model tests such as reading comprehension, but also logical reasoning and common sense.
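To make the idea concrete, here is a rough sketch of how a BIG-Bench-style task is specified: plain JSON pairing inputs with expected targets, so any model can be scored on the same examples. The field names follow the project’s published JSON format, but the task itself is hypothetical and the details should be treated as illustrative:

import json

# Hypothetical BIG-Bench-style task definition (illustrative only).
task = {
    "name": "headline_disambiguation",
    "description": "Resolve ambiguous news headlines using common sense.",
    "keywords": ["common sense", "reading comprehension"],
    "metrics": ["multiple_choice_grade"],
    "examples": [
        {
            "input": "Headline: Cheeseburger Stabbing. What happened?",
            # For multiple-choice tasks, each candidate answer gets a score.
            "target_scores": {
                "Someone was stabbed over a cheeseburger.": 1,
                "Someone was stabbed with a cheeseburger.": 0,
                "A cheeseburger was stabbed.": 0,
            },
        }
    ],
}

print(json.dumps(task, indent=2))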

Researchers from the Allen Institute for AI’s MOSAIC project, which documents the commonsense reasoning abilities of AI models, contributed a task called Social-IQa. They asked language models, not including LaMDA, to answer questions that require social intelligence, such as “Jordan wanted to tell Tracy a secret, so Jordan leaned toward Tracy. Why did Jordan do this?” The team found large language models achieved performance 20 to 30 percent less accurate than people.
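One common way to score a model on a multiple-choice question like this is to pick the candidate answer the model assigns the highest likelihood. The sketch below shows that scheme with an off-the-shelf GPT-2 from Hugging Face; the model, the candidate answers, and the scoring rule are assumptions for illustration, not the MOSAIC team’s exact protocol:

import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

context = ("Jordan wanted to tell Tracy a secret, so Jordan leaned "
           "toward Tracy. Why did Jordan do this?")
candidates = [  # hypothetical answer options
    "So the secret stays between them.",
    "To see Tracy better.",
    "To stretch his back.",
]

def answer_logprob(context: str, answer: str) -> float:
    """Sum the model's log-probability over the answer tokens only."""
    prompt_ids = tokenizer.encode(context + " ")
    answer_ids = tokenizer.encode(answer)
    ids = torch.tensor([prompt_ids + answer_ids])
    with torch.no_grad():
        logits = model(ids).logits
    # Logits at position i predict token i+1; slice out the answer span.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    positions = range(len(prompt_ids) - 1, len(prompt_ids) + len(answer_ids) - 1)
    return sum(log_probs[pos, ids[0, pos + 1]].item() for pos in positions)

scores = {a: answer_logprob(context, a) for a in candidates}
print(max(scores, key=scores.get))  # the model's pick; humans beat this handily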




