
Facebook says its new AI can identify more problems faster


Few-Shot Learner was pre-trained on billions of Facebook posts and images in more than 100 languages. The system uses them to build up an internal sense of the statistical patterns of Facebook content. It is tuned for content moderation by additional training with posts or images labeled in previous moderation projects, plus simplified descriptions of the policies those posts violated.
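One plausible way to picture that tuning step is as a text-pair task: the model reads a post together with a plain-language policy description and learns to predict whether the post violates it. The sketch below is a minimal illustration of that framing, not Facebook's actual code; the multilingual model name, the example posts, and the policy wording are all assumptions.

```python
# Minimal sketch of tuning a pre-trained multilingual model to judge
# (post, policy description) pairs. Illustrative only: the model choice,
# data, and framing are assumptions, not Facebook's published method.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base", num_labels=2  # 0 = does not violate, 1 = violates
)

# Hypothetical labeled data from earlier moderation projects:
# (post text, simplified policy description, label)
examples = [
    ("Buy cheap followers now!!!",
     "Spam: content that deceptively drives engagement", 1),
    ("Lovely sunset at the beach today",
     "Spam: content that deceptively drives engagement", 0),
]

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for post, policy, label in examples:
    # Encode the post and the policy description together as one text pair
    batch = tokenizer(post, policy, truncation=True, return_tensors="pt")
    loss = model(**batch, labels=torch.tensor([label])).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```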

Once that preparation is done, the system can be directed to find new types of content, for instance to enforce a new rule or expand into a new language, with much less effort than previous moderation models required, says Cornelia Carapcea, a product manager on moderation AI at Facebook.

More conventional moderation systems might need hundreds of thousands or millions of example posts before they can be deployed, she says. Few-Shot Learner can be put to work with just dozens of examples (the "few shots" of its name) combined with simplified descriptions, or "prompts," of the new policy they relate to.
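To see why a heavily pre-trained model can get by on so few examples, consider a toy few-shot classifier that simply compares a new post's embedding, from an off-the-shelf multilingual encoder, against the average embeddings of a handful of labeled posts. Everything here (library, model name, example posts) is an illustrative assumption, not a detail of Facebook's system.

```python
# Toy few-shot classifier: a few labeled posts per class, an off-the-shelf
# encoder, and nearest-centroid matching. Illustrative assumptions throughout.
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

# The "few shots": a handful of posts labeled under a hypothetical new policy
violating = ["Example post that breaks the new rule",
             "Another post that breaks the new rule"]
benign = ["Ordinary post about dinner plans",
          "Post about last night's football match"]

# Average each group's embeddings into a class centroid
v_centroid = encoder.encode(violating).mean(axis=0)
b_centroid = encoder.encode(benign).mean(axis=0)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def flag(post: str) -> bool:
    """Flag a post if its embedding sits closer to the violating centroid."""
    e = encoder.encode([post])[0]
    return cosine(e, v_centroid) > cosine(e, b_centroid)

print(flag("Yet another post to check"))
```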

“Because it's seen so much already, learning a new problem or policy can be faster,” says Carapcea. “There's always a struggle to get enough labeled data across issues like violence, hate speech, and incitement; this allows us to react more quickly.”

Few-Shot Learner can also be directed to find categories of content without being shown any examples at all, simply by giving the system a written description of a new policy, an unusually lightweight way of interacting with an AI system. Carapcea says results are less reliable this way, but the method can quickly suggest what a new policy would sweep up, or identify posts that can be used to further train the system.
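That zero-example mode is essentially what the research community calls zero-shot classification. As a rough sketch of the idea, here is how it looks with Hugging Face's off-the-shelf zero-shot pipeline (an NLI model, not Facebook's system; the post and policy wording are invented):

```python
# Zero-shot sketch: classify a post against written policy descriptions,
# with no labeled examples. Post and label wording are invented.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

post = "Everyone should go smash that store's windows tonight"
policy_labels = [
    "incites violence or property damage",
    "harmless everyday content",
]

result = classifier(post, candidate_labels=policy_labels,
                    hypothesis_template="This post {}.")
print(result["labels"][0], result["scores"][0])  # best-matching description
```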

The impressive capabilities, and many unknowns, of giant AI systems like Facebook's prompted Stanford researchers to launch a center to study such systems, which they call “foundation models” because they appear set to become the underpinning of many tech projects. Large machine learning models are being developed for use not only in social networks and search engines, but also in industries such as finance and health care.

Percy Liang, the Stanford center's director, says Facebook's system appears to show some of the impressive power of these new models, but will also exhibit some of their trade-offs. It's exciting and useful to be able to direct an AI system to do what you want with just written text, as Facebook says it can do with new content policies, Liang says, but this ability is still poorly understood. “It's more of an art than a science,” he says.

Liang says Few-Shot Learner's speed can also have downsides. When engineers don't have to curate as much training data, they sacrifice some control over, and knowledge of, their system's capabilities. “There's a bigger leap of faith,” says Liang. “With more automation you have less potential oversight.”

Facebook’s Carapcea says that as the company develops new moderation systems, it also develops ways to check their performance for accuracy and bias.

