
Artificial intelligence may have a lot in common with the human brain


For a decade now, many of the most impressive artificial intelligence systems have been taught using huge troves of labeled data. An image might be labeled “tabby cat” or “tiger cat,” for example, to “train” an artificial neural network to correctly distinguish a tabby from a tiger. The strategy has been both spectacularly successful and sadly deficient.

Such “supervised” training requires data laboriously labeled by humans, and neural networks often take shortcuts, learning to associate the labels with minimal and sometimes superficial information. For example, a neural network might use the presence of grass to recognize a photo of a cow, because cows are typically photographed in fields.

“We are raising a generation of algorithms that are like students [who] haven’t been to class for the whole semester and then, the night before the final, they’re cramming,” said Alexei Efros, a computer scientist at the University of California, Berkeley. “They don’t really learn the material, but they do well on the test.”

Furthermore, for researchers interested in the intersection of animal and machine intelligence, this “supervised learning” may be limited in what it can reveal about biological brains. Animals, including humans, do not use labeled data sets to learn. For the most part, they explore their environment on their own, and in doing so they gain a rich and robust understanding of the world.

Now some computational neuroscientists have begun to explore neural networks trained with little or no human-labeled data. These “self-supervised learning” algorithms have proved enormously successful at modeling human language and, more recently, at image recognition. In recent work, computational models of the mammalian visual and auditory systems built using self-supervised learning models have shown a closer correspondence to brain function than their supervised-learning counterparts. To some neuroscientists, it seems as if the artificial networks are beginning to reveal some of the actual methods our brains use to learn.

Flawed Supervision

Brain models inspired by artificial neural networks came of age about 10 years ago, around the same time that a neural network named AlexNet revolutionized the task of classifying unknown images. That network, like all neural networks, is made up of layers of artificial neurons, computational units that form connections to one another that can vary in strength, or “weight.” If a neural network fails to classify an image correctly, the learning algorithm updates the weights of the connections between the neurons to make that misclassification less likely in the next round of training. The algorithm repeats this process many times with all the training images, tweaking the weights, until the network’s error rate is acceptably low.
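The training loop described above can be sketched in a few lines of code. The following is a minimal illustration, assuming a toy two-class image classifier built with PyTorch and random stand-in data (not the actual AlexNet architecture or its training setup): human-provided labels drive the error signal, and the weights are nudged after each misclassification.

```python
import torch
import torch.nn as nn

# Hypothetical tiny classifier: 32x32 RGB images -> 2 classes (e.g. "tabby" vs. "tiger").
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 32 * 32, 128),
    nn.ReLU(),
    nn.Linear(128, 2),
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Stand-in training data: random "images" paired with human-provided labels.
images = torch.randn(64, 3, 32, 32)
labels = torch.randint(0, 2, (64,))

for epoch in range(10):                  # repeat over the training set
    optimizer.zero_grad()
    predictions = model(images)          # forward pass through the layers of artificial neurons
    loss = loss_fn(predictions, labels)  # how badly did the network misclassify?
    loss.backward()                      # compute gradients with respect to every weight
    optimizer.step()                     # adjust the weights to make the errors less likely
```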

Alexei Efros, a computer scientist at the University of California, Berkeley, thinks most modern AI systems rely too heavily on human-created labels. “They don’t really learn the material,” he said. Courtesy of Alexei Efros

At the same time, neuroscientists developed the first computational models of the primate visual system, using neural networks like AlexNet and its successors. The union looked promising: When monkeys and artificial neural nets were shown the same images, for example, the activity of the real neurons and the artificial neurons showed an intriguing correspondence. Artificial models of hearing and odor detection followed.

But as the field progressed, researchers realized the limitations of supervised training. For example, in 2017, Leon Gatys, a computer scientist at the University of Tübingen in Germany, and his colleagues took a photo of a Ford Model T, then overlaid a leopard-skin pattern on the image, creating a bizarre but easily recognizable picture. A leading artificial neural network correctly classified the original image as a Model T but treated the modified image as a leopard. It had fixated on the texture and had no understanding of the shape of a car (or a leopard, for that matter).

Self-supervised learning strategies are designed to avoid such problems. In this approach, humans do not label the data. Rather, “the labels come from the data itself,” said Friedemann Zenke, a computational neuroscientist at the Friedrich Miescher Institute for Biomedical Research in Basel, Switzerland. Self-supervised algorithms essentially create gaps in the data and ask the neural network to fill them in. In a so-called large language model, for instance, the training algorithm shows the neural network the first few words of a sentence and asks it to predict the next word. When trained on a massive corpus of text gathered from the internet, the model appears to learn the syntactic structure of the language, demonstrating impressive linguistic ability, all without external labels or supervision.
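To make the idea concrete, here is a minimal sketch of how those “labels from the data itself” can be generated, using next-word prediction on a toy snippet of text. This is an assumed, simplified illustration, not the pipeline of any specific large language model: the raw text alone supplies both the input and the target.

```python
# Raw, unlabeled text: no human annotation anywhere.
corpus = "the cow stood in the field near the barn".split()

# Each training example pairs a context window with the word that follows it;
# the "label" is simply the next word hidden from the network.
context_size = 3
examples = [
    (corpus[i : i + context_size], corpus[i + context_size])
    for i in range(len(corpus) - context_size)
]

for context, target in examples:
    print(f"input: {context}  ->  predict: {target!r}")

# A language model is then trained to predict each target from its context.
# The same trick (hide part of the data, ask the network to fill the gap)
# carries over to images and other modalities.
```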
