
A new trick allows artificial intelligence to see in 3D


The current wave of artificial intelligence can be traced back to 2012 and an academic contest that measured how well algorithms could recognize objects in images.

That year, researchers found that feeding thousands of images into an algorithm inspired by the way neurons in the brain respond to input produced a huge leap in accuracy. The breakthrough set off an explosion of academic research and commercial activity that has transformed several companies and industries.

Now, a new trick, which involves training the same kind of AI algorithm to turn 2D images into a rich 3D view of a scene, is sparking excitement in the world of computer graphics and beyond. The technique could shake up video games, virtual reality, robotics, and autonomous driving. Some experts believe it might even help machines perceive and reason about the world more intelligently, or at least more like humans do.

“It is ultra-hot, there is a huge buzz,” says Ken Goldberg, a robotics professor at the University of California, Berkeley, who is using the technology to improve the ability of AI-enhanced robots to grasp unfamiliar shapes. Goldberg says the technology has “hundreds of applications” in areas ranging from entertainment to architecture.

The new approach uses a neural network to capture and generate 3D imagery from a few 2D snapshots, a technique known as “neural rendering.” It arose from a combination of ideas circulating in computer graphics and AI, but interest exploded in April 2020, when researchers at UC Berkeley and Google showed that a neural network could capture a scene realistically in 3D simply from a few 2D images of it.

The algorithm exploits the way light travels through the air, performing calculations that work out the density and color of points in 3D space. This makes it possible to convert 2D images into a realistic 3D representation that can be viewed from any angle. At its core is the same kind of neural network as the 2012 image-recognition algorithm, which analyzes the pixels of a 2D image. The new algorithms convert 2D pixels into their 3D equivalent, known as voxels. Videos of the technique, which the researchers called Neural Radiance Fields, or NeRF, wowed the research community.
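The core idea of computing density and color along a ray of light can be sketched in a few lines. Below is a minimal, illustrative NumPy version of NeRF-style volume rendering: points are sampled along a camera ray, a function returns a density and color for each point, and the results are composited into one pixel. The `scene_fn` here is a hand-written toy stand-in for the trained neural network, and all names and constants are illustrative assumptions, not the actual NeRF implementation.

```python
import numpy as np

def scene_fn(points):
    """Toy stand-in for the trained network: maps 3D points to
    (density, RGB color). A real NeRF fits a neural network to 2D images."""
    # Density: a soft shell around a sphere of radius 1 at the origin.
    r = np.linalg.norm(points, axis=-1)
    density = 5.0 * np.exp(-4.0 * (r - 1.0) ** 2)
    # Color: a constant reddish hue everywhere (illustrative only).
    color = np.broadcast_to([1.0, 0.2, 0.2], points.shape).copy()
    return density, color

def render_ray(origin, direction, n_samples=64, near=0.5, far=3.5):
    """Volume rendering: integrate density and color along one camera ray."""
    t = np.linspace(near, far, n_samples)
    points = origin + t[:, None] * direction        # sample points on the ray
    density, color = scene_fn(points)
    delta = (far - near) / n_samples                # spacing between samples
    alpha = 1.0 - np.exp(-density * delta)          # opacity of each segment
    # Transmittance: chance the ray reaches each sample unoccluded.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = alpha * trans
    return (weights[:, None] * color).sum(axis=0)   # composited RGB pixel

# One ray looking at the toy sphere from in front of it.
pixel = render_ray(np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0]))
```

Training a real NeRF means adjusting the network inside `scene_fn` until pixels rendered this way match the input photographs; rendering a new viewpoint is then just casting fresh rays.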

“I’ve been doing computer vision for 20 years, but when I saw this video, I was like, ‘Wow, this is unbelievable,’” says Frank Dellaert, a professor at Georgia Tech.

For anyone working in computer graphics, Dellaert explains, the approach is a breakthrough. Creating a detailed, realistic 3D scene normally requires many hours of painstaking manual work. The new method makes it possible to generate such scenes from ordinary photographs in minutes. It also provides a new way to create and manipulate synthetic scenes. “That’s important and significant, which is a crazy thing to say for work that’s only two years old,” he says.

Dellaert says the speed and variety of ideas that have emerged since then has been breathtaking. Others have used the idea to create moving selfies (or “nerfies”), which let you pan around a person’s head based on a few still images; to create 3D avatars from a single headshot; and to automatically relight scenes in different ways.

The work has gained traction in industry at a surprising rate. Ben Mildenhall, one of the researchers behind NeRF who now works at Google, describes the surge of research and development as “a slow tidal wave.”


