Real-time deepfake detection: How Intel Labs uses AI to combat misinformation


A few years ago, deepfakes seemed like a novel technology whose creators relied on serious computing power. Today, deepfakes are ubiquitous and can be abused for misinformation, hacking, and other nefarious purposes.

Intel Labs has developed real-time deepfake detection technology to address this growing problem. Ilke Demir, a senior research scientist at Intel, explains the technology behind deepfakes, Intel’s detection methods, and the ethical considerations involved in developing and deploying such tools.

Also: AI ethicist says today’s AI boom will amplify social problems if we don’t act now

Deepfakes are videos, speech, or images in which the actor or action is not real but created by artificial intelligence (AI). Deepfakes use complex deep-learning architectures, such as generative adversarial networks, variational autoencoders, and other AI models, to create highly realistic and believable content. These models can create synthetic personas, lip-sync videos, and even text-to-image conversions, making it difficult to distinguish between real and fake content.
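
To make that architecture concrete, below is a minimal, illustrative sketch of the adversarial training loop behind a generative adversarial network, written in PyTorch. It is not Intel’s model or any production deepfake system; the toy 2-D data, network sizes, and learning rates are placeholder assumptions chosen only to show how a generator and a discriminator are trained against each other.

```python
# Toy GAN training loop (illustrative only; real deepfake generators are
# far larger convolutional models that operate on images, not 2-D points).
import torch
import torch.nn as nn

latent_dim = 16

generator = nn.Sequential(
    nn.Linear(latent_dim, 64), nn.ReLU(),
    nn.Linear(64, 2),                  # emits a fake 2-D "sample"
)
discriminator = nn.Sequential(
    nn.Linear(2, 64), nn.ReLU(),
    nn.Linear(64, 1),                  # emits a real-vs-fake logit
)

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(32, 2) + 3.0    # stand-in for real training data
    fake = generator(torch.randn(32, latent_dim))

    # Discriminator step: label real samples 1 and fakes 0.
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: push the discriminator to label fakes as real.
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Scaled up to convolutional networks over face images, this same adversarial dynamic is what lets deepfake generators produce content realistic enough to fool human viewers.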

The term deepfake is sometimes also applied to authentic content that has been altered, such as the 2019 video of former House Speaker Nancy Pelosi, which was doctored to make her appear drunk.

Demir’s team examines computational deepfakes, which are machine-generated forms of synthetic content. “The reason it’s called a deepfake is that there’s this complex deep-learning architecture in the AI that generates all that content,” she said.

Also: Most Americans Think AI Threatens Humanity, According to a Poll

Cybercriminals and other bad actors often abuse deepfake technology. Use cases include political misinformation, adult content featuring celebrities or non-consenting individuals, market manipulation, and impersonation for monetary gain. These harms underscore the need for effective deepfake detection methods.

Intel Labs has developed one of the world’s first real-time deepfake detection platforms. Instead of searching for artifacts of fakery, the technology focuses on detecting what is real, such as heart rate. Using a technique called photoplethysmography, or PPG (a detection method that analyzes the color changes in veins caused by blood oxygen content, changes that are computationally visible), the technology can determine whether the person on screen is a real human or synthetic.

“We’re trying to see what’s real and authentic. Heart rate is one of [the signals],” Demir said. “So when your heart pumps blood, it goes to your veins, and the veins change color because of the oxygen content. That color change is not visible to our eyes; I can’t just look at this video and see your heartbeat. But that color change is computationally visible.”
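
Intel has not published the internals of its detector, but the general PPG idea Demir describes can be sketched in a few lines of Python using OpenCV and NumPy. Everything below is a hypothetical simplification: the fixed center crop stands in for real face detection and skin segmentation, and the 0.7–4 Hz band is simply the plausible range of human heart rates.

```python
# Hypothetical PPG sketch, not Intel's production system: average skin
# color per frame, then look for a dominant frequency in the human
# heart-rate band (roughly 0.7-4 Hz, i.e. 42-240 beats per minute).
import cv2
import numpy as np

def estimate_heart_rate_bpm(video_path, fps=30.0):
    cap = cv2.VideoCapture(video_path)
    signal = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        h, w = frame.shape[:2]
        # Assumed face region: the center of the frame. A real system
        # would detect and track the face and select skin pixels.
        roi = frame[h // 4 : 3 * h // 4, w // 4 : 3 * w // 4]
        signal.append(roi[:, :, 1].mean())    # mean green channel (BGR)
    cap.release()

    signal = np.asarray(signal) - np.mean(signal)
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)

    # Keep only frequencies plausible for a human pulse.
    band = (freqs >= 0.7) & (freqs <= 4.0)
    return 60.0 * freqs[band][np.argmax(spectrum[band])]
```

On genuine footage this spectrum tends to show a clear peak at the subject’s pulse; on synthetic faces the signal is typically incoherent noise with no such peak, which is exactly the kind of physiological cue a detector can exploit.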

Also: Don’t be fooled by fake ChatGPT apps: Here’s what to look out for

Intel’s deepfake detection technology is being deployed across a variety of sectors and platforms, including social media tools, news agencies, broadcasters, content creators, startups, and nonprofits. By integrating the technology into their workflows, these organizations can better identify and mitigate the spread of deepfakes and misinformation.

Despite the potential for abuse, deepfake technology has legitimate applications. One of the early uses was to create avatars to better represent individuals in the digital environment. Demir mentions a specific use case called “MyFace, MyChoice”, which leverages deepfakes to enhance privacy across online platforms.

Simply put, this method lets individuals control their appearance in online photos, replacing their face with a “quantifiably dissimilar deepfake” if they want to avoid being recognized. These controls offer privacy and control over one’s identity, helping to counter automatic facial-recognition algorithms.

Also: GPT-3.5 vs GPT-4: Is ChatGPT Plus worth the subscription fee?

Ensuring the ethical development and deployment of AI technologies is critical. Intel’s Trusted Media team works with anthropologists, social scientists, and user researchers to evaluate and refine the technology. The company also has a Responsible AI Council, which reviews AI systems against responsible and ethical principles, including potential biases, limitations, and possibly harmful use cases. This multidisciplinary approach helps ensure that AI technologies, such as deepfake detection, benefit people rather than harm them.

“We have the legislators, we have the social scientists, we have the psychologists, and they are all working together to pinpoint the limitations and find out whether there is bias: algorithmic bias, systematic bias, data bias, any kind of bias,” says Demir. The team also scans the code for “any possible use case of a technology that could harm humans.”

Also: 5 ways to explore innovative uses of AI in the workplace

As deepfakes become more common and sophisticated, it becomes increasingly important to develop and deploy detection technologies to combat misinformation and other harmful consequences. Intel Labs’ real-time deepfake detection technology provides an efficient and scalable solution to this growing problem.

By combining ethical considerations with collaboration among experts in a variety of fields, Intel is working toward a future where AI technologies are used responsibly and for the betterment of society.
