
Seeing is believing? Global scramble to tackle deepfakes


Chatbots spread disinformation, face-swapping apps generate pornographic videos, and cloned voices defraud companies of millions. The scramble is on to rein in AI deepfakes, which have become a potent tool for spreading misinformation.

Artificial intelligence is redefining the proverb “seeing is believing”, with a flood of images created out of thin air and people shown saying things they never said in realistic-looking deepfakes that have eroded trust online.

“Yikes. Def not me,” billionaire Elon Musk tweeted last year about a vivid example of a deepfake video that showed him promoting a cryptocurrency scam.

China recently adopted sweeping rules to regulate deepfakes, but most countries appear to be struggling to keep up with the fast-evolving technology amid concerns that regulation could stifle innovation or be misused to curb free expression.

Experts warn that deepfake detectors are vastly outpaced by creators, who are hard to catch because they operate anonymously using AI-based software that was once touted as a specialized skill but is now widely available at low cost.

Facebook owner Meta said last year that it had taken down a deepfake video of Ukrainian President Volodymyr Zelensky urging citizens to lay down their weapons and surrender to Russia.

And British campaigner Kate Isaacs, 30, said her “heart sank” when her face appeared in a deepfake porn video that unleashed a barrage of abuse online after an unknown user posted it on Twitter.

“I remember feeling like this video was going to go viral — it was horrible,” Isaacs, who campaigns against non-consensual pornography, was quoted as saying by the BBC in October.

The following month, the British government expressed concern about deepfakes and warned of a popular website that “virtually undresses women”.

‘Information apocalypse’

With no barriers to generating AI-synthesized text, audio and video, the potential for misuse in identity theft, financial fraud and reputational damage has set off global alarm.

The Eurasia Group calls the AI tools “weapons of mass disruption”.

“Technological advances in artificial intelligence will erode social trust, empower demagogues and authoritarians, and disrupt businesses and markets,” the group warned in a report.

“Advances in deepfakes, facial recognition and voice synthesis software will make control over one’s likeness a relic of the past.”

This week, AI startup ElevenLabs admitted that its voice-cloning tool could be misused for “malicious purposes” after a user posted a deepfake audio clip purporting to be actress Emma Watson reading Adolf Hitler’s “Mein Kampf”.

The growing number of deepfakes could lead to what European law enforcement agency Europol describes as an “information apocalypse”, a scenario in which many people can no longer distinguish fact from fiction.

Europol said in a report: “Experts fear this could lead to people no longer sharing a common reality, or could create confusion in society about which information sources are trustworthy.”

That was illustrated last weekend when NFL player Damar Hamlin spoke to his fans in a video for the first time since suffering a cardiac arrest during a game.

Hamlin thanked the medical professionals who helped his recovery, but many who subscribe to the conspiracy theory that a Covid-19 vaccine was behind his on-field collapse baselessly claimed his video was a deepfake.

‘Super spreader’

China enforced new rules last month that require businesses offering deepfake services to obtain the real identities of their users. The rules also require deepfake content to be appropriately labelled to avoid “any confusion”.

The rules came after the Chinese government warned that deepfakes present a “danger to national security and social stability”.

In the United States, where lawmakers have pushed for a task force to police deepfakes, digital rights activists caution against legislation that could stifle innovation or target legitimate content.

Meanwhile, the European Union is locked in heated discussions over its proposed “AI Act”.

The legislation, which the EU is racing to pass this year, would require users to disclose deepfakes, but many fear it could prove toothless if it does not cover creative or satirical content.

“How do you restore digital trust with transparency? That’s the real question right now,” Jason Davis, a research professor at Syracuse University, told AFP.

“The (detection) tools are coming out, and they’re coming out relatively quickly. But the technology is probably evolving even faster. So it’s like cybersecurity, we’re never going to work this out, we just hope we can keep up.”

Many people are struggling to understand advancements like ChatGPT, a chatbot created by US-based OpenAI that is capable of producing impressively persuasive texts on almost any topic.

In one study, media watchdog NewsGuard, which has called ChatGPT “the next big disinformation super-spreader,” said most of the chatbot’s responses to prompts related to topics like Covid-19 and school shootings were “eloquent, untrue and misleading.”

“The results confirm concerns… about how this tool could be weaponized in the wrong hands,” NewsGuard said.

