Deepfake scams have claimed millions of dollars. Experts warn it could get worse
A growing wave of deepfake scams has robbed companies worldwide of millions of dollars, and cybersecurity experts warn that the situation could get worse as criminals exploit generative AI to commit fraud.
Deepfakes are videos, audio clips or images of real people that have been digitally altered, often using artificial intelligence, to convincingly misrepresent them.
In one of the biggest known cases this year, a Hong Kong finance worker was duped into transferring more than $25 million to deepfake scammers who posed as his colleagues on a video call, authorities told local media in February.
Last week, British engineering firm Arup confirmed to CNBC that it was the company involved in that incident, but said it could not go into detail due to the ongoing investigation.
Such threats have been growing since the launch of OpenAI’s ChatGPT in 2022, which rapidly brought generative AI technology into the mainstream, said David Fairman, chief information officer and chief security officer at cybersecurity firm Netskope.
“The public accessibility of these services has lowered the barrier to entry for cybercriminals – they no longer need to have special technological skill sets,” said Fairman.
He added that the number and sophistication of scams are increasing as AI technology continues to develop.
On the rise
A variety of generative AI services can be used to produce human-like text, image and video content, making them powerful tools for illicit actors seeking to digitally manipulate and recreate certain individuals.
“Like many other businesses around the globe, our operations are subject to regular attacks, including invoice fraud, phishing scams, WhatsApp voice spoofing and deepfakes,” an Arup spokesperson told CNBC.
The finance worker reportedly attended a video call with people he believed to be the company’s chief financial officer and other staff members, who asked him to make a money transfer. In reality, every other attendee on that call was a digital recreation.
Arup confirmed that “fake voices and images” were used in the incident, adding that “the number and sophistication of these attacks has increased sharply in recent months.”
Chinese state media reported a similar case this year in Shanxi province involving a female financial worker who was tricked into transferring 1.86 million yuan ($262,000) to a scammer’s account after a video call with a deepfake of her boss.
Wider implications
Beyond direct attacks, companies are increasingly worried about other ways deepfake photos, videos or speeches of their higher-ups could be used in malicious ways, cybersecurity experts say.
According to Jason Hogg, cybersecurity expert and managing director at Great Hill Partners, deepfakes of senior company figures can be used to spread fake news to manipulate stock prices, defame a company’s brand and sales, and spread other harmful disinformation.
“That’s just the surface,” said Hogg, who previously worked as an FBI Special Agent.
He emphasized that generative AI can create deepfakes based on a trove of digital information, such as publicly available content hosted on social media and other platforms.
In 2022, Patrick Hillmann, chief communications officer at Binance, said in a blog post that scammers had created a deepfake of him based on previous news interviews and television appearances, using it to trick customers and contacts into attending meetings.
Netskope’s Fairman said such risks have prompted some executives to begin removing or limiting their online presence out of concern that it could be used as ammunition by cybercriminals.
Deepfake technology has also become widespread far beyond the corporate world.
From fake pornographic images to doctored videos promoting cookware, celebrities such as Taylor Swift have fallen victim to deepfake technology. Deepfakes of politicians have become commonplace as well.
Meanwhile, some scammers have created deepfakes of individuals’ family members and friends in attempts to defraud them of money.
According to Hogg, the broader problems will accelerate and worsen for some time, because preventing cybercrime requires thoughtful analysis to develop systems, practices and controls that defend against new technologies.
However, cybersecurity experts told CNBC that companies can strengthen their defenses against AI-powered threats through improved employee training, cybersecurity testing, and requiring code words and multiple layers of approval for all transactions, safeguards that could have prevented cases such as Arup’s.