The next big security threat is staring us in the face. Tackling it is going to be very difficult

Videoconference at the office

A video conference call takes place inside the office – but is the person on the other end of the call really who they say they are?

Image: Getty / Luis Alvarez

As if the ongoing war against ransomware, coupled with the challenges of securing the ever-expanding galaxy of Internet of Things and cloud computing devices, didn't keep security teams busy enough, a new challenge lies ahead: defending against the coming wave of digital impostors and deepfake scams.

Deepfake videos use artificial intelligence and deep learning techniques to create fake images of people or events.

A recent example came when the mayor of Berlin thought he was in an online meeting with former boxing champion and current mayor of Kyiv, Vitali Klitschko.

SEE: These are tomorrow's cybersecurity threats you should be thinking about today

But the mayor of Berlin became suspicious when 'Klitschko' started saying some very out-of-character things about the invasion of Ukraine. When the call was interrupted, the mayor's office contacted the Ukrainian ambassador in Berlin, only to discover that whoever they had been talking to was not the real Klitschko.

The impostor also appears to have spoken to other European mayors, and in each case it seems they were holding a conversation with a deepfake: an AI-generated fake video that looks like a real person speaking.

It's a sign that deepfakes are becoming more sophisticated, and quickly. Earlier deepfake videos that went viral often showed telltale signs that something wasn't real, such as unconvincing edits or odd movements.

This entire episode appears to have been fabricated by someone simply to cause trouble, but developments in deepfake technology mean it isn't hard to imagine the same approach being exploited by cybercriminals, particularly when it comes to stealing money.

So this incident is also a warning: deepfakes pose a whole new set of threats, not just to mayors, but to all of us.

While ransomware may grab more headlines, business email compromise (BEC) is the most expensive form of cybercrime today. The FBI estimates that it costs businesses billions of dollars each year.

In the most common form of BEC attack, cybercriminals hack into a boss's email account, or cleverly spoof it, and ask employees to authorize large financial transfers, which can often run into hundreds of thousands of dollars.

The emails claim that the money needs to be sent urgently, perhaps as part of a confidential business deal that cannot be disclosed to anyone. It's a classic social engineering trick designed to pressure victims into transferring money quickly, without seeking confirmation from anyone else who might reveal that the request is fake.

By the time anyone becomes suspicious, the cybercriminals have taken the money, likely closed the bank accounts they used to receive the transfer, and vanished.

BEC attacks are successful, but many people may still be wary of an email from their boss that arrives out of the blue, and they can avoid becoming a victim by speaking to someone to confirm whether the request is real.

But if cybercriminals can use deepfakes to make those requests, it could become much harder for victims to refuse, because they believe they are actually talking to their boss on camera.

Many companies publicly list their board of directors and senior management on their websites. These senior executives often speak at events or in the media, so footage of them talking is easy to find.

SEE: Cloud security (ZDNet special feature)

Using AI-powered deep learning techniques, cybercriminals could exploit this public footage to create a deepfake of a senior executive, exploit email vulnerabilities to request a video call with an employee, and then ask them to make the transfer. If the victim believes they are talking to their CEO or boss, they are unlikely to refuse the request.

Scammers have already used artificial intelligence to convince employees they were talking to their boss on the phone. Adding the video element makes it even harder to detect that they are actually talking to fraudsters.

The FBI has already warned that cybercriminals are using deepfakes to apply for remote IT support jobs, roles that allow access to sensitive personal information about staff and customers that could be stolen and exploited.

The agency has also warned that hackers will use deepfakes and other AI-generated content for foreign influence operations, and presumably something along these lines was behind the fake calls to the mayors.

While advances in technology mean it is becoming harder to distinguish deepfake content from reality, the FBI has offered advice on how to spot a deepfake, including looking for warping in the video, strange head and body movements, and syncing problems between the face, lip movements, and any associated audio.

But deepfakes could easily become a new vector for cybercrime, and it will be a real struggle to curb the trend. It's entirely possible that organizations will need to establish a new set of rules around validating decisions made in online meetings. It's also a challenge to the nature of remote work: what does it mean if you can't believe what you see on the screen?

The more companies and their employees are aware of the potential risks posed by malicious deepfakes, the easier it will be to defend against these attacks. Otherwise, we're in trouble.


ZDNet's Monday Opener is our opening take on the week in tech, written by members of our editorial team.