
ChatGPT and more: What AI chatbots mean for the future of cybersecurity



Image: Getty

From relatively simple tasks, such as composing emails, to more complex tasks, including writing or compiling code, ChatGPT — the AI-driven natural language processing engine from OpenAI — has attracted great interest since its launch.

Of course, it’s not meant to be perfect; it is known to make mistakes and errors because it misinterprets the information it learns from. But many see it, and other AI tools, as the future of how we use the internet.

Also: What is ChatGPT and why does it matter? Here’s everything you need to know

OpenAI’s terms of service for ChatGPT specifically prohibit the creation of malicious software, including ransomware, keyloggers, viruses, or “other software intended to cause some degree of harm”. The terms also prohibit attempts to generate spam, as well as use cases aimed at cybercrime.

But as with any innovative online technology, there are already people experimenting with how they can exploit ChatGPT for more covert purposes.

It wasn’t long after its debut that cybercriminals were posting threads on underground forums about how ChatGPT could be used to facilitate malicious cyber activity, such as writing phishing emails or helping to compile malware.

And there are concerns that crooks will try to use ChatGPT and other AI tools, such as Google Bard, as part of their efforts. While these AI tools won’t revolutionize cyberattacks, they can still help cybercriminals, even less-skilled ones, carry out malicious campaigns more efficiently.

“I don’t think, at least in the short term, that ChatGPT will create entirely new attack patterns. The focus will be on making their day-to-day operations more cost-effective,” said Sergey Shykevich, threat intelligence group manager at Check Point, a cybersecurity company.

Phishing attacks are the most common element of malicious hacking and fraud campaigns. Whether attackers are sending emails to distribute malware or deceptive links, or are trying to convince a victim to transfer money, email is the key tool in that initial coercion.

Reliance on email means gangs need a steady stream of usable and clear content. In many cases — especially with phishing — the attacker’s goal is to convince a person to do something, like transfer money. Fortunately, many of these phishing attempts are easy to spot as spam right now. But an effective copywriter can make those emails more engaging.

Cybercrime is a global industry, with criminals in every country sending phishing emails to potential targets around the world. That means language can be a barrier, especially for more sophisticated phishing campaigns that rely on the victim believing they are talking to a trusted contact; someone will have a hard time believing they’re talking to a colleague if the emails are full of typos, irregular grammar, or odd punctuation.

Also: The dreaded future of the internet: How tomorrow’s technology will pose greater cybersecurity threats

But if AI is exploited in the right way, a chatbot could be used to write the text of emails in whatever language the attacker wants.

“The big barrier for Russian cybercriminals is the language – English,” Shykevich said. “Now they hire English graduates from colleges in Russia to write phishing emails and work in call centers — and they have to pay for it.”

He continued: “Something like ChatGPT can save them a lot of money in generating these various scam messages. It can just improve their lives. I think that’s the path they’re going to look for.”


Image: Getty / picture alliance

In theory, there are safeguards designed to prevent abuse. For example, ChatGPT requires users to register an email address and also requires a phone number to verify registration.

And while ChatGPT will refuse to write phishing emails, it can be asked to create email templates for other messages, which are commonly exploited by cyber attackers. Such messages might include a notice that an annual bonus is on offer, an important software update that must be downloaded and installed, or an attached document that needs to be looked at urgently.

Also: Email is our greatest productivity tool. That’s why scams are so dangerous for everyone

“Composing an email to convince someone to click on a link to receive something like a conference invitation: that’s pretty good, and if you’re not a native English speaker, this might sound really good,” said Adam Meyers, senior vice president of intelligence at Crowdstrike, a cybersecurity and threat intelligence provider.

“You can ask it to generate a unique, well-formulated, grammatically correct invitation that you wouldn’t necessarily be able to do if you’re not a native English speaker.”

But the abuse of these tools isn’t exclusive to email; criminals could use them to script content for any text-based online platform. For attackers running phishing scams, or even advanced cyber-threat groups trying to conduct espionage campaigns, this could be a useful tool, especially for creating fake social profiles to lure people in.

“If you want to generate plausible business spiel for LinkedIn so it looks like you’re a real entrepreneur trying to make connections, then ChatGPT is a great choice for that,” said Kelly Shortridge, a cybersecurity expert and senior principal product technologist at Fastly, a cloud-computing provider.

Many hacking groups try to exploit LinkedIn and other social media platforms as tools for conducting cyber-espionage campaigns. But creating fake yet legitimate-looking online profiles, and filling them with posts and messages, is a time-consuming process.

Shortridge argues that attackers could use AI tools like ChatGPT to write persuasive content with far less effort than doing the work manually.

“A lot of these types of social engineering campaigns require a lot of effort because you have to set up those profiles,” she says, arguing that AI tools could significantly lower the barrier to entry.

“I’m sure ChatGPT can write very convincing thought leadership posts,” she says.

The nature of technological innovation means that, whenever something new emerges, there will always be people trying to exploit it for malicious purposes. And even with the most innovative means of trying to prevent abuse, the devious nature of cybercriminals and scammers means they’re likely to find ways to circumvent safeguards.

“There’s no way to completely eliminate abuse. That’s never happened with any system,” Shykevich said. He hopes that highlighting potential cybersecurity issues will mean more discussion about how to prevent AI chatbots from being exploited for malicious purposes.

“It’s a great technology, but, as always with new technology, there are risks, and it’s important to discuss them to be aware of them. And I think the more we talk about them, the more likely it is that OpenAI and similar companies will invest more in reducing abuse,” he suggests.

There is also an upside for cybersecurity in AI chatbots such as ChatGPT. They’re particularly good at handling and understanding code, so there’s potential to use them to help defenders understand malware. As they can also write code, it’s possible that, by assisting developers with their projects, these tools could help produce better and more secure code faster, which is good for everybody.

As Forrester principal analyst Jeff Pollard recently wrote, ChatGPT can greatly reduce the amount of time it takes to generate security incident reports.

“Turning those around faster means more time to do other things: testing, evaluating, investigating, and responding, all of which helps security teams scale,” he noted, adding that a bot could suggest recommended next actions based on the available data.

“If security orchestration, automation, and response systems are set up correctly to accelerate the retrieval of artifacts, this could speed up detection and response and help [security operations center] analysts make better decisions,” he says.

So chatbots could make life more difficult for some people in the cybersecurity field, but there could be upsides, too.

ZDNET reached out to OpenAI for comment but didn’t receive a response. However, ZDNET asked ChatGPT what rules it has in place to prevent it being misused for fraud, and we received the following text.

“It’s important to note that while AI language models such as ChatGPT can generate text similar to phishing emails, they cannot perform malicious actions on their own and require the intent and actions of a user to cause harm. As such, it is important for users to exercise caution and good judgment when using AI technology, and to be vigilant in protecting against phishing and other malicious activities.”
