With hallucinating LLMs and tools such as FraudGPT, cyberspace has become a lot more vulnerable. Companies and individuals need to watch out for the new threats emerging
A typical spam email, which may hide a phishing hook behind a hyperlink, often has a spammy look and feel: sans-serif fonts, bold or underlined text, an amateur layout, exclamation marks, and language that’s jarringly non-native. But all of that is changing.
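As a rough, hypothetical illustration of how brittle those legacy signals are, the sketch below shows the kind of crude rule-based check many older filters leaned on; the keywords and thresholds are illustrative assumptions, not any vendor’s actual ruleset. A fluently written, cleanly formatted AI-generated phishing email triggers none of these signals.

```python
import re

# Hypothetical legacy-style heuristics: crude surface signals of "spammy" email.
# Keywords and thresholds are illustrative assumptions only.
SPAM_KEYWORDS = {"winner", "act now", "free gift", "claim your prize"}

def looks_spammy(subject: str, body: str) -> bool:
    text = f"{subject} {body}".lower()
    keyword_hits = sum(1 for kw in SPAM_KEYWORDS if kw in text)
    exclamation_runs = len(re.findall(r"!{2,}", text))            # "!!!"-style emphasis
    shouting = sum(1 for w in body.split() if w.isupper() and len(w) > 3)
    # Flag the email only when several crude signals pile up.
    return (keyword_hits + exclamation_runs + shouting) >= 3

# A polished, LLM-written lure sails straight past rules like these.
print(looks_spammy("Invoice approval needed",
                   "Hi Priya, please review the attached invoice before Friday."))  # False
```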
Generative AI, powered by large language models (LLMs), has opened a window of opportunities for bad actors adept in cybercrime. “On one hand, these tools now make it easier for people on the defensive side to come up with ideas on how to protect their environments, but on the other, the attackers can use them the same way to target people. Probably the most common use case is for writing very sophisticated emails for phishing campaigns,” says Jonas Walker, director of threat intelligence at FortiGuard Labs.
FraudGPT and WormGPT are like evil twins of the wildly popular ChatGPT. These two subscription-based black hat tools are available on the dark web. They help bad actors with a range of tasks – from writing phishing emails and business email compromise (BEC) attacks to writing malware – which a legitimate GPT will refuse to do because of its built-in guardrails.
“For example, you can say your goal is to have a business email compromise attack and please send an email to an account manager pretending that his own boss is urgently instructing him to pay an invoice,” says Walker. So, you literally tell the tool what the whole purpose is. The tool understands what successful BEC attacks look like, and it will probably create an email far better than one written by someone who has no idea what a BEC looks like. “So, it’s much faster and more professional, and much more difficult for people on the defensive side to realise that it might be a malicious attack,” Walker says.
FraudGPT and WormGPT’s LLMs are trained on things like malware, ransomware, and phishing emails. FraudGPT can scour the dark web, which hosts data dumps of people’s personal information, and return results quickly. “Black hat GenAI tools can reduce the recce time required to carry out an attack considerably,” says Zubair Chowgale, senior technical consultant (APMEA) at Securonix. “In cyberattacks, APTs or advanced persistent threats are bad actors who would typically be in the victim’s environment undetected for long periods of time and eventually achieve the end goal. Their window has just narrowed,” he says.
Move to in-house trained LLMs
Black hat GPT tools only form one part of the larger cybersecurity complexities being created by GenAI. The other part includes challenges posed by unchecked use of GenAI tools, especially for work purposes, and the approach companies take in building LLMs. Samsung banning its employees from using ChatGPT, after some of them shared parts of source code and the details of a few meetings with the application, is now the stuff of AI lore. Companies are going slow on introducing GenAI tools with open-source LLMs for their employees. “The biggest concern that we hear from a lot of our customers and prospective customers is how do we ensure that we can enforce a policy that prevents data from leaking out through a GPT. So, we have an application that a company can use to enforce a policy that says ‘John is allowed to use GPT while Barry, unfortunately, can’t’. There’s no point just using something for the sake of using it, because that could potentially cause issues,” says Sean Duca, regional chief security officer, Palo Alto Networks.
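In practice, the kind of per-user policy Duca describes is usually enforced at a gateway or proxy sitting in front of the GenAI tool. The sketch below is a minimal, hypothetical illustration of that idea, assuming a simple per-user allow-list plus a crude check for obvious secrets in outbound prompts; it is not Palo Alto Networks’ product logic, and all names, tool identifiers and patterns are assumptions made for illustration.

```python
import re

# Hypothetical per-user policy: who may use which GenAI tools.
# User names and tool identifiers are illustrative assumptions.
GENAI_POLICY = {
    "john":  {"chatgpt", "internal-llm"},
    "barry": set(),                      # no GenAI access at all
}

# Crude patterns for data that should never leave the company inside a prompt.
SECRET_PATTERNS = [
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    re.compile(r"\b(?:api[_-]?key|password)\s*[:=]\s*\S+", re.IGNORECASE),
]

def allow_prompt(user: str, tool: str, prompt: str) -> bool:
    """Return True only if the user is cleared for this tool and the prompt looks clean."""
    if tool not in GENAI_POLICY.get(user, set()):
        return False                     # policy: user not allowed to use this tool
    if any(p.search(prompt) for p in SECRET_PATTERNS):
        return False                     # block likely source-code or credential leakage
    return True

print(allow_prompt("john", "chatgpt", "Summarise this press release."))          # True
print(allow_prompt("barry", "chatgpt", "Summarise this press release."))         # False
print(allow_prompt("john", "chatgpt", "api_key = sk-123, why is it failing?"))   # False
```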
Hallucinations thrown up by GenAI are also a huge concern for application security and misinformation. A lot of it can be tied back to the data which the LLMs are trained on. “It boils down to garbage in, garbage out. The quality of the curation of the data that you’re feeding into those models affects the output,” says Duca.
Walker says if companies train LLMs on ‘poisoned data’, they’ll have a big problem because all the answers it will give are based on the wrong data set. “So having good data is key for LLMs, and I think a lot of people will start using LLMs trained in-house so they have better control. If you just use it from someone else, you don’t really know what they’re doing behind the scenes,” he adds.
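A hedged sketch of what “good data in” can mean in practice: before fine-tuning an in-house model, teams commonly filter the corpus against known-bad content and trusted sources. The checks, hashes and domain lists below are illustrative assumptions, not a complete anti-poisoning pipeline.

```python
import hashlib

# Illustrative assumptions: a placeholder digest of content already flagged as
# malicious, and internal domains the organisation treats as trusted provenance.
KNOWN_BAD_HASHES = {"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"}
TRUSTED_SOURCES = {"docs.internal.example.com", "wiki.internal.example.com"}

def keep_for_training(text: str, source_domain: str) -> bool:
    """Drop records that match known-bad content or come from untrusted sources."""
    digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
    if digest in KNOWN_BAD_HASHES:
        return False                    # exact match with previously flagged content
    if source_domain not in TRUSTED_SOURCES:
        return False                    # unknown provenance: exclude rather than risk poisoning
    return True

corpus = [
    ("How to reset your VPN token ...", "wiki.internal.example.com"),
    ("Totally legit security advice ...", "random-paste-site.example"),
]
clean = [(text, src) for text, src in corpus if keep_for_training(text, src)]
print(len(clean))  # 1 – only the record with trusted provenance survives
```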