In October, Microsoft security solutions consultant Sam Mitrovic went viral after I reported how he had very nearly fallen victim to an AI-powered attack. It was so convincing, and so typical of the latest wave of cyberattacks targeting Gmail users, that it is worth recounting briefly again. The story actually began a week before the attack itself; let me explain:
Mitrovic got a notification about a Gmail account recovery attempt, apparently from Google. He ignored it, along with the phone call, also purporting to come from Google, that followed a week later. Then it all happened again. This time, Mitrovic picked up: an American voice, claiming to be from Google support, confirmed that there was suspicious activity on his Gmail account.
To cut a long story short (please do go read the original, it is very much worth it), the number the call was coming from appeared, from a quick search, to check out as a genuine Google number, and the caller was happy to send a confirmation email.
However, being a security consultant, Mitrovic spotted something that a less experienced user may well have missed: the “To” field contained a cleverly obfuscated address that wasn’t really a genuine Google one. As I wrote at the time, “It’s almost a certainty that the attacker would have continued to a point where the so-called recovery process would be initiated,” which would have served to capture login credentials and quite possibly a session cookie to enable a 2FA bypass as well.
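The check Mitrovic performed by eye can be expressed in a few lines of code. The sketch below is a minimal illustration, not production-grade validation: the sample addresses and the allowlist are hypothetical, and a real mail client would also verify SPF and DKIM headers rather than trust the visible address alone.

```python
# A minimal sketch of the check Mitrovic did by eye: does the domain of an
# email address actually belong to Google, or does it merely look like it does?
# The addresses and allowlist below are hypothetical, for illustration only.
from email.utils import parseaddr

GENUINE_DOMAINS = {"google.com", "accounts.google.com"}  # illustrative allowlist

def is_genuine_google_address(raw: str) -> bool:
    _, addr = parseaddr(raw)  # strips display names like "Google Support <...>"
    domain = addr.rpartition("@")[2].lower()
    # Exact match only: lookalikes and nested subdomains fail outright.
    return domain in GENUINE_DOMAINS

print(is_genuine_google_address("Google Support <recovery@google.com>"))    # True
print(is_genuine_google_address("Google Support <recovery@gooogle.com>"))   # False
print(is_genuine_google_address("Support <recovery@google.com.verify-id.net>"))  # False
```

The point is the strict comparison: an address like “google.com.verify-id.net” survives a casual glance but fails an exact match.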
Sharp U.K. research has also concluded that “AI is being weaponized for cyber attacks,” and pointed to six specific attack methodologies that account for much of this weaponization. “While AI offers great benefits in various fields,” the report stated, “its misuse in cyber attacks represents a significant and growing threat.” Those threats were:
1 The Use Of AI In Password Cracking—AI is taking over from brute-force password-cracking strategies for good reason: machines are getting ever better at learning the patterns people use when creating passwords. “AI algorithms can analyze millions of passwords and detect common trends, allowing hackers to generate highly probable password guesses,” the report stated. This is far more efficient than bog-standard brute-forcing, allowing hackers to complete this stage of an attack far more quickly and at far less cost in time and resources; a toy sketch of such pattern-based guessing follows this list. “AI-driven password-cracking tools are also capable of bypassing two-factor authentication,” the report claimed, “by learning from failed attempts and improving their chances of success over time.”
2 Cyberattack automation—anything that can be automated will be automated when it comes to determined hackers and cybercriminals looking for ways into your network and data, from vulnerability scanning to attack execution at scale. By deploying AI-powered bots to scan thousands of websites or networks simultaneously, the Sharp U.K. report said, weaknesses can be found and then exploited; the second sketch after this list shows how trivially such scanning parallelizes. And the exploitation process itself can also be automated with the help of AI. “AI-powered ransomware can autonomously encrypt files, determine the best way to demand ransom, and even adjust the ransom amount based on the perceived wealth of the target,” the researchers said.
3 Deepfakes—as already mentioned, these are being used in attacks targeting Gmail users. “In one high-profile case,” the report said, “a deepfake audio of a CEO’s voice was used to trick an employee into transferring $243,000 to a fraudster’s account. As deepfake technology continues to evolve, it becomes increasingly difficult for people and organizations to distinguish between real and fake, making this a powerful tool for cyber attackers.”
4 Data mining—because AI can enable an attacker to not only collect but also analyze data at scale and at speeds that would have been considered impossible just a couple of years ago, it’s hardly surprising that this is a resource that’s being used and used hard. “By using machine learning algorithms, cybercriminals can sift through public and private databases to uncover sensitive information about their targets,” the report warned.
5 Phishing attacks—the methodology most applicable to the Gmail attack threat: the use of AI to construct and deliver authentic-seeming, believable social engineering attacks. “AI tools can analyze social media profiles, past interactions, and email histories,” the report warned, “to craft messages that seem legitimate.”
6 The evolution of malware, at scale—AI-powered malware is a thing in its own right, often coming with the ability to adapt its behavior in an attempt to evade detection. “AI-enhanced malware can analyze network traffic to identify patterns in cyber security defenses,” the report said, “and alter its strategy to avoid being caught.” Then there’s the small matter of code-changing polymorphism, which makes variants harder for security researchers to recognize (the final sketch after this list shows why even a trivial change defeats signature matching) and, as we’ll explore in a moment, the use of large language models to create these subtle malware variations at speed and scale.
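To make the first methodology concrete, here is a deliberately toy sketch of pattern-based guessing. Nothing in it comes from the Sharp U.K. report; the base word, mutation rules, and years are my own illustrative assumptions. The point is the arithmetic: a handful of learned human habits collapses an astronomically large brute-force space into a few dozen high-probability guesses.

```python
# Toy illustration of why pattern-aware guessing beats raw brute force:
# rather than enumerating every 10-character string (~10^19 candidates over a
# 95-symbol alphabet), an attacker who has learned common human habits can
# enumerate a few dozen high-probability mutations of a likely base word.
from itertools import product

LEET = {"a": "@", "e": "3", "i": "1", "o": "0", "s": "$"}  # common substitutions

def mutations(base: str):
    """Yield common human-style variants of a base word."""
    for word in {base, base.capitalize(), base.upper()}:
        # Leetspeak version, keeping characters that have no mapping.
        leet = "".join(LEET.get(c.lower(), c) for c in word)
        for stem, year, suffix in product({word, leet}, ["", "2024", "2025"], ["", "!", "123"]):
            yield stem + year + suffix

candidates = set(mutations("sunshine"))  # arbitrary example base word
print(len(candidates), "candidates, e.g.", sorted(candidates)[:5])
# A few dozen guesses cover favourites like "Sunshine2024!" and "$un$h1n3123".
```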
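Likewise for the automation point. The sketch below simply checks whether a TCP port answers across a list of hosts, concurrently; it is a hypothetical illustration of how easily scanning parallelizes, the hostnames are placeholders, and it should only ever be pointed at infrastructure you own.

```python
# Minimal concurrent reachability sweep: is port 443 answering on each host?
# Hostnames are hypothetical placeholders; scan only systems you own.
import asyncio

HOSTS = [f"host{i}.example.internal" for i in range(1000)]  # placeholder targets

async def port_open(host: str, port: int = 443, timeout: float = 2.0) -> bool:
    try:
        _, writer = await asyncio.wait_for(
            asyncio.open_connection(host, port), timeout
        )
        writer.close()
        await writer.wait_closed()
        return True
    except (OSError, asyncio.TimeoutError):
        return False

async def sweep():
    # A real scanner would bound concurrency with a semaphore; this keeps
    # the sketch short by launching every check at once.
    results = await asyncio.gather(*(port_open(h) for h in HOSTS))
    print(sum(results), "of", len(HOSTS), "hosts answered on 443")

asyncio.run(sweep())
```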
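And on polymorphism: the snippet below uses two harmless, behaviorally identical code fragments, invented for illustration, as stand-ins for malware variants, to show why hash-based signatures miss every freshly mutated sample.

```python
# Why polymorphism defeats signature matching: two payloads that behave
# identically can differ in every byte a hash-based signature sees. These
# "variants" are harmless stand-ins, renamed and re-commented the way a
# polymorphic engine mutates real malware.
import hashlib

variant_a = b"def run(n):\n    return n * 2\n"
variant_b = b"def execute(value):  # junk comment\n    return value + value\n"

for name, code in [("A", variant_a), ("B", variant_b)]:
    print(name, hashlib.sha256(code).hexdigest()[:16])
# Same behavior, entirely different SHA-256 digests, so a defense keyed on
# file hashes, as many blocklists are, misses every fresh variant.
```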
“The findings of Sharp’s recent study highlight the need for organizations to take a different approach to cybersecurity awareness training,” Lucy Finlay, director of secure behavior and analytics at ThinkCyber Security, said. “This shift is crucial to protecting people from emerging threats like deepfake phishing designed to very effectively manipulate employees.” Finlay also pointed to the finding that one in three workers claim to feel “confident in spotting cyber threats,” and noted that this is a self-reported metric. I don’t doubt the numbers, to be honest; if anything, my experience suggests the figure could be even higher, as people tend to overestimate their own capabilities. “In reality,” Finlay concluded, “it is likely they would struggle to recognize a sophisticated deepfake scam if confronted with one.”