
News & Trends

January 7, 2025

Urgent New Gmail Security Warning For Billions As Attacks Continue

Source: Forbes
Author: Davey Winder, a veteran cybersecurity writer, hacker and analyst.
The single most popular free email platform on the planet is under attack from hackers wielding AI-driven threats. With 2.5 billion users, according to Google’s own figures, Gmail isn’t the only target of such attacks, but it sure is the biggest. Here’s what you need to know and do to protect yourself. Right now.
The AI Threat To Billions Of Gmail Users Explained
Gmail is most certainly not immune to advanced attacks from threat actors looking to exploit the treasure trove of sensitive data that is to be found in the average email inbox. As I recently reported, there’s an ongoing Google Calendar notification attack that relies upon Gmail to succeed, and Google itself has warned about a second wave of Gmail attacks that include extortion and invoice-based phishing, for example.

With Apple also warning iPhone users about spyware attacks, and an infamous ransomware gang rising from the dead and claiming Feb. 3 as its next attack date, now is not the time to be cyber-complacent. Certainly not when McAfee, a giant of the security vendor world, has issued a new warning confirming what I have been saying about the biggest threat facing Gmail users: AI-powered phishing attacks that are frighteningly convincing.
“Scammers are using artificial intelligence to create highly realistic fake videos or audio recordings that pretend to be authentic content from real people,” McAfee warned. “As deepfake technology becomes more accessible and affordable, even people with no prior experience can produce convincing content.”
So, just imagine what threat actors, scammers and hackers who do have prior experience can produce by way of an AI-driven attack. Attacks of this kind have already come within a cat’s whisker of fooling a seasoned cybersecurity professional into handing over credentials that could have seen his Gmail account hacked, with all the consequences that would carry.
The Convincing AI-Powered Attacks Targeting Gmail Users
In October, a Microsoft security solutions consultant called Sam Mitrovic went viral after I reported how he had so nearly fallen victim to an AI-powered attack. That attack was so convincing, and so typical of the latest wave of cyberattacks targeting Gmail users, that it is worth recounting briefly again. It started a week before it started; let me explain:
Mitrovic got a notification about a Gmail account recovery attempt, apparently from Google. He ignored this, and also ignored the phone call, purporting to come from Google, that followed a week later. Then, it all happened again. This time, Mitrovic picked up: an American voice, claiming to be from Google support, confirmed that there was suspicious activity on his Gmail account.
To cut a long story short (please do go read the original, it is very much worth it), a quick search suggested the number the call was coming from did check out as being Google, and the caller was happy to send a confirmation email.

However, being a security consultant, Mitrovic spotted something that a less experienced user may well not have done: the “To” field was a cleverly obfuscated address that wasn’t really a genuine Google one. As I wrote at the time, “It’s almost a certainty that the attacker would have continued to a point where the so-called recovery process would be initiated,” which would have served to capture login credentials and quite possibly a session cookie to enable 2FA bypass as well.
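
For the technically curious, the kind of check that saved Mitrovic can be automated. Here is a minimal Python sketch, using only the standard library, that parses a raw message and flags a sender whose real address does not belong to an expected domain; the message, addresses and trusted domain list are all invented for illustration:

```python
# A minimal sketch, using only Python's standard library, of the header
# checks that exposed this attack. The message, addresses and trusted
# domain list are invented for illustration.
from email import message_from_string
from email.utils import parseaddr

RAW_MESSAGE = """\
From: Google Support <support@g00gle-accounts.example>
To: victim@gmail.com
Authentication-Results: mx.google.com; spf=fail; dkim=none
Subject: Suspicious activity on your account

Please confirm your identity.
"""

TRUSTED_DOMAINS = {"google.com", "accounts.google.com"}

msg = message_from_string(RAW_MESSAGE)

# parseaddr splits "Display Name <address>" into its parts, so a friendly
# display name cannot hide the real sending address.
_, sender = parseaddr(msg["From"])
domain = sender.rsplit("@", 1)[-1].lower()
if domain not in TRUSTED_DOMAINS:
    print(f"Warning: sender domain {domain!r} is not a known Google domain")

# The Authentication-Results header records the receiving server's SPF and
# DKIM checks; failures there are a strong phishing signal.
auth_results = msg.get("Authentication-Results", "")
if "spf=fail" in auth_results or "dkim=none" in auth_results:
    print("Warning: message failed sender authentication checks")
```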

Sharp U.K. research has also concluded that “AI is being weaponized for cyber attacks,” and pointed to six specific attack methodologies that account for much of this weaponization. “While AI offers great benefits in various fields,” the report stated, “its misuse in cyber attacks represents a significant and growing threat.” Those threats were:
1. The Use Of AI In Password Cracking—AI is taking over from brute-force password-cracking strategies for good reason: machines are better at learning the patterns used in password creation. “AI algorithms can analyze millions of passwords and detect common trends, allowing hackers to generate highly probable password guesses,” the report stated. It’s far more efficient than bog-standard brute-forcing, allowing hackers to complete this stage of an attack process far quicker and at less cost in terms of time and resources. “AI-driven password-cracking tools are also capable of bypassing two-factor authentication,” the report claimed, “by learning from failed attempts and improving their chances of success over time.”

2. Cyberattack automation—anything that can be automated will be automated when it comes to determined hackers and cybercriminals looking for ways into your network and data, from vulnerability scanning to attack execution at scale. By deploying AI-powered bots to scan thousands of websites or networks simultaneously, the Sharp U.K. report said, weaknesses can be found and exploited. And that exploitation process can also be automated with the help of AI. “AI-powered ransomware can autonomously encrypt files, determine the best way to demand ransom, and even adjust the ransom amount based on the perceived wealth of the target,” the researchers said.

3. Deepfakes—as already mentioned, these are being used in attacks targeting Gmail users. “In one high-profile case,” the report said, “a deepfake audio of a CEO's voice was used to trick an employee into transferring $243,000 to a fraudster’s account. As deepfake technology continues to evolve, it becomes increasingly difficult for people and organizations to distinguish between real and fake, making this a powerful tool for cyber attackers.”

4. Data mining—because AI can enable an attacker to not only collect but also analyze data at scale and at speeds that would have been considered impossible just a couple of years ago, it’s hardly surprising that this is a resource that’s being used and used hard. “By using machine learning algorithms, cybercriminals can sift through public and private databases to uncover sensitive information about their targets,” the report warned.

5. Phishing attacks—the methodology most applicable to the Gmail attack threat: the use of AI in constructing and delivering authentic and believable social engineering attacks. “AI tools can analyze social media profiles, past interactions, and email histories,” the report warned, “to craft messages that seem legitimate.”

6. The evolution of malware, at scale—AI-powered malware is a thing in its own right, often coming with the ability to adapt behavior in an attempt to evade detection. “AI-enhanced malware can analyze network traffic to identify patterns in cyber security defenses,” the report said, “and alter its strategy to avoid being caught.” Then there’s the small matter of code-changing polymorphism to make it harder for security researchers to recognize and, as we’ll explore in a moment, the use of large language models to create these subtle malware variations at speed and scale.
“The findings of Sharp’s recent study highlight the need for organizations to take a different approach to cybersecurity awareness training,” Lucy Finlay, director of secure behavior and analytics at ThinkCyber Security, said. “This shift is crucial to protecting people from emerging threats like deepfake phishing designed to very effectively manipulate employees.” Finlay also cautioned against the finding that one in three workers claim to feel “confident in spotting cyber threats,” noting that it is a self-reported metric. I don’t doubt the numbers, to be honest; if anything, my experience suggests the figure could be even higher, as people tend to overestimate their own capabilities. “In reality,” Finlay concluded, “it is likely they would struggle to recognize a sophisticated deepfake scam if confronted with one.”
Unit 42 Researchers Develop New Adversarial Machine Learning Algorithm That Could Help Gmail And Other Users Defend Against AI-Powered Malware
Newly published research from the Unit 42 group at Palo Alto Networks has detailed how an adversarial machine learning algorithm, one that employs large language models to rewrite malicious JavaScript code at scale, can cut real-world detection of these AI-powered threats by as much as 10%.

One of the big problems facing both users and those who work to defend them against cyber threats is that while “LLMs struggle to create malware from scratch,” Unit 42 researchers Lucas Hu, Shaown Sarker, Billy Melicher, Alex Starov, Wei Wang, Nabeel Mohamed and Tony Li said, “criminals can easily use them to rewrite or obfuscate existing malware, making it harder to detect.” It’s relatively easy for defenders to detect existing off-the-shelf obfuscation tools because their fingerprints are well known and their actions already cataloged. LLMs have changed the obfuscation game, swinging the odds in favor of attackers who, using AI prompts, can “perform transformations that are much more natural-looking,” the report stated, “which makes detecting this malware more challenging.”

The ultimate aim, with multiple layers of such transformations applied, is to fool malware classifiers into thinking malicious code is, in fact, totally benign.

Unit 42 managed to create an algorithm using LLMs themselves to rewrite malicious JavaScript code, continually applying a number of rewriting steps to fool static analysis models. “At each step,” the researchers said, “we also used a behavior analysis tool to ensure the program’s behavior remained unchanged.” Why is this important? Because, given the availability of generative AI tools for attackers, as we’ve seen in various attacks against Gmail users for example, the scale of malicious code variants and the difficulty in detecting them will continue to grow.

The Unit 42 work shows how defenders “can use the same tactics to rewrite malicious code to help generate training data that can improve the robustness of ML models.” Indeed, Unit 42 said that by using the rewriting technique mentioned, it was able to develop a new deep learning-based malicious JavaScript detector, which is currently “running in advanced URL filtering detecting tens of thousands of JavaScript-based attacks each week.”
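
Unit 42 has not published its implementation, but the defensive idea can be sketched at a high level: rewrite known-malicious samples with an LLM, keep only variants whose runtime behavior is unchanged, and feed them back in as labeled training data. In the Python sketch below, llm_rewrite and behavior_unchanged are hypothetical placeholders standing in for an LLM call and a sandboxed behavior-analysis tool; they are not real APIs.

```python
# A high-level, hypothetical sketch of the defensive technique Unit 42
# describes: rewrite known-malicious samples with an LLM, keep only the
# variants whose runtime behavior is unchanged, and fold those variants
# back into the detector's training data. The helpers are placeholders.

def llm_rewrite(source: str) -> str:
    """Hypothetical: ask an LLM for a natural-looking, behavior-preserving
    rewrite (variable renaming, code restructuring and so on)."""
    raise NotImplementedError

def behavior_unchanged(original: str, variant: str) -> bool:
    """Hypothetical: run both samples through a sandboxed behavior-analysis
    tool and confirm the observed actions match."""
    raise NotImplementedError

def augment_training_set(known_malicious: list[str], rounds: int = 5) -> list[str]:
    """Generate behavior-preserving variants of known-bad samples so a
    static detector can learn to recognize LLM-style transformations."""
    extra_samples = []
    for sample in known_malicious:
        variant = sample
        for _ in range(rounds):
            candidate = llm_rewrite(variant)
            # Discard rewrites that change what the code actually does;
            # only faithful variants are useful as labeled training data.
            if behavior_unchanged(sample, candidate):
                variant = candidate
                extra_samples.append(variant)
    return extra_samples
```

Each kept variant is labeled malicious and used to retrain the classifier, which is how, per the report, natural-looking LLM transformations stop flipping the detector's verdict.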
What Gmail And McAfee Recommend You Do To Mitigate Ongoing AI Attacks
When it comes to mitigation advice, some can be more relevant than others. Take the recent advice from the Federal Bureau of Investigation, of all people, which suggested verifying phishing emails by checking for spelling errors and grammatical inconsistencies. This, as I have pointed out, is very outdated advice and, as such, pretty pointless in the AI-driven threatscape of today.

McAfee’s advice, to “protect yourself by double-checking any unexpected requests through a trusted, alternate method and relying on security tools designed to detect deepfake manipulation,” is much better.

Best of all, however, is the advice from Google itself when it comes to mitigating attacks against Gmail users, which can be broken down into these main points:
• If you receive a warning, avoid clicking on links, downloading attachments or entering personal information. “Google uses advanced security to warn you about dangerous messages, unsafe content or deceptive websites,” Google said. “Even if you don't receive a warning, don't click on links, download files or enter personal info in emails, messages, web pages or pop-ups from untrustworthy or unknown providers.”

• Don't respond to requests for your private info by email, text message or phone call and always protect your personal and financial info.

• If you think that a security email that looks as though it's from Google might be fake, go directly to myaccount.google.com/notifications. “On that page,” Google said, “you can check your Google Account's recent security activity.”

• Beware of urgent-sounding messages that appear to come from people you trust, such as a friend, family member or person from work.

• If you click on a link and are asked to enter the password for your Gmail, Google account or another service: Don’t. “Instead, go directly to the website that you want to use,” Google said, and that includes your Google/Gmail account login.
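
That last point is worth making concrete. A link’s visible text and its actual destination can differ, and lookalike domains are designed to survive a casual glance. This minimal Python sketch, with made-up example URLs, shows the kind of host check that separates a genuine Google login page from an impostor:

```python
# A minimal sketch of the link check Google's advice implies: before
# entering a password, confirm the link's actual host belongs to the
# service you think you are visiting. The example URLs are made up.
from urllib.parse import urlparse

def is_trusted_google_host(url: str) -> bool:
    host = (urlparse(url).hostname or "").lower()
    # Accept the apex domain or a dot-delimited subdomain only; a plain
    # substring test would wrongly accept "google.com.evil.example".
    return host == "google.com" or host.endswith(".google.com")

for link in (
    "https://accounts.google.com/signin",      # genuine
    "https://accounts.g00gle.com/signin",      # digit-for-letter lookalike
    "https://google.com.evil.example/signin",  # trusted name as a subdomain
):
    verdict = "ok" if is_trusted_google_host(link) else "DO NOT ENTER A PASSWORD"
    print(link, "->", verdict)
```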
>> Read the original article here.
October 23, 2024

CAM 2024: Stay Safe Online When Using Artificial Intelligence

Source: Qwerty Concepts
Author: Stanley Kaytovich
October marks National Cybersecurity Awareness Month (CAM), an initiative dedicated to promoting the importance of cybersecurity across the nation. As technology evolves, artificial intelligence (AI) is becoming an increasingly powerful tool, offering everything from personalized marketing to advanced cybersecurity measures. However, AI’s potential to enhance our digital lives also brings new cybersecurity risks. Educating ourselves on these risks is essential to foster a culture of vigilance and proactive cybersecurity practices.
The Role of AI in Cybersecurity and Cybercrime
While AI tools like ChatGPT or Claude are powerful aids for improving many aspects of our lives, they also raise concerns about artificial intelligence safety in cybersecurity. Hackers and cybercriminals are quickly finding ways to exploit AI to create more sophisticated scams and attacks. As AI-powered systems become more prevalent, understanding how hackers might use these tools against us is crucial.

For instance, AI can create convincing fake voices or images, known as “deepfakes.” Cybercriminals use these to trick people into revealing sensitive information or sending money. Imagine receiving a call that sounds exactly like your boss asking you for urgent access to a company account. If you're unprepared, you could easily fall victim to such schemes.
Tip: Protect yourself by strengthening your defenses. Use strong passwords, enable multi-factor authentication (MFA), keep software updated, and report phishing attempts immediately. A robust digital hygiene routine is your first line of defense against cyber threats.
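
Of those habits, MFA does the heaviest lifting. For readers curious about what app-based MFA actually does, here is a minimal sketch using the third-party pyotp library (assuming it is installed via pip install pyotp) to generate and verify the same time-based one-time codes an authenticator app produces:

```python
# A minimal sketch of TOTP, the time-based one-time codes behind most
# authenticator apps, using the third-party pyotp library
# (pip install pyotp).
import pyotp

# The shared secret is normally handed to your authenticator app as a QR
# code at enrollment; here one is generated fresh for the demo.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

code = totp.now()  # the six-digit code the app would display right now
print("Current code:", code)

# The server verifies the submitted code against the same shared secret,
# so a stolen password alone is useless without the current code.
print("Valid?", totp.verify(code))
```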
Protecting Your Privacy: Mind Your AI Inputs
Another significant risk arises when interacting with AI systems, such as chatbots, virtual assistants, or other AI-powered tools. These systems learn from the data they receive, making it critical to be cautious about what you share. It’s tempting to ask an AI assistant for advice or details about a personal situation, but you must always consider the information you're giving away.

For example, an AI system designed to improve customer service may retain information you’ve shared to learn and improve. However, if you share sensitive details, such as financial information, you might inadvertently expose yourself to risks, especially if the AI is compromised or if data privacy policies are not stringent enough.
Tip: If you wouldn’t post something on social media, don’t share it with artificial intelligence. Treat AI as a public space rather than a confidential conversation.
Be Privacy-Aware in a Connected World
Publicly available data is another valuable resource for AI systems, but it also raises concerns about artificial intelligence safety. While this accessibility can be a boon for innovative technologies, it also means that AI can potentially scrape and use any information you post online. For instance, an AI tool designed to enhance online profiles or recommend job candidates might rely on publicly accessible information.

If you habitually share personal updates or photos online, be aware that AI tools could access this information. For example, a social media post that contains sensitive information about your location, family, or habits could be picked up by AI algorithms, leaving you vulnerable to targeted scams or phishing attacks.
Tip: Before posting anything online, ask yourself whether you'd be comfortable with AI having access to that information. If the answer is no, reconsider posting it.
The Bigger Picture: Cybersecurity as a Shared Responsibility
It’s easy to see AI as a futuristic concept, but its implications are very present today. From AI-powered phishing emails to deepfake scams, cybercriminals are already taking advantage of AI’s capabilities. This reality means that cybersecurity awareness isn’t just for IT professionals—it’s essential for everyone who uses digital technology.

For example, think about common scams today. Hackers use AI-generated emails to mimic legitimate businesses or people you trust, luring victims into providing sensitive information or making unauthorized payments. These emails often use sophisticated AI algorithms to craft messages that bypass traditional spam filters and reach your inbox looking entirely legitimate.

Education is key to mitigating these risks. Businesses must prioritize training employees on identifying phishing attacks and recognizing deepfake scams. Individuals should stay informed about emerging threats and the evolving tactics used by cybercriminals.
Creating a Culture of Cyber Vigilance
The goal of National Cybersecurity Awareness Month is to create a culture where everyone—from top-level executives to everyday users—is vigilant about cybersecurity. When it comes to AI, this means understanding both its potential benefits and the risks it can introduce.

AI is only as effective as the data and safeguards behind it. So, adopting best practices, educating employees, and staying informed about new threats are all essential steps toward creating a proactive approach to cybersecurity. Our managed IT services plans include managed security as well as a great cybersecurity awareness training platform to train employees on AI safety while keeping it fun.
Conclusion
As AI becomes an integral part of our digital lives, the potential cybersecurity challenges must be addressed head-on. National Cybersecurity Awareness Month offers a timely reminder to take these threats seriously. Strengthen your defenses with strong passwords, MFA, and regular software updates. Be cautious about sharing sensitive information with AI systems and be mindful of what you post online.

In an era of increasing digital risks, staying vigilant and proactive about cybersecurity is not just an option—it’s a necessity. By understanding how AI can be used against us and taking simple but effective precautions, we can navigate the digital world more safely and confidently.
>> Read the original article here.

Want to talk?

Feel Free to Reach Me

Copyright © 2025 Jorge Penuela
Privacy Policy