Smarter Scams: How AI is Supercharging Phishing Attacks

AI has changed how we do business, but it’s also being leveraged by cyber criminals to transform the threat landscape. One type of attack that has benefitted significantly from these tools? Phishing scams.

Before the era of AI, phishing attacks were riddled with poor spelling and grammar. If you knew what to look for, and maybe even if you didn’t, these mistakes were obvious red flags signifying something was off. After all, it was highly unlikely the supposed sender (like your bank, or a large organisation like Microsoft or Apple) wouldn’t run a spell check on customer communications.

Now, with the help of generative AI tools like ChatGPT (a chatbot) or ElevenLabs (a text-to-speech and AI voice generator), these scams are becoming more sophisticated and much harder to spot, and the likelihood of falling victim is increasing. To help you stay secure, we’re putting a spotlight on AI phishing (including email, SMS, and voice forms), real-world examples, and what you can do to protect your business. Let’s dive in.

How Generative AI is Changing the Scam Game

If you need a refresher on phishing scams, these attacks can take the form of an email, SMS (smishing), or an audio call (vishing). They imitate real businesses or people and prey on human error to succeed. They use a sense of urgency and appeal to emotions like fear or excitement to prompt immediate action, getting recipients to click on malicious links or attachments or provide sensitive information like login or financial details.

So, how is generative AI changing these scams?

Generative AI produces content based on prompts. Cyber criminals are using these tools to:

  • Quickly write trustworthy-sounding emails with a convincing tone and correct spelling and grammar.
  • Quickly collect data about intended recipients from social media or other online sources, allowing them to personalise emails and create spear phishing attacks.
  • Generate deepfake audio and video, imitating trusted and authoritative team members like the C-suite.
  • Generate sophisticated scripts for vishing scams. 

The result? These scams sound like a real person – whether they’re communicated through email, an SMS, an audio call, or a video call. An example of AI phishing could be as simple as an email that appears to be from your supplier, asking for a payment to be redirected.

Why These Scams Work

These scams are easier to fall for because:

  • The language used is realistic, and email topics are more relevant (unlike the often random phishing emails of the past).
  • Communications come from familiar names and faces, so people are immediately less critical of directions and tone – even if it might seem out of the norm.
  • AI allows cyber criminals to work quickly, achieving sophisticated outcomes with less effort. As a result, the volume of these scams is increasing.

Real World Examples of AI Phishing

So far this year, 30,149 phishing attacks have been reported to Scamwatch, costing Australians more than $14 million. That’s just in the past six months, and doesn’t account for attacks that haven’t been reported.

A notable example of AI phishing was seen in 2024 when Arup, a British engineering company, fell victim to deepfake fraud that leveraged generative AI technology. The scammers sent a phishing email to a member of Arup’s finance team in Hong Kong, imitating the business’s UK-based CFO and asking for funds to be transferred outside of normal practices. While this initial email was ignored because it seemed like a scam, the cyber criminals went a step further and contacted the staff member via video call. During this call they used AI-generated deepfakes of the CFO and other senior staff – and the impersonations were so realistic the employee was tricked into sending funds totalling $37 million AUD. This shows that even employees who are aware of online risks can be fooled by AI scams.

What Businesses Can Do

So, how can you protect your business in the face of this threat?

  1. Prioritise cyber security, implementing technical solutions including email filtering, multi-factor authentication, and domain protection (SPF, DKIM, and DMARC).
  2. Run regular staff training to increase cyber awareness, including updated phishing simulations.
  3. Create policies, including setting up payment approval workflows with verification steps.
  4. Work with an IT partner that focuses on cyber security, including AI-aware protection and education for your team. If you’re working with a Managed Services Provider, you should also check in regularly to ensure you understand current risks and how you’re protected.
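
To give a rough idea of what domain protection (step 1) looks like in practice, SPF and DMARC are published as DNS TXT records on your domain. The sketch below uses the placeholder domain example.com and illustrative values only – your mail provider or IT partner will supply the correct `include:` entry, policy, and reporting address for your setup:

```
; SPF: lists which servers may send mail for your domain.
; "-all" tells receivers to reject mail from anywhere else.
example.com.        IN TXT "v=spf1 include:_spf.example-provider.com -all"

; DMARC: tells receivers what to do when SPF/DKIM checks fail,
; and where to send aggregate reports.
; p=quarantine sends failing mail to spam; p=reject blocks it outright.
_dmarc.example.com. IN TXT "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"
```

A common rollout is to start with `p=none` (monitor only), review the reports, then tighten to `p=quarantine` and eventually `p=reject` once legitimate mail is passing checks.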

How Perth Support Can Help

AI phishing attacks will continue to become more convincing as technology advances, and protecting your business against these threats is critical. If you’re worried about whether your team could spot an AI-powered phishing attack, or whether your cyber security measures can safeguard your business against this threat, let’s have a chat. We’re here to help you stay ahead of cyber threats, providing training, tools, and easy-to-understand advice. Just get in touch with us today.