AI email writers: The dual role of AI in cybersecurity
Graham Smith · May 24, 2023 · 4 min read

In the fast-paced tech world, the term ‘AI’ creates quite a buzz.

It has effortlessly infiltrated many sectors, helping with data analysis and automation tasks. One of the fields increasingly utilising AI is cybersecurity.

AI is continually evolving, with innovations cropping up that automatically detect security threats, thus lessening the reliance on human intervention. But now we are seeing cybercriminals using AI email writers.

Many in the cybersecurity industry believe AI might be our secret weapon against phishing, with some calling it ‘the great equaliser’. But perhaps this highlights how inadequate traditional methods of combating cybercrime, such as monitoring domain reputation scores, have become.

One proposed answer to the growing threat is cloud-based machine learning engines, which could help individuals identify phishing emails in real time.
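
To make this concrete, here is a minimal sketch of the kind of text classification such an engine might perform, written in Python with scikit-learn. Everything in it, from the toy training emails to the 0.5 threshold, is an illustrative assumption rather than a description of any real vendor’s model.

```python
# A minimal sketch of a real-time phishing classifier. The training
# examples, features and threshold are hypothetical placeholders,
# not a production model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: 1 = phishing, 0 = legitimate (illustrative only)
emails = [
    "Your account is locked, verify your password at this link immediately",
    "Invoice attached, urgent payment required to avoid suspension",
    "Minutes from Tuesday's project meeting are attached",
    "Lunch menu for the office canteen this week",
]
labels = [1, 1, 0, 0]

# TF-IDF text features feeding a simple linear classifier
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(emails, labels)

# Score each message as it arrives; flag anything above a threshold
incoming = "Urgent: verify your password now or your account will be locked"
risk = model.predict_proba([incoming])[0][1]
if risk > 0.5:
    print(f"Possible phishing (score {risk:.2f}) - quarantine for review")
```

A real engine would train on millions of labelled messages and add signals beyond the text itself (sender reputation, link targets, attachment types), but the core idea of scoring each inbound email against learned patterns is the same.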

However, this tool is a double-edged sword. Bad actors are also taking advantage of AI, using it to identify weak points in a company’s anti-malware detection systems and to study patterns in password creation from leaked data, sharpening their password-cracking abilities.
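
The pattern analysis described above is the same technique defenders use to audit their own breach exposure. The sketch below tallies the structural ‘masks’ of passwords in a leaked wordlist; the leaked.txt file name and the letter/digit/symbol mask scheme are assumptions for the example.

```python
# A minimal sketch of password-pattern analysis, as a defender might
# run it against their own breach-exposure data. The leaked.txt file
# and the mask scheme are hypothetical examples.
from collections import Counter

def mask(password: str) -> str:
    """Reduce a password to a structural mask: Letter, Digit or Symbol."""
    return "".join(
        "L" if c.isalpha() else "D" if c.isdigit() else "S" for c in password
    )

with open("leaked.txt", encoding="utf-8") as f:
    masks = Counter(mask(line.strip()) for line in f if line.strip())

# The most common structures (e.g. 'LLLLLLDD' = six letters + two digits)
# show which habits a password-policy audit should target first.
for pattern, count in masks.most_common(5):
    print(f"{pattern}: {count}")
```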

The arrival of ChatGPT – AI for all

One notable development in AI-assisted cybercrime is the launch of ChatGPT, a potent language model capable of generating incredibly human-like text. While it’s been used for beneficial purposes like quote generation and research summarisation, experts worry about its potential misuse in crafting persuasive phishing emails. 

There have been cases where ChatGPT was used to create malware, allowing cybercriminals to launch attacks more quickly.

Researchers from cybersecurity firm Darktrace have reported a notable increase in creative email-based attacks since the start of the year. This rise may be linked to the ease of creating persuasive scams and phishing emails, made possible by AI email writers like ChatGPT.

The company noticed an astonishing 135% increase in malicious cyber campaigns via email across their extensive client base. These attacks showed significant variations in language use, indicating a possible connection with the broader availability of generative AI tools.

These AI email writers are becoming more sophisticated, using improved punctuation, longer sentences, and covering a more comprehensive range of topics to lure people into action. 

Current data suggest changes in the techniques used in email-based threats. One is a rise in personalised attacks (spear phishing) aimed at specific targets, which generative AI makes far easier.

A survey conducted by Darktrace found that about 70% of over 6,700 employees noticed an increase in scam emails and text messages over the previous six months.

While they couldn’t confirm a direct correlation, the timeline of this increase coincides with the broader availability of generative AI tools.

The humans fight back…

Despite AI's promise, relying on it alone to fight cybercrime may be an endless struggle. Defensive AI tools can only respond to threats they’ve already encountered, restricting their ability to predict new attack strategies. 

[Chart: Scam phishing incidents, 2019–2022]

The solution? Humans. 

Even the most advanced AI can’t predict how a person might react to a phishing email. Investing in employee education so staff can recognise the hallmarks of these scams creates a line of defence independent of changing tech trends.

And it seems to be working. Figures from the Information Commissioner’s Office (ICO) show that the number of reported successful phishing attacks fell between 2019 and 2022. By contrast, figures released by the Anti-Phishing Working Group (APWG) show that the overall number of attacks increased over the same period.

Commentators admit that AI is a safety net rather than a failsafe solution. Graham Smith, Head of Marketing at OryxAlign, stresses the crucial role of staff training in forming a solid cybersecurity strategy. “The aim is to equip people with the skills to detect and avoid phishing attacks while treating AI tools as helpful backups.”

… but the criminals are smart

There was a worrying finding in the Darktrace research: the aspects employees viewed as warning signs are the very areas where AI is particularly good at making improvements:

  • 68% of participants viewed requests to click a link or open an attachment as suspicious (now criminals request a call-back)
  • 61% were cautious about emails from unknown senders or containing unexpected content (criminals use social engineering)
  • 61% saw poor spelling and grammar as potential red flags (AI is great at grammer, sorry grammar)

For example, generative AI could craft context around unusual requests, making them appear less suspicious, or convincingly impersonate an individual belonging to a particular organisation or industry.

“The most dangerous phishing emails are the ones that are custom-made, well-crafted, and tailored to the recipient,” Graham points out. These require considerable effort from attackers, who must research their victims and understand their activities.

Graham suggests that with generative AI models, attackers could merely scrape a victim’s social media profile or a company’s news feed, then use that information to generate a plausible email prompting the recipient to click a link or make a call, increasing both the efficiency and the scale of attacks.

To keep up to date with cybersecurity developments and other AI and IT issues, sign up for our Bulletin. There are just 8 issues a year, and you can unsubscribe anytime.
