AI has effortlessly infiltrated many sectors, helping with tasks such as data analysis and automation. One of the fields increasingly adopting it is cybersecurity.
AI is continually evolving, with new tools emerging that detect security threats automatically and lessen the reliance on human intervention. But now we are seeing cybercriminals turn the technology against us, using AI email writers.
Many in the cybersecurity industry believe AI might be our secret weapon against phishing, with some calling it ‘the great equaliser’ in the fight against such attacks. But perhaps this optimism simply highlights the inadequacy of traditional defences, such as monitoring domain reputation scores.
One proposed answer to the growing threat is cloud-based machine learning engines, which could help individuals identify phishing emails in real time.
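To make the idea concrete, here is a minimal sketch of a text-based phishing classifier in Python using scikit-learn. The tiny training corpus and the feature choices are purely hypothetical, and this is not any particular vendor’s engine; a production system would also weigh message headers, embedded URLs and sender history.

```python
# Minimal sketch of a machine-learning phishing classifier.
# Hypothetical toy corpus -- real engines train on millions of messages.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Your account has been suspended. Verify your password now: http://bit.ly/x",
    "Hi team, the Q3 report is attached for review before Friday's meeting.",
    "URGENT: confirm your bank details to avoid account closure.",
    "Lunch on Thursday? The new place on the high street looks good.",
]
labels = [1, 0, 1, 0]  # 1 = phishing, 0 = legitimate

# Word and word-pair frequencies feeding a simple linear classifier
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(emails, labels)

incoming = ["Please verify your password immediately via this link."]
print(model.predict_proba(incoming))  # [P(legitimate), P(phishing)]
```

Even a toy model like this picks up the vocabulary of urgency and credential harvesting, which is why such engines can flag suspect messages as they arrive.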
However, the technology is a double-edged sword. Bad actors are also taking advantage of AI, using it to identify weak points in a company’s anti-malware detection systems and to study patterns in password creation from leaked data, enhancing their password-cracking abilities.
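As a purely defensive illustration of that second point, the sketch below (in Python, with made-up example passwords) tallies the structural patterns in a leaked list. Common shapes such as ‘capital letter, lowercase word, year, symbol’ dominate real breach dumps, and attackers can feed exactly these statistics into their cracking tools.

```python
# Illustrative pattern analysis of a (hypothetical) leaked password list.
from collections import Counter

def shape(password: str) -> str:
    """Map each character to a class: U(pper), l(ower), d(igit), s(ymbol)."""
    classes = []
    for ch in password:
        if ch.isupper():
            classes.append("U")
        elif ch.islower():
            classes.append("l")
        elif ch.isdigit():
            classes.append("d")
        else:
            classes.append("s")
    return "".join(classes)

leaked = ["Summer2024!", "Winter2023!", "password1", "Qwerty123", "letmein1"]
print(Counter(shape(p) for p in leaked).most_common(3))
# [('Ullllldddds', 2), ('lllllllld', 1), ('Ulllllddd', 1)]
```

The same analysis is useful to defenders: running it against your own directory highlights predictable password habits before an attacker does.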
One notable development in AI-assisted cybercrime is the launch of ChatGPT, a potent language model capable of generating incredibly human-like text. While it’s been used for beneficial purposes like quote generation and research summarisation, experts worry about its potential misuse in crafting persuasive phishing emails.
There have been cases where ChatGPT was used to create malware, allowing cybercriminals to launch attacks more quickly.
Researchers from cybersecurity firm Darktrace have reported a notable increase in creative email-based attacks since the start of the year. This rise may be linked to the ease of creating persuasive scams and phishing emails, made possible by AI email writers like ChatGPT.
The company noticed an astonishing 135% increase in malicious cyber campaigns via email across their extensive client base. These attacks showed significant variations in language use, indicating a possible connection with the broader availability of generative AI tools.
These AI email writers are becoming more sophisticated, using better punctuation, longer sentences and a wider range of topics to lure people into action.
The current data also suggest a change in the techniques used in email-based threats: a rise in personalised attacks (spear phishing) aimed at specific targets, something generative AI makes far easier.
A survey conducted by Darktrace found that about 70% of over 6,700 employees noticed an increase in scam emails and text messages over the previous six months.
While the researchers couldn’t confirm a direct correlation, the timing of this increase coincides with the broader availability of generative AI tools.
Despite AI's promise, relying on it alone to fight cybercrime may be an endless struggle. Defensive AI tools can only respond to threats they’ve already encountered, restricting their ability to predict new attack strategies.
The solution? Humans.
Even the most advanced AI can’t predict how a person might react to a phishing email. Investing in employee education to recognise the hallmarks of these scams can create a defence line independent of changing tech trends.
And it seems to be working. Figures from the Information Commissioner’s Office (ICO) show that the number of reported successful phishing attacks fell between 2019 and 2022, even though figures released by the Anti-Phishing Working Group (APWG) show the overall volume of attacks rising over the same period. In other words, more attacks are being launched, yet fewer are getting through.
Commentators acknowledge that AI is a safety net rather than a failsafe solution. Graham Smith, Head of Marketing at OryxAlign, stresses the crucial role of staff training in forming a solid cybersecurity strategy. “The aim is to equip people with the skills to detect and avoid phishing attacks while treating AI tools as helpful backups.”
A worrying finding from the Darktrace research is that the aspects employees viewed as warning signs are the very areas where generative AI is particularly capable of improving. For example, generative AI could craft convincing context around unusual requests, making them appear less suspicious, or plausibly impersonate an individual from a particular organisation or industry.
“The most dangerous phishing emails are the ones that are custom-made, well-crafted, and tailored to the recipient,” Graham points out. These require considerable effort from attackers, who must research their victims and understand their activities.
Graham suggests that with generative AI models, attackers need only scrape a victim’s social media profile or a company’s news feed, then use that information to generate a plausible email prompting the recipient to click a link or make a call. This sharply increases the efficiency and scale of attacks.
To keep up to date with cybersecurity developments and other AI and IT issues, sign up for our Bulletin. There are just 8 issues a year, and you can unsubscribe anytime.