AI in cybersecurity: The double-edged sword reshaping digital defence

Kyle Lutterman and Jamie Schibuk, cyber risk experts at Arch Insurance, explore the dual nature of AI in cybersecurity, highlighting its role in both sophisticated attacks and advanced defence strategies.

AI technologies present new challenges for organisations and cybersecurity experts. But, as in any technological arms race, AI is a double-edged sword. While cybercriminals are leveraging AI to create more sophisticated and damaging attacks, it also offers unprecedented opportunities for enhancing cybersecurity defences.

The dark side of AI: The threat landscape

AI catapulted into public consciousness with the launch of ChatGPT in November 2022. This advanced chatbot marked a turning point in how we perceive and use AI technology. However, in cybersecurity, AI's influence has been growing for years – and not always for the better.

Cybercriminals and state-sponsored actors are increasingly harnessing the power of AI to launch more sophisticated, efficient and devastating attacks. Through large language models (LLMs) and natural language processing (NLP), among other AI capabilities, they have been able to enhance their attacks in several key areas, including:

  1. Phishing and social engineering: AI-generated content makes phishing emails and social engineering attempts more convincing and harder to detect.
  2. Cloud infiltration: AI tools scan for vulnerabilities in cloud environments and automate attacks.
  3. Ransomware: AI is enhancing ransomware operations, from target selection to data analysis and evasion techniques.

This shift from traditional cyber attacks to AI-powered offensives has raised significant concerns among businesses and cybersecurity experts while simultaneously accelerating the development of AI-based defence systems.

The bright side of AI: The advantage in cyber defence

Security vendors are at the forefront of the cybersecurity evolution, embedding AI capabilities into their products to detect and neutralise threats across a multitude of attack surfaces.

One of the most common uses of AI in cybersecurity defence involves establishing baseline normal activity and alerting security teams to anomalous behaviour. This approach allows security teams to spot potential threats that might otherwise go unnoticed, as AI detects subtle deviations from typical patterns.
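
As a simplified illustration of that baselining idea (the feature choice, threshold and function names below are hypothetical and not drawn from any specific vendor's product), a detector might model a user's typical daily login volume and flag days that deviate sharply from it:

```python
from statistics import mean, stdev

def build_baseline(daily_logins):
    """Summarise a user's historical daily login counts."""
    return mean(daily_logins), stdev(daily_logins)

def is_anomalous(today_count, baseline, threshold=3.0):
    """Flag activity more than `threshold` standard deviations from the baseline."""
    avg, sd = baseline
    if sd == 0:
        return today_count != avg
    return abs(today_count - avg) / sd > threshold

# Example: a user who normally logs in a handful of times per day
history = [4, 6, 5, 7, 5, 6, 4, 5, 6, 5]
print(is_anomalous(55, build_baseline(history)))  # True: a sudden spike worth alerting on
```

Real systems track many more signals (devices, data volumes, access times) and learn the baselines continuously, but the underlying principle is the same: deviation from an individual's own normal behaviour is the trigger.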

Three key applications for this technology are email security, security awareness training and multi-factor authentication (MFA).

Email security

Intelligent filtering for evolving threats

Email remains a primary entry point for cyber attacks, and research has shown that the use of LLMs can reduce the cost of email phishing attacks for cybercriminals by more than 95 percent while achieving equal or greater success rates.

AI-powered email security solutions offer a robust defence against these sophisticated threats. These systems employ behavioural analysis, NLP and machine learning to detect suspicious patterns. By integrating real-time threat intelligence, these systems can adapt their algorithms on the fly, ensuring protection against the latest threats.
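
To make the idea concrete, the sketch below scores an email against a few of the signals such systems weigh, such as sender reputation, urgency language and suspicious links. The features, weights and domain names are illustrative assumptions, not a description of any particular product's model:

```python
import re

# Illustrative signals; production systems learn weights from large labelled corpora.
URGENCY_TERMS = ("urgent", "immediately", "act now", "verify your account")

def score_email(sender_domain, body, link_domains, trusted_domains):
    """Return a rough phishing risk score between 0 and 1."""
    score = 0.0
    if sender_domain not in trusted_domains:
        score += 0.3                       # unfamiliar or look-alike sender
    if any(term in body.lower() for term in URGENCY_TERMS):
        score += 0.3                       # pressure tactics in the message text
    if any(d not in trusted_domains for d in link_domains):
        score += 0.2                       # links pointing to untrusted domains
    if re.search(r"\b(password|wire transfer|gift card)\b", body.lower()):
        score += 0.2                       # requests for credentials or payment
    return min(score, 1.0)

risk = score_email(
    sender_domain="paypa1-support.com",   # hypothetical typosquatted domain
    body="URGENT: verify your account immediately or it will be closed.",
    link_domains=["paypa1-support.com"],
    trusted_domains={"example.com", "paypal.com"},
)
print(risk)  # 0.8 -> high enough to quarantine or flag for review
```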

Take, for example, the $25mn AI deepfake scam in which attackers used WhatsApp messages and an AI-generated deepfake video call to impersonate a company's CEO. Advanced AI-driven email security could detect similar schemes by analysing the context of communications, flagging unusual requests and identifying discrepancies in communication patterns. These systems can also detect subtle linguistic nuances that might indicate AI-generated content, providing an additional layer of defence against sophisticated impersonation attempts.

Security awareness training

Enhancing the human firewall

The human element remains a critical component of cybersecurity. AI-enhanced security awareness training prepares employees to recognise and respond to sophisticated threats that may bypass technical controls.

Modern AI-powered platforms use machine learning algorithms to analyse each employee's behaviour, role and past performance, offering personalised training based on individual risk profiles. These platforms can simulate AI-generated attacks and provide continuous assessment with adaptive learning paths.
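
As a rough sketch of how such a risk profile might be derived (the scoring rules, weights and module names below are assumptions for illustration, not any vendor's actual logic), a platform could combine an employee's simulation history and role into a score that drives their learning path:

```python
def risk_profile(phish_clicks, simulations_sent, handles_finance, failed_quizzes):
    """Combine simple behavioural signals into a 0-1 risk score."""
    click_rate = phish_clicks / max(simulations_sent, 1)
    score = (0.6 * click_rate
             + 0.2 * (1 if handles_finance else 0)
             + 0.2 * min(failed_quizzes / 5, 1))
    return round(score, 2)

def assign_modules(score):
    """Map the risk score to an adaptive learning path."""
    if score > 0.6:
        return ["deepfake awareness", "payment-fraud drills", "weekly simulations"]
    if score > 0.3:
        return ["targeted phishing refresher", "monthly simulations"]
    return ["annual baseline training"]

score = risk_profile(phish_clicks=3, simulations_sent=10,
                     handles_finance=True, failed_quizzes=2)
print(score, assign_modules(score))  # 0.46 -> targeted refresher plus monthly simulations
```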

This type of training helps to combat threats like hyper-personalised phishing attempts. By exposing employees to simulated attacks that mimic the writing style, interests and recent activities of trusted contacts, AI-driven training can help staff recognise even the most convincing phishing attempts. It can also educate employees on emerging threats like deepfake impersonation, teaching them to verify requests through multiple channels before acting on sensitive matters.

Multi-factor authentication

Introducing adaptive access control

AI is taking MFA to the next level with adaptive access control. AI-enhanced MFA continuously analyses various factors such as device information, location data, time of access and user behaviour to make intelligent authentication decisions.

This approach allows for a more nuanced and effective security stance, balancing user convenience with robust protection. If the system detects unusual patterns that might indicate an AI-driven attack, potential cloud infiltration, or ransomware activity, it can automatically enforce stricter authentication measures.
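
A minimal sketch of that decision logic follows; the factor names, weights and thresholds are hypothetical values chosen for illustration. Each login attempt is scored against the user's known context, and authentication is stepped up when the risk crosses a threshold:

```python
def login_risk(known_device, usual_country, usual_hours, typing_matches_profile):
    """Accumulate risk from contextual signals around a login attempt."""
    risk = 0.0
    if not known_device:
        risk += 0.35
    if not usual_country:
        risk += 0.30
    if not usual_hours:
        risk += 0.15
    if not typing_matches_profile:
        risk += 0.20
    return risk

def authentication_decision(risk):
    """Translate the risk score into an adaptive MFA response."""
    if risk < 0.3:
        return "allow with existing session factor"
    if risk < 0.6:
        return "prompt for one additional factor"
    return "block and require hardware key plus security review"

risk = login_risk(known_device=False, usual_country=True,
                  usual_hours=False, typing_matches_profile=True)
print(authentication_decision(risk))  # risk 0.5 -> prompt for one additional factor
```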

These AI-enhanced MFA systems are also effective against intelligent lateral movement, flagging indicators such as unusual file access patterns or attempts to encrypt large volumes of data in cloud environments. By constantly analysing user behaviour and network activity, they can quickly identify and block unusual movements within the network, even when attackers are using AI to mimic normal user behaviour.
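
As an illustrative sketch of one such signal (the window size and rate threshold are assumed values), a monitor might flag a burst of file modifications far above a user's normal rate, a common indicator of ransomware-style mass encryption:

```python
from collections import deque

class FileActivityMonitor:
    """Flag users whose file-modification rate spikes far above their norm."""
    def __init__(self, window_seconds=60, normal_rate=5, spike_factor=10):
        self.window = window_seconds
        self.limit = normal_rate * spike_factor   # e.g. 50 modifications per minute
        self.events = deque()

    def record(self, timestamp):
        """Record a file modification and report whether the current rate looks abnormal."""
        self.events.append(timestamp)
        while self.events and timestamp - self.events[0] > self.window:
            self.events.popleft()
        return len(self.events) > self.limit

monitor = FileActivityMonitor()
alerts = [monitor.record(t) for t in range(0, 120)]  # one modification per second
print(any(alerts))  # True: more than 50 modifications landed within a single minute
```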

A holistic approach to AI-enhanced cybersecurity

Each of these AI-powered security solutions offers significant benefits, including faster threat detection, improved response times and the ability to learn continuously.

However, the real strength of these AI tools lies in their interconnectedness. Insights from email security can inform training content and adjust MFA risk assessments, while data from security awareness training can shape MFA policies, creating a responsive security ecosystem. This integration enhances their collective impact.

To maximise AI's benefits in cybersecurity, organisations should therefore incorporate these solutions within a broader framework such as Arch CyPro's 8 Critical Controls. This holistic approach ensures that AI-enhanced tools complement other essential security measures, building a robust, multi-layered defence.

To stay ahead of evolving cyber threats and learn more about implementing these controls, visit archinsurance.com/cypro.