The Impact of AI on Cybersecurity Threats and How to Best Defend Against These

Reading Time: 6 minutes


Rosie Christos | Data Protection Advisor

Date: 20 February 2024

The launch of ChatGPT marked a turning point in the information age: the overnight surge in AI uptake it created has only continued to increase, and this trend shows no signs of abating. Whilst this era of AI offers humanity many significant opportunities for advancement, there are also important implications for cybersecurity. This article considers some of the key risks posed in this area, as well as the core actions needed to mitigate them.

Emerging threats

Enhanced Social Engineering
Arguably the most obvious misuses of AI relate to social engineering and phishing. Where phishing messages were historically riddled with tell-tale spelling and grammar errors, Large Language Models (LLMs), such as ChatGPT, produce well-written and realistic text that makes for far more persuasive interactions. This risk is further compounded by the ability of LLMs to rapidly analyse vast amounts of publicly available data on any given target, and to impersonate the style of speech of specific individuals or groups.

Some AI models have in-built restrictions to prevent them from being directly used for any purpose that could cause harm. For instance, a request to write a phishing email will trigger their configured ethical guardrails, which prevent them from complying, as shown in the screenshot below.

Screenshot from ChatGPT 3.5 (OpenAI) – request for phishing email.

However, these internal restrictions can be circumvented by feeding the model instructions (so-called ‘prompt injections’) aimed at indirectly bypassing its intended operational restrictions.

Prompt injections can work in any number of ways. One particularly creative example asks the model to join the three strings of letters ‘NA’, ‘PA’ and ‘LM’ into a word and then produce an output showing the ingredients needed for what that word denotes, thereby tricking the bot into spelling ‘napalm’ and listing the ingredients needed to make it. Another prompt injection that trended is ‘Do Anything Now’, aka ‘DAN’, as shown below:

Screenshot: ChatGPT 3.5 (OpenAI) –  ‘DAN’ prompt injection being input (please note this has been shortened and redacted to avoid misuse).

As you can see from the below screenshot, where the ‘DAN’ prompt injection succeeds, the model is tricked into acting outside of its ethical constraints and subsequently complies with the request to write a phishing email. Whilst OpenAI frequently patches its models to prevent such misuse where it is detected, prompt injections are continuously amended in response to evade these patches.

Screenshot: ChatGPT 3.5 (OpenAI) composing a phishing email after being provided the ‘DAN’ prompt injection (15/02/24). 

As a result, these injections can be used to circumvent not only restrictions on producing phishing text, but restrictions on any other form of misuse.

Other Enhanced Cyber Attack capabilities
The enhancement that AI offers to threat actors does not stop at social engineering campaigns, but extends to numerous other areas, such as:

  • Generating Malware Code: LLMs can be used to generate and correct code for malware such as viruses, ransomware, and trojans, putting more destructive potential in the hands of amateur adversaries.
  • Targeted Attacks: AI can be used to analyse vulnerabilities of target networks and systems. This information can then be used to craft highly targeted and personalised attacks that are more likely to succeed. 
  • Automated Attacks: LLMs can also be used to automate attacks allowing threat actors to scale their efforts, targeting multiple systems or networks simultaneously with precision.  

How to defend against these threats

Phishing Defences
Given this increased realism of phishing, email authentication controls to restrict spoofed emails (emails that appear to be from a legitimate source but are actually sent from a different source) are more important than ever. Consider configuring the below (freely available) controls to prevent this:

  • SPF (Sender Policy Framework): Allows the email domain owner to specify which specific mail servers are authorised to send email on their behalf to prevent imposters sending emails that appear to come from their domain.
  • DKIM (DomainKeys Identified Mail): Allows for digital signatures to be incorporated and used to verify the legitimacy of emails.
  • DMARC (Domain-based Message Authentication, Reporting, and Conformance): Uses the results of SPF and DKIM to decide whether to block, allow, or mark emails as spam.  
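For illustration only, the three controls above are published as DNS TXT records. The following records for a hypothetical domain, example.com, sketch what a typical configuration might look like (the server addresses, selector name, and truncated DKIM public key are placeholders, not a recommended setup):

```
; SPF: only the listed server may send email for this domain
example.com.        IN TXT "v=spf1 ip4:203.0.113.10 -all"

; DKIM: public key published under a selector, used to verify signatures
selector1._domainkey.example.com.  IN TXT "v=DKIM1; k=rsa; p=MIGfMA0G..."

; DMARC: reject mail failing SPF/DKIM checks and send aggregate reports
_dmarc.example.com. IN TXT "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"
```

A common rollout approach is to start DMARC with `p=none` (monitor only) and review the aggregate reports before tightening the policy to `quarantine` or `reject`.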

Whilst the above defences prevent email spoofing, mail filters provide controls against general spam and phishing. Email service providers come with built-in filters; however, these often rely on simple metrics for categorisation. Third-party mail filters often use far more sophisticated technology, such as AI, to analyse patterns, detect anomalies, and recognise evolving tactics. It is worth examining your current mail filter to consider whether it is sufficient for your organisation’s risk appetite.

Multi-layered Social Engineering defence
The best defence is always a multi-layered one, therefore in addition to the above protections, numerous layers of controls should be in place as part of a robust social engineering defence, such as: 

  • Ensure staff are provided with updated training that helps them spot phishing.  
  • Consider what information is available to attackers on your website and social media and minimise this using a risk proportionate approach. 
  • Ensure there is a clear reporting process, with support and feedback for staff. 
  • Protect users from malicious websites by utilising proxy services and up-to-date browsers.  
  • Utilise phishing-resistant Multi-Factor Authentication (MFA). 

Other Enhanced Cyber Attack capabilities

The enhanced attack capabilities that AI provides raise the stakes for cybersecurity defence, necessitating more advanced security measures to identify and neutralise threats proactively.

AI as a tool for defence
In order to keep pace with the offensive use of AI, such technology should be incorporated within the corresponding defence. This defensive use is not a new development: AI has long been harnessed for the analysis of suspicious patterns and behaviours, and the most recent developments in AI have further improved these capabilities. Defence teams should utilise AI-integrated security tools to match and surpass these attacks:

  • Detect and respond to unauthorised network access, by utilising Intrusion Detection and Intrusion Prevention Systems (IDS/IPS) which achieve this through monitoring network traffic and system activities. 
  • Prevent sensitive and critical data from leaving the network unauthorised, by implementing Data Loss Prevention (DLP) tools, that perform deep packet inspections to identify and classify data and restrict export appropriately.  
  • Enable real-time visibility and correlation of key data to identify and respond appropriately to attacks, using a Security Information and Event Management (SIEM) system. SIEMs collect, aggregate and analyse data from various feeds (e.g., logs from applications, servers, network devices, and external threat intelligence feeds) to allow for cutting-edge detection.
  • Enhance the speed and efficiency of threat response, by use of Security Orchestration Automation and Response (SOAR). SOARs work hand in hand with the SIEM to efficiently manage security tasks and workflows, automating repetitive processes, and responding to cyber threats with speed and precision.  
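To make the SIEM-style correlation above concrete, here is a minimal Python sketch, using a hypothetical log format of our own invention, that flags source IPs with an unusually high number of failed logins. A real SIEM applies far richer analysis across many feeds; this only illustrates the underlying idea of aggregating events and alerting on a threshold:

```python
from collections import Counter

# Hypothetical log lines in the form: "<timestamp> <result> <source-ip>"
LOG_LINES = [
    "2024-02-15T09:00:01 FAIL 198.51.100.7",
    "2024-02-15T09:00:02 FAIL 198.51.100.7",
    "2024-02-15T09:00:03 FAIL 198.51.100.7",
    "2024-02-15T09:00:04 OK   203.0.113.5",
    "2024-02-15T09:00:05 FAIL 198.51.100.7",
]

def flag_suspicious_ips(lines, threshold=3):
    """Return the set of source IPs whose failed-login count exceeds threshold."""
    failures = Counter(
        line.split()[2] for line in lines if line.split()[1] == "FAIL"
    )
    return {ip for ip, count in failures.items() if count > threshold}

print(flag_suspicious_ips(LOG_LINES))  # → {'198.51.100.7'}
```

In practice a SOAR playbook might consume alerts like this and automatically block the offending IP or force a password reset, which is the kind of repetitive response work these tools automate.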

In response to the expanding threat landscape posed by AI, traditional, established cybersecurity controls are now more important than ever. Organisations should assess all aspects of their information security controls; some core aspects to consider are:

Network Segmentation
Network segmentation prevents attackers from traversing from one network segment to another and reaching the areas of the network containing the most sensitive assets. Where attackers have been able to infiltrate a network, the damage is greatly restricted where such security controls exist at the boundaries of each network segment.
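As a simplified sketch of what such a boundary control might look like, the following hypothetical firewall rules (iptables syntax; the subnets and host address are placeholders) allow an office segment to reach a single internal web application in a sensitive server segment, and nothing else:

```
# Allow office workstations (10.0.10.0/24) to reach the internal
# web application (10.0.20.15) over HTTPS only
iptables -A FORWARD -s 10.0.10.0/24 -d 10.0.20.15 -p tcp --dport 443 -j ACCEPT

# Deny all other traffic from the office segment into the server segment
iptables -A FORWARD -s 10.0.10.0/24 -d 10.0.20.0/24 -j DROP
```

The principle is default-deny at the segment boundary: an attacker who compromises a workstation can only reach the one service explicitly permitted, rather than the whole server segment.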

Implementing an ISMS (Information Security Management System)
An ISMS is a framework of security controls and policies providing a systematic approach to managing and protecting an organisation’s information through risk management. This structured approach, which relies on key risk indicators and stakeholder feedback, allows for the efficient identification and treatment of emerging risks, such as the enhanced threats posed by AI, as well as the general gaps and vulnerabilities that these threats will exacerbate.

Whilst the threat landscape has evolved, and will continue to evolve, as a result of the advancements and availability of AI, organisations should not feel impotent in the face of this. Now is a pivotal time for organisations to take stock of and assess their information security controls, to ensure they are sufficiently hardened in view of this evolution.

Our Data Protection and Cyber Risk Team can assist in your efforts. In light of your risk profile, we can develop a tailored ISMS and conduct comprehensive assessments of your current cybersecurity controls, with a suggested action plan as well as solutions to help address potential gaps. Please contact our advisors if you would like expert assistance with your cybersecurity programme.
