GPT technology is a great productivity tool – unfortunately, not just for you but for hackers too. What does it mean when hackers have this at their disposal? For starters, we can expect better, more automated attacks and a lower barrier to entry. For organisations, the implication is that there has never been a better time to step up defences.

The arrival of a new generation of large language models (LLMs) like ChatGPT and GPT-4 is a huge leap forward in AI. These tools are far more advanced than prior AI models and are easy and cheap to access. They promise significant productivity gains for many tasks – from writing marketing copy and programming code to learning new subjects.

Of course, if you can use it for good, the question is: can you use it for nefarious purposes too? OpenAI (maker of ChatGPT and GPT-4) is on to this already, thankfully, with built-in protections to prevent misuse. Sadly, however, these aren’t foolproof, and the determined won’t find them too hard to circumvent.

Even if OpenAI manages to improve its protections, the genie is somewhat out of the bottle. Many people are working hard to emulate OpenAI’s technology and create open-source versions that can be used without restriction. Several such models are already in the wild, and though they aren’t quite as good as GPT-4 yet, closing that gap is probably just a matter of time.

What can criminals use this tech for?

Europol recently released a report on how these models will impact crime, and it makes for sobering reading. The key scenarios it sees criminals using the technology for are:

  1. Fraud and social engineering: GPT can generate very realistic text – even down to a specific style – which can be used to create more sophisticated phishing campaigns at scale.
  2. Disinformation: The models are ideal for creating disinformation at scale, making it easy to spread messages that reflect a specific narrative.
  3. Cybercrime: ChatGPT can produce code from simple text instructions. This opens the door for criminals, even those with little technical knowledge, to produce malicious code.

Putting cyber attacks on autopilot

The next level, which the report doesn’t mention but which is somewhat inevitable, is automation with GPT in the chain.

If you think of GPT as a brain you can engage at any step of a process (via API, not just manually), you can see how it can help automate sophisticated multi-step tasks. For example, in a process like hacking a website, you could use GPT to write code that checks whether the site has a flaw, build an automation that runs that code, then go back to GPT with the information gathered in step one and ask it to generate further code that exploits the weakness, tailored explicitly to that site. By chaining GPT-enhanced steps together, you can automate things a human would otherwise need to do, and you can do it at scale.
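To make that chaining pattern concrete, here is a minimal sketch of the same loop applied to a harmless defensive task – triaging a web server log – rather than an attack. It assumes the official openai Python package (v1 or later) and an OPENAI_API_KEY in the environment; the model name, file path and prompts are illustrative only.

```python
# A minimal sketch of the "GPT in the chain" pattern described above,
# shown on a harmless defensive task (log triage) rather than an attack.
# Assumptions: the official `openai` Python package (v1+), an
# OPENAI_API_KEY environment variable, and an access.log file on disk;
# the model name and prompts are purely illustrative.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment


def ask(prompt: str) -> str:
    """One model call; each step of the pipeline is just another call."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


# Step 1: hand the model some gathered data and ask it to analyse it.
with open("access.log") as f:
    log_excerpt = f.read()[:4000]  # keep the prompt a manageable size
summary = ask(
    "Summarise anything anomalous in this web server log excerpt:\n"
    + log_excerpt
)

# Step 2: feed step 1's output straight back in and ask for a tailored
# follow-up. An attacker's pipeline chains steps in exactly the same way,
# only with reconnaissance output and exploit prompts in place of these.
next_checks = ask(
    "Given these anomalies, what should a defender check next?\n" + summary
)
print(next_checks)
```

The point is not the task but the shape: output from one model call becomes input to the next, with ordinary code in between, so the whole loop runs unattended and scales to as many targets as you can enumerate.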

Indeed, any of these malicious activities can be scaled up, and weaknesses that might not have been worth the effort before could now be in scope – both because attacks are easier to carry out and because more people, with a lower skillset, can carry them out.

Proactive action is needed

To address these emerging threats, organisations must act quickly and decisively. A comprehensive assessment of security measures across all levels is crucial to provide resilience against GPT-driven attacks. As attackers can easily reach and target an organisation's external attack surface, relying on "security by obscurity" is no longer a viable option – if it ever was.

To counter this new breed of cyber threats, organisations should focus on enhancing their controls and maintaining robust security hygiene. This includes strengthening employee training and awareness, regularly updating software and hardware, and implementing multi-layered security measures.

As we navigate the potential for ChatGPT to automate cyber-attacks, it's essential for organisations to be proactive. It starts with acknowledging the risks and taking steps to protect digital assets and reputation. The time to adapt and reinforce our defences is now – let's rise to the challenge and ensure a safer cyber future for everyone.