Is your workforce ready to combat AI-powered cyber-attacks? Maybe not. Max Vetter at Immersive Labs explores this challenging issue.
Generative AI tools like ChatGPT and Google Bard are now deeply embedded in many daily business processes. According to recent research, these tools are seen as important priorities by 89% of tech companies in the US and UK, and almost all (97%) plan to implement AI to support communication by 2025.
OpenAI only debuted ChatGPT in November 2022, while Google Bard took the stage in February 2023. Rarely has a new technology had such a transformative impact so quickly.
However, the highly accessible nature of these tools makes it easy to overlook the steep learning curve around security and privacy.
So, what are the biggest issues and threats around generative AI tools, and what can you do to keep up and minimise the risks?
Primary threats from generative AI
Like most tools, AI has the potential to be misused or even weaponised by cyber-criminals. As such, we're seeing attackers adopt popular AI solutions to enhance their attacks. One of the most accessible avenues is scripting realistic phishing emails.
A threat actor could, for example, feed a large language model (LLM) tool like ChatGPT several examples of emails from a particular individual, ask it to learn that individual's style and tone of voice, and have it draft new emails that convincingly mimic the originals.
There are also a number of offensive tools with GPT elements, such as PentestGPT and BurpGPT, that could be used by attackers. Initial testing shows they are not yet very effective at automating attacks, but we would expect these tools to improve over time, and hackers may look to use them in automated attacks or embed them in malware worms to assist lateral movement through a network.
On the other hand, AI is becoming increasingly useful on the defensive side, too, and security professionals are continually finding new ways the tools can help detect vulnerabilities. This game of cat and mouse is not new, but the scale and complexity of this interplay will only grow as each side gains advantages and develops countermeasures in rapid succession.
It's also important to remember that AI can pose a security risk without an outside aggressor, especially as even the largest organisations are still getting to grips with the nascent technology. In May 2023, for example, Samsung banned its personnel from using generative AI tools after engineers inadvertently leaked sensitive information through ChatGPT.
You must be aware of the potential for issues like this, work on internal policies and staff awareness, and watch for external threats.
Resilience against AI-powered attacks
There are multiple steps you can take to improve organisational resilience against AI-powered cyber-threats. First and foremost is a thorough assessment to understand the potential risks, particularly social engineering. This is the foundation of any cyber-security strategy but is even more crucial as threat actors increase the speed and sophistication of their attacks.
Once the level of risk is clear, the wider workforce must understand it too. Ensure everyone knows how AI may affect them and how realistic fraudulent and malicious communications have become. Your workforce can be your best defence if they are equipped appropriately. Email security tools that rely on behavioural analytics rather than signature detection can also reduce the chances of these malicious messages reaching your staff's inboxes in the first place.
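As a rough illustration of what behavioural analysis can look like in practice, the sketch below scores an inbound email against a simple baseline of a sender's past behaviour. The field names, weights, and thresholds are hypothetical assumptions for illustration only; a real product would model far richer signals.

```python
# Minimal sketch: score an inbound email against a sender's historical baseline.
# Field names, weights and thresholds are illustrative assumptions, not any
# real product's logic.
from dataclasses import dataclass

@dataclass
class SenderBaseline:
    usual_send_hours: set      # hours of day this sender normally emails
    usual_domains: set         # domains they normally link to
    avg_urgency_words: float   # typical count of pressure words per message

URGENCY_WORDS = {"urgent", "immediately", "now", "asap", "payment", "invoice"}

def anomaly_score(baseline: SenderBaseline, send_hour: int,
                  linked_domains: set, body: str) -> float:
    """Return a 0-1 score; higher means the message deviates more from the baseline."""
    score = 0.0
    if send_hour not in baseline.usual_send_hours:
        score += 0.3                                   # unusual time of day
    if linked_domains - baseline.usual_domains:
        score += 0.4                                   # links to never-seen domains
    urgency = sum(w in URGENCY_WORDS for w in body.lower().split())
    if urgency > baseline.avg_urgency_words + 2:
        score += 0.3                                   # uncharacteristic pressure tactics
    return min(score, 1.0)

baseline = SenderBaseline({8, 9, 10, 16, 17}, {"example.com"}, 0.2)
score = anomaly_score(baseline, send_hour=3,
                      linked_domains={"examp1e-login.net"},
                      body="Urgent: approve this payment immediately")
print(f"anomaly score: {score:.1f}")  # flag for review if above a chosen threshold
```

The point is not the specific rules but the approach: a message is judged against how a sender normally behaves, which is exactly what well-crafted AI-generated phishing is designed to evade at the surface level of wording alone.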
The value of building AI knowledge
Once you have the basics covered, it’s time to consider more advanced approaches to building resilience against AI-based threats. This is best achieved through a hands-on approach, with exercises covering best practices, risks, and common mistakes.
These exercises need to be realistic to drive home the threat scenarios your teams are likely to face.
Proving resilience is one of the biggest challenges facing cyber-security leaders. It cannot be done with traditional training and certification programmes, which measure attendance rather than actual cyber-skills. Teams and individuals across the entire workforce must regularly complete realistic cyber-crisis simulations, measured so that skills gaps can be identified and filled before it's too late.
It’s also important to include executives and other senior decision makers in these exercises to build knowledge and help them act quickly in a real cyber-crisis.
Avoid becoming dependent on AI tools
Ironically, AI itself can be useful for generating these bespoke training scenarios against its misuse. And, of course, AI has also become increasingly important in cyber-security solutions, helping defenders match the increased speed and volume of incoming threats.
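As a rough sketch of what that can look like, the snippet below asks an LLM to draft a phishing-simulation email for an internal awareness exercise using OpenAI's Python client. The model name and prompt wording are assumptions, and any such output should be reviewed by the security team before it is used in training.

```python
# Minimal sketch: use an LLM to draft a phishing-simulation email for an internal
# awareness exercise. The model name and prompt are illustrative assumptions;
# output should always be reviewed by the security team before use.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Write a short, realistic phishing-simulation email for a security awareness "
    "exercise. The scenario: a fake 'IT helpdesk' asks a finance employee to "
    "re-authenticate via a link. Include the subtle red flags trainees should spot "
    "(mismatched sender domain, urgency, generic greeting) and list them at the end."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; substitute whatever your organisation has approved
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```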
However, while leveraging AI-driven cyber-security can enhance the capabilities of cyber-security professionals, it is crucial to maintain a balance and not become overly dependent on AI tools in the fight against such threats.
Instead, you should focus on educating individuals across all roles about the implications of AI and promoting better judgment and decision-making. Awareness activity should emphasise the importance of understanding the fundamental principles of cyber-security alongside the adoption of AI.
Don’t neglect internal processes
Alongside building awareness and skills around AI-powered cyber-threats, you should not neglect data and security issues from internal activity. One of the most important steps is to develop an internal policy related to using AI within your organisation.
This should manage expectations around the use of AI by individuals and departments, as well as provide a documented process for potential future implementation and evaluation. You should designate a lead or team to coordinate this area in your organisation to ensure that nothing slips through the cracks.
Emphasising appropriate data and information handling is particularly critical. People must understand what information is categorised as sensitive in your organisation, whether personal data or proprietary information. Make it clear what information can and cannot be shared with external AI systems and provide an easily accessible point of contact for any questions.
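One practical way to back up such a policy is a lightweight pre-submission check that flags obviously sensitive content before it leaves the organisation. The sketch below uses simple regular expressions for a handful of hypothetical patterns; a real deployment would rely on proper data-loss-prevention tooling and your own data classifications.

```python
# Minimal sketch: flag obviously sensitive content before text is sent to an
# external AI service. The patterns are illustrative assumptions; a real
# deployment would use proper DLP tooling and your organisation's own data classes.
import re

SENSITIVE_PATTERNS = {
    "email address":        r"[\w.+-]+@[\w-]+\.[\w.]+",
    "credit card number":   r"\b(?:\d[ -]?){13,16}\b",
    "internal project tag": r"\bPROJECT-[A-Z]{3}-\d{4}\b",   # hypothetical naming scheme
    "api key":              r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b",
}

def check_before_submission(text: str) -> list:
    """Return the names of any sensitive patterns found in the text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if re.search(pattern, text)]

draft = "Please summarise the PROJECT-ABC-0042 roadmap and email jane.doe@corp.example"
findings = check_before_submission(draft)
if findings:
    print("Blocked: remove", ", ".join(findings), "before sending to an external AI tool")
else:
    print("No obvious sensitive data found; proceed with care")
```

A check like this is no substitute for awareness, but it gives staff an immediate, concrete reminder of the policy at the moment they are about to breach it.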
Finally, think beyond compliance and into ethics: just because you can, doesn't mean you should. Always consider whether you are using AI responsibly and whether you could confidently justify it externally if needed.
Staying resilient in an uncertain future
Considering how far generative AI has come in a single year, we have barely scratched the surface of what it can do both to enhance cyber-attacks and to defend against them.
AI is also a particularly fast-moving area from a regulatory perspective, so it's important to keep abreast of the regulatory landscape as it continues to evolve. High-risk industries such as financial services, healthcare, and the public sector should recognise that they are primary targets for cyber-criminals and will be the first to feel the impact of more aggressive and targeted AI attacks.
Nevertheless, assessing, building, and proving cyber-resilience against emerging threats will help ensure your workforce can keep pace.
Max Vetter is VP of Cyber at Immersive Labs