
Auditing GenAI risks

Dr Srinivas Mukkamala at Ivanti explains why organisations need an audit for generative AI risks. And why they need it now.

 

It should take you perhaps four minutes to read this article. Five minutes from now, I’d love for you to be taking action. It really is that urgent. Why? Because we’re talking about generative AI risks. If you haven’t noticed, generative AI is advancing at an astonishing rate. (Okay, I know you’ve noticed.)

 

Malicious threat actors are advancing at the same pace because their success depends on it.

 

Your success depends on staying ahead of them. That requires, in part, a comprehensive understanding of your current security posture as it relates to generative AI.

 

You might be thinking: “But we don’t really engage with generative AI (yet).”

 

That’s a widespread — and dangerous — misconception. Your business model doesn’t need to be actively intertwined with generative AI for you to be at risk. Generative AI is more than an exploitable element of a company’s operations. It’s a new strategy for sophisticated threat actors — a new pathway in.

 

How do you identify your vulnerabilities? The short answer: a thorough audit. But first, a quick overview of three key focus areas. Some might surprise you.

 

Inadvertent risks employees pose

One of the most significant generative AI risks facing organisations today stems from a lack of employee awareness. Many workers don’t realise how advanced AI has become — and that’s understandable given the mind-melting speed of AI evolution.

 

For example, Ivanti’s recent State of Cybersecurity report found that 54% of office workers were unaware that AI can now impersonate voices with a high degree of accuracy. This lack of awareness makes employees highly vulnerable to AI-powered social engineering attacks.

 

A very real illustration of this threat: an employee receives a voice message that sounds exactly like their manager, urgently instructing them to wire funds or share sensitive data. A surprising number of employees would likely comply, not suspecting the voice was an AI-generated fake.

 

That’s on top of more well-known phishing schemes, such as realistic-looking emails indicating that an employee must take specific steps to be in “compliance” with corporate standards. As these AI-powered attacks become more sophisticated, employee vulnerability will only increase.

 

Friction between IT and security teams

Compounding the risks posed by employee vulnerability: many organisations face persistent misalignment between IT and security teams, despite best intentions on both sides. Ivanti’s research shows that 41% of respondents say their IT and security teams struggle to collaboratively manage cyber-security. The friction often stems from well-meaning but conflicting goals, such as IT prioritising speed and functionality while security prioritises threat mitigation.

 

Without close collaboration between the CIO and CISO, critical gaps emerge. Vulnerable points in cross-functional processes go unnoticed and unresolved. Opportunities to strengthen prevention, detection and response measures fall through the cracks. Generative AI threats, which are constantly evolving, require a united front from IT and security. No one has to “lose” here — a win-win is not only possible but imperative.

 

Data silos impair response times

Closely tied to the issue of IT-security misalignment is the widespread problem of data silos. An alarming 72% of organisations surveyed report that their IT and security data are siloed. This fragmentation of critical information has major security implications, especially when it comes to AI threats.

 

Among respondents, 63% say these silos between IT and security slow down response times when threats emerge, and 54% report that silos weaken their organisation’s overall security posture.

 

With AI-powered attacks on the rise, organisations cannot afford disjointed data and sluggish response times. Data silos severely impair an organisation’s ability to conduct effective AI risk audits and mount a vigorous defence — and they also cause a massive volume of rework, inefficiency, misdirected resources and headaches.

 

Now, on to the audit!

 

Steps to audit your GenAI risks

To effectively audit your organisation for generative AI risks, several key areas must be examined:

 

Assess employee awareness: Gauge your workforce’s current knowledge of AI capabilities and susceptibility to different AI-powered attack scenarios. Identify awareness gaps to inform tailored training. Anonymous surveys may help encourage candour.

 

Evaluate IT-security alignment: Critically assess the current working relationship between your IT and security functions. Identify friction points, gaps and opportunities to improve cross-functional processes and collaboration.

 

Determine data silo impact: Investigate the extent of data silos between IT and security. Assess how data fragmentation affects incident response times and overall security posture, and identify gaps as well as areas of inefficient overlap.

 

Inventory AI/automation tools: Take stock of all AI and automation tools currently used across your organisation. Assess them for potential vulnerabilities bad actors could exploit and identify any that need to be updated or replaced. This step may require additional expertise, but it’s important. A simple inventory sketch follows these steps.

 

Audit third parties for AI risks: Scrutinise your vendors and entire supply chain for AI-related risks. Ensure they take a secure-by-design approach and hold themselves accountable for security outcomes. If your vendors have any pathways into your company’s sensitive data, you are only as secure as your least secure vendor.
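
As promised above, here is a minimal sketch of one way to record AI and automation tools and flag those due for a security review. It is illustrative only: the field names, the data-access categories and the 180-day review threshold are assumptions for the example, not a prescribed format.

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class AITool:
        name: str            # e.g. an internal chatbot or automation script
        owner: str           # team accountable for the tool
        data_access: str     # "public", "internal" or "sensitive"
        last_reviewed: date  # date of the last security review

    def needs_review(tool: AITool, max_age_days: int = 180) -> bool:
        # Flag tools whose last review is older than the threshold,
        # or which touch sensitive data and so warrant closer scrutiny.
        age_days = (date.today() - tool.last_reviewed).days
        return age_days > max_age_days or tool.data_access == "sensitive"

    inventory = [
        AITool("copy-assistant", "Marketing", "internal", date(2024, 1, 10)),
        AITool("invoice-bot", "Finance", "sensitive", date(2023, 6, 2)),
    ]

    for tool in inventory:
        if needs_review(tool):
            print(f"Review needed: {tool.name} (owner: {tool.owner})")

Even a basic structured record like this makes the follow-on questions (who owns the tool, what data it touches, when it was last reviewed) answerable on demand rather than by ad-hoc investigation.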

 

Defences against rising AI threats

Armed with insights from your generative AI risk audit, you can take targeted actions to shore up your defences:

  • Provide tailored employee training focused on recognising and reporting AI-powered attacks, and update it regularly to incorporate emerging threats.
  • Actively foster greater collaboration between IT and security functions. Clearly define roles, encourage cross-department working sessions and seek win-win solutions.
  • Dismantle data silos to enable faster, more effective threat detection and response. Eliminating inefficient overlap should free up resources that can be redirected toward gaps.
  • Develop robust AI governance policies that cover responsible use of the technology and risk mitigation measures. Automated enforcement can help ensure compliance (a minimal sketch follows this list).
  • Hold vendors and partners accountable for their security practices. Ideally, expect them to take a secure-by-design approach to AI.
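
On the governance point, automated enforcement can start as simply as policy-as-code. The sketch below is a hypothetical illustration, not any specific product capability: the tool names and the allow-list structure are assumptions made for the example.

    # A minimal, hypothetical policy-as-code check. Real governance
    # tooling would also cover usage context, data classes and logging.
    APPROVED_AI_TOOLS = {"copy-assistant", "code-helper"}

    def check_tool_allowed(tool_name: str) -> None:
        # Block (and, in practice, log an audit event for) unapproved tools.
        if tool_name not in APPROVED_AI_TOOLS:
            raise PermissionError(f"AI tool '{tool_name}' is not approved for use")

    check_tool_allowed("copy-assistant")    # passes silently
    # check_tool_allowed("shadow-chatbot")  # would raise PermissionError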

All this is easier said than done. And don’t worry; when I said at the outset of this piece that I expect you to take action immediately, I didn’t mean you should knock out an audit by close of business today. I meant you should start shifting your awareness toward the need for an audit. Which partners can you bring in? What resources do you need? How can you take the first step? This is worth your attention.

 

The next evolution of cyber-threats is already here, and getting caught flat-footed is not an option.

 


 

Dr Srinivas Mukkamala is Chief Product Officer at Ivanti

 

Main image courtesy of iStockPhoto.com

 
