
Hidden AI risks and how to mitigate them 

Jan Wildeboer at Red Hat argues that businesses need to ensure that the AI investments they make - in systems, people and culture - consider both the opportunities and the risks

 

AI is seemingly everywhere, and becoming more pervasive. It didn't start with ChatGPT, but that tool's arrival on the scene in November 2022 has ratcheted up the stakes.

 

In the technology world, we haven't seen anything like this kind of hype or speed of adoption since blockchain. An IDC report published earlier this year estimates that global spending on AI will top $301bn by 2026, double what it is today.

 

Everything from employee productivity to customer satisfaction and risk mitigation will benefit, or so the pitch goes. Unable to ignore the hype, many employees are diving in headfirst, seeking to be AI pioneers and build a competitive edge in new ways. But in doing so, they could be inadvertently opening their organisation up to risks.

 

A question of control

The data narrative is that because data is valuable, the more of it you have, the more valuable it is. But that is not always true. The more datasets you feed in, the harder it becomes to oversee data provenance and accuracy - not to mention the potential copyright infringements an AI could inadvertently introduce.

 

Let's take the AI du jour, ChatGPT. The caveat carried on OpenAI's website is that it "may produce inaccurate information", and there is plenty of evidence to prove the point. That's not a big issue if you want to know how to make a sponge cake; it's far more serious if you're using it for case law research.

 

Now translate that risk to a business setting. An AI tool that automatically manages stock levels, aligns salaries with performance, forecasts quarterly cash flow or optimises FX trades is making fundamental decisions that have a profound impact on business profitability.

 

Those risks can be overseen with a mature data governance and AI strategy in place - until your staff begin using their own 'shadow' AI tools. You can't control what you don't know exists.

 

So the issue is twofold: how do you ensure the integrity of the AI tools you know about, and how do you stop staff from using unauthorised and unvetted AI tools in their work?

 

The answer is more conceptual than specific. The future of AI won't lie in the large language models (LLMs) grabbing today's headlines, or in other generic solutions that seek to serve a multitude of users. Rather, businesses will want capabilities that are distinct to their industry, customers and tasks.

 

Better, more relevant data

This new era of ‘domain-specific AI’ will be defined by the ability to build unique, differentiated services. That requires foundational models that are trained on private data and customised to the standards and practices of a specific business or industry.

 

Fed with well-organised, focused and verified data, the foundational AI delivers capabilities that feel like working with an absolute expert - one you can trust, because you know it has not been built on random datasets scraped together from disparate sources.

 

The decisions it generates will be more relevant, and so more effective. Consequently, employees won’t feel the need to shop around for their own shadowy solutions. Better still if they are brought into the build project, and feel a sense of ownership and loyalty to it.

 

Those who are really 'in the know' understand that this is where the truly interesting innovation in AI is happening right now. The toolkits for domain-specific AI are being developed as we speak, and they already outpace the innovative power of the big players in AI. Smaller can be better, as the saying goes.

 

But only if it comes with transparency and integrity. That puts AI development firmly in the remit of legal and compliance teams. Their collaboration with data scientists and DevOps colleagues will become a key feature of an effective AI strategy. 

 

They will need to create and enforce guidelines on how and when to use AI. They will need to ask the same provenance questions of their data as regulators are likely to. Around the world, jurisdictions are now flexing their muscles, as evidenced by the EU AI Act, the US AI Bill of Rights, the UK's AI Regulation Policy Paper and the enforcement of China's Algorithmic Recommendation Management Provisions.

 

Ignorance was a weak defence even when you could pass the buck to a vendor. It will be no defence when your AI has been built with your own data.  

 

Beware of the AI bandwagon. By pivoting away from general-purpose AI to domain-specific AI, businesses empower themselves with a far more effective capability - and, if done right, more educated and compliant employees. Making that call will require strong leadership, as the media buzz around AI has created a sense of urgency around adoption.

 

AI governance needs a sensible approach. Business leaders should pause, evaluate, and ensure that the AI investments they make - in systems, people and culture - consider both the opportunities and the risks.

 


 

Jan Wildeboer is EMEA Open Source Evangelist at Red Hat

 

Main image courtesy of iStockPhoto.com
