Business Reporter

Risk management and unlocking the potential of AI


Fernando Henrique Silva at CI&T argues that, to unlock the true potential of AI, organisations will need to manage the associated risks

 

Over the past few years, digital transformation has been heralded as a game-changer for organisations regardless of size or industry, enabling them to stay competitive in an environment of constant innovation and disruption. 

 

It’s now close to two years since the launch of ChatGPT, and the initial hype is giving way to a digital transformation hangover. As we enter this new phase of the hype cycle, how can organisations make the most of their AI investments?

 

According to CI&T’s recent report, most executives recognise digital transformation as critical to success, but only 7% of CIOs say they are meeting or exceeding their digital transformation targets. This stark discrepancy highlights the gap between vision and execution, and the challenge of overcoming this hurdle.

 

The initial roadmap for digital transformation was straightforward, emphasising agility, collaboration, customer-centricity, and a willingness to experiment. “Fail fast and learn fast” became the guiding principle. 

 

Then came the introduction of cutting-edge AI technologies like GPT-4 and DALL-E 2, which created an added level of complexity. Organisations reacted differently, some remaining wary of the potential risks, whilst others focused on getting ahead of the curve. But with Big Tech companies investing billions into this technology, AI has become a crucial component of companies’ broader digital transformation strategies.

 

As organisations continue striving towards agility and innovation through the power of digital, AI integration will increasingly become a key enabler in realising the potential of this technological evolution.  

 

Transitioning into the next stage of AI maturity

The initial phase of digital transformation laid the groundwork for agile methodologies and a culture of experimentation. Now, AI represents the next frontier in this journey, pushing the boundaries of what can be achieved through digital innovation. To fully leverage AI’s potential, organisations must overcome the fear of disruption and embrace the calculated risks necessary for AI deployment. 

 

However, fear of brand damage, business disruption, and reputational risk has gripped organisations and their boards, hindering widespread AI adoption. This reluctance is understandable, especially in light of the recent data breach at OpenAI, where user data was inadvertently exposed by a bug in the ChatGPT interface. Such incidents have heightened awareness of the risks associated with AI, prompting many companies to adopt a more cautious approach.

 

The current state of experimentation reflects this fear. Most efforts remain siloed, focusing on internal proofs of concept that rarely translate into tangible customer-facing applications. A 2023 McKinsey report highlights that while many companies have successfully developed proofs of concept, few have fully scaled these projects. This risk aversion results in missed opportunities.

 

Leveraging Generative AI for customer success

A successful Generative AI deployment strategy, like any effective digital transformation, requires calculated risks. While it’s important to explore and learn from emerging technologies such as Generative AI, it’s crucial to avoid developing solutions that are impressive but don’t actually generate value for the company. 

 

A smart risk-taking strategy must include building robust contingency plans, incorporating loss provisions and crisis communications plans, and employing best-in-class software engineering practices. For example, Google’s Bard AI project has demonstrated the importance of continuous testing and iteration. After the initial launch, which was met with mixed reviews, Google swiftly implemented feedback loops and A/B testing to refine the AI’s performance, demonstrating a commitment to both innovation and risk management.

 

Generative AI models can be unpredictable because of their nature and frequent updates. Therefore, practices like A/B testing, canary deployments, DevOps, robust observability, and triaging systems are essential to ensure brand safety and minimise the risk of reputational damage. Additionally, adopting machine learning operations (MLOps) practices to manage AI infrastructure changes automatically is vital.
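To make the canary-deployment idea concrete, the sketch below shows one common pattern: deterministically routing a small slice of traffic to a new model version, with a fallback to the stable version if the canary fails. This is a hypothetical illustration, not CI&T’s or any vendor’s actual tooling; the function names and the 5% traffic split are assumptions.

```python
import hashlib

def canary_bucket(user_id: str, canary_pct: float) -> str:
    """Deterministically assign a user to 'canary' or 'stable'.

    Hash-based assignment keeps each user on the same variant
    across requests, which keeps A/B metrics clean.
    """
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    # Map the first 8 hex digits of the hash to a value in [0, 1].
    score = int(digest[:8], 16) / 0xFFFFFFFF
    return "canary" if score < canary_pct else "stable"

def route_request(user_id, prompt, stable_model, canary_model, canary_pct=0.05):
    """Send roughly canary_pct of traffic to the new model, with fallback."""
    variant = canary_bucket(user_id, canary_pct)
    model = canary_model if variant == "canary" else stable_model
    try:
        return variant, model(prompt)
    except Exception:
        # Guardrail: if the canary misbehaves, serve the stable model
        # instead of surfacing an error to the customer.
        return "stable", stable_model(prompt)
```

In practice the routing decision and the fallback events would be logged to an observability stack, so that an elevated canary error rate can trigger an automatic rollback rather than a customer-facing incident.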

 

Targeting AI initiatives where the potential for harm is minimised is also essential. Companies must assess and research the types of risks to take based on their industry and the potential consequences. For instance, while a retail brand may risk eroding loyalty among a segment of customers, a technical error at a pharmaceutical company may have severe consequences for patients.

 

We regularly see how, by focusing on specific business areas and customer segments, organisations can maximise benefits while rigorously managing risks.

 

A foundation of trust and transparency in AI

Open and transparent communication builds trust with customers, which is vital for gaining acceptance of new AI-powered solutions. Salesforce data reveals a significant trust gap in AI, with only 45% of consumers confident in its ethical use. To bridge this divide, it is imperative to build strong customer relationships centred on understanding and meeting their needs. 

 

The reality is that competitors are actively exploring and deploying these technologies, potentially disrupting market share. For example, we worked with YDUQS, a Brazil-based company in the education sector, to incorporate GenAI into its solutions and enhance the student journey.

 

As a result, the company was able to achieve efficiency gains, reduce lead time in operational activities, and position itself as an innovator in the industry. Big Tech companies like Amazon are integrating GenAI into retail operations and setting a new standard, leaving competitors little choice but to innovate or risk obsolescence.

 

Navigating the AI landscape

The key challenge is finding the right balance between risk and reward. This involves taking calculated risks, understanding where to experiment, and building customer trust. Customer engagement is pivotal. Without a deep understanding of customer needs and preferences, it’s difficult to deploy AI solutions effectively and responsibly. 

 

So, is your organisation prepared to take risks? The rewards of successful AI integration are significant, but so are the risks. As the digital transformation hangover sets in, the question is not just about readiness but about the strategic foresight to navigate the complex landscape of AI responsibly. 

 


 

Fernando Henrique Silva is SVP Digital Solutions EMEA at CI&T

 

Main image courtesy of iStockPhoto.com and aydinynr

© 2024, Lyonsdown Limited. Business Reporter® is a registered trademark of Lyonsdown Ltd. VAT registration number: 830519543
