Andrew Brown at Red Hat explores how businesses can create a strategic AI roadmap to set themselves up for the future
AI has quickly become a business imperative for enterprises that want to remain competitive and meet customer demand, which means that every IT decision must now take into account its implications for the AI roadmap.
Leaders need to ask whether any platform, infrastructure, or partner will be able to carry their AI workloads and drive the AI use cases they’re looking to implement – whether tomorrow, a year from now or ten years into the future.
Facing shifting market dynamics, new public policies and various economic pressures on business, the modern CIO must juggle these external forces while bolstering their organisation’s growth trajectory and embracing the opportunities presented by AI.
The swift pace at which AI technology must be adopted and evolved, along with high management costs and skills shortages, creates additional barriers to AI implementation. Every business has its own nuanced requirements, and many are striving to build processes that support seamless, flexible AI adoption, integrating AI into enterprise applications where both the application and model lifecycles can be securely managed.
In this article, I will explore how enterprises can create a strategic AI roadmap and use open source to manage costs, security and resource challenges while remaining flexible and supercharging innovation.
Future-ready your AI roadmap
The AI technology and regulatory landscape is constantly changing, so a roadmap must leave space for adjustment. Businesses need to consider whether the platforms, infrastructures and partners they work with provide the flexibility to support AI workloads and drive AI use cases years into the future.
Consider virtualisation as an example. In the wake of recent fluctuations in the market, many enterprises may need to migrate a significant footprint of virtual machines (VMs) to a different supporting infrastructure. This is a pressing issue that must be addressed today, but where and how they choose to move or modernise these VMs could directly influence how they implement AI-enabled applications. Everything has to be “AI-proofed”. In my daily conversations with customers and partners, virtualisation and AI planning are intertwined topics. In fact, insert any IT topic here and it will likely lead back to AI in some way.
Adopting an “automation-first” mindset is also foundational for identifying efficiencies and maintaining consistency when scaling AI, particularly when working across diverse tools, vendors and clouds. Implementing an enterprise-wide automation strategy that bakes in collaboration across teams, rather than silos of automation, will help IT leaders introduce centralised standards and guidelines for the use of AI. It also helps manage the transition from proving out use cases to making them production-ready.
Cost-effective and explainable AI
Moving on to the practical reality of adopting AI, it takes substantial resources to train a single, general-purpose AI model. The largest models can require some 10,000 GPUs, which need a significant and often costly power source to run. Models also become out of date very quickly, requiring an always-on approach to AI development.
The UK government projects that AI spending by UK businesses could increase to between £27.2 billion and £35.6 billion by 2025. However, AI models can be costly and resource-intensive, and not all businesses are able to fund the technology or a team of AI developers to create new models and applications.
Furthermore, at Red Hat we believe that the future IT environment will likely be split 50/50 between applications and models, which could add a further layer of complexity. In other words, businesses will have as many AI models as applications, meaning training needs to be faster, teams need to be streamlined and costs need to be kept down to enable constant adoption of new technologies.
There are solutions to these challenges. Firstly, from an infrastructure perspective, a hybrid cloud approach can set organisations up for success with AI, just as it has done with other types of workloads. We’re already seeing how cloud-enabled, on-demand computing power can accelerate an organisation’s ability to experiment with and deploy AI.
Enterprises that adopt a hybrid cloud approach with the flexibility to move workloads between environments can reap additional benefits when it comes to managing and scaling AI consistently.
Secondly, open source paves the way for greater adaptability. Smaller, domain-specific AI and large language models (LLMs) built by a global community of users can offer greater transparency and explainability of sources, which enterprises require for compliance, security and effectiveness. When more players have access to the raw materials, everyone benefits by incorporating each other’s improvements into their AI, resulting in more knowledge sharing and greater innovation.
This can apply to model training with projects like InstructLab, which brings an open source development model to LLMs, making AI more accessible and customisable for organisations. A live example of open source supporting AI is Telenor, which is combining the latest in 5G and edge technologies, using open source platforms and open APIs to handle heavy and complex calculations while being as simple, sustainable and cost-efficient as possible. Telenor has been able to demonstrate multiple AI-enabled solutions, from emergency response to road safety to video gaming.
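To illustrate just how accessible these smaller open models have become, the minimal sketch below loads and queries a compact, openly licensed model locally using the widely used Hugging Face transformers library. This is an illustration rather than InstructLab’s or Telenor’s actual tooling, and the model ID is an assumption for the example; any small, permissively licensed model would work the same way.

```python
# Minimal sketch: load and query a small, openly licensed language model
# locally with the Hugging Face transformers library. The model ID below is
# an illustrative assumption, not a recommendation; any compact model with
# a permissive licence would serve the same purpose.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "ibm-granite/granite-3.0-2b-instruct"  # assumed example of an open model

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

prompt = "In two sentences, why can smaller domain-specific models cut inference costs?"
inputs = tokenizer(prompt, return_tensors="pt")

# Generate a short completion; the generation settings here are simple defaults.
outputs = model.generate(**inputs, max_new_tokens=80)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the weights and licence terms are open, the same few lines can run on a laptop, in a private data centre or on any public cloud, which is precisely the hybrid flexibility described above.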
Addressing the AI skills gap
The tech industry has skills gaps in multiple areas, but recent research from Red Hat identified AI as the most pressing, according to 81% of UK IT leaders. Implementing AI has traditionally required highly trained data scientists, whom many organisations can afford to employ only in small numbers at best, whether due to a scarcity of qualified talent or the cost of employing them.
This bottleneck can be tackled with open source principles: by opening up AI model contributions to people with business skills and domain expertise, the work gets done faster and data scientists can focus on innovation in training, not just inputting data.
Open source-licensed AI models have also gained popularity because they can increase flexibility and freedom of supplier choice, foster a wide ecosystem of partners and offer more affordable performance for organisations. In the same Red Hat survey, when asked about their reasons for adopting enterprise open source solutions for AI, respondents cited accelerated innovation (53%), cost-efficiency (50%), and trust and transparency (43%) as the top advantages.
Once embedded into the organisation, generative AI can further help close the talent gap by widening access to what have traditionally been highly skilled, technical tasks, such as querying AI models and deriving insights from data.
Developing an agile AI strategy that can cope with high costs, skills shortages and rapid technological change, while remaining interconnected with every other business function and addressing so many economic and regulatory implications, is not easy.
Enterprises that make use of open-source technologies and innovation principles to increase scalability and democratise AI use within their workforce can more effectively overcome resource limitations and enhance what AI can do for them. This better positions them to address emerging opportunities and thrive in the face of competitive pressures ahead.
Andrew Brown is senior vice president and chief revenue officer at Red Hat