Business Reporter

The EU AI Act: necessary regulation or a barrier to innovation?


The EU’s AI rules have received final approval. Nima Montazeri at Liberis argues that there is still much to consider to ensure that regulation doesn’t stifle innovation

As widely anticipated, the European Parliament has approved the world’s first comprehensive framework for constraining the potential risks of artificial intelligence (AI).

The AI Act is expected to officially become law in May or June 2024, once a few final formalities have been completed. Its provisions will then take effect in stages, with EU member states required to ban prohibited AI systems within six months of the rules entering into force.

The Act works by classifying products according to risk and adjusting scrutiny accordingly. The higher the risk is considered to be, the stricter the rules will become. Businesses operating throughout Europe will undoubtedly now be busy working out how they can best comply with the upcoming legislation.

From a financial services perspective, it’s worth noting the Act creates provisions to tackle the risks posed by the generative AI tools and chatbots that are being increasingly harnessed by embedded finance providers to boost user experience and customer service levels. These provisions will require the producers of the underlying AI systems to be more transparent about the data used to train their models, so that they comply fully with EU copyright law.

The ultimate aim is to make the technology more human-centric, and the new Act should be seen as the starting point for new governance built around technology, according to MEP Dragos Tudorache.

It will be interesting to see how other jurisdictions now react. The UK, for example, has committed to studying and understanding the risks of AI adoption before introducing regulations. China has been introducing a patchwork of laws and guidelines in recent years. And President Joe Biden has issued an executive order requiring AI developers to share safety results with the US government.

The need to proceed with caution

The AI Act is a watershed moment for the application of AI across various sectors, including the financial services industry, and testament to the EU’s commitment to navigating the complex ethical, social, and economic implications of AI. It acknowledges the undeniable necessity of setting boundaries to protect individual rights without stifling technological advancement.

The Act’s requirement to integrate mechanisms for human accountability and oversight into AI processes is also commendable. This will help to ensure that emerging AI technologies augment human decision-making, rather than replace it, making AI a useful tool with which financial services providers can enhance the capabilities of their human talent.

But despite the positives, the Act is not without its drawbacks.

One primary concern is the underlying assumption that AI is inherently dangerous. While it’s prudent to approach new technologies with caution, this perspective risks stifling innovation by imposing overly restrictive measures on AI development and application.

The technology sector represents a significant growth area for the EU bloc, and regulations must be carefully crafted to ensure they don’t inadvertently hamper the potential for innovation and economic expansion.

The Act’s prescriptive, rather than adaptive, nature could hinder the dynamic evolution of AI technologies. The fast pace of technological innovation demands a regulatory approach that can adjust to new developments and challenges, ensuring that rules remain relevant and effective without restricting progress.

Locking out the competition

Another significant concern is that early and heavy regulations tend to favour incumbents with the resources to navigate complex legal landscapes.

These entities can afford the legal and technical expertise required for compliance, potentially creating barriers to entry for startups and smaller companies. This could stifle competition and innovation, as smaller players are essential for driving forward technological advancements and diversification in the marketplace.

Whilst we should all welcome the EU AI Act for its attempt to regulate the ethical use of AI, we must also recognise, and address, concerns over its potential to hinder technological innovation and favour established companies.

It’s imperative that policymakers now continue the regulatory journey and address these concerns, fostering an environment where AI can be developed and applied ethically and effectively, without curtailing the dynamic innovation that characterises the tech sector.

To use Tudorache’s language, the AI Act is indeed a starting point for new governance, but the next steps require further thought if we are to arrive at the right balance between regulation and innovation.

Nima Montazeri is Chief Product Officer at Liberis

Main image courtesy of iStockPhoto.com and Madmaxer


© 2024, Lyonsdown Limited. Business Reporter® is a registered trademark of Lyonsdown Ltd. VAT registration number: 830519543