
Is your data fit for AI integration?


Paolo Platter at Agile Lab argues that UK businesses need to do more to prepare for the EU’s AI Act if they wish to trade in Europe

  

Pressure is mounting on organisations as the window to prepare for legislation preventing the misuse of AI begins to shrink.

 

With the UK, EU and US taking different stances, assimilating their respective requirements into a governance structure is no mean feat. But the toughest part will be applying those principles to the development and provision of every AI-powered product and service an enterprise offers, or uses internally. 

  

Although senior executives are generally supportive of having such controls in place, there is still plenty of confusion about how to convert what can seem like a jumble of legislation and guidelines into best practice without stalling AI innovation. Those tasked with compliance need to deploy solutions that are flexible enough to meet current laws and can easily accommodate ongoing updates as AI evolves.

  

A mix of legislation and guidelines 

To date, the UK has taken what the government terms a pro-innovation approach, focusing on the outcomes AI is likely to generate in particular applications rather than on the technology itself. Take an AI-powered chatbot: one used to triage customer service requests for an online clothing retailer should not be regulated in the same way as a similar application used as part of a medical diagnostic process.

 

As such, the UK requires organisations to classify and assess risk according to the harm AI may cause. Action against transgressions relies on existing legislation, including the UK GDPR and the Data Protection Act. These well-established laws already mandate strong consumer protection and require transparency around any type of automated decision-making, including AI.

  

The US takes a different approach with its Executive Order on AI: each federal agency must regulate its own area of responsibility, exercising existing laws as necessary. The Order also advocates a risk-based approach, but it’s up to each organisation to devise internal standards for assessing and mitigating those risks.

 

There is no overarching legislation. Instead, the AI Bill of Rights sets out a framework for developing and deploying AI responsibly, though as yet there are no specific powers of enforcement. More recently, the Department of Commerce’s National Institute of Standards and Technology (NIST) has released further guidance related to the Executive Order in four draft publications intended to help organisations improve the safety, security and trustworthiness of AI.

  

Currently, the EU’s AI Act, coming into force in August 2024, appears to be the most robust and comprehensive, setting out substantial penalties for non-compliance: fines can be as high as €35 million or 7 per cent of worldwide annual turnover, whichever is higher, depending on the infringement.

 

The Act clearly defines risk levels, banning uses it deems to present an unacceptable risk of harm. For high-risk cases, transparency must be maintained, showing how algorithms reach their decisions.

 

Importantly, the Act emphasises the need for responsible data practices, mandating that organisations must use high-quality data to train AI. It also requires the implementation of safeguards to protect user privacy, and resilient cyber-security measures to guard against hackers manipulating AI.  

  

Additionally, the EU is strengthening and expanding its NIS2 directive to cover medium-sized and large entities as well as more industry sectors. Organisations within scope must maintain sufficient cyber-defences and ensure their suppliers and third parties do the same, or face further fines.

  

In the UK too, there’s fear that cyber-criminals are becoming more active. A report from GCHQ’s National Cyber Security Centre predicts cyber-attacks, particularly ransomware, will increase over the next two years. It suggests that AI tools will lower the entry barrier for novice cyber-criminals, enabling relatively unskilled threat actors to become proficient more quickly. 

  

Building a solid AI compliance framework 

Despite these warnings, recent research suggests that a quarter of UK firms have yet to make any plans to meet the EU’s AI Act. Nonetheless, with its robust framework and focus on transparency, the Act can serve as a solid foundation for UK companies committed to ensuring their AI-powered applications are built securely and behave responsibly.

  

Although the theory sounds good, putting it into practice throws up a plethora of challenges and concerns. Many executives remain apprehensive that compliance obligations will slow innovation and reduce competitiveness rather than drive success. With a modern computational governance solution in place, however, this need not be the case.

  

Designed to enforce a comprehensive procedural framework, a computational governance platform can ensure that AI compliance is built into the entire data management lifecycle, along with all relevant legal, regulatory and internal standards. It enables those responsible for compliance to establish these standards as non-negotiable guardrails, not only meeting external obligations but also ensuring consistency across data quality, integrity, architecture and security. 

  

As highlighted by the EU AI Act, developing AI responsibly and meeting compliance obligations hinges on effective data lifecycle management. This encompasses the collection, storage and handling of data, as well as ensuring its accuracy, completeness and security. 
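
To make this concrete, the short Python sketch below shows the kind of automated data-quality guardrail such a platform might apply before a dataset is cleared for AI training. It is a minimal illustration, not any particular product’s API: the rules, column names and file path are all hypothetical.

```python
from dataclasses import dataclass
from typing import Callable

import pandas as pd


@dataclass
class QualityRule:
    """A named, automated check applied to a dataset before it is used for AI."""
    name: str
    check: Callable[[pd.DataFrame], bool]


def evaluate(df: pd.DataFrame, rules: list[QualityRule]) -> list[str]:
    """Return the names of any rules the dataset fails."""
    return [rule.name for rule in rules if not rule.check(df)]


# Hypothetical rules covering completeness, volume and a basic privacy safeguard.
rules = [
    QualityRule("no_missing_labels", lambda df: df["label"].notna().all()),
    QualityRule("minimum_row_count", lambda df: len(df) >= 10_000),
    QualityRule("no_raw_email_column", lambda df: "email" not in df.columns),
]

# "customer_interactions.parquet" is an illustrative dataset name.
training_data = pd.read_parquet("customer_interactions.parquet")
failures = evaluate(training_data, rules)
if failures:
    # The guardrail stops low-quality data reaching the training pipeline.
    raise RuntimeError(f"Dataset blocked from AI training: failed {failures}")
```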

 

Enforcing governance 

While the concept of enforced computational governance might seem restrictive, in practice it enables data owners to take responsibility for maintaining the integrity of the data they generate and manage. Effectively, all departments and functions can turn data into a usable, compliant product that’s easily accessible by other data consumers in the organisation. This supports more agile development and the mining of data intelligence, always with governance guardrails in place to maintain data quality standards.

  

Not to be confused with traditional data management tools, a computational governance platform doesn’t create or duplicate data. Instead, it uses automation to facilitate data discovery, oversee development projects and tools, and apply rigorous data standards and pre-defined policies regardless of technology or format. For data consumers, it streamlines processes, freeing up valuable time previously spent searching for and validating data.

  

A computational governance platform is vital for transparency and due diligence when using data for AI, as it ensures compliance with compulsory processes. It’s no longer possible for any function or individual to sidestep governance, because data projects, whatever their size or scope, cannot be released into production unless all of the pre-defined policies are followed.
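
As a simple illustration of such a gate, the hypothetical Python sketch below refuses to promote a data project to production while any pre-defined policy is unmet. The policy names and project attributes are invented for the example; a real computational governance platform would supply its own policy engine.

```python
from dataclasses import dataclass


@dataclass
class DataProject:
    name: str
    has_data_contract: bool   # schema and SLAs published for consumers
    pii_classified: bool      # personal data fields identified and tagged
    lineage_documented: bool  # sources and transformations recorded


# Pre-defined policies: every one must pass before a release is permitted.
POLICIES = {
    "data contract published": lambda p: p.has_data_contract,
    "PII fields classified": lambda p: p.pii_classified,
    "lineage documented": lambda p: p.lineage_documented,
}


def release_to_production(project: DataProject) -> None:
    violations = [name for name, check in POLICIES.items() if not check(project)]
    if violations:
        # The guardrail is non-negotiable: nothing ships until every policy passes.
        raise PermissionError(f"{project.name} blocked: {', '.join(violations)}")
    print(f"{project.name} released to production.")


release_to_production(DataProject("churn-model-features", True, True, False))
# Raises PermissionError: churn-model-features blocked: lineage documented
```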

  

In this new era of AI-powered development, the risks of non-compliance have never been higher. By implementing a computational governance platform, organisations can establish an enterprise-wide data management framework that supports responsible AI innovation. The alternative, trusting users to follow a piecemeal approach that’s easy to circumvent, would be courting disaster.

 


 

Paolo Platter is CTO and Co-founder at Agile Lab and Product Manager on Witboost 

 

Main image courtesy of iStockPhoto.com and champpixs
