With the recent AI Safety Summit, the UK is making a serious attempt to become a global leader in AI safety. Many people have expressed concerns about the possible harms from AI. While the fears of some form of Terminator-style human extinction event are probably vastly overplayed, there are genuine reasons to fear AI. Disruption to democracy, the further embedding of inequalities, new risks to privacy and human autonomy – these are all harms that are not just possible, but seem to be happening already. AI safety is a genuine issue.
However, it may be hard for society to protect itself from AI harms without regulation. Commercial technology companies are the real drivers of the AI industry, and these (often large and US-owned) corporations cannot always be trusted to regulate themselves. The harms that have arisen from social media, such as child sexual exploitation and the promotion of suicide, are evidence enough. The UK has recognised this and is attempting to mitigate such harms with the recently passed Online Safety Act. Will this perhaps act as a template for future UK regulation of AI?
Regulating AI
Regulations will inevitably be an important part of promoting AI safety. Here, China and the EU have made most of the running so far.
China has already published a set of “interim measures” designed to regulate generative AI that creates text, audio and imagery (not, for instance, AI that could design chemical or biological agents). These establish a set of principles that include preserving users’ human rights (including physical and mental safety, privacy and equality), requirements to tag automatically generated content and to moderate illegal content, quality standards for training data, and a requirement to evaluate any generative AI services capable of influencing public opinion.
The EU’s approach, which has yet to be finalised, is also a “horizontal” one, designed to regulate AI technology as a whole (not just generative AI) rather than specific applications and harms. While the EU’s AI Act is currently stuck in negotiations between the European Commission and the European Parliament over foundation (multi-purpose) AI models, the risk-based principle behind it, which is designed to guarantee the safety and fundamental rights of people and businesses, is logical.
Types of application are assigned different levels of risk. Applications posing an unacceptable risk (such as real-time biometric identification in public spaces) are banned outright. High-risk applications (such as tools that mark exam scripts) are subject to strict requirements, including risk assessment processes and human oversight. Limited-risk applications (such as chatbots) face certain transparency requirements, while minimal-risk applications (such as video games) need only self-regulation.
While the principles behind AI regulation in both of these jurisdictions are sensible (although, we might argue, incomplete), both inevitably place a regulatory burden on AI companies. This may inhibit innovation. In contrast, the UK’s strategy is to hold off regulation for the moment, arguing that it is too early.
The UK approach is strengthened by its avoidance of indiscriminate horizontal regulation. Instead, it is likely that individual applications will need to meet certain standards – around privacy, safety, equality and the like. The argument made in the UK is that there are already laws that protect people against these types of dangers, and all that is needed is for the appropriate regulator to enforce them. Indeed, the Information Commissioner’s Office (ICO) has already issued advice on AI and privacy.
Alternative approaches: standards and collaboration
Regulation isn’t the only strategy, however. There are many other things societies can do to promote AI safety, such as developing technical standards. Here the UK is very active, with the British Standards Institution (BSI) contributing to a wide range of standards on AI.
International co-operation is another important avenue, and the AI Safety Summit really did present the UK as a leader in the area. The summit attracted senior politicians and technologists from around the world, including US Vice President Kamala Harris and the world’s richest person, Elon Musk, boss of X (Twitter to you and me), Tesla and SpaceX.
Perhaps more significant, though, was the presence of a Chinese delegation including Vice Minister of Science and Technology Wu Zhaohui. Some form of cooperation with China, a major AI power second only to the USA, will be essential if global agreement on controlling AI harms is to be reached.
A joint commitment by 28 governments subjecting advanced AI models to a battery of safety tests before release is at least a start.
The establishment of the UK’s AI Safety Institute is also a useful initiative. The AISI will undertake research and development that will inform policymaking and provide technical tools for governance and regulation. It has been set up to work closely with the UK’s strong commercial and academic base in AI. The UK is the third-largest AI market in the world after the USA and China, with a current valuation of $21 billion: this should help ensure that the AISI succeeds.
Coming up on the outside…
It would be foolish to explore AI safety without mentioning the USA, the world’s biggest AI market. The federal nature of the USA, with individual states retaining considerable autonomy, makes its regulatory landscape complex. However, on the first day of the Safety Summit, Kamala Harris announced a new Federal Government plan for ensuring AI safety: an AI Safety Institute; the “operationalising” of the US standards organisation NIST’s AI risk management framework; guidance on government and military use of AI; and a new philanthropic institute to ensure AI advances the public interest.
More importantly, specific requirements for AI safety were recently announced (just two days before the UK’s Safety Summit!) in an executive order from the White House. These include an obligation on developers of the most powerful AI systems to share their safety test results, new approaches for controlling the development of dangerous biological materials (this is exceptionally important), anti-fraud measures and an advanced cyber-security programme.
The UK’s role in AI safety
By many measures, the UK is number three in the world when it comes to AI, although well behind the USA and China. But this position does give it an opportunity to help set the safety agenda: the UK certainly has the credibility, talent and commercial environment to play an important part. Like the US, the UK is hesitant to pass generalised regulation of the technology, preferring a rifle-shot approach designed to legislate against individual harms.
But instincts in the UK are probably also favourable to the EU’s risk-based approach, even if the AI Act is seen as heavy handed and damaging to innovation. And the appointment of the China-friendly Lord Cameron as Foreign Secretary may well be helpful in pulling that country into fruitful discussions – after all, there is nothing too wrong with China’s current approach.
Compared with its international competitors, the UK does have a very wide view of what constitutes responsible AI. Important issues such as accountability, explainability and contestability, which fare poorly next to safety and fairness in most regulatory strategies, are given real weight here, indicating the UK’s holistic – and pragmatic – approach to AI safety.
Traditionally, the UK has tried (and perhaps failed) to be a bridge between Europe and the USA in foreign policy. Perhaps AI is the area where this approach will finally succeed.