AI and the Online Safety Act

There is a range of online harms that cause widespread concern in British society. These include cyber-bullying; children being exposed to inappropriate content; encouragement of suicide and self-harm; cyber-fraud; and socially damaging misinformation – something demonstrated during the recent riots in the UK.

In a free society, preventing these harms is not easy. For instance, age verification processes to protect children, which could include biometrics or registration with a credit card, can be intrusive, inaccurate or easily sidestepped. Content such as pornography or suicide encouragement can easily be misclassified, meaning that legitimate information and opinion may be censored. And forced identification to increase accountability is an attack on freedom of speech, as well as something that substantially reduces the accessibility of websites.

The UK’s Online Safety Act (2023) attempts to provide a balanced solution. Early proposals that websites should censor “legal but harmful” content were excluded from the act: instead, platforms will be required to provide optional tools that give users greater control over the content they see. This seems a practical fix and, in general, the act introduces stringent regulations aimed at ensuring online safety and protecting users, particularly minors, from harmful content.

There is plenty of criticism of the act as it stands, though. Some claim it will limit privacy and freedom of expression – and may not even be enforceable. Nevertheless, the act passed into law on 26 October 2023, and social media platforms and other media owners must now comply with its regulations, under the watchful eye of the regulator, Ofcom.

AI can play a crucial role in helping media owners comply with the requirements of the act, as well as helping consumers keep themselves safe.

Online platforms and the role of AI

Content moderation. Automating content moderation will in practice be essential for any large platform that wishes to avoid publishing illegal material. AI can be used to identify and remove harmful content such as hate speech, violence or misinformation.

Machine learning algorithms can analyse text, images, videos and other media types to detect content that violates the Online Safety Act. This monitoring can happen in real time, as content is posted, enabling platforms to respond quickly to harmful content before it spreads widely. And AI allows content moderation to scale, processing volumes of material that no human moderation team could reasonably review on platforms with millions or even billions of users.

However, identifying harmful content is difficult. For example, medical images can be tagged as pornography, while an understanding of context is essential when trying to flag up hate speech. It is therefore unlikely that all decisions will be taken by a machine: rather, content that looks worrying will most probably be hived off for further analysis by a human. In addition, most media owners will have an appeals process in place.
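
To make the human-in-the-loop idea concrete, here is a minimal sketch of threshold-based routing. The classifier interface, threshold values and category names are assumptions for illustration only, not a description of any real platform's pipeline.

```python
from dataclasses import dataclass
from typing import Callable, Tuple

# Illustrative thresholds -- a real platform would tune these
# per harm category and review them continually.
REMOVE_THRESHOLD = 0.95   # confident enough to act automatically
REVIEW_THRESHOLD = 0.60   # uncertain: hive off to a human moderator

@dataclass
class ModerationDecision:
    action: str      # "remove", "human_review" or "allow"
    category: str
    score: float

def moderate(text: str,
             classifier: Callable[[str], Tuple[str, float]]) -> ModerationDecision:
    """Route user content according to classifier confidence.

    `classifier` stands in for any model that returns a
    (category, probability) pair, e.g. ("hate_speech", 0.97).
    """
    category, score = classifier(text)
    if score >= REMOVE_THRESHOLD:
        return ModerationDecision("remove", category, score)
    if score >= REVIEW_THRESHOLD:
        # Borderline cases (medical imagery, satire, quoted abuse)
        # go to a human rather than being decided by the machine.
        return ModerationDecision("human_review", category, score)
    return ModerationDecision("allow", category, score)
```

The design point is the middle band: rather than forcing a binary machine decision, uncertain content is deferred to people, with an appeals process behind the final call.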

Age verification. Age verification is important as a way of ensuring child safety while allowing adults greater freedom of expression. AI can assist through techniques such as facial or behavioural analysis, helping to ensure children are protected from inappropriate content and interactions.

These tools are not (and will probably never be) 100 per cent effective: they will inevitably identify some adults as children and vice versa. As with content moderation, some human intervention is therefore likely to be needed, and an appeals process put in place.
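
A minimal sketch of that buffer-zone idea, assuming a hypothetical upstream model that returns an age estimate and an error margin (both names invented for the example):

```python
def age_gate(estimated_age: float, margin: float, adult_age: int = 18) -> str:
    """Decide access from an assumed facial- or behavioural-age estimate.

    `estimated_age` and `margin` would come from an upstream model;
    the margin creates a buffer zone so that users the model cannot
    place confidently are escalated rather than refused outright.
    """
    if estimated_age - margin >= adult_age:
        return "allow"                  # confidently an adult
    if estimated_age + margin < adult_age:
        return "deny"                   # confidently a child
    # Too close to call: escalate to a stronger check
    # (e.g. document-based), with an appeals route behind it.
    return "fallback_verification"
```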

Proactive measures. Online risks are constantly changing. For example, hate speech and other criminal activity such as drug dealing and child abuse often use memes, emoticons and emojis that have transient or secret meanings, known only to the gangs that are using them. AI can pick up on this, identifying suspicious new patterns of use and allowing platforms to take proactive measures to address harms swiftly.
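
One simple way to surface such shifts is to compare token frequencies in a recent window against a historical baseline. The sketch below is a crude stand-in for the anomaly detection a platform might actually run; the thresholds are invented for the example.

```python
from collections import Counter

def emerging_terms(recent_tokens: list[str], baseline_tokens: list[str],
                   min_count: int = 20, spike_ratio: float = 5.0) -> list[str]:
    """Flag tokens (words, emojis, meme tags) whose use has spiked
    relative to a historical baseline.

    Flagged terms would go to human analysts for interpretation,
    not straight to automated enforcement.
    """
    recent = Counter(recent_tokens)
    baseline = Counter(baseline_tokens)
    flagged = []
    for token, count in recent.items():
        if count < min_count:
            continue                          # ignore rare noise
        base = baseline.get(token, 0) + 1     # smooth zero counts
        if count / base >= spike_ratio:
            flagged.append(token)
    return flagged
```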

Consumers and the role of AI

Consumers are rarely helpless and have a vital role to play in keeping themselves safe. Again, AI can help, although in the examples below it will generally be up to the user (or in the case of children, an adult) to decide what action to take.

Education. AI can create and deliver personalised educational content to users based on their behaviour (and inferred profiles and needs), helping them understand the risks of harmful online content and how they can protect themselves. This might include AI-driven content recommendations (similar to those seen on subscription TV channels) that steer (but don’t force) users away from unsafe or upsetting content areas.
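
As an illustration of "steering without forcing", a recommender could demote rather than remove risky items, with the weighting under the user's control. Everything here – the tuple shape, the sensitivity setting – is a hypothetical example, not any platform's actual ranking logic.

```python
from typing import NamedTuple

class Item(NamedTuple):
    item_id: str
    relevance: float   # how well it matches the user's interests (0-1)
    risk: float        # how unsafe or upsetting it may be (0-1)

def steer(items: list[Item], sensitivity: float = 0.5) -> list[Item]:
    """Re-rank recommendations, demoting (not deleting) risky items.

    sensitivity = 0 leaves the ranking untouched; 1 pushes risky
    content firmly down the list. The user keeps the final say.
    """
    return sorted(items,
                  key=lambda it: it.relevance - sensitivity * it.risk,
                  reverse=True)
```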

Parental controls. AI-driven software can be set up to automatically filter out harmful content, such as violent or sexually explicit material, or to limit access to certain types of content at different times of the day. These tools can also analyse content in real time, blocking or warning users about potentially harmful material in social media conversations.
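
A toy version of such a filter might combine always-blocked categories with time-of-day windows. The category names and hours below are invented for illustration; a real product would make the policy configurable by the parent.

```python
from datetime import datetime, time

BLOCKED_ALWAYS = {"violence", "sexually_explicit"}           # illustrative
BLOCKED_HOURS = {"social_media": (time(21, 0), time(7, 0))}  # 9pm-7am

def in_window(now: time, start: time, end: time) -> bool:
    """True if `now` falls within [start, end), including windows
    that cross midnight, such as 21:00-07:00."""
    if start <= end:
        return start <= now < end
    return now >= start or now < end

def allowed(category: str, when: datetime) -> bool:
    """Apply the (hypothetical) parental policy to one content category."""
    if category in BLOCKED_ALWAYS:
        return False
    window = BLOCKED_HOURS.get(category)
    if window and in_window(when.time(), *window):
        return False
    return True

# Example: social media at 22:30 is blocked under this policy.
print(allowed("social_media", datetime(2024, 1, 1, 22, 30)))  # False
```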

Identifying misinformation. Many people tend to believe what they see on computer screens uncritically (especially when they wish to believe it). AI tools can help consumers identify misinformation or fake news by analysing the credibility of sources and the tone and substance of the content, alerting them to potentially unreliable information and pointing them towards more reliable sources.
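
The sketch below shows the general shape of such a credibility estimate: a weighted mix of source reputation, corroboration and tone. The inputs, weights and threshold are all invented for the example; real fact-checking systems are far more sophisticated.

```python
def credibility_score(source_reputation: float,
                      corroborating_sources: int,
                      sensationalism: float) -> float:
    """Blend simple signals into a rough 0-1 credibility estimate.

    Inputs are assumed to come from upstream analysis: a reputation
    score for the publisher (0-1), a count of independent outlets
    carrying the same claim, and a sensational-language measure (0-1).
    """
    corroboration = min(corroborating_sources / 3, 1.0)  # saturate at 3
    score = (0.5 * source_reputation
             + 0.3 * corroboration
             + 0.2 * (1.0 - sensationalism))
    return max(0.0, min(1.0, score))

# A consumer tool might warn below some threshold and suggest
# better-scoring sources, rather than simply blocking the page.
if credibility_score(0.2, 0, 0.9) < 0.4:
    print("Caution: this story may be unreliable.")
```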

Anti-fraud tools. Online fraud is a huge problem in the UK. AI can help consumers identify and avoid phishing and other frauds by analysing websites (as well as emails and messages) for signs of criminal activity, preventing users from accidentally visiting sites that could expose them to data theft, malware, spyware and other online threats. Many people are already protected by AI in this way. However, databases of suspicious sites can never be fully up to date, and newer AI systems can analyse content on the fly, providing extra protection.
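
Because blocklists lag behind newly registered scam sites, on-the-fly checks matter. The heuristics below – raw IP hosts, look-alike domains – are a deliberately simple illustration of that idea, not a real product's detection logic; the brand watch-list is invented.

```python
import re
from difflib import SequenceMatcher

WATCHED_BRANDS = ["paypal", "amazon", "hmrc"]   # invented watch-list

def phishing_signals(url: str) -> list[str]:
    """Cheap on-the-fly checks that complement, never replace,
    curated blocklists and trained models."""
    signals = []
    host = re.sub(r"^https?://", "", url).split("/")[0].lower()
    if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", host):
        signals.append("raw IP address instead of a domain name")
    if len(host) > 40 or host.count("-") >= 3:
        signals.append("unusually long or hyphen-heavy domain")
    for brand in WATCHED_BRANDS:
        for label in host.split("."):
            if label != brand and SequenceMatcher(None, brand, label).ratio() > 0.8:
                signals.append(f"domain label '{label}' imitates '{brand}'")
    return signals

print(phishing_signals("https://paypa1.com/login"))
# -> ["domain label 'paypa1' imitates 'paypal'"]
```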

Digital wellbeing. Internet users in the UK spent an average of three hours and 50 minutes a day on their smartphones in 2023. Many people would say this is unhealthy! AI can help people manage their screen time by monitoring what they are looking at and suggesting breaks or alternatives to websites that might negatively affect mental health.
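
A screen-time helper can be as simple as a running timer with per-category tallies. In this sketch the budget is set by the user, not the platform, and the tool only suggests – it never blocks; all names and thresholds are illustrative.

```python
from collections import defaultdict

class ScreenTimeMonitor:
    """Track minutes per content category and suggest a break once a
    user-chosen daily budget is spent. Purely illustrative."""

    def __init__(self, daily_budget_minutes: int = 120):
        self.budget = daily_budget_minutes
        self.minutes = defaultdict(int)

    def record(self, category: str, minutes: int):
        self.minutes[category] += minutes
        total = sum(self.minutes.values())
        if total >= self.budget:
            heaviest = max(self.minutes, key=self.minutes.get)
            return (f"You've spent {total} minutes on screen today, "
                    f"mostly on {heaviest}. Time for a break?")
        return None   # stay quiet; the user decides what to do

monitor = ScreenTimeMonitor(daily_budget_minutes=90)
monitor.record("video", 60)
print(monitor.record("social_media", 40))  # triggers the suggestion
```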

In addition, where users are experiencing online harassment, cyber-bullying or other stressors, AI can point out that this should not be accepted as normal or reasonable, giving people a reason to move elsewhere and even to report malicious content.

Enhancing safety with AI

AI can help ordinary consumers improve their online safety, protecting them from a range of online harms including privacy breaches, cyber-bullying, phishing and exposure to harmful content.

Consumers should already have the knowledge to navigate the web safely (many don't, in part because of failures in the school curriculum), but these AI-powered applications provide an extra layer of security and awareness, helping users move through the digital world more safely and confidently.

The danger, perhaps, is that over-rigorous application of these tools will destroy freedoms as well as cause unnecessary inconvenience. In addition, implemented badly, some content controls (such as a requirement to prove identity) can even increase the risk of cyber-harm to consumers. Any controls must therefore be carefully thought through; automated controls must allow for human agency and redress, and human accountability at a senior level must be built into any solution that is rolled out.

However, despite these reservations, the stringent requirements of the Online Safety Act (2023) will be unachievable in any practical sense without the power and scalability that AI can provide.
