Business Reporter

Facial recognition in the city

Public facial recognition systems are fraught with controversy. They can be used to keep people in cities safe – to identify missing people or spot criminals in a crowd, for example. But the technology comes with many concerns about the privacy of individual citizens. The potential to use facial recognition technology (FRT) as a tool of social control is widely feared.

 

Meta’s decision to shut down facial recognition software on its platforms, removing the facial recognition templates of more than a billion users, is a symptom of this fear. The move suggested that Facebook recognised the need to address public concerns regarding privacy and consent in the use of facial recognition technology.

 

Concerns over the use of facial recognition technology in cities run even deeper, and some US cities have taken steps to ban its use. San Francisco and Seattle, for example, have restricted the use of facial recognition technology by government agencies, including law enforcement. And the EU is edging closer to an all-out ban on facial recognition in public spaces.

 

The benefits of facial recognition

 

Are these concerns justified? After all, FRT can be used to improve public safety substantially. Known criminals can be identified. Police can be tipped off about people behaving suspiciously in crowded places. And access to controlled spaces such as government buildings and university campuses can be managed.

 

Public FRT can also be used to make city life more bearable. Facial recognition systems can speed up check-in and check-out at airports and sports stadia. Ride-sharing is far safer when FRT can confirm who people are sharing a vehicle with. And FRT can even make shopping easier, with payments made using faces rather than bank cards, which can be lost or stolen.

 

Another benefit of facial recognition is that vulnerable people can be assisted. Dementia patients or young children who go missing can be found and taken home. Drug users who suffer medical emergencies can be identified and their medical notes made available to first responders.

 

Public concerns over facial recognition

 

But it is undeniable that facial recognition systems raise serious worries. Privacy is an especially pressing concern: FRT can collect a great deal of personal data, which can be used not only to identify people but also to track where they go and who they associate with, and even to predict their behaviour. The potential for abuse by overenthusiastic authorities is considerable.

 

In addition, there is the problem of bias and inaccuracy. When facial recognition systems are inaccurate, individuals who are misidentified can suffer real harm, perhaps being accused of crimes committed while they were somewhere else entirely. This problem is particularly acute for people of certain ethnicities, who may be routinely misidentified because the data used to train the system was insufficiently diverse. The result can be discrimination, as well as damage to trust in the authorities using the system.

 

There are also concerns about the indiscriminate use of data to train FRT systems. Most systems are trained on data that is freely available online. However, the subjects of that data generally have no idea that their facial images are being used in this way, and will rarely have given consent. Irrespective of any legal concerns about privacy or copyright, there are simple moral questions about whether someone’s image should be used to develop facial recognition systems without their permission.

 

An ethical framework

 

These problems are not insurmountable. People who have suffered the consequences of an act of terror will often agree that the public good can, and should, override private rights. But this must be done ethically. The UK government is leading in this area with proposals for an ethical framework to underpin the regulation of artificial intelligence (AI), including FRT. The proposed principles include:

 

  • Safety, security and robustness: applications of AI should function in a secure, safe, robust and accurate way where risks are carefully managed
  • Transparency and explainability: organisations deploying AI should communicate when and how it is used and explain a system’s decision-making process
  • Fairness: AI must comply with the UK’s existing laws, for example around privacy and equality, and must not create unfair outcomes
  • Accountability and governance: there should be appropriate oversight of the way AI is being used and clear accountability for the outcomes
  • Contestability and redress: there must be clear routes for people affected by AI to dispute harmful outcomes or decisions

 

These principles should be applied to any FRT system used in public spaces. To do so, it will be important to consider how they can be built into such systems.

 

Privacy and fairness can, for example, be enhanced by minimising data collection, ensuring that facial recognition systems gather only the data necessary for the purpose they serve. Bias can be largely eliminated by better development processes, especially in the choice of training data and algorithms. Regular testing, both before and after deployment, can help to maintain system accuracy. And accountability can be ensured by requiring that a named senior executive answer for the outputs of any system: no hiding behind excuses such as “it was the computer’s fault”.

 

It is true, though, that some of these principles will be difficult to implement. Transparency and explainability pose a particular problem. While existing regulations such as GDPR offer lessons about how people should be kept informed about technology, the principle of explainability is problematic, especially where machine learning has created a “black box” system whose outputs no one can fully account for. But even here there are measures that can increase explainability, even if the system can never be fully transparent.

 

The perspectives on the use of facial recognition technology in cities vary widely. For some, the technology is an unwelcome step towards a Big Brother society. For others, the wider good is the only consideration – after all, if you have nothing to hide, you’ve nothing to worry about.

 

The truth surely lies somewhere between these two extremes: a solution in which the benefits of public facial recognition are accepted but the dangers are managed through a combination of better technology and stronger constraints on its use.


© 2024, Lyonsdown Limited. Business Reporter® is a registered trademark of Lyonsdown Ltd. VAT registration number: 830519543