This blog on regulating facial recognition is the third installment of our series on facial recognition. Don’t miss our first blog by AboutFace PI Fay Bound Alberti, about what history can teach us about technological innovation, or our second by guest author Dr Sharrona Pearl, on human facial recognition, face blindness and super-recognisers.
Regulating facial recognition and other biometric technologies
Sara Wasson, Lancaster University
Our faces are unique and intimately connected to our sense of self and identity. Most of us are able to recognise a very large number of faces and take this quintessentially human ability for granted.
But this important skill is no longer limited to humans. Algorithms can do it too. Specific measurements, such as the distance between our eyes, nose, mouth, ears and so on, can be automatically captured and fed into AI systems. These systems are capable of identifying us within a database or picking us out from a crowd.
Biometric (‘biological measurement’) data is the term for any data derived from measuring our physical characteristics, and this includes our faces, fingerprints, walking style (gait) and tone of voice. Biometric technologies can be used to recognise and identify us, but they are also being used to categorise and make inferences about us.
These technologies were previously used almost exclusively in policing. Now, however, a growing number of private and public actors – including employers, schools and retailers – use them not only to identify people but also to categorise them.
This raises a number of legal, ethical and societal concerns. Our human rights, such as our rights to privacy, free expression, free association and free assembly, are potentially at risk.
Discrimination and Bias
There are also issues of bias and discrimination. Some biometric technologies – particularly facial recognition – function less accurately for people with darker skin. But even if the technology could be improved to accurately match faces from all racial groups, ethical problems would persist.
Discrimination and bias can also arise from the social context of policing and surveillance. Facial recognition may be disproportionately used against marginalised communities. Shops may disproportionately add people of colour to ‘watchlists’. Simply making the tech more accurate is not enough to make it harmless or acceptable.
To disentangle these challenges and investigate potential reforms, the Ada Lovelace Institute undertook a three-year programme of public engagement, legal analysis and policy research exploring the governance needed to ensure biometrics are used with public legitimacy.
Through in-depth public engagement research, we found serious public concerns about the impact on rights and freedoms.
Negative Impact on Rights and Freedoms
We began by conducting the first nationally representative survey on UK public attitudes towards facial recognition technology, Beyond Face Value. Respondents were given a brief definition of the technology and answered questions about its use in a range of contexts, such as policing, schools, companies, supermarkets, airports and public transport.
The survey found that a majority of people (55%) want the UK Government to impose restrictions on police use of facial recognition, and that nearly half the public (46%) want the right to opt out. The proportion wanting the right to opt out was higher among people from minority ethnic groups (56%), for whom the technology is less accurate.
The Citizens’ Biometrics Council, a demographically diverse group of 50 members of the UK public, heard from experts about how biometric technologies are used, the ethical questions they raise and the current state of regulatory oversight. After deliberating on these issues, the Council concluded that a strong legal framework is needed to ensure that biometrics are used in a way that is responsible, trustworthy and proportionate.
However, an independent legal review, led by Matthew Ryder QC, found that the legal protections in place are inadequate. The review shows that existing legislation and oversight mechanisms are fragmented, unclear, ineffective and failing to keep pace with the technology.
The review was commissioned by the Institute in 2020, after the House of Commons Science and Technology Select Committee called for ‘an independent review of options for the use and retention of biometric data’.
Building on the independent legal review and our public engagement research, we published a policy report setting out a series of recommendations for policymakers to take forward. A recording of our launch event is available on our website.
Firstly, there is an urgent need for new, primary legislation to govern the use of biometric technologies. The oversight and enforcement of this legislation should sit within a new regulatory function, specific to biometrics, which is national, independent and adequately resourced.
This regulatory function should be equipped to make two types of assessment:
- It should assess all biometric technologies against scientific standards of accuracy, reliability and validity.
- It should assess proportionality in context, prior to deployment, for technologies used in the public sector, in public services or in publicly accessible spaces, or for those used to make significant decisions about a person.
Finally, we are also calling for an immediate moratorium on the use of biometric technologies for one-to-many identification in publicly accessible spaces (e.g. live facial recognition) and for categorisation in the public sector, public services and publicly accessible spaces, until comprehensive legislation is passed.
Biometric technologies impact our daily lives in powerful ways, and are proliferating without an adequate legal framework. Policymakers need to take action to prevent harms and ensure that these technologies work for people and society.
This blog on regulating facial recognition was written by George King. George is a Communications Manager at the Ada Lovelace Institute, with a focus on external relations and engagement. Prior to joining Ada, George worked at the Royal College of Psychiatrists as Communications Officer in their External Affairs team, working across press and public affairs. He has worked for a range of research-based organisations, including the Francis Crick Institute.