By Professor Fraser Sampson, former UK Biometrics & Surveillance Camera Commissioner
In surveillance, perspective is everything. Your viewpoint affects what you will see and therefore what you will miss. And we are missing a lot.
Whether you are contemplating a policy, practice or product, you will get a better overall view of the risks and issues by shifting between overlapping perspectives. In remote biometric surveillance, the composite view from three specific vantage points can be helpful: the technological, the legal and the societal. Together they combine to create a richer, clearer image of what is coming and what is already here.
Viewing the relevant issue, challenge or proposition from the perspectives of the possible (what can be done), the permissible (what must/must not be done) and the acceptable (what people support/expect to be done) can enhance the salient features and expose the relative weight being given to, or assumed by, any one perspective. It can also reveal how we got to where we now find ourselves, helping build on successes while swerving some mistakes of the past.
Early policing experimentation in several Western jurisdictions appears to have been driven by the first perspective. With some AI-driven capabilities such as facial recognition being fetishised, new technology was adopted with a less than detailed look at where it might fit legally, followed finally by an attempt to persuade the public it was good for them. The result was police forces taking algorithms originally designed to predict aftershocks from earthquakes and using them to predict street robbery. Thinking you can use seismic aftershock predictors in this way without also predicting aftershocks to public trust and confidence is probably the very definition of irony, and its impact is still felt today.
Panning across to the legal perspective, in policing and security, AI-driven surveillance brings some very specific challenges, an overarching one being accountability. Accountability means answering for decisions not to use available technology as well as for its deployment. Data and privacy regulators like to say: “Just because you can doesn’t mean you must.” That may be the case for individual and commercial use of technology, but when it comes to policing and security I do not necessarily agree. The state has a legal duty to use readily available means to prevent certain types of harm to the citizen, and those ‘means’ arguably include available surveillance capabilities.
And while much attention is paid to the technological and legal perspectives, it is societal expectation that will ultimately determine democratic accountability for whether the technology available to the police is used or eschewed, not least because the UK still has a model of policing based on consent. We, the people, now routinely use sophisticated surveillance tools once the preserve of state intelligence agencies, and at minimal financial cost.
We freely share personal datasets – including our facial images – with private companies and government on our smart devices for access control, identity verification and threat mitigation. From this societal vantage point it seems reasonable for the police to infer that many citizens not only support them using new remote biometric technology but also expect them to do so, to protect communities, prevent serious harm and detect dangerous offenders – who, by the way, are also using it to potentially devastating effect. But to what extent is that expectation borne out?
Last year I invited the Joint Parliamentary Committee on Human Rights to consider this: if, 20 years ago, we had tried to predict which new police technology would raise the greatest human rights concerns: a weapon designed for the deliberate, sustained application of an electrical charge to the human body in order to enforce compliance with an officer’s directions, or a camera, how many of us would have called it accurately?
In 2024 the TASER has become a standard tactical option for policing across the UK, with no recorded legal findings against its design and functionality, no significant evidence of adverse medical impact and a tiny number of occasions of misuse. However, the adoption of facial recognition technology (FRT) by the police has attracted widespread resistance, not only in the UK but elsewhere. Why? Perhaps the answers can be found by looking at what we got right from the three perspectives when introducing other innovative technology, and at what is different about the way we have approached FRT and AI.
The use of remote biometrics generally by the police raises an interesting question from all three perspectives: because they can, does that mean sometimes they must, given that the law may say so and the public may think so?
Whether or not the competing considerations were ever truly dichromatic, in policy terms the debate over police remote biometrics has moved on from the should-they/shouldn’t-they polarity. The emerging evidence already shows that the future will require a multi-lensed approach. How to ensure accountability while balancing innovation, regulation and expectation is surely the pre-eminent challenge now.
Source: Biometric Update
Fraser Sampson, former UK Biometrics & Surveillance Camera Commissioner, is Professor of Governance and National Security at CENTRIC (Centre for Excellence in Terrorism, Resilience, Intelligence & Organised Crime Research) and a non-executive director at Facewatch.