
Facial Recognition: A Reflection of Human Instincts and the Ethics of AI

Writer: My Mate Marv

Updated: Sep 20, 2024

A friend and former colleague of mine recently brought to my attention the BBC’s coverage of Live Facial Recognition (LFR) technology being piloted in Hampshire. It stood out to me as it’s particularly close to home (well... it is my home; I live in Hampshire). This trial has sparked renewed conversations about privacy, surveillance, and trust — areas that deeply concern me, especially in the context of AI ethics.


In a previous article, The World of Surveillance and The Evolution of Identity Tracking: From Birth Certificates to Digital ID and Beyond, I explored how our society has long used tools like birth and death certificates to track our identities, and how we might almost be forgiven for not noticing the gradual shift from physical documents to the digital identities that follow us throughout our lives. Facial recognition is, in many ways, an extension of something we already take for granted. As people, we naturally identify and categorise faces, distinguishing between friends, strangers, and threats every day. This human instinct is mirrored in the technology we see today, but when it is done by machines at scale, it raises far more complex ethical concerns.


Ethical Concerns: Trust, Bias, and Oversight


One of the core challenges we face with AI-driven surveillance is trust. The use of facial recognition by the police must be viewed within the context of historical and ongoing issues. The tragic outcome of the Stephen Lawrence case and the subsequent inquiry exposed deep-seated issues of institutional racism and professional incompetence within the police force. These revelations are a stark reminder of what is at stake when we entrust unchecked technologies to institutions that have a fraught relationship with marginalized communities.


In this context, the deployment of facial recognition technology raises urgent concerns. It’s crucial to remember that this isn’t just about efficiency — it’s about human rights, privacy, and equity. The risk of false positives disproportionately affecting ethnic minorities is real and documented. The BBC references Big Brother Watch’s view that this use of technology is "dangerously authoritarian surveillance". The organisation has consistently raised concerns about the potential for abuse, biases in AI, and the erosion of privacy rights, positioning itself as a key voice in these debates.


The idea of "police marking their own homework" becomes particularly problematic here. Without independent oversight, how can we trust that errors are acknowledged, biases corrected, and privacy respected? Data about false positives, wrongful arrests, and surveillance outcomes must be made public and analysed by independent bodies to ensure fairness and accountability.


Policy, Guidelines, and the Challenge of Implementation


In theory, policy and guidelines are meant to ensure ethical use of technologies like facial recognition. But the challenge lies in embedding these guidelines, not just writing them. The implementation gap is something we’ve already witnessed in areas like GDPR, where privacy policies are clear on paper, but enforcing these standards in practice is much more difficult.



We’ve also seen the consequences of unimplemented guidelines before. Take the Grenfell Tower tragedy — the inquiry revealed that safety guidelines were not only insufficient but also frequently ignored or manipulated. In a capitalist economy, when profit becomes the primary motive, even the most well-intentioned guidelines are at risk of being sidestepped.


It’s important to remind ourselves that our Emergency Services exist to serve and protect us, the citizens: to keep order and to preserve our civil liberties and our safety.


We must ask ourselves:


How can we ensure that guidelines for facial recognition technology are not only written but embedded into daily practice? 


How can we keep the human needs of safety and civil liberty at the centre of rapidly evolving surveillance technology?


Without a clear commitment to enforcement, the potential for harm remains high.


Questions for Reflection


As this trial unfolds, we must ask ourselves:


What safeguards are in place to prevent misuse?


How do we ensure that marginalized communities, particularly ethnic minorities, are not unfairly targeted?


How can we ensure that the policies guiding this technology are implemented with integrity, not bent for convenience or profit?


These are not questions with simple answers, but they deserve careful and collective reflection.


Beyond the Surface: Engaging with the Public Dialogue


In the face of rapidly evolving technologies like Live Facial Recognition (LFR), it’s easy to feel overwhelmed by the complexity and the implications for privacy, civil liberties, and trust. Yet, as the late Tupac Shakur so powerfully said, "The power is in the people, in politics we address."  We are empowered not only to question these systems but to engage in the ongoing dialogue about how they shape our society.


The rollout of these technologies comes with a responsibility — both from those deploying them and from us, the public, to stay informed and participate in discussions. The evolution of our culture, our rights, and our civil liberties depends on our active involvement in shaping the systems around us. By looking beyond surface-level headlines and diving into publicly available documents, we can better understand the policies, guidelines, and safeguards in place — and critically, where they might fall short.


I encourage everyone to explore the public documents that provide the framework for these technologies. There are likely many policies available that outline how this technology is governed. Where there is concern about trust and the impact on our civil liberties, these documents are vital resources for understanding how decisions are being made. We have the power to engage, question, and ultimately influence the direction of these discussions.


Let’s ensure the conversation is not one-sided, but instead one where we, the public, take an active role.




