Artificial Intelligence in Policing and Professional Services
The Critical Questions of Accountability, Training, and Public Trust
As artificial intelligence systems become increasingly embedded in critical public services, particularly in law enforcement, fundamental questions about accountability, competence, and justice demand urgent attention. The upcoming January 2026 judicial review of the Metropolitan Police’s use of live facial recognition technology represents far more than a single legal challenge. It is a watershed moment that will determine whether we embrace AI deployment with appropriate safeguards or allow technological advancement to outpace the fundamental principles of justice and human rights that underpin democratic society.
The case of Shaun Thompson, an anti-knife crime campaigner who was wrongly detained for 30 minutes despite providing multiple forms of identification, crystallizes the real-world consequences of deploying AI systems without adequate oversight, training, or accountability mechanisms. His experience is not an isolated incident but a symptom of systemic failures that extend far beyond facial recognition technology into every sector where AI is being rapidly adopted.
The Foundation: How AI Systems Are Built and the Problem of Bias
At the heart of every AI controversy lies a fundamental truth that the public often does not grasp: AI systems are not neutral, objective arbiters. They are mathematical models trained on historical data, and they inevitably reflect and often amplify the biases, gaps, and distortions present in that data. When we discuss how AI is ‘programmed’ or ‘written,’ we must understand that modern AI systems—particularly those using machine learning—are not simply coded with explicit rules. Instead, they learn patterns from vast datasets, making the quality, representativeness, and fairness of training data absolutely critical.
The Training Data Problem
Facial recognition systems used by UK police forces have been found to perform significantly worse on the faces of Black people, women, and individuals under 40. This is not a programming error in the conventional sense but rather a consequence of training datasets that disproportionately contain images of white males. When a Home Office-commissioned review by the National Physical Laboratory revealed this bias in September 2024, police chiefs were forced to increase confidence thresholds to reduce the disparity. However, this reactive adjustment raises a troubling question: how many people had already been misidentified before the bias was acknowledged?
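To make the threshold adjustment concrete, the following is a minimal simulation rather than data from any real system: the score distributions, group labels, and thresholds are invented purely to show the mechanism. When non-matching faces from one group receive systematically higher similarity scores, raising the alert threshold lowers the false match rate for both groups and shrinks the absolute gap between them, at the cost of missing more genuine matches.

```python
import numpy as np

# Illustrative simulation only: all distributions and thresholds below are
# invented to show the mechanism, not measured from any deployed system.
rng = np.random.default_rng(0)

# Similarity scores for non-matching faces ("impostors") in two groups.
# Group B's scores are shifted higher, mimicking a model trained on
# unrepresentative data that confuses faces from that group more often.
impostor_a = rng.normal(loc=0.30, scale=0.10, size=100_000)
impostor_b = rng.normal(loc=0.38, scale=0.10, size=100_000)

# Similarity scores for genuine matches (identical for both groups here).
genuine = rng.normal(loc=0.75, scale=0.10, size=100_000)

for threshold in (0.55, 0.64):  # a lower and a raised confidence threshold
    fmr_a = np.mean(impostor_a > threshold)   # false match rate, group A
    fmr_b = np.mean(impostor_b > threshold)   # false match rate, group B
    missed = np.mean(genuine <= threshold)    # genuine matches not alerted
    print(f"threshold={threshold:.2f}  "
          f"FMR A={fmr_a:.3%}  FMR B={fmr_b:.3%}  "
          f"gap={(fmr_b - fmr_a):.3%}  missed matches={missed:.1%}")
```

The adjustment narrows the disparity only in absolute terms; it does nothing to address why the underlying scores differ between groups in the first place.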
The magnitude of the problem is staggering. Between 2018 and 2024, UK police forces conducted over half a million facial recognition searches on the Police National Database, with researchers estimating that thousands of suspects could have been misidentified. In Detroit, the failure rate was even more dramatic: the police facial recognition system failed to correctly identify people 96 percent of the time. These are not marginal errors; they represent systematic unreliability that undermines the fundamental premise of using such technology.
The Black Box Problem
Many AI systems operate as ‘black boxes,’ where even their developers cannot fully explain why the system reached a particular conclusion. This opacity creates a profound accountability problem. If a police officer cannot explain to a court why an AI system flagged a particular individual as a match, and if the developer cannot provide a clear explanation beyond ‘the algorithm determined it,’ we have effectively outsourced crucial law enforcement decisions to inscrutable mathematical processes. This fundamentally contradicts principles of due process and the right to understand the evidence against you.
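One partial remedy often proposed for this opacity is to approximate the black box with a simpler, human-readable model and report how faithfully the approximation tracks it. The sketch below is illustrative only, built on synthetic data with made-up feature names; it shows the idea of a 'global surrogate' explanation, and also its limit, since the simple model rarely agrees with the black box in every case.

```python
# A minimal sketch of one post-hoc explanation technique (a "global
# surrogate"): approximate an opaque model with a small, readable one.
# The data and feature names are synthetic and purely illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=5_000, n_features=6, random_state=0)

# The "black box": accurate, but its internal logic is hard to narrate.
black_box = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)

# The surrogate: a shallow tree trained to imitate the black box's outputs.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the simple explanation agrees with the black box.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate agrees with the black box on {fidelity:.1%} of cases")
print(export_text(surrogate, feature_names=[f"feature_{i}" for i in range(6)]))
```

Even this kind of approximation only describes what the model tends to do across many cases; it does not establish why a particular individual was flagged, which is precisely what due process demands.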
The General Data Protection Regulation (GDPR) recognizes this problem through its provision for a ‘right to explanation’ when automated decision-making significantly affects individuals. However, the practical implementation of this right remains contested, particularly in law enforcement contexts where authorities claim operational necessity. The tension between the public’s right to understand how they are being assessed and the perceived operational requirements of policing has not been adequately resolved.
The Accountability Crisis: Who is Responsible When AI Fails?
The deployment of AI systems in policing and other professional services creates a complex web of accountability that current legal and institutional frameworks are ill-equipped to handle. When Shaun Thompson was detained based on a false facial recognition match, who bore responsibility? Was it the officer who acted on the AI’s recommendation? The police force that deployed the technology? The developer who created the system? The procurement officials who selected it? The answer, disturbingly, is that current accountability mechanisms do not clearly assign responsibility in such scenarios.
The Chain of Responsibility
Multiple parties contribute to AI deployment, each potentially bearing some responsibility when systems fail. AI developers create the underlying algorithms and train the models, but they typically do not control how their systems are deployed or what decisions are made based on their outputs. Police forces and other organizations choose to adopt these systems, integrate them into their workflows, and establish policies governing their use. Individual officers or professionals interpret and act on AI recommendations. Procurement officials select systems based on claims that may or may not reflect real-world performance.
Current legal frameworks struggle to assign liability across this chain. The Metropolitan Police defended their use of facial recognition by pointing to established policies and oversight mechanisms, yet these same policies failed to prevent Thompson’s detention or the hundreds of other false alerts that have occurred. Meanwhile, AI developers can claim they provided a tool that was misused or deployed inappropriately. This diffusion of responsibility means that when individuals are harmed by AI systems, obtaining redress or even establishing fault becomes extraordinarily difficult.
Professional Standards and Codes of Conduct
Professional bodies in law, medicine, and other fields have begun to recognize the accountability challenges posed by AI. The Solicitors Regulation Authority’s 2023 Risk Outlook report highlights that AI hallucinations and biases could lead to miscarriages of justice, mislead courts, and cause harm at scale. The Bar Council has issued guidance acknowledging that while AI adoption will increase, barristers must maintain control, accuracy, and compliance with professional standards.
However, these professional standards are still evolving and remain largely advisory rather than mandatory. There is no systematic certification process ensuring that professionals using AI systems are competent to do so. There are no standardized auditing procedures to verify that AI systems meet basic accuracy and fairness thresholds before deployment. The governance infrastructure that would make accountability meaningful simply does not exist in comprehensive form.
Compensation and Redress
When AI systems cause harm, victims face significant barriers to obtaining compensation. Current frameworks for miscarriages of justice are already inadequate, with only 6.6 percent of applications succeeding since 2014. The government announced increases to maximum compensation limits in 2025, but these changes address the amount of compensation available, not the accessibility of the scheme. Individuals harmed by AI errors face the additional challenge of proving that the AI system, rather than legitimate investigative work, was responsible for their treatment.
The Training Deficit: Are Officials Equipped to Use AI Responsibly?
Perhaps the most concerning aspect of rapid AI deployment in professional services is the absence of systematic, mandatory training for the officials and professionals who use these systems. The assumption that sophisticated AI tools can be deployed to front-line staff with minimal preparation is not only naive but dangerous. When officers act on facial recognition matches, do they understand the system’s error rates? Do they know that the technology performs worse on certain demographic groups? Are they trained to critically evaluate AI recommendations rather than accepting them as infallible?
Automation Bias and Over-Reliance
Psychological research has identified a phenomenon called ‘automation bias,’ where humans over-rely on automated systems and fail to question their outputs even when contradictory evidence is available. In Thompson’s case, officers repeatedly demanded fingerprint scans and threatened arrest despite being shown multiple forms of identification proving he was not the wanted individual. This suggests that the AI’s ‘match’ was given more weight than physical identity documents and the obvious visual differences between Thompson and the actual suspect.
The Metropolitan Police claim that facial recognition technology never replaces human judgment and that officers always make final decisions. However, the pattern of false detentions suggests otherwise. If officers were genuinely exercising independent judgment rather than deferring to the technology, the rate of wrongful stops should be far lower. The fact that even obviously incorrect matches lead to detention indicates that in practice, the AI's recommendation effectively determines the outcome.
The Training Gap
There is currently no mandatory, systematic training on AI for legal professionals, police officers, or most other public sector workers. The Judicial College has identified ‘preparing for innovation and change’ as a key objective, but this falls far short of comprehensive AI literacy. The Ministry of Justice’s 2025 AI Action Plan commits to supporting staff as their roles evolve, but the details of this training remain vague. Meanwhile, the Crown Prosecution Service conducted a pilot of Microsoft Copilot with over 400 staff to assist with tasks like summarizing emails, but there is no indication that this pilot included training on AI limitations, biases, or critical evaluation skills.
The consequences of this training deficit are already visible. In California, prosecutors submitted legal briefs containing AI-generated citations to non-existent cases, revealing a fundamental lack of understanding about AI hallucinations. While the prosecutor’s office claimed staff had been reminded to verify AI outputs, the fact that such errors occurred in multiple cases suggests that training was inadequate or ineffective.
What Effective Training Would Require
Effective training for officials using AI systems would need to cover several critical areas. First, officials must understand the technical limitations of the specific systems they use, including error rates, known biases, and conditions under which the system performs poorly. Second, they must develop critical thinking skills to evaluate AI outputs skeptically rather than accepting them automatically. Third, they need clear protocols for what to do when AI recommendations conflict with other evidence or professional judgment. Fourth, they must understand the legal and ethical implications of AI-assisted decisions, including potential discrimination and privacy violations.
Such training cannot be a one-time event but must be ongoing, updated as systems change and as understanding of AI limitations evolves. It must be mandatory, not optional, and competence should be assessed before individuals are authorized to use AI systems in consequential decisions. The training must also address the organizational culture that can develop around AI, where pressure to demonstrate technological innovation or efficiency gains can override concerns about accuracy or fairness.
Wider Implications: From Policing to Healthcare, Finance, and Beyond
While much public attention focuses on facial recognition in policing, AI systems are being rapidly deployed across virtually every professional service and sector. The problems of accountability, training, bias, and transparency identified in policing are equally present—and often less visible—in healthcare, financial services, education, social services, and employment. The lessons from policing AI should serve as warnings for these other sectors, not reassurances that AI problems are confined to law enforcement.
Healthcare and Medical AI
AI systems are increasingly used to diagnose diseases, recommend treatments, and predict patient outcomes. Like facial recognition, medical AI systems have been found to perform worse for certain demographic groups, particularly women and racial minorities. An AI system trained predominantly on data from white male patients may fail to recognize symptoms or disease patterns in other populations. Unlike a wrongful police detention, these failures can directly result in death or permanent disability. Yet the accountability mechanisms for medical AI are similarly underdeveloped, and many clinicians using AI diagnostic tools receive minimal training on the systems’ limitations.
Criminal Justice and Risk Assessment
Beyond facial recognition, AI is used throughout the criminal justice system to assess risk of reoffending, determine bail and sentencing recommendations, and allocate prison resources. The UK’s OASys (Offender Assessment System) and OGRS (Offender Group Reconviction Scale) tools attempt to predict reoffending risk based on factors including age, criminal history, and current offense. These systems face the same challenge as all predictive AI: they are trained on historical data that reflects existing biases in the justice system. If Black individuals have historically received harsher sentences, AI systems trained on this data will ‘learn’ to associate race with higher risk, perpetuating and potentially amplifying discriminatory patterns.
Financial Services and Algorithmic Discrimination
AI systems determine creditworthiness, insurance premiums, and access to financial services for millions of people. These systems can discriminate based on factors that serve as proxies for protected characteristics. An AI system might not explicitly use race or gender, but if it uses zip code or employment history—factors that correlate with race and gender due to historical discrimination—it can produce discriminatory outcomes while appearing to be neutral. The opacity of these systems means individuals often do not know why they were denied credit or charged higher premiums, making it nearly impossible to challenge unfair decisions.
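The mechanism is easy to demonstrate. The sketch below is entirely synthetic, with invented variable names and distributions rather than any real lender's model: the classifier is never shown the protected attribute, yet because a correlated proxy carries the same historical pattern, its predicted approval rates still differ sharply by group.

```python
# A fully synthetic sketch of proxy discrimination. All variable names and
# distributions are illustrative assumptions, not real lending data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 20_000

# Protected attribute (never shown to the model).
group = rng.integers(0, 2, size=n)

# A proxy feature: a postcode risk index that is higher on average for
# group 1 because of historical patterns, not individual behaviour.
postcode_risk = rng.normal(loc=group * 0.8, scale=1.0)

# Historical lending decisions penalised high postcode risk.
past_approval = (rng.normal(loc=1.0, scale=1.0, size=n) - postcode_risk) > 0

# Train on the proxy alone; the protected attribute is excluded.
model = LogisticRegression().fit(postcode_risk.reshape(-1, 1), past_approval)
predicted = model.predict(postcode_risk.reshape(-1, 1))

for g in (0, 1):
    rate = predicted[group == g].mean()
    print(f"group {g}: predicted approval rate = {rate:.1%}")
```

Dropping the protected attribute from the inputs, in other words, is no guarantee of neutral outputs.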
Employment and Hiring Algorithms
Many employers now use AI systems to screen job applications, assess candidates, and even make hiring decisions. These systems have been found to discriminate against women, older workers, and racial minorities. Amazon famously abandoned an AI recruiting tool after discovering it penalized resumes containing the word ‘women’s,’ such as ‘women’s chess club captain.’ The system had learned from historical hiring data that reflected Amazon’s male-dominated technical workforce. This example illustrates how AI can encode and automate discrimination that would be illegal if done explicitly by human recruiters.
Social Services and Welfare Systems
Perhaps most concerning is the deployment of AI in social services, where vulnerable populations have limited ability to challenge decisions. AI systems are used to detect welfare fraud, assess child protection risks, and allocate social services resources. These systems can systematically disadvantage already marginalized communities. If an AI system trained to detect fraud flags individuals based on characteristics common in low-income communities, it creates a feedback loop where poverty itself becomes a risk factor for investigation and sanction.
The Gap Between Marketing and Reality
A significant problem driving premature and problematic AI deployment is the substantial gap between how AI systems are marketed and their actual performance. Vendors naturally emphasize successes and capabilities while downplaying limitations and failure rates. Procurement officials and decision-makers, often lacking technical expertise, may not fully understand what they are purchasing. The pressure to demonstrate innovation and technological advancement creates incentives to adopt AI systems even when they are not ready for deployment at scale.
Accuracy Claims vs. Real-World Performance
AI developers often report accuracy rates based on controlled testing conditions that differ substantially from real-world deployment. A facial recognition system might achieve 99 percent accuracy in laboratory conditions with high-quality, front-facing images but perform far worse with surveillance footage, angled shots, or varying lighting conditions. When the Metropolitan Police report their false alert rate as 0.0003 percent, that figure is calculated against the total number of faces scanned. The more relevant metric is the proportion of alerts that turn out to be correct, and here the numbers are far less impressive. At a single deployment at Oxford Circus, the system wrongly stopped five people while correctly stopping one: a precision of roughly 17 percent among the stops it actually triggered.
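The gap between those two figures is a matter of simple base-rate arithmetic, and it is worth spelling out. In the sketch below, the five wrong stops and one correct stop come from the Oxford Circus example above, while the number of faces scanned is a purely hypothetical figure chosen to illustrate how a vanishingly small false alert rate can coexist with very low precision.

```python
# The five wrong stops and one correct stop are from the Oxford Circus
# example; faces_scanned is an assumed figure purely for illustration.
false_alerts = 5
true_alerts = 1
faces_scanned = 1_500_000  # hypothetical crowd size over the deployment

false_alert_rate = false_alerts / faces_scanned
precision = true_alerts / (true_alerts + false_alerts)

print(f"false alert rate: {false_alert_rate:.4%} of all faces scanned")
print(f"precision: {precision:.0%} of alerts pointed at the right person")
```

Both numbers describe the same deployment; which one is quoted determines whether the system sounds near-perfect or unreliable.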
The Pressure to Adopt
Organizations face multiple pressures to adopt AI systems even when prudence would counsel caution. There is competitive pressure—the fear that other organizations are gaining advantages through AI adoption. There is political pressure to demonstrate innovation and efficiency. There is financial pressure from the promise of cost savings through automation. These pressures can override concerns about accuracy, fairness, or readiness. The result is deployment of systems that are insufficiently tested, inadequately understood, and poorly integrated into existing professional practices.
Transparency, Oversight, and Public Trust
The erosion of public trust in institutions deploying AI systems represents perhaps the most significant long-term consequence of premature or poorly governed AI adoption. When people lose faith in the fairness and accuracy of the systems that govern their lives—whether those systems determine police stops, medical diagnoses, credit decisions, or employment opportunities—the social contract that underpins democratic society is fundamentally weakened.
The Transparency Problem
Meaningful accountability requires transparency, but AI systems are often deliberately opaque. Developers claim that algorithmic details constitute proprietary trade secrets. Security agencies argue that transparency would allow criminals to game the system. These claims have merit but cannot be absolute. A balance must be struck between legitimate confidentiality concerns and the public’s right to understand how consequential decisions about their lives are being made.
Independent auditing represents one path toward transparency without full disclosure. Just as financial institutions are subject to external audits, AI systems used in consequential decisions should be independently evaluated for accuracy, bias, and compliance with relevant standards. These audits should be conducted by experts without financial ties to the AI developers, and their findings should be made public in sufficient detail to allow meaningful oversight.
Current Oversight Mechanisms
The UK currently lacks comprehensive oversight infrastructure for AI deployment in public services. The Information Commissioner’s Office has some jurisdiction over data protection aspects. The Equality and Human Rights Commission can address discrimination. However, there is no single body with clear authority and mandate to oversee AI systems across sectors, ensure compliance with minimum standards, investigate failures, and enforce accountability.
The government attempted to abolish the Biometrics and Surveillance Camera Commissioner role through the Data Protection and Digital Information Bill, though this bill did not progress. The incumbent commissioner resigned in August 2024, leaving a gap in oversight. Meanwhile, the charity JUSTICE has called for the creation of an independent central body to set minimum standards for AI use in policing and act as a repository of good practice. This recommendation should be extended beyond policing to cover all consequential uses of AI in public services.
Public Consultation and Democratic Participation
Decisions about how AI systems are deployed in public services should not be made solely by technical experts, police chiefs, or government officials. The public, particularly communities most affected by these systems, must have meaningful input into whether and how AI is used. This is not simply a procedural nicety but a fundamental requirement of democratic governance. When surveillance technologies or algorithmic decision-making systems are deployed without public consent or consultation, they undermine the legitimacy of the institutions employing them.
The government has announced consultations on facial recognition deployment, which is a positive step. However, consultation must be genuine, not merely a rubber-stamping exercise. It must occur before decisions are made, not after systems are already deployed. Communities must be given sufficient information to make informed judgments, including data on accuracy rates, demographic disparities, and alternative approaches that might achieve similar objectives without the same risks.
Legal and Regulatory Frameworks: The Current Vacuum
Perhaps the most striking feature of the current AI landscape is the absence of comprehensive legal frameworks specifically designed to govern AI deployment in high-stakes contexts. Existing laws—data protection, equality, human rights—apply to AI systems but were not designed with AI in mind and often prove inadequate to address AI-specific challenges.
Existing Legal Protections
The Data Protection Act 2018, Human Rights Act 1998, Equality Act 2010, and Police and Criminal Evidence Act 1984 all provide some protection against AI harms. However, these laws face challenges in the AI context. Proving that an AI system discriminates requires establishing causation and intent that are often obscured by algorithmic complexity. The right to explanation under GDPR has not been robustly tested in courts and may not require the level of transparency necessary for meaningful accountability. Human rights protections apply but must be balanced against claimed operational necessities.
The Need for AI-Specific Legislation
What is needed is legislation specifically designed to govern AI deployment in consequential contexts. Such legislation should establish minimum accuracy thresholds that AI systems must meet before deployment. It should require demographic parity testing to ensure systems do not perform significantly worse for particular groups. It should mandate transparency about AI use and provide meaningful rights to challenge AI-assisted decisions. It should establish clear liability frameworks assigning responsibility when AI systems cause harm.
The European Union’s AI Act provides one model, though it has been criticized as both too restrictive and not restrictive enough depending on perspective. The UK government has indicated preference for a more permissive, innovation-friendly approach. However, innovation that comes at the cost of fairness, accuracy, and public trust is not genuine progress. Effective regulation need not stifle innovation but rather channel it toward systems that are demonstrably safe, fair, and beneficial.
The January 2026 Judicial Review
The judicial review brought by Shaun Thompson and supported by the Equality and Human Rights Commission could establish important legal precedents. If the court finds that the Metropolitan Police’s use of live facial recognition is unlawful, this could require explicit legislative authorization for such surveillance, establish requirements for accuracy thresholds and bias testing, mandate public consultation and transparency in deployment decisions, and create frameworks for compensating victims of misidentification.
Conversely, if the court upholds current practices, it may signal judicial acceptance of the police’s approach, potentially accelerating facial recognition expansion across British society and beyond policing into other sectors. The stakes extend far beyond Thompson’s individual case and even beyond facial recognition technology itself. The principles established will influence AI governance across sectors and potentially internationally.
Ammunition for AI Opponents: Legitimate Concerns vs. Luddism
The failures and controversies surrounding AI deployment in policing and other professional services provide substantial ammunition for those who oppose AI integration entirely. It is crucial to distinguish between legitimate concerns that should be addressed and knee-jerk opposition to technological change. However, it is equally crucial to recognize that many concerns about AI are entirely rational responses to real problems rather than irrational technophobia.
The Risk of Backlash
When AI systems are deployed prematurely or without adequate safeguards, the resulting failures create public backlash that can make it more difficult to deploy even well-designed, beneficial AI systems. If facial recognition becomes synonymous with wrongful detention and racial discrimination, public opposition may prevent deployment of the technology even in contexts where it might be genuinely beneficial and accurate. This is not merely a public relations problem but a substantive issue of social trust and institutional legitimacy.
Distinguishing Legitimate Concerns
Not all opposition to AI deployment stems from ignorance or fear of change. Research shows that greater AI knowledge actually correlates with decreased trust in police facial recognition technology, challenging assumptions that public skepticism results from lack of understanding. People with technical expertise recognize the limitations, biases, and risks that enthusiastic adopters may overlook or minimize. Their concerns deserve serious consideration rather than dismissal as Luddism.
Legitimate concerns about AI include: demonstrated bias and discrimination against marginalized groups; lack of transparency and accountability mechanisms; inadequate training for professionals using AI systems; absence of meaningful consent or democratic participation in deployment decisions; insufficient accuracy for the consequences of errors; mission creep and function creep as systems adopted for limited purposes expand; erosion of privacy and civil liberties; and concentration of power in the hands of those who control AI systems.
The Path Forward
Those who genuinely support beneficial AI deployment should be the most vocal advocates for strong governance, accountability, and safeguards. Rushing to deploy systems that are not ready, resisting transparency and oversight, dismissing legitimate concerns, and prioritizing innovation over accuracy and fairness are counterproductive strategies that fuel opposition and undermine trust. The path to sustainable AI integration runs through robust safeguards, not around them.
Recommendations: A Framework for Responsible AI Deployment
Based on the analysis of current problems and gaps, the following recommendations outline what responsible AI deployment in professional services should require.
Legislative and Regulatory Framework
Parliament should enact comprehensive AI legislation establishing minimum standards for AI deployment in consequential contexts. This legislation should require explicit legislative authorization for deployment of surveillance technologies including facial recognition. It should mandate minimum accuracy thresholds and demographic parity testing before systems can be deployed. It should require transparency about AI use and provide enforceable rights to explanation and challenge. It should establish clear liability frameworks assigning responsibility across the chain from development to deployment to use.
Independent Oversight
An independent oversight body should be established with authority to set standards, conduct audits, investigate failures, and enforce accountability for AI systems used in public services. This body should have expertise spanning technology, law, ethics, and the specific sectors being regulated. It should have power to require disclosure of algorithmic details subject to appropriate confidentiality protections. It should publish regular reports on AI deployment, performance, and impact.
Mandatory Training and Certification
Professionals using AI systems in consequential decisions should be required to complete comprehensive training covering the specific systems they use, including technical limitations, known biases, error rates, and demographic disparities. Training should develop critical evaluation skills and provide clear protocols for when AI recommendations conflict with other evidence or professional judgment. Competence should be assessed and certification should be required before individuals are authorized to use AI systems. Training must be ongoing, updated as systems and understanding evolve.
Transparency and Public Participation
Deployment of AI systems in public services should require genuine public consultation, particularly with communities most affected. Sufficient information must be provided for informed judgments, including accuracy rates, demographic disparities, privacy implications, and alternative approaches. Consultation must occur before deployment decisions are finalized, not afterward. Organizations deploying AI must maintain public registries disclosing what systems are in use, for what purposes, with what accuracy rates, and with what demographic disparities.
Accuracy and Bias Testing
Before deployment, AI systems should undergo rigorous testing by independent evaluators. Testing must include real-world conditions, not merely laboratory settings. Systems should be required to demonstrate comparable accuracy across demographic groups. Significant disparities should trigger either system improvement or prohibition on deployment. Testing should be repeated periodically after deployment to ensure continued performance. Results should be publicly disclosed.
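What such testing might look like in code is not mysterious. The sketch below is a minimal illustration under assumed conventions; the tolerance, the data layout, and the pass/fail rule are placeholders that a real audit regime would have to define in regulation, but the core computation, comparing false positive rates across demographic groups, is straightforward.

```python
# A minimal sketch of a per-group bias audit. The log format, tolerance,
# and pass/fail rule are assumptions for illustration only.
from collections import defaultdict

def group_false_positive_rates(records):
    """records: iterable of (group, alerted, is_true_match) tuples."""
    false_alerts = defaultdict(int)
    non_matches = defaultdict(int)
    for group, alerted, is_true_match in records:
        if not is_true_match:
            non_matches[group] += 1
            if alerted:
                false_alerts[group] += 1
    return {g: false_alerts[g] / n for g, n in non_matches.items() if n}

def passes_parity(rates, max_ratio=1.25):
    """Assumed rule: no group's false positive rate may exceed the lowest
    group's rate by more than max_ratio."""
    lowest, worst = min(rates.values()), max(rates.values())
    if lowest == 0:
        return worst == 0
    return worst / lowest <= max_ratio

# Hypothetical audit log: (demographic group, system alerted?, true match?)
log = ([("A", False, False)] * 9_940 + [("A", True, False)] * 10 +
       [("A", True, True)] * 50 +
       [("B", False, False)] * 9_900 + [("B", True, False)] * 60 +
       [("B", True, True)] * 40)

rates = group_false_positive_rates(log)
print("false positive rate by group:",
      {g: f"{r:.2%}" for g, r in rates.items()})
print("parity check passed:", passes_parity(rates))
```

The difficult questions are institutional rather than computational: who runs the audit, on what data, with what tolerance, and with what consequences when a system fails.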
Clear Accountability Mechanisms
Legal frameworks should clearly assign responsibility when AI systems cause harm. Organizations deploying AI should bear primary liability for ensuring systems are accurate, fair, and properly used. Developers should be liable for known defects not disclosed to deployers. Individual professionals should be accountable for failing to exercise appropriate judgment when AI recommendations prove incorrect. Victims of AI errors should have accessible paths to compensation without requiring proof of innocence beyond reasonable doubt.
Professional Standards
Professional bodies should develop and enforce standards for AI use in their fields. These standards should be mandatory, not advisory. They should specify when AI use is appropriate and when it is not. They should require documentation of AI-assisted decisions. They should prohibit over-reliance on AI without independent verification. Violation of standards should carry professional consequences including suspension or decertification.
Research and Evidence Base
Substantial public investment should support independent research on AI impacts, particularly on vulnerable and marginalized communities. Research should examine not only technical performance but also social, psychological, and democratic effects of AI deployment. Evidence of harm or bias should trigger immediate review and potential suspension of systems. The burden of proof should rest with those deploying AI to demonstrate safety and fairness, not with affected communities to prove harm.
Conclusion: The Crossroads
We stand at a critical juncture in the integration of artificial intelligence into professional services and public institutions. The decisions made in the next few years will determine whether AI becomes a tool that enhances human judgment and serves the public good, or whether it becomes a mechanism for encoding and amplifying existing inequalities while evading accountability.
The case of Shaun Thompson and the January 2026 judicial review of Metropolitan Police facial recognition use are not isolated incidents but rather symptoms of systematic failures in how we are approaching AI deployment. These failures include: deploying systems without adequate accuracy or bias testing; failing to provide meaningful training for professionals using AI; lacking clear accountability when systems cause harm; resisting transparency and independent oversight; prioritizing innovation and efficiency over fairness and accuracy; deploying surveillance technologies without democratic consent or consultation; and allowing the gap between AI marketing and reality to drive decisions.
These failures are not inevitable. They result from choices—choices to rush deployment, to resist regulation, to prioritize technological enthusiasm over critical evaluation, to dismiss legitimate concerns as Luddism, and to treat affected communities as subjects of technology rather than participants in decisions about its use.
The alternative path is clear, though more demanding. It requires establishing robust legal and regulatory frameworks before widespread deployment, not after problems emerge. It requires independent oversight with real authority and resources. It requires comprehensive, mandatory training for all professionals using AI in consequential decisions. It requires transparency and public participation in deployment decisions. It requires accuracy and bias testing by independent evaluators with results publicly disclosed. It requires clear accountability mechanisms and accessible paths to redress when systems fail.
Most fundamentally, it requires recognizing that AI is not neutral technology but rather a set of tools that will reflect the values, priorities, and power structures of those who create and deploy them. If we deploy AI without adequate safeguards, we should expect it to encode and amplify existing biases and inequalities. If we deploy AI without transparency and accountability, we should expect it to evade responsibility when it causes harm. If we deploy AI without democratic participation, we should expect it to undermine rather than strengthen public trust in institutions.
The question is not whether to use AI in professional services but how to do so responsibly. Those who genuinely support beneficial AI deployment should be the strongest advocates for robust governance and safeguards. The alternative—continued deployment of inadequately tested, poorly understood, insufficiently governed systems—will inevitably generate the backlash that AI enthusiasts most fear.
When Shaun Thompson was detained based on a false facial recognition match despite presenting multiple forms of identification proving his identity, it revealed not just a technical failure but a systemic failure of judgment, training, accountability, and governance. The January 2026 judicial review will determine the immediate legal status of Metropolitan Police facial recognition, but the broader questions it raises will persist regardless of the court’s decision.
Will we deploy AI systems with adequate safeguards, or will we continue to allow enthusiasm to override prudence? Will we ensure that officials using AI are properly trained, or will we expect sophisticated technology to compensate for lack of understanding? Will we establish clear accountability for AI failures, or will we allow responsibility to dissolve in the complexity of technological systems? Will we provide meaningful transparency and public participation, or will we allow AI deployment to occur in the shadows? Will we prioritize fairness and accuracy, or will we tolerate systems that work well for some but fail others?
These questions have no easy answers, but they demand answers nonetheless. The choice between effective governance and technological free-for-all will determine not only the success of AI deployment but also the character of the society we are creating. We can build AI systems that enhance human judgment, serve the public good, and strengthen democratic institutions. But doing so requires wisdom, humility, and commitment to principles that transcend the latest technological excitement. The path we choose in the months and years ahead will shape our society for decades to come.

