Today's cybersecurity headlines are brought to you by ThreatPerspective


Ethical Hacking News

The UK's Facial Recognition Controversy: A Failure to Disclose Bias


The UK government's failure to disclose significant biases in police facial recognition technology has sparked outrage among civil liberties groups and data protection advocates. The controversy raises questions about transparency and accountability in the use of biometric technologies and highlights the need for greater oversight and regulation.

  • The UK government has been accused of concealing biases in police facial recognition technology, despite regular engagement between the two organizations.
  • The Information Commissioner's Office (ICO) has criticized the Home Office for failing to disclose flaws in the algorithm used by UK police forces, including a recent test that revealed substantial weaknesses.
  • The use of facial recognition technology is problematic due to biases in algorithms, with certain demographics being less accurately identified under strict settings.
  • Black females are more likely to be falsely matched than Black males, while White subjects have lower false positive rates than Asian and Black subjects.
  • The government has asked for a further review of police use of facial recognition technology and is procuring a new algorithm that it says shows no statistically significant bias.



    The UK government has been accused of concealing significant biases in police facial recognition technology, despite regular engagement between the two organizations. The Information Commissioner's Office (ICO), the UK's data protection watchdog, has criticized the Home Office for failing to disclose these flaws, citing a recent test that revealed substantial weaknesses in one of the algorithms used by UK police forces.

    The controversy surrounds the use of retrospective facial recognition, or "RFR," which involves using existing images from public sources, such as CCTV footage or social media, to identify individuals who may have been involved in crimes. The Police National Database (PND) is a centralized database that stores information on millions of individuals, and police forces use it to match faces found through RFR against those already in the database.

    The ICO's deputy commissioner, Emily Keaney, expressed disappointment at not being told about historical bias in the algorithm used by UK police forces for RFR within the PND. The bias was revealed through tests conducted by the National Physical Laboratory and commissioned by the Home Office, which examined two algorithms: Cognitec FaceVACS-DBScan ID v5.5 and Idemia MBSS FR.

    The Cognitec algorithm showed significant weaknesses when identifying certain demographics under strict settings designed to eliminate false positives. With no restrictions applied, it correctly matched an image of a suspect to an individual in the PND 99.9 percent of the time. However, when testers required it to return results only above a very high similarity-score threshold, effectively eliminating false positives, its accuracy dropped to 91.9 percent.
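The tradeoff described above is a standard property of threshold-based matching: raising the similarity threshold suppresses false positives but also rejects more genuine matches. The following sketch illustrates the mechanism with synthetic scores; the distributions and numbers are hypothetical and are not drawn from the National Physical Laboratory tests.

```python
# Illustrative sketch of a similarity-threshold tradeoff.
# Scores are synthetic (random.gauss), NOT data from the NPL report.
import random

random.seed(0)

# Hypothetical similarity scores: genuine pairs tend to score high,
# impostor (non-matching) pairs tend to score lower.
genuine = [random.gauss(0.92, 0.04) for _ in range(1000)]
impostor = [random.gauss(0.70, 0.08) for _ in range(1000)]

def rates(threshold):
    """Return (true-match rate, false-positive rate) at a given threshold."""
    tmr = sum(s >= threshold for s in genuine) / len(genuine)
    fpr = sum(s >= threshold for s in impostor) / len(impostor)
    return tmr, fpr

for t in (0.75, 0.85, 0.95):
    tmr, fpr = rates(t)
    print(f"threshold={t:.2f}  true-match rate={tmr:.3f}  "
          f"false-positive rate={fpr:.3f}")
```

A stricter threshold always moves both rates in the same direction, which is why the Cognitec figures show accuracy falling as false positives are squeezed out.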

    In Cognitec's case, with no restrictions applied, the algorithm was best at identifying Asian subjects, with a 98 percent success rate. White subjects were correctly identified 91 percent of the time, and Black subjects in 87 percent of cases. However, when the similarity threshold was relaxed while still kept at high levels, false positive rates increased and disproportionately affected certain demographics.

    Black females were more likely to be falsely matched to a reference image than Black males, returning false positive rates of 9.9 percent and 0.4 percent respectively. Removing gender from the equation, false positive rates for White subjects (0.04 percent) were far lower than those for Asian (4 percent) and Black (5.5 percent) subjects.
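A per-group false positive rate of this kind is simply the share of non-matching trials in each demographic that the system wrongly flagged as a match. The sketch below shows that calculation; the trial counts are hypothetical, chosen only so the resulting rates mirror the 0.04, 4, and 5.5 percent figures reported above, and are not the report's raw data.

```python
# Sketch: per-group false positive rates from labeled trial outcomes.
# Trial counts are hypothetical, constructed to mirror the reported rates.
from collections import defaultdict

# Each trial: (demographic group, whether it produced a false positive)
trials = (
    [("White", False)] * 9996 + [("White", True)] * 4 +    # ~0.04% FPR
    [("Asian", False)] * 9600 + [("Asian", True)] * 400 +  # ~4% FPR
    [("Black", False)] * 9450 + [("Black", True)] * 550    # ~5.5% FPR
)

def false_positive_rates(trials):
    """Map each group to (false positives) / (total trials) for that group."""
    totals, fps = defaultdict(int), defaultdict(int)
    for group, is_fp in trials:
        totals[group] += 1
        fps[group] += is_fp
    return {g: fps[g] / totals[g] for g in totals}

for group, fpr in sorted(false_positive_rates(trials).items()):
    print(f"{group}: {fpr:.2%}")
```

Comparing these per-group rates against the overall rate is exactly how the disparity between White, Asian, and Black subjects was surfaced.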

    The Home Office told The Register that RFR results are never used as evidence without first undergoing a manual review, reducing the risk of images being used incorrectly. Training and guidance have been reissued to police forces nationwide following the report. However, the ICO has requested urgent clarity from the Home Office to assess the situation and determine next steps.

    The government has also asked the Inspectorate of Constabulary to review police use of facial recognition technology, with assistance from the Forensic Science Regulator, in light of the test results. A Home Office spokesperson stated that it takes the findings of the report seriously and that a new algorithm, which has been independently tested and shows no statistically significant bias, has been procured.

    The new algorithm will undergo further evaluation early next year. The Home Office says its priority is protecting the public and that it believes this technology will support police in putting criminals and rapists behind bars. Critics, however, argue that the technology remains plagued by biases and that its use should be limited to specific circumstances.

    The UK government spends tens of millions on facial recognition technology every year, and has consistently vouched for its efficacy since the PND launched in 2011. Despite this, the ICO's criticism raises questions about the transparency and accountability of the Home Office and police forces in their handling of facial recognition technology.

    In conclusion, the controversy surrounding UK police facial recognition technology highlights the need for greater transparency and accountability in the use of biometric technologies. The government must take steps to address the biases revealed in the tests and ensure that the public is informed about the risks and benefits associated with this technology.



    Related Information:
  • https://www.ethicalhackingnews.com/articles/The-UKs-Facial-Recognition-Controversy-A-Failure-to-Disclose-Bias-ehn.shtml

  • Published: Mon Dec 8 06:27:19 2025 by llama3.2 3B Q4_K_M

    © Ethical Hacking News . All rights reserved.

    Privacy | Terms of Use | Contact Us