In late January, police using live facial recognition (LFR) technology spotted David Cheneler, a 73-year-old registered sex offender, walking near a child in London. His arrest, one of 1,035 credited to the technology since early 2024, serves as a cornerstone of the Metropolitan Police’s case for embracing cutting-edge surveillance tools. Yet critics argue the tech’s 0.04% hit rate—casting a net over 2.4 million faces to ensnare just 1,035 suspects—represents a drastic encroachment on privacy.
The debate over LFR’s role in modern policing now demands scrutiny. For the Met, the technology is a life-saving innovation: a way to swiftly apprehend dangerous offenders while alleviating officer burdens. But for privacy advocates, it is a leap into uncharted territory, where mass surveillance risks becoming normalized—and its pervasiveness might overshadow due process.
The numbers reveal a fundamental trade-off
The Met’s LFR rollout began quietly in early 2024, with January scans tallying 36,000 faces. By February 2025, that number had surpassed 300,000 monthly scans—a 790% increase in just 13 months. High-profile successes, such as Cheneler’s arrest or the capture of robbery suspect Adenola Akindutire (who posed as a watch buyer before wielding a machete), underscore the tech’s potential.
But the raw data tells a different story: only 0.04% of those scanned were linked to crimes. More than 99.9% of the 2.4 million people scanned—many simply going about their daily routines—were subjected to biometric scrutiny despite no suspicion of wrongdoing. Even arrests tied to “breach of conditions,” like Cheneler’s violation of a Sexual Harm Prevention Order, were made possible only by sweeping scans of public spaces.
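The headline percentages above can be checked with simple arithmetic. The sketch below uses the article’s own round figures (2.4 million scans, 1,035 arrests, 36,000 scans in January 2024, 300,000-plus in February 2025); these are reported numbers, not official Met statistics, and the growth figure depends on exactly how far past 300,000 the February count went.

```python
# Back-of-the-envelope check of the figures cited in the article.
# All inputs are the article's round numbers, not official statistics.

total_scans = 2_400_000   # faces scanned since early 2024
arrests = 1_035           # arrests attributed to LFR

hit_rate = arrests / total_scans * 100
print(f"Hit rate: {hit_rate:.3f}%")  # ~0.043%, i.e. the ~0.04% cited

jan_2024 = 36_000    # monthly scans, January 2024
feb_2025 = 300_000   # monthly scans, February 2025 ("surpassed" this mark)
growth = (feb_2025 - jan_2024) / jan_2024 * 100
print(f"Monthly-scan growth: {growth:.0f}%")
```

On these round numbers the growth comes out near 733%; the cited 790% figure implies a February count closer to 320,000, which is consistent with “surpassed 300,000.”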