The Met's rollout of the technology appears to contradict an independent study conducted by academics who were given access to the Met's systems. In July 2019, two academics from the University of Essex, Daragh Murray and Pete Fussey, published a report that raised serious concerns about the use of facial recognition in London. The researchers attended six trials of the technology, during which it correctly identified people just 19.05 per cent of the time. Or, to put it another way, it was inaccurate 81 per cent of the time when the system believed it had made a match.
The study was the most detailed report on the use of live facial recognition technology to date, involving interviews with Met Police officers and access to their systems. The researchers concluded it was "highly possible" that courts could find the technology unlawful and that it was likely to be "inadequate" under human rights laws.
Both academics said it was unclear why people were being added to the police watchlists, with a broader definition of being "violent" later used as a justification for adding certain individuals. They also said people had been placed on watchlists incorrectly. In one trial in Romford in 2018, a 15-year-old boy flagged by the system had already been through the criminal justice system.
The research also found it was not clear why police had picked certain locations for facial recognition trials, and that there was no simple way to avoid the technology. Tests near the Stratford shopping centre in East London required people to take an 18-minute detour if they did not want to be scanned by facial recognition cameras. In another case seen by the researchers, people reading information boards about the technology were already within range of the facial recognition cameras.
The Met Police, which had commissioned the study, distanced itself from the findings. At the time of its release, a spokesperson said the research had a "negative and unbalanced tone".