Palantir’s Policing Role Sparks London Showdown

A Metropolitan Police experiment with Palantir software has widened into a political battle over who should build the digital tools of British policing

The Metropolitan Police’s use of software from Palantir to flag potential misconduct among officers has prompted investigations into hundreds of cases and opened a new front in Britain’s argument over artificial intelligence, surveillance and the values attached to public contracts.

What began as an internal pilot to identify possible rule-breaking inside Scotland Yard has quickly grown into a broader confrontation after London’s mayor, Sadiq Khan, signaled that he may oppose any larger deal with the American data company. His intervention raises the stakes for a proposed expansion of Palantir’s role from internal standards monitoring into the far more sensitive realm of criminal-intelligence analysis.

The police force said last week that a one-week deployment of the software had flagged potential misconduct including attendance violations, undeclared interests and suspected corruption. Three arrests were made in cases tied to serious misconduct, according to reporting on the pilot. The force has described the technology as a way to sift through data it already holds, including patterns in sickness, absences and overtime, to identify officers who may warrant further scrutiny.

But the results, rather than settling the case for broader use, have sharpened unease among civil liberties advocates, police representatives and now City Hall. Critics say the episode underscores how rapidly data tools can move from administrative support into a form of algorithmic surveillance, often before the public has had a clear account of what information is being analyzed, how decisions are reached, or what safeguards exist against error.

Khan signals resistance

Mr. Khan’s warning has transformed the issue from an operational decision inside the Met into a test of political authority over the future of AI in policing.

The mayor’s office said he had concerns about spending public money on companies that act “contrary to London’s values,” a pointed formulation that appeared to reflect mounting controversy over Palantir’s work with agencies and governments that have drawn protest in Britain and abroad. The company has faced criticism over its role in supporting the Trump administration’s immigration enforcement operations through Immigration and Customs Enforcement, as well as over work linked to Israel’s military.

For the Met, the timing is awkward. The force has already been in talks with Palantir over a much larger contract to use its tools in criminal-intelligence work, in a deal reported to be potentially worth tens of millions of pounds. Any such expansion would carry far greater implications than the internal misconduct pilot, because it could involve the analysis of intelligence records and other highly sensitive policing data.

Mr. Khan may have an avenue to intervene. Procurement decisions above £500,000 can face scrutiny through the Mayor’s Office for Policing and Crime, the body that oversees the Met on behalf of Londoners. It remains unclear whether he would seek to block or delay a formal proposal if one reaches that stage, but his comments suggest that what might once have been handled as a technical procurement matter could now become a high-profile political fight.

From pilot project to “automated suspicion”

The Met first confirmed in February that it was testing Palantir tools to detect possible standards risks by analyzing internal workforce data. Even then, the plan drew objections. The Police Federation, which represents rank-and-file officers, criticized the approach as “automated suspicion,” arguing that officers could find themselves swept into investigations based on patterns inferred by software rather than evidence of actual wrongdoing.

That concern has only deepened with the disclosure that the pilot helped trigger inquiries into hundreds of officers. The Met has said the system does not replace human judgment and that any cases identified are subject to further review. Yet many of the central questions remain unanswered: how many alerts proved meaningful, how many were false positives, what thresholds were used to flag officers, and what independent oversight exists to test the tool’s accuracy and fairness.

Those questions matter not only for officers but for the public. If a police force uses AI to monitor its own employees with limited transparency, critics argue, it becomes easier to imagine the same logic extending outward into the policing of suspects, victims and communities.

A bigger contract, and bigger stakes

The wider procurement framework into which these discussions fall, known as “Precise Policing 2,” was published by the Met in October 2025 and explicitly covers AI- and data-enabled products and services. That framework has become a focal point for campaigners worried that policing agencies are acquiring increasingly powerful data tools faster than democratic oversight can keep pace.

A broader Palantir role in criminal intelligence would touch especially sensitive ground in London, where the Met has spent years trying to rebuild trust after scandals involving racism, misogyny and abuse within its ranks. Supporters of more advanced analytical tools argue that a force under pressure to prevent violence and identify corruption needs better ways to connect fragmented information. They say software that helps investigators search and organize existing data could improve both internal accountability and public safety.

But opponents counter that intelligence systems are not neutral filing cabinets. They reflect choices about which datasets are combined, what patterns are treated as suspicious and whose lives are subjected to greater scrutiny. Without published details on what a new contract would automate, which records would be processed, or how information about victims and suspects would be protected, they say the public is being asked to trust both the force and the vendor on faith.

Palantir’s expanding footprint in Britain

The row in London also lands as Palantir faces a wider backlash over its growing presence across the British state. The company, long known for supplying data analysis platforms to military, intelligence and law enforcement clients, has won major public-sector business in Britain, including contracts involving the National Health Service and the Ministry of Defence. Its total U.K. public contracts are estimated at about £600 million.

That expanding footprint has drawn organized opposition. Petitions against Palantir’s public-sector work have attracted more than 330,000 signatures, according to recent reporting, reflecting a coalition of campaigners who argue that the company’s politics, clientele and corporate rhetoric should be weighed in procurement decisions, especially where sensitive personal data are involved.

Recent comments by Palantir leadership have also intensified scrutiny among British politicians, some of whom have questioned whether the company is an appropriate steward of sensitive public information. For critics, the Met’s interest in expanding its relationship with Palantir has become emblematic of a larger concern: that government agencies are outsourcing core functions of judgment, intelligence and administration to powerful technology firms before the public has settled where the boundaries should be.

What comes next

For now, much remains uncertain. The Met has not announced a final decision on a broader Palantir contract. Nor has it publicly detailed the exact functions that would be covered, the datasets involved or the safeguards that would apply. It is also not yet known whether Mr. Khan would attempt to use his oversight powers to block a deal outright or instead seek conditions, reviews or greater transparency.

What is clear is that the disclosure of the misconduct pilot has altered the politics around any future agreement. In a city where trust in policing remains fragile, the debate is no longer just about whether software can identify rogue officers or sort intelligence more efficiently. It is about who gets to define the values embedded in public technology — and whether London is willing to let one of the world’s most contentious data companies play a deeper role in the daily exercise of police power.
