A few weeks ago, the Pentagon asked Anthropic, the company behind the AI assistant Claude, to modify an existing $200 million contract and remove two key safeguards: a ban on using its technology for mass domestic surveillance, and a ban on fully autonomous weapons. Anthropic declined, and the contract went to OpenAI instead.
This controversy has given rise to a question that most of us probably haven’t thought much about yet: can AI really do these things? And if so, how worried should we be?
The short answer, according to the experts I spoke to, is that this is not science fiction. It already exists. But the picture is more complex – and in some ways more disturbing – than the killer robots we’re used to seeing on the big screen.
Mass surveillance is already happening
“Mass surveillance doesn’t just work, it’s happening,” James Wilson, a global AI ethicist and author of Artificial Negligence, told me. “Technologies like Palantir and CCTV have been making this possible for years. It’s up to individual states whether they choose to do it.”
The US government’s PRISM program – exposed by Edward Snowden more than a decade ago – was an early demonstration of mass surveillance at internet scale.
“Advances in AI have only made it easier to do this at a higher level,” Wilson said, “and our highly connected existence means that there are far more sources of data that they can access, with or without people’s consent.”
The recent controversy over police use of Ring doorbell cameras and Flock license plate readers in the wake of the Super Bowl is just the latest example.
This matters for ordinary people, not just political dissidents. Jeff Watkins, an AI consultant who specializes in governance and security, tells me that this type of surveillance points to a pattern already visible in the UK.
“We’ve seen a lot of recent reporting on people being misidentified by supermarkets’ facial recognition systems, alongside long-standing concerns that misidentification disproportionately affects women and ethnic minorities,” Watkins said.
The cumulative effect is a change in how society works. “Being subject to the algorithmic use of surveillance technology is moving the dial towards an ‘auto-suspicious’ society, where innocent groups, going about their daily lives, have their rights trampled on by the failures of AI,” Watkins said.
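Watkins’ point about misidentification is, at bottom, a base-rate problem, and a quick back-of-the-envelope calculation shows why. The Python sketch below is purely illustrative: the shopper count, watchlist size, and accuracy figures are assumptions chosen for the example, not numbers from any real deployment.

```python
# Hypothetical illustration of the base-rate problem behind facial
# recognition watchlists. All numbers are assumptions chosen for the
# example, not figures from any real system.

def watchlist_alerts(people_scanned: int, truly_listed: int,
                     hit_rate: float, false_match_rate: float):
    """Return (true alerts, false alerts) for one day of scans."""
    true_alerts = truly_listed * hit_rate
    false_alerts = (people_scanned - truly_listed) * false_match_rate
    return true_alerts, false_alerts

# Assume 50,000 shoppers pass the cameras, 10 of them are genuinely on
# a watchlist, and the system catches 99% of those 10 while falsely
# matching just 1% of everyone else -- a generous assumption.
true_alerts, false_alerts = watchlist_alerts(50_000, 10, 0.99, 0.01)
print(f"true alerts:  {true_alerts:.0f}")    # ~10
print(f"false alerts: {false_alerts:.0f}")   # ~500

# Even at 99% accuracy, roughly 98% of the people flagged are innocent,
# because genuine matches are so rare in the general population.
```

Even a system far more accurate than the ones Watkins describes would flag dozens of innocent shoppers for every real match, simply because genuine matches are rare. That is the arithmetic underneath the ‘auto-suspicious’ society.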
Autonomous weapons are already in use
The same is true of lethal autonomous weapons. “The first recorded use was by Turkey against a Libyan target using the Kargu-2 drone in 2021,” Wilson said. Since then, the technology has moved quickly. “Advances in AI mean this is now possible on a much larger scale, and incredibly cheaply.”
But the key issue here is accuracy – and what inaccuracy means when the stakes are life and death. “Computer vision for human facial recognition is only 90% accurate at the best of times, and if the system uses generative AI, it will hallucinate, because hallucination is an intrinsic feature of the technology,” Wilson said.
The Israel Defense Forces’ AI-guided targeting system, Lavender, used to identify suspected Hamas members, has reportedly been wrong about 10% of the time. Even the best language models still hallucinate at a rate of 5-10%, according to Vectara’s hallucination leaderboard on Hugging Face. Ten percent may sound small. At the scale of these applications, it is anything but.
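To make that concrete, here is a minimal sketch of how a ‘small’ error rate scales. The error rates mirror the 5-10% figures quoted above; the numbers of flagged targets are hypothetical, chosen only to show the arithmetic.

```python
# A minimal sketch of why a 'small' error rate stops being small at
# scale. Error rates echo the 5-10% quoted above; the target counts
# are hypothetical assumptions for illustration.

def expected_misidentifications(flagged: int, error_rate: float) -> float:
    """Expected number of wrongly flagged people."""
    return flagged * error_rate

for flagged in (100, 10_000, 100_000):   # hypothetical scales
    for rate in (0.05, 0.10):            # the quoted 5-10% range
        wrong = expected_misidentifications(flagged, rate)
        print(f"{flagged:>7,} flagged @ {rate:.0%} error -> "
              f"{wrong:>7,.0f} wrongly identified")
```

At a hundred targets, a 10% error rate means ten mistakes; at a hundred thousand, it means ten thousand people wrongly identified, with no human checking each case.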
You might think the answer is more human oversight. But that’s exactly what some military applications are designed to remove. “Taking the human out of the targeting loop is a moral minefield,” Wilson said. “At the most basic level, removing a human from the kill chain strips away any form of human dignity.”
It also removes responsibility, Watkins said. “If no one is going to press the ‘fire’ button, who can be held responsible if people die, rightly or wrongly? AI is not a legal entity and cannot hold itself accountable.”
Should we be worried about the Terminators?
For anyone who grew up watching the Terminator movies, the latest robot videos from Boston Dynamics and Chinese tech companies like Xpeng probably feel unsettling.
But Wilson, who has spent time with similar robots, pushes back on that idea. “Despite all the beautiful, carefully choreographed robot videos coming out of China and the US – they’re not really there yet. They still need a lot of work before they can interact independently with our world.”
The most pressing concern, he says, is not humanoid robots. “I’m very concerned about autonomous drone weapons. This technology already exists, and it’s cheap enough that it can be mass-produced today, by literally anyone.”
But a broader warning comes from Watkins, and it goes beyond the military context. “When organizations and governments delegate too many decisions to flawed and immature systems that are not fully understood or explained, without rigorous audits, it can undermine human rights and muddy the waters of accountability.”
The Anthropic disagreement was less about one company’s contract and more about a question we all have to answer: who decides how much we trust these systems – and who is held accountable when they get it wrong?