The era of killer robots
We've entered the age of killer robots. Here's what you need to know.
When I first started writing about killer robots several years ago, autonomous weapons were in development, but they still felt like a problem for the future. It seemed there was time to figure out what to do about the threat of robots on the battlefield. That is no longer the case.
Killer robots aren't something we can think of as a sci-fi concept or something that could threaten humans years from now. They are increasingly a line item in a country's defense budget, and they have started being deployed in wars around the world.
When you imagine a killer robot, you might picture the Terminator, but that's not typically what we're looking at here. Rather than humanoid robots, we're talking about small tanks, flying drones, submarines, and ships.
Recent news has suddenly made killer robots, also called autonomous weapons, a major topic of conversation. In late February, it was reported that tensions had emerged between the AI company Anthropic and the Pentagon. Anthropic had stipulated that its AI systems, which the Pentagon was using, could not be used for mass surveillance or to operate autonomous weapons.
In the run-up to the war in Iran, this position, which Anthropic had held for some time, seemingly upset leadership at the Department of Defense. Officials wanted Anthropic to allow its AI to be used for these purposes, and the company stood its ground. Secretary of Defense Pete Hegseth threatened to essentially ruin Anthropic by designating it a supply chain risk, and when the company didn't bend the knee, he reportedly went ahead and did so.
Anthropic is now suing the Department of Defense over this decision. What this shows is that the Pentagon appears to be interested in using autonomous weapons, and we've already seen the U.S. military using AI in this war with Iran.
"The Pentagon clearly has plans to use AI in these ways," Peter Asaro, vice-chair of the International Committee for Robot Arms Control (ICRAC), tells me. "We don't really know what the technical debates might have been within all of this, but it's not encouraging. I think it kind of makes people aware that this is happening very quickly."
Laura Nolan, a software engineer who resigned from Google in 2018 over the company providing AI tools to the Pentagon, tells me that the Anthropic situation is refreshing—with a caveat.
"I think it is great to see Anthropic making a principled stand [over mass surveillance and autonomous weapons]," Nolan says. "Their line on not wanting their tech used in autonomous weapons is a little bit morally softer. It’s not based on autonomy in weapons being wrong, per se; it’s that they think the technology isn’t ready yet. Having said that, I’ve often made very similar arguments myself, with the caveat that I don’t believe the technology will ever be good enough for completely autonomous targeting of dual-use objects or personnel."
AI has been used in the war in Iran in two primary ways: to target bombs and to power suicide drones. These drones, called Low-cost Unmanned Combat Attack Systems (LUCAS), are, ironically, modeled on Iran's own Shahed drones. We've seen Iran use its drones to fight back.
Horrifically, it appears AI might have been behind the unintentional bombing of a girls' school in Iran by the United States. Many of the bombs that were launched in the early days of the war were targeted by Palantir's Maven Smart System.
"They had thousands of strikes on the first day, and they wouldn't have been able to develop that many verified targets by hand in a short amount of time," Asaro says. "So they were likely machine-generated."

Prior to the war in Iran, AI was used to target bombs in the conflict in Gaza, and we've seen suicide drones used in the war in Ukraine. As I said, we're no longer waiting for the era of killer robots; we are in it. The Pentagon has programs like the Replicator Initiative pouring billions of dollars into autonomous weapons.
As for what can be done about killer robots in war zones, folks like Asaro are part of an effort by the Campaign to Stop Killer Robots, which is working at the UN toward a ban on autonomous weapons on the battlefield. Above all, they want to ensure that autonomous weapons won't be used to kill or otherwise threaten human beings.
"We're aiming to negotiate a treaty that would be legally binding international law," Asaro says. "The main principle is to establish the requirements for meaningful control by humans... I think we're probably at our closest point ever to actually getting a treaty."
There are still roadblocks ahead for this effort, but Asaro says they've been making progress. In the past, countries like the United States and Russia have made it difficult to get a treaty done.
"At least 120 countries support a... treaty," Nolan says. "156 countries backed a UN General Assembly resolution in November last year to work towards a treaty."
If we don't ban the use of these weapons to kill human beings, and restrict their use in other important ways, we could see mass killings and other atrocities in wartime, with countries evading accountability because the killing was carried out by robots. That's not a future I want to see, so the work being done to prevent it is crucial.
Ahead of this article, I did a short overview of the killer robot situation on YouTube, if you'd like to give that a watch. Make sure to subscribe to the channel.
