The city of San Francisco has reversed course on its killer robot plan.


Putting Humans Back in the Loop: Reconciling State Violence and Human Rights in City Hall, and How to Educate the Public

We will not support others in weaponizing advanced-mobility general-purpose robots, or the software we develop that enables advanced robotics. We will carefully review our customers' intended applications to prevent potential weaponization. We also pledge to explore the development of technological features that could mitigate or reduce these risks. We are not taking issue with technologies that nations and their government agencies use to defend themselves and uphold their laws.

“There are a whole lot of reasons why it’s a bad idea to arm robots,” says Peter Asaro, an associate professor at The New School in New York who researches the automation of policing. He sees the decision as part of a broader movement to put more weapons in the hands of police. He concedes that armed robots could be useful in extreme cases, such as hostage situations, but argues the risks extend well beyond those scenarios. “That’s detrimental to the public, and particularly communities of color and poor communities.”

A week is a long time in politics—particularly when considering whether it’s okay to grant robots the right to kill humans on the streets of San Francisco.

The reversal is a result of the huge public outcry and lobbying that followed the initial approval. Critics argued that removing humans from life-and-death decisions was a step too far. A protest took place outside San Francisco City Hall on December 5, while at least one supervisor who initially approved the decision said they regretted their choice.

“Despite my own deep concerns with the policy, I voted for it after additional guardrails were added,” Gordon Mar, a supervisor in San Francisco’s Fourth District, tweeted. “I regret it. Our vote sets a precedent for other cities that do not have a strong commitment to police accountability. I do not think making state violence more remote, distanced, & less human is a step forward.”

The value of a life is the question supervisors in San Francisco are really posing, according to Jonathan Aitken. He says the decision to apply lethal force is given deep consideration in both police and military operations. Those deciding whether or not to pursue an action that could take a life need important contextual information to make that judgment in a considered manner—context that can be lacking through remote operation. “Small details and elements are crucial, and the spatial separation removes these,” Aitken says. “Not because the operator may not consider them, but because they may not be contained within the data presented to the operator. This can lead to mistakes. And mistakes, when it comes to lethal force, can literally mean the difference between life and death.”

Asaro also dismissed the suggestion that guns on robots could simply be swapped for bombs, saying the use of bombs in a civilian context could never be justified. In 2016, the Dallas Police Department used a bomb-carrying robot to kill a suspect, in what experts called an “unprecedented” moment.