
Why Cheap Autonomous Weapons Should Be Banned — Interview with Stuart Russell

Updated: Feb 28, 2023



In Trends in AI from Zeta Alpha, recorded at the Responsible AI in the Military Domain (REAIM 2023) conference that took place in The Hague on 15 and 16 February 2023, we had the pleasure of interviewing Professor Stuart Russell of the University of California, Berkeley. Thanks for joining us, Stuart. What was the most surprising thing you have heard over the last two days?


I went to a session on AI and nuclear weapons, and the question was: will there be involvement of AI in the nuclear chain of command? Would you want to turn over the decision to launch nuclear weapons to an AI system? Of course, the people on that panel who work for nonprofits and peace organizations were pretty clear that this would be a bad idea. But the United States said today that this is their official position: there will not be AI intervening in the chain of command over the launch of nuclear weapons. So that was a very positive announcement that I was pretty happy to hear.


So, in terms of your own mix, how would you describe your distribution between techie and policy influencer these days?


Well, I think of myself as a hundred percent techie. But I have sort of been dragged into this policy arena partly because most of what I heard didn't make any sense. I thought this was clearly a serious issue.


We were discussing whether we should allow fully autonomous, lethal weapons — weapons that can locate and select and attack human targets with no human involvement.


This conference must have felt like a vindication of your long-term ideas, since you started advocating on this topic within our field, among AI and robotics professionals, back in 2015 and 2016.

What brought me there was kind of a strange accident. I was, and still am, a member of Human Rights Watch, which has been a very effective organization for several decades, talking about human-generated atrocities and how to prevent them. Around 2013, they sent an email out to the Northern California Council saying they were creating a new campaign. Having previously been talking about how bad all those humans were, the message was now: actually no, humans are good; it's the robots who are really bad, and we've got to do everything we can to stop killer robots.


Since you were in the same lab as the roboticists at Berkeley, you thought, "I have to do something about this"?


Well, at first I was a bit confused, you know, for Human Rights Watch to say that no, humans are actually good on the battlefield and it's the robots who are bad. That was a bit of a change in their fundamental position. But also, when you looked at the details of the argument, it had to do with the inability of robots to distinguish between civilians and legitimate combatants who could be targeted. And to an AI researcher, that's sort of an insult, right? I said, "Well, I bet you we can solve that problem, but I'm not buying this campaign; it's not making a lot of sense to me." So I thought about it some more, and I realized how horribly the technology would evolve. As with many other technologies, it would evolve towards smaller and more agile lethal weapons, like a swarm of drones.


For instance, I had bought my kids quadcopters on a trip to China a couple of years earlier, which were about one inch, two and a half centimeters, in diameter. They were radio-controlled and cost about $2.50. So you could imagine that becoming a lethal weapon, and because of its autonomy, the human doesn't need to find the target or pilot the weapon towards it, which leads to the idea that you can launch very large swarms of lethal weapons, effectively amounting to weapons of mass destruction.


Then I wrote back to Human Rights Watch saying: well, I don't really agree with the arguments you're making, but I think that nonetheless we should put all our efforts into banning these kinds of weapons, because of this tendency for them to end up as weapons of mass destruction.


That was a very foresighted moment, I think, and at that time, not many people were thinking about this.


No, I didn't know of anybody talking in those terms. And not just weapons of mass destruction, but weapons of mass destruction that would be very easily acquired by all kinds of actors: not just countries, but terrorist groups, even criminal gangs.


And that would basically render large parts of the world sort of uninhabitable for humans. And this is not the future that we want, right? We're trying to get to a world where everyone has security, and this would take us in the wrong direction entirely. So as you say, in 2015 the AI community started to get its act together, with some help from nonprofit institutes like the Future of Life Institute.


We put together this open letter. I had originally published an article in Nature making this argument, and that then became the open letter that tens of thousands of researchers signed. We also wrote another letter, from a much smaller group of leading scientists, directly to President Obama and Prime Minister David Cameron in the UK. All of the main senior scientists in artificial intelligence signed these letters.


It's now seven years later, and obviously the topic now has everybody's attention. Is it the right kind of attention, and has it stayed? What has happened since? Let's focus on two areas. What has happened in the technology space: what do you know about the current state of AI in the military, in lethal weapons? And what has happened on the policy side: why are people gathering here in The Hague to discuss this topic?


On the technology side, AI has progressed rapidly. And if you wanna think about the feasibility of these kinds of weapons, think about self-driving cars. The task of a self-driving car involves navigating, detecting humans in the environment, and making tactical decisions about where to go, etc. So it wouldn't be that difficult to reprogram the self-driving car so that when it sees a pedestrian, it actually runs them over.


That would fit the definition. The basic difficulties in creating a lethal autonomous weapon have essentially already been solved, and the important characteristic of weapons, compared to self-driving cars, is that a weapon that works 50-60% of the time is considered pretty good. For example, during World War II, more than a thousand, possibly as many as ten thousand, bullets were fired for every casualty. So bullets as a weapon of war are less than 1% effective, maybe a tenth of a percent or less.


Something that you send out and that reaches its target and kills somebody 50% of the time: that's an incredibly effective weapon. A self-driving car, however, has to be 99.999999% reliable. So the challenges in self-driving technology are actually far greater than the ones we face in the weapons area. To give you an example of how the technology is moving along: self-driving cars are still not on the road yet.


Well, they are, in San Francisco and Phoenix in trials, and in several cities in China, but not on a large scale. So how is it with AI in weapon systems?


In November of 2017, we released a short film called Slaughterbots, which tried to illustrate the threat we are concerned about: large swarms of anti-personnel autonomous weapons. We released it in Geneva, at the meeting called the GGE, where countries are negotiating about whether there's going to be a ban on autonomous weapons.


I remember very clearly, a whole bunch of ambassadors were in the audience, and the Russian ambassador said, "Why are we even discussing this kind of weapon? They won't exist for another 30 years. This is just science fiction; it's pointless." Three weeks later, STM in Turkey announced the Kargu, with its capability for fully autonomous hits on human targets based on face recognition, tracking of moving figures, and all the rest of it. So everything that we depicted in the movie was already being sold as a weapon.


What hasn't happened so far is a large-scale, Manhattan Project-like effort to produce really effective lethal autonomous weapons and manufacture them on a massive scale. And in fact, several manufacturers, including STM, have dialed down their claims, saying, "Of course, you know..."


We've seen manufacturers like Boston Dynamics actually making these kinds of pledges as well. They pulled back and said the weapons will be operated remotely by a human operator, and stopped talking about the fully autonomous mode.


So there is this notion of meaningful human control?


Yeah, that's an important concept, and there's a fair degree of consensus on this phrase, "meaningful human control". In Geneva, they have tried to find common ground, but because any single country can put a stop to the discussions, they can only make very weak agreements unless there's an obvious international consensus. At the moment, countries' positions are all over the map, partly because the politicians and policymakers don't understand the technical issues, so they're reluctant to commit to a decision.


If we talk about consensus and controversy, how would you quickly summarize the points we have consensus about in this audience, and where do you see the points that are more controversial?


When you look at the details of what people are actually agreeing on, it tends to evaporate. I would say there's a tentative agreement: about 70 countries have put their names to something saying that weapons that operate completely outside of human control should be banned.


But the idea of an AI system, a weapon that operates completely outside of human control - what does that mean?


It seems to mean weapons that by themselves can wake up in the morning and say, "Hey, we're gonna start a war with Lutu, and we're all gonna go and attack Lutu."


There's consensus that that's a bad thing, but no one is even talking about making such a thing. We're talking about weapons that are given a mission. So the mission might be to go into an area of a city and wipe out anyone you find.


That's quite similar to what cruise missiles do in some sense, right?


Yes, it could be. The difference is that with cruise missiles, you designate the GPS coordinates the missile is gonna land on, and you have to have a good reason to believe that that's a legitimate military target.


Yeah, and that sits within the human chain of command, the human accountability. If you launch a cruise missile without having enough information, without good reason to believe that there are no civilians there, you would be criminally liable for a war crime.


So what is the international consensus, and what is this meeting about? It basically assumes that the weapons are going to exist and asks how we can make sure they're used responsibly.


But imagine if we were sitting here talking about the responsible use of biological weapons in the military, right? Yes, we've created these biological weapons, which can infect millions of people with horrible diseases that cause them to die in agony. We're gonna find responsible ways to use these weapons. That would be ridiculous, right? What an unethical thing to do. And what concerns me is that people are only thinking about ethics as applied to use and not as applied to existence. There's an ethical decision that has to be made:


Do we allow these types of weapons to come into existence?

I think the ethical decision there is different depending on whether you're talking about anti-personnel weapons, which can be multiplied by the millions and launched as such, versus for example an autonomous submarine. There aren't any civilians under the water and no one is gonna be buying millions of submarines because they're still gonna be really expensive. The issues here are completely different, having more to do with perhaps accidental hostilities emerging because the two sides' submarines run into each other or misinterpret the other side's behavior.


What I'm trying to do here is point to the need for making a decision about existence. There's an ethical case, and even a common-sense case, that if we create what will amount to weapons of mass destruction that are cheap, easy to use, and don't leave behind a huge radioactive crater, they will be available in all the arms supermarkets of the world.


When I say cheap, I mean stuff like the fact that you can buy landmines for about $6. They have a lot more explosives in them than we need for a fully autonomous anti-personnel weapon. So that's the kind of price that I think we might be looking at in a few years if we go down this route. That puts it well within the financial resources of terrorist groups to buy millions of weapons, and that would be a disaster for the world.


The only way we're gonna prevent it is not by saying, "Oh let's make sure the software is ethical," right?


So the "responsible" in this conference is a bit of the wrong perspective on the question. You think it should be more of an existential discussion, and not so much about where exactly we put those gray lines of responsible decisions about AI in the military; you would have been much happier if it had said there's only one responsible decision, in some sense. But obviously, AI and autonomy are not the same thing, right?


So there are lots of other kinds of AI that you could use — intelligence analysis, logistics, military planning, management of operations, and all kinds of things that could make military campaigns more effective and have very significant strategic implications. And I'm not talking about that, and I actually believe that the AI community has some obligation to help the defense community because after all, our taxes are paying for them, and they're willing to die to protect us.


Right. But creating this category of weapons would end up reducing security for everyone on earth. And one thing I'm curious about: obviously, in 2023 we have a major war going on in Ukraine, in Europe, and a very rapidly shifting geopolitical landscape. Has that influenced your thinking on this topic? Is it a matter of principle, or does current-day reality creep into your thinking and change your opinions?


That's a great question. One thing it's certainly done is make it ridiculously hard to make any progress in Geneva, because Russia is continually trying to shut down the discussions. And I think the fact that China has largely sided with Russia has made it much more difficult to establish trust between the US and China, and that issue of trust between the US and China is the major problem that we face.


Even if the US were to agree that we ought to ban a category of weapons, they don't trust that other countries, particularly China, would abide by any such agreement. So without that trust and without the willingness or even the ability at the moment to talk to each other, we can't, for example, take the steps we need to establish verification mechanisms, which are really important for these kinds of treaties.


The Chemical Weapons Convention has many of the same characteristics in the sense that there's already a big chemical industry in the world, and many of the chemicals they make can be used as weapons. How on earth have they managed to control this?


Well, the answer is they created a verification regime in which all manufacturers who produce chemicals on a long list of precursors or directly usable lethal chemicals have to account for their production. They have to check the bona fides of their customers; they have to know where the chemicals are going and what they're being used for. And that's been very effective.


But AI technology is much harder to control, with its dual-use aspects. Anyone can download a computer vision library, which is very powerful.


I don't anticipate that any methods will be able to constrain the proliferation of software. We see that now in the Ukraine war, where the chips in the tanks are still coming from our factories to some extent.


But what about the mass-scale manufacture of the physical weapons platforms? Well, these are, for example, small quadcopters. There are a few manufacturers of quadcopters in the world, the most popular ones being in China. And you can require that if someone is trying to buy very large numbers of quadcopters, you have to find out who they are and what they're gonna do with them. If it's some post office box in Libya, then you're gonna say, "Well, sorry, we can't send you 5 million quadcopters, because we can't verify that you are a legitimate customer."


So it's really that aspect of the technology plus the mass production that you would like politicians to regulate?


Yes, so there are a lot of technical questions. How do you prevent the repurposing of remotely piloted weapons? That's a potential risk: people could say, "Oh yeah, we've got all these remotely piloted weapons, but you know, we just flip a switch in a software update, and now they're fully autonomous." One thing you wanna do is separate the onboard computing: physical separation between the onboard computing and any kind of firing circuit, so that the command to actually engage in the attack can only be delivered over a remote link, not by any onboard computing. You also wanna establish a cryptographic separation in some sense, or some physical separation where you require a human to actually make the engagement happen: there is no wire, no local radio, no optical link, no physical connection at all between the onboard computing, which is doing stabilization of the airframe and possibly navigation, and the firing circuit. Any decision about firing or self-destruct would have to come over a remote link from a human. You wanna establish that.
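As a minimal sketch of what the cryptographic side of such a separation might look like in software, consider an onboard controller that holds only a verification key, never a signing key, so it can check a fire authorization signed by the remote human operator but can never produce one itself. Everything below, the OnboardController class, the message format, and the use of Ed25519 via the Python cryptography library, is a hypothetical illustration rather than anything described in the interview; a real design would, as Russell notes, also enforce the separation in hardware.

```python
# Illustrative sketch only: the onboard computer can verify, but never
# originate, a fire authorization. All names and the message format are
# hypothetical; the point is that the signing key lives only with the remote
# human operator, so a software update on the platform cannot "flip a switch"
# to full autonomy.
import json
import time

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


class OnboardController:
    """Handles stabilization/navigation; holds ONLY the operator's public key."""

    def __init__(self, operator_public_key: Ed25519PublicKey):
        self._operator_public_key = operator_public_key

    def handle_fire_command(self, message: bytes, signature: bytes) -> bool:
        """Pass a fire command on to the (physically separate) firing circuit
        only if it carries a valid signature from the remote human operator."""
        try:
            self._operator_public_key.verify(signature, message)
        except InvalidSignature:
            return False  # reject: not authorized by a human over the remote link
        command = json.loads(message)
        # Reject stale commands so a captured message cannot be replayed later.
        if time.time() - command["timestamp"] > 5.0:
            return False
        # In a real design this gate would be a hardware interlock, not a return value.
        return command["action"] == "fire"


# --- Remote, human-operated ground station side (hypothetical) ---
operator_key = Ed25519PrivateKey.generate()
controller = OnboardController(operator_key.public_key())

msg = json.dumps({"action": "fire", "target_id": "designated-by-human",
                  "timestamp": time.time()}).encode()
sig = operator_key.sign(msg)

print(controller.handle_fire_command(msg, sig))           # True: human-authorized
print(controller.handle_fire_command(msg, b"\x00" * 64))  # False: forged signature
```

The asymmetry is the design point in this sketch: nothing running onboard holds the private key, so no onboard software, updated or not, can grant itself the authority to fire.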


And where are the human control stations to go with them? And where are the trained pilots?


Those kinds of measures are a little different from some other arms control measures, but you'd be surprised: when arms control treaties are signed, they come with a lot of detailed agreements, and a lot of people actually work to make them function. For example, New START, which covers nuclear missiles between the US and what was then the Soviet Union, now Russia, involves up to 18 visits every year from each side to pretty much any nuclear weapons facility on the other side, to check that things are not being manufactured, et cetera.


So I think it's a good idea to start discussing those questions. Could we establish a verification regime? Could we agree on certain types of inspections and measures to ensure that industrial production is not diverted?


These measures, I think, can be effective, and they can be discussed without any commitment to a treaty or a ban on any particular category of weapons. As happened, for example, with the nuclear test ban treaty: before that treaty came into existence in 1996, there were two decades of discussions among scientists, mainly from the US and Russia, about how you might verify such a treaty.


So you think that, to some extent, this is a discussion for engineers and researchers to engage in even before informing the policymakers; heaven forbid that the policymakers are the ones carrying out these discussions. For people in our own profession, robotics and AI engineers and researchers in the large industrial research labs or in academia, what would you recommend they get connected with in terms of knowledge, or what should they do, in order to promote this point of view?


There's lots of information available on at least two main websites: stopkillerrobots.org and autonomousweapons.org. I also think it's important to get active in the chapter of your professional society, whether it's IEEE, ACM, or AAAI, because fairly soon there are going to be serious discussions about whether those professional societies will actually declare a policy on lethal autonomous weapons, just as the major chemical societies have declared their support for a treaty banning chemical weapons.


As a member of those societies, you are not allowed to work on chemical weapons, just as you are not allowed as a doctor to assist in executions and so on.


So you think self-regulation in our profession is the number one step for a scientist?


You can also talk to your local political representative, though it's not really on the radar of most politicians at the moment; it's on the radar of some defense ministers and foreign ministers, but probably not the typical local member of parliament.


The other thing you can do, particularly if you find it exciting to work on the challenges of developing these weapons, is to remember one thing: if you are working on this, probably your counterpart in another country is also working on this. How would you feel about that weapon arriving at your house to attack your family?


