In 1899, the world’s most powerful nations signed a treaty at The Hague that banned military use of aircraft, fearing the emerging technology’s destructive power. Five years later the moratorium was allowed to expire, and before long aircraft were helping to enable the slaughter of World War I. “Some technologies are so powerful as to be irresistible,” says Greg Allen, a fellow at the Center for a New American Security, a non-partisan Washington DC think tank. “Militaries around the world have essentially come to the same conclusion with respect to artificial intelligence.”
Allen is coauthor of a new 132-page report on the effect of artificial intelligence on national security. One of its conclusions is that the impact of technologies such as autonomous robots on war and international relations could rival that of nuclear weapons. The report was produced by Harvard’s Belfer Center for Science and International Affairs, at the request of IARPA, the research agency of the Office of the Director of National Intelligence. It lays out why technologies like drones with bird-like agility, robot hackers, and software that generates photo-real fake video are on track to make the American military and its rivals much more powerful.
New technologies like those can be expected to bring with them a series of excruciating moral, political, and diplomatic choices for America and other nations. Building up a new breed of military equipment using artificial intelligence is one thing—deciding what uses of this new power are acceptable is another. The report recommends that the US start considering what uses of AI in war should be restricted using international treaties.
New World Order
The US military has been funding, testing, and deploying various shades of machine intelligence for a long time. In 2001, Congress even mandated that one-third of ground combat vehicles should be uncrewed by 2015—a target that has been missed. But the Harvard report argues that recent, rapid progress in artificial intelligence that has invigorated companies such as Google and Amazon is poised to bring an unprecedented surge in military innovation. “Even if all progress in basic AI research and development were to stop, we would still have five or 10 years of applied research,” Allen says.
In the near-term, America’s strong public and private investment in AI should give it new ways to cement its position as the world’s leading military power, the Harvard report says. For example, nimbler, more intelligent ground and aerial robots that can support or work alongside troops would build on the edge in drones and uncrewed ground vehicles that has been crucial to the US in Iraq and Afghanistan. That should mean any given mission requires fewer human soldiers—if any at all.
The report also says that the US should soon be able to significantly expand its powers of attack and defense in cyberwar by automating work like probing and targeting enemy networks or crafting fake information. Last summer, to test automation in cyberwar, Darpa staged a contest in which seven bots attacked each other while also patching their own flaws.
As time goes on, improvements in AI and related technology may also shake up the balance of international power by making it easier for smaller nations and organizations to threaten big powers like the US. Nuclear weapons may be easier than ever to build, but still require resources, technologies, and expertise in relatively short supply. Code and digital data tend to get cheap, or end up spreading around for free, fast. Machine learning has become widely used, and image and facial recognition now crop up in science fair projects.
The Harvard report warns that commoditization of technologies such as drone delivery and autonomous passenger vehicles could become powerful tools of asymmetric warfare. ISIS has already started using consumer quadcopters to drop grenades on opposing forces. Similarly, techniques developed to automate cyberwar can probably be expected to find their way into the vibrant black market in hacking tools and services.
You could be forgiven for starting to sweat at the thought of nation states fielding armies of robots that decide for themselves whether to kill. Some people who have helped build up machine learning and artificial intelligence already are. More than 3,000 researchers, scientists, and executives from companies including Microsoft and Google signed a 2015 letter to the Obama administration asking for a ban on autonomous weapons. “I think most people would be very uncomfortable with the idea that you would launch a fully autonomous system that would decide when and if to kill someone,” says Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, and a signatory to the 2015 letter. He concedes, though, that it might take just one country deciding to field killer robots to set others changing their minds about autonomous weapons. “Perhaps a more realistic scenario is that countries do have them, and abide by a strict treaty on their use,” he says. In 2012, the Department of Defense set a policy requiring a human to be involved in decisions to use lethal force, but it expires later this year.
The Harvard report recommends that the National Security Council, DoD, and State Department should start studying now what internationally agreed-on limits ought to be imposed on AI. Miles Brundage, who researches the impacts of AI on society at the University of Oxford, says there’s reason to think that AI diplomacy can be effective—if countries can avoid getting trapped in the idea that the technology is a race in which there will be one winner. “One concern is that if we put such a high premium on being first, then things like safety and ethics will go by the wayside,” he says. “We saw in the various historical arms races that collaboration and dialog can pay dividends.”
Indeed, the fact that there are only a handful of nuclear states in the world is proof that very powerful military technologies are not always irresistible. “Nuclear weapons have proven that states have the ability to say ‘I don’t even want to have this technology,’” Allen says. Still, the many potential uses of AI in national security suggest that the self-restraint of the US, its allies, and adversaries is set to get quite a workout.