
EuroSWARM is a European Union-funded research project that has experimented with drones, remote-controlled cars and other sensors to create an autonomously behaving “swarm” of bots that can communicate with each other.

In a demonstration scenario, researchers set the swarm to check out a “suspicious-looking” vehicle, explained Hyo-sang Shin, a reader in guidance, navigation and control at Britain's Cranfield University, one of the project partners. The idea is that the swarm could be used for scouting an area before troops are deployed, he said.

The project, which came to an end in November last year, did not equip any of the drones or cars with weapons. The swarm is rather about “maximizing the information you can collect,” said Shin.

But EuroSWARM’s military uses have critics worried. It is one of the first trial projects in a new era of E.U.-funded military research; the budget for similar activities is set to explode over the next decade.

This funding splurge, triggered by fears of European backwardness in military technology, has seen the global debate around research into lethal autonomous weapons -- colloquially known as “killer robots” -- move to Brussels.

“Although the E.U. hasn't given any funding (yet) to ‘killer robots’ in the strict sense,” said Bram Vranken, a researcher at Vredesactie, a Belgian peace organization, “it is clearly prioritizing robotic systems which are pushing the boundaries towards increasingly autonomous systems,” such as swarm systems or “integrated and autonomous surveillance technology.”

Vredesactie is one of several groups, hailing from Britain, Germany, Italy and Spain, that have formed Researchers for Peace to campaign against what they call the “further militarization of the European research budget.” The group accuses the E.U. of developing autonomous weapons “without any public debate.” So far, more than 600 researchers have signed a petition in support.

Aside from EuroSWARM, Vranken said, he was also worried about Ocean 2020, a 35-million-euro ($41 million) project that aims to “integrate drones and unmanned submarines into fleet operations.” The project, led by Leonardo, an Italian weapons contractor, involves several European ministries of defense, plus the Fraunhofer Society, a German applied research network.

These projects are potentially just the beginning. Earlier this month, the E.U. announced defense funding worth billions, part of it set aside explicitly for research -- a huge leap in resources compared with current levels -- with the rest to be spent on development.

This will place the E.U. “among the top four” defense research and technology investors in Europe, according to the European Commission. However, this will still be peanuts compared with the United States, where the Department of Defense is spending about $16 billion a year on science and technology.

The fight in Brussels is now over how this money should be used. Back in 2014, the European Parliament was one of the first bodies to take seriously warnings about “killer robots,” calling on member states to “ban the development, production and use of fully autonomous weapons which enable strikes to be carried out without human intervention.”

In February this year, members of the European Parliament amended proposals from the commission -- the E.U.’s executive arm -- to prevent E.U. funds being spent on “fully” autonomous weapons that “enable strikes to be carried out without meaningful human intervention and control.” Asked whether it supports this prohibition, a European Commission spokeswoman declined to comment on the record. For now, it is not clear if the prohibition will stand.

Those pushing for increased E.U.-wide military research point out that the Continent lags behind rivals when it comes to developing new military technologies such as drones.

But this is not an argument that impresses Laëtitia Sédou, E.U. program officer at the European Network Against Arms Trade. “One of the reasons [for the creation of the E.U.] is to try and prevent going back into this arms race,” she said.

Despite an international effort by the Campaign to Stop Killer Robots, governments have yet to agree to a ban on weapons where humans no longer have “meaningful control” over the use of force. What, if anything, can universities and researchers do in the meantime?

One option is to boycott institutions seen to be taking their research too far. In March, dozens of researchers threatened to boycott the Korea Advanced Institute of Science and Technology, in South Korea, after it opened a Research Center for the Convergence of National Defense and Artificial Intelligence with an arms company. This spurred a pledge from KAIST’s president that the university would avoid developing “autonomous weapon lacking meaningful human control.”

But this raises the question of how far scientists should collaborate with research projects that get close to -- but stop short of -- creating a fully autonomous weapon; there is a huge range of processes that can be automated beforehand, some more ethically challenging than others.

The “biggest ethical issue” is automating the decision to fire, said Stuart Parkinson, executive director of Scientists for Global Responsibility, a U.K.-based organization with about 750 members. But automatic takeoff and landing for drones is arguably “less problematic,” he said.

These complexities mean that “it’s hard to say this project is ethical; this is not,” Parkinson added. For this reason, universities need to make sure that researchers are ethically trained, while ethicists should be included on research teams, he said.

As with any area of fast-developing research, a decent proportion of research spending should be devoted to looking into how the technology might be misused, Parkinson argued. And at the moment “we don’t have that,” he said.

And when in doubt over the ethics of a project, just look at the funders, Parkinson advised. If the backers are military, “whatever you do will be sucked into that world,” he said.

Once an electrical engineer, Parkinson left the field after concluding that it was simply too dominated by military research funders. For some academics, “maybe it’s time to look for a different direction,” he said.

But there will be no shortage of young researchers willing to take the place of the disenchanted -- hence the need for the military funders themselves to abide by proper research ethics guidelines, Parkinson pointed out.

For his part, Shin acknowledged that his EuroSWARM project might one day be a building block of a lethal autonomous weapon system, but he argued that “any technology can be dangerous.”

He said that he would “probably” agree to work on a research project that actually involved weapons. “But I would restrict myself to things that might benefit or reduce risk to human troops or [reduce] civilian casualties,” Shin added. He is against drones ever using their own judgment to fire.

Proper regulation, rather than academic boycotts such as the one proposed against KAIST, is likely to be more effective, Shin said.

It will be “years rather than decades” before drones are able to fire on their own initiative, said Parkinson, although then their “reliability will be in the eye of the beholder.”

But in a sense, fully autonomous weapons are already with us: the Korean border already has machine-gun turrets that can in theory fire automatically on movement, Parkinson said (although the South Korean military has reportedly made sure that a human has to authorize any attack). He warned, “That’s an example of where something is already happening.”
