Call for debate on killer robots

2009-08-04 11:19:50

By Jason Palmer

Science and technology reporter, BBC News

An international debate is needed on the use of autonomous military robots, a leading academic has said.

Noel Sharkey of the University of Sheffield said that a push toward more robotic technology used in warfare would put civilian life at grave risk.

Technology capable of reliably distinguishing friend from foe was at least 50 years away, he added.

However, he said that, for the first time, US forces had mentioned resolving such ethical concerns in their plans.

"Robots that can decide where to kill, who to kill and when to kill is high on

all the military agendas," Professor Sharkey said at a meeting in London.

"The problem is that this is all based on artificial intelligence, and the

military have a strange view of artificial intelligence based on science

fiction."

'Odd way'

Professor Sharkey, who specialises in artificial intelligence and robotics, has long drawn attention to the psychological distance from the horrors of war maintained by operators who pilot unmanned aerial vehicles (UAVs), often from thousands of miles away.

"These guys who are driving them sit there all day...they go home and eat

dinner with their families at night," he said.

"It's kind of a very odd way of fighting a war - it's changing the character of

war dramatically."

The rise in technology has not helped to limit collateral damage, Professor Sharkey said, because the military intelligence behind attacks was not keeping pace.

Between January 2006 and April 2009, he estimated, 60 such "drone" attacks were carried out in Pakistan. While 14 al-Qaeda members were killed, some 687 civilian deaths also occurred, he said - roughly 49 civilian deaths for each intended target.

That physical distance from the actual theatre of war, he said, led naturally to a far greater concern: the push toward unmanned planes and ground robots that make their decisions without the help of human operators at all.

The problem, he said, was that robots could not fulfil two of the basic tenets of warfare: discriminating friend from foe, and "proportionality", determining a reasonable amount of force to gain a given military advantage.

"Robots do not have the necessary discriminatory ability," he explained.

"They're not bright enough to be called stupid - they can't discriminate

between civilians and non-civilians; it's hard enough for soldiers to do that.

"And forget about proportionality, there's no software that can make a robot

proportional," he added.

"There's no objective calculus of proportionality - it's just a decision that

people make."

Policy in practice

Current rules of engagement to which the UK subscribes prohibit the use of lethal force without human intervention.

Nigel Mills is aerial technology director at defence contractor QinetiQ, which makes a number of UAVs and ground robots for the armed forces.

He told BBC News that building autonomy into the systems required assurances that human input remained central.

"The more autonomous a system is, the more effort you have to put into the

human/machine interface because of the rules of engagement.

"Complete autonomy - where you send a UAV off on a mission and you don't

interact with it - is not compatible with our current rules of engagement, so

we're not working on such systems."

The US Air Force published its "Unmanned Aircraft Systems Flight Plan 2009-2047" in July, predicting the deployment of fully autonomous attack planes.

The document suggests that humans will play more of a role "monitoring the execution of decisions" than actually making the decisions.

"Advances in AI will enable systems to make combat decisions and act within

legal and policy constraints without necessarily requiring human input," says

the report.

However, it concedes that "authorising a machine to make lethal combat decisions is contingent upon political and military leaders resolving legal and ethical questions.

"Ethical discussions and policy decisions must take place in the near term in

order to guide the development of future UAS capabilities, rather than allowing

the development to take its own path apart from this critical guidance," it

continues.

While the US's plans are vague, Professor Sharkey said the mere mention of ethical issues was significant.

"I'm glad they've picked up on that, because if you look at any previous plan,

they hadn't done so," he told BBC News.

However, he warned that work toward ever more autonomous killing machines was continuing, noting the deployment of Israel's Harpy - a fully autonomous UAV that dive-bombs radar systems with no human intervention.

He cautioned that an international debate was necessary before further developments in decision-making robots could unfold.