On a bright fall day last year off the coast of Southern California, an Air Force B-1 bomber launched an experimental missile that may herald the future of warfare.
Initially, pilots aboard the plane directed
the missile, but halfway to its destination, it severed communication with its
operators. Alone, without human oversight, the missile decided which of three
ships to attack, dropping to just above the sea surface and striking a 260-foot
unmanned freighter.
Warfare is increasingly guided by software.
Today, armed drones can be operated by remote pilots peering into video screens
thousands of miles from the battlefield. But now, some scientists say, arms
makers have crossed into troubling territory: They are developing weapons that
rely on artificial intelligence, not human instruction, to decide what to target
and whom to kill.
As these weapons become smarter and nimbler,
critics fear they will become increasingly difficult for humans to control — or
to defend against. And while pinpoint accuracy could save civilian lives,
critics fear weapons without human oversight could make war more likely, as easy
as flipping a switch.
Britain, Israel and Norway are already
deploying missiles and drones that carry out attacks against enemy radar, tanks
or ships without direct human control. After launch, so-called autonomous
weapons rely on artificial intelligence and sensors to select targets and to
initiate an attack.
Britain’s “fire and forget” Brimstone
missiles, for example, can distinguish among tanks, cars and buses without
human assistance, and can hunt targets in a predesignated region without
oversight. The Brimstones also communicate with one another, sharing their
targets.
Armaments with even more advanced
self-governance are on the drawing board, although the details usually are kept
secret. “An autonomous weapons arms race is already taking place,” said Steve
Omohundro, a physicist and artificial intelligence specialist at Self-Aware
Systems, a research center in Palo Alto, Calif. “They can respond faster, more
efficiently and less predictably.”
Concerned by the prospect of a robotics arms
race, representatives from dozens of nations will meet on Thursday in Geneva to
consider whether development of these weapons should be restricted by the
Convention on Certain Conventional Weapons. Christof Heyns, the United Nations
special rapporteur on extrajudicial, summary or arbitrary executions, last year
called
for a moratorium on the development of these weapons.
The Pentagon has issued a directive requiring
high-level authorization for the development of weapons capable of killing
without human oversight. But fast-moving technology has already made the
directive obsolete, some scientists say.
“Our concern is with how the targets are determined, and more importantly, who
determines them,” said Peter Asaro, a co-founder and vice chairman of the International Committee for Robot Arms Control, a
group of scientists that advocates restrictions on the use of military robots.
“Are these human-designated targets? Or are these systems automatically deciding
what is a target?”
Weapons manufacturers in the United States
were the first to develop advanced autonomous weapons. An early version of the
Tomahawk cruise missile had the ability to hunt for Soviet ships over the
horizon without direct human control. It was withdrawn in the early 1990s after
a nuclear arms treaty with Russia.
Back in 1988, the Navy test-fired a Harpoon
antiship missile that employed an early form of self-guidance. The missile
mistook an Indian freighter that had strayed onto the test range for its target.
The Harpoon, which did not have a warhead, hit the bridge of the freighter,
killing a crew member.
Despite the accident, the Harpoon became a
mainstay of naval armaments and remains in wide use.
In recent years, artificial intelligence has
begun to supplant human decision-making in a variety of fields, such as
high-speed stock trading and medical diagnostics, and even in self-driving cars.
But technological advances in three particular areas have made self-governing
weapons a real possibility.
New types of radar, laser and infrared sensors
are helping missiles and drones better calculate their position and orientation.
“Machine vision,” resembling that of humans, identifies patterns in images and
helps weapons distinguish important targets. This nuanced sensory information
can be quickly interpreted by sophisticated artificial intelligence systems,
enabling a missile or drone to carry out its own analysis in flight. And
computer hardware hosting it all has become relatively inexpensive — and
expendable.
The missile tested off the coast of California, the Long Range Anti-Ship
Missile, is under development by Lockheed Martin for the Air Force and Navy. It
is intended to fly for hundreds of miles, maneuvering on its own to avoid radar,
and out of radio contact with human controllers.
In a directive published in 2012, the Pentagon
drew a line between semiautonomous weapons, whose targets are chosen by a human
operator, and fully autonomous weapons that can hunt and engage targets without
intervention.
Weapons of the future, the directive said,
must be “designed to allow commanders and operators to exercise appropriate
levels of human judgment over the use of force.”
The Pentagon nonetheless argues that the new
antiship missile is only semiautonomous and that humans are sufficiently
represented in its targeting and killing decisions. But officials at the Defense
Advanced Research Projects Agency, which initially developed the missile, and
Lockheed declined to comment on how the weapon decides on targets, saying the
information is classified.
“It will be operating autonomously when it
searches for the enemy fleet,” said Mark A. Gubrud, a physicist and a member of
the International Committee for Robot Arms Control, and an early critic of
so-called smart weapons. “This is pretty sophisticated stuff that I would call
artificial intelligence outside human control.”
Paul Scharre, a weapons specialist now at the
Center for a New American Security who led the working group that wrote the
Pentagon directive, said, “It’s valid to ask if this crosses the line.”
Some arms-control specialists say that
requiring only “appropriate” human control of these weapons is too vague,
speeding the development of new targeting systems that automate killing.
Mr. Heyns, of the United Nations, said that
nations with advanced weapons should agree to limit their weapons systems to
those with “meaningful” human control over the selection and attack of targets.
“It must be similar to the role a commander has over his troops,” Mr. Heyns
said.
Systems that permit humans to override the
computer’s decisions may not meet that criterion, he added. Weapons that make
their own decisions move so quickly that human overseers soon may not be able to
keep up. Yet many of them are explicitly designed to permit human operators to
step away from controls. Israel’s antiradar missile, the Harpy, loiters in the
sky until an enemy radar is turned on. It then attacks and destroys the radar
installation on its own.
Norway plans to equip its fleet of advanced
jet fighters with the Joint Strike Missile, which can hunt for, detect and recognize
a target without human intervention. Opponents have called it a “killer
robot.”
Military analysts like Mr. Scharre argue that
automated weapons like these should be embraced because they may result in fewer
mass killings and civilian casualties. Autonomous weapons, they say, do not
commit war crimes.
On Sept. 16, 2011, for example, British
warplanes fired two dozen Brimstone missiles at a group of Libyan tanks that
were shelling civilians. Eight or more of the tanks were destroyed
simultaneously, according to a military spokesman, saving the lives of many
civilians.
It would have been difficult for human
operators to coordinate the swarm of missiles with similar precision.
“Better, smarter weapons are good if they
reduce civilian casualties or indiscriminate killing,” Mr. Scharre said.