There Will Always Be Another Moonrise - Computer
Technology and Nuclear Weapons
Henry Thompson
Computing and Social Responsibility
It is commonplace that computer technology is already playing a
substantial role in our lives, from cashpoints and the Driver and
Vehicle Licensing Centre computer at Swansea to the Apple personal
computers in use by the Army of the Rhine.
There is every indication that our dependence on computer technology
is increasing, and that the complexity and sophistication of the
computer systems involved is also increasing. In particular, it is
clear that the imagination of both civilian and military planners has
been captured by the apparent promise of techniques for automated
decision making which have recently emerged from the research
laboratories of workers in Artificial Intelligence, techniques grouped
together under names like expert systems or intelligent
knowledge-based systems. Such systems embody in a computer the
diagnostic expertise of a human specialist in the form of rules which
reproduce his or her knowledge of a particular task. For example, an
expert system is currently in active commercial use which assists in
locating mineral deposits using rules developed by geologists and
computer scientists working together, and a project has recently begun
to build an expert system to help people in ascertaining their
entitlements under DHSS regulations.
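To make the rule-based character of such systems concrete, here is a
minimal sketch in Python. The rules and findings are invented for
illustration only - a real system like the prospecting one mentioned
above uses hundreds of rules and weighted evidence, not this toy
matching scheme.

    # A toy rule base: each rule pairs a set of required findings
    # with the conclusion an expert would draw from them.
    RULES = [
        ({"sulphide traces", "magnetic anomaly"}, "possible ore body"),
        ({"porous rock", "hydrocarbon traces"}, "possible oil deposit"),
    ]

    def diagnose(findings):
        # Return every conclusion whose conditions all hold.
        return [conclusion for conditions, conclusion in RULES
                if conditions <= findings]

    print(diagnose({"sulphide traces", "magnetic anomaly"}))
    # -> ['possible ore body']
    print(diagnose({"a situation the rule writers never anticipated"}))
    # -> []  (the system is silent outside its rules)

The crucial point, to which we return below, is that such a system can
respond only to situations its rule writers anticipated.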
Under the circumstances, then, it is as well to consider whether such
techniques are actually capable of playing the role envisaged for
them. It is our professional opinion that they are not in fact
capable of doing so, and that the consequences of an over-optimistic
assessment of their capabilities may be exceedingly dangerous. It is
our purpose in this article to set out the necessary and inescapable
limitations of expert systems techniques, and to illustrate why it
would therefore be a serious mistake to lay plans dependent on an
over-enthusiastic estimate of what they can do.
We are particularly concerned about existing and planned uses of
computer technology as part of nuclear weapons systems. Public
debates on nuclear weapons policy often rest on a number of uncritical
assumptions about the nature and capability of computer systems. One
such assumption is that expert systems can be constructed to play
crucial roles in the decision making process which leads up to the
firing of nuclear missiles. In the European theatre the pressure to
automate the decision making process has recently gone up another
notch, with the installation by both NATO and the Warsaw Pact of
strategic medium-range nuclear missiles within eight minutes' flying
time of their targets.
This is not the only area of public policy where concern about
over-optimistic expectations as to the capabilities of computer
systems is merited. Proposals for automatic nuclear power station
safety control systems and automatic railway trains have recently been
in the news.
The common thread in all such proposals, whether civil or military, is
the belief that expert systems can be relied on to make critical
decisions rapidly on the basis of a large amount of imprecise
information. In such situations, where one may rightly question the
ability of human beings to perform satisfactorily, can one really
escape from the dilemma by relying on computer-based expert systems to
make the decisions? This seems like an ideal solution, replacing as
it does a human agent known to be limited in capacity and fallible
with an inhuman agent believed to be virtually unlimited in capacity
and infallible. Such a line of reasoning is extremely attractive to
planners, because it allows them to overcome a wide range of
objections which would otherwise appear to prohibit the construction
of the systems they wish to have, such as safe nuclear power stations
and reliable Launch on Warning systems for nuclear missiles. Such
arguments gain strength from the fact that few if any on either side
of the debate have any clear idea of the power of computer systems.
This uncertainty makes it easy to propose reliance on computers, and
to defend one's proposal by saying "Indeed we couldn't do this if we
had to depend on a person to do it, but as we will depend on a
computer instead it will work." The same uncertainty makes it
difficult to argue against such proposals.
But if we look carefully for the technical results necessary to
support such arguments, they are not there. No computer system now in
existence has the capacity to reliably make decisions of the required
kind and in the required circumstances, nor can one ever be
constructed.
Consider the case of automatic Launch on Warning. This is a proposed
method of controlling nuclear missiles whereby, in order to defend
oneself against the possibility of a pre-emptive first strike, one's
own missiles must be prepared to fire immediately on detecting that the
other side's missiles have been fired. It is claimed that within the
very short time available only computers can be relied on to
accurately and reliably judge whether the other side has in fact
launched its missiles, and whether one's own should be launched in reply.
Is it in fact reasonable to entrust such a truly momentous decision to
a computer?
On October 5, 1960, the North American Aerospace Defense Command's
central defense room received a top priority warning from the Thule,
Greenland, Ballistic Missile Early Warning System station indicating
that a missile attack had been launched against the United States.
The Canadian Air Marshal in command undertook verification, which
after some 15 to 20 minutes showed the warning to be false. The radar
signals had, apparently, echoed off the rising moon.
This is by no means an isolated incident. There are numerous other
documented cases of false alerts of this nature. One might argue that
with this large amount of experience behind them, the people who build
such decision making systems have learned to correct their failings,
that the problems have been solved, and that we can safely trust our
lives and those of the rest of the world to the infallibility of
wholly computerised Launch on Warning systems.
It is our contention that, on the contrary, infallible automated
decision making can never be built, because of inherent
limitations of expert systems technology. The principle which
underlies our argument is easily understood. It is summarised in the
title of this article - there will always be another moonrise. That
is, computer systems for making reliable judgements depend on the
prior exhaustive characterisation of all the circumstances
which may affect that judgement, but such an exhaustive
characterisation is in principle impossible in the cases under
discussion.
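A deliberately crude sketch makes the principle vivid. Suppose,
purely hypothetically, that a warning system characterises a missile
launch as a large, rising radar return:

    # A crude, entirely hypothetical warning rule: the designers
    # characterised an attack as a large, rising radar return.
    def classify(return_size, is_rising):
        if return_size > 1000 and is_rising:
            return "MISSILE ATTACK"
        return "no threat"

    print(classify(return_size=5000, is_rising=True))  # a real launch
    print(classify(return_size=5000, is_rising=True))  # the rising moon

Both calls print "MISSILE ATTACK", because to this rule a moonrise and
a launch are literally indistinguishable: the moon was never part of
the designers' characterisation of the circumstances.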
There is a qualitative difference between the Launch on Warning problem
and the sorts of problems which expert systems technology has been
successfully applied to. But even in the kind of ordinary computer
systems we are already familiar with, illustrative problems can be
found. If one stops and thinks a moment about one's experience with
computers, it should be clear that this is not surprising. Almost
everyone has had the experience of having some problem with a bank
account, DHSS payment or shop bill explained with the words "We're
terribly sorry, the computer made a mistake." Does this mean the
computer has added 2 and 2 and come up with 5? Of course not - rather
the computer system has applied some rule in an inappropriate context.
The terms it has been provided with for discriminating between
different situations have proved inadequate, and it has therefore,
although performing as specified, done the wrong thing.
In the case of a billing system, such a mistaken application of a rule
is inconvenient, embarrassing, and potentially even costly. But it is
not fatal. Apologies can be made, the 'bug' in the system identified
and corrected, and the chances of the system failing again
consequently reduced.
In the case of Launch on Warning systems, once the missiles are
launched and find their targets, there will be no one left to
identify and correct the bug.
What do we mean when we say that it is in principle impossible to
apply expert systems technology to this problem of judgement? What
exactly distinguishes the weapons control problem from the wide range
of problems where such technology is appropriate?
Before considering this point in more detail, it is necessary to make
it clear at exactly what point we are aiming our criticisms. We must
make a distinction between hardware, software and
systems. Hardware is the actual material that a
computer is made of. Software is the set of instructions that
the computer has to obey in the execution of the task. In the case of
a billing program the rule that the computer is obeying might be:
Subtract the number of units previously paid for from the total
number of units used (this gives the unbilled units) - then send a
bill for these units.
The system is the thing as a whole - the hardware, the
software, the connections to the outside world, the context of
operation, the role of, and means for, human interaction, etc.
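Rendered as software - in a hypothetical fragment of Python, with
invented names and an invented price - the billing rule above might
read:

    PRICE_PER_UNIT = 0.10  # invented figure, in pounds

    def send_bill(units):
        print(f"Please pay £{units * PRICE_PER_UNIT:.2f} "
              f"for {units} unit(s).")

    def bill(total_units_used, units_previously_paid_for):
        # Subtract the number of units previously paid for from the
        # total number of units used (this gives the unbilled units) -
        # then send a bill for these units.
        unbilled = total_units_used - units_previously_paid_for
        send_bill(unbilled)

This code implements its specification faithfully; as we shall see
below, the trouble lies in the specification itself.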
It is not our contention that it is impossible to construct reliable
computer hardware, although it may in fact be very difficult. Like
any other mechanical object, the chips and wires etc. which make up a
computer can malfunction or break altogether. It is not even our
contention that the construction of 'correct' software is in principle
impossible, where by correct we mean that it accurately conforms to
its specification.
Rather it is our point that 'correct' software running on reliable
hardware would still not produce a system adequate to the kinds of
tasks we are concerned with.
The problem is that the specification itself, the set of
characterisations of situations and the actions required therein, is
necessarily imperfect. The necessary inability of the system
designers to exhaust possible scenarios in advance leads to the
necessary fallibility of resulting systems. Not having anticipated
the possibility of a full moon rising in the same place where missiles
rise leads to an inability to discriminate between the two.
Returning to the simple billing example given above, even if the
software correctly implements the specification given, and runs on
sound hardware, it will still not always do the right thing. As
specified, the rule would apply in the context where the person being
billed has used no more units, and the system would send a bill for
£0.00. That is, although the rule itself is correct it has been
applied in an inappropriate circumstance, one in which its application
was unforeseen by the system designers.
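With the sketch given earlier the failure is easy to reproduce, and
the repair is instructive:

    bill(total_units_used=500, units_previously_paid_for=500)
    # -> Please pay £0.00 for 0 unit(s).

    # The repair is not to the arithmetic but to the characterisation
    # of when the rule applies - one possible guard among many:
    def bill_guarded(total_units_used, units_previously_paid_for):
        unbilled = total_units_used - units_previously_paid_for
        if unbilled > 0:  # bill only when something is actually owed
            send_bill(unbilled)

Each such guard captures one foreseen context. For billing, the
contexts can eventually be enumerated in this way; our argument is
that for the applications discussed below they cannot.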
Of course in the limited context of billing programs, it is reasonable
to expect that (at least after a bit of trial and error) it will be
possible for the system designers to accurately foresee most possible
contexts of use, so that the specification can be made serviceable. But
this is not in general the case once one enlarges the context of
application sufficiently. In the cases of concern, such as power
station safety systems or Launch on Warning, the range of possible
situations with which the planned expert system would be confronted is
vast. The elements which make up that situation are not simple,
unambiguous and discrete things like account balances and numbers of
units ordered. They are complex, ambiguous and imprecise things like
the readings of infrared sensors and the signals on an oscilloscope.
In order to correctly identify the actual events and objects which
are provoking the readings and signals, it is crucial to know the
context in which they occur; the readings and signals alone are not
sufficient. Is it a flight of missiles, or just the moon?
A real-life example which underlines this point - the failure to
anticipate all the possible situations which might provoke a certain
set of readings in a system - is the famous New York blackout of
1965. All the computer systems involved in monitoring and controlling
the operation of the power grid in the Northeastern United States and
adjoining areas of Canada functioned exactly according to
specification. The catastrophe was the direct result of the failure
of this specification to anticipate the particular conjunction of
generator problems which triggered the process. This led to
inappropriate rules being applied, which led to further problems,
again not anticipated, and so it went until the whole grid was a
shambles.
Two features have emerged from our discussion so far which together
characterise the situations in which we argue it is foolhardy to
depend on automatic decision making based on expert systems
techniques:
- Enough complexity and sensitivity to context to make exhaustive
  characterisation extremely difficult,
coupled with
- Circumstances which make it impossible to correct the resulting
  mistakes via the ordinary cycle of use, failure and modification.
The point is that with reactor safety systems or Launch on Warning
systems there is no second chance. The odds are too high that the
problems will become apparent only after the reactor meltdown or the
missile launch, at which point it is too late.
That is, unlike existing expert systems, there can be no live testing
of the system in its actual context of use, and no chance to correct
failures that are detected in use. Testing under simulation is not
sufficient, as the simulation necessarily recapitulates the
specification on which the system was based, which ex hypothesi
is incomplete. This is why all existing systems of any complexity are
used only as advisors - their decisions are always monitored
and filtered by human judgement.
We have set out the reasons why we think this is not just a sensible,
but a necessary precaution. It follows that automated decision
making systems which operate in complex, context-sensitive
applications cannot be allowed to operate without human supervision,
and so they cannot be deployed in circumstances where such supervision
cannot be effectively exercised. This was precisely our point of
departure: discussions are proceeding which seem to imply that one
could reasonably do just that. The technical case
is clear, as we hope to have demonstrated above. The consequences are
substantial, if distasteful to many. We cannot have reliable automatic
safety control systems for nuclear reactors. We cannot have reliable
automatic Launch on Warning systems for nuclear missiles. Not now,
not in five years, not ever. Anyone who proposes to rely on such
systems, without first answering the arguments presented here, is
behaving irresponsibly indeed given the grave consequences of error.