The risk that ethically aligned machines could fail, or be turned into unethical ones (Failure and Corruptibility).
Charging machines with ethically important decisions carries the risk of reaching morally unacceptable conclusions that humans would have recognized as such more easily.
The simplest case of this is if the machine relies on misleading information about the situations it acts in, for example, if it fails to detect that there are humans present whom it ought to protect.
If the moral principles or training examples that human developers supply to a system contain imperfections or contradictions, this may lead to the robot inferring morally unacceptable principles.
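As a toy illustration (not from the source; the features, data, and labels are invented for this sketch), the snippet below trains a simple decision-tree "rule learner" on a handful of human-supplied judgements about permissible and impermissible actions. Because one label contradicts the pattern set by the others, the rule the system infers can amount to a principle the developers never intended to endorse.

```python
# Hypothetical sketch: a toy rule learner trained on human-supplied
# examples of permissible vs. impermissible actions. Feature names,
# data, and labels are invented for illustration only.
from sklearn.tree import DecisionTreeClassifier, export_text

feature_names = ["causes_harm", "lacks_consent", "benefits_majority"]
examples = [
    ([1, 1, 0], "impermissible"),  # harmful, non-consensual
    ([0, 0, 0], "permissible"),    # harmless, consensual
    ([0, 0, 1], "permissible"),
    ([1, 1, 1], "permissible"),    # contradicts the first judgement:
                                   # the same harm is labelled acceptable
                                   # "because it benefits the majority"
]
X = [features for features, _ in examples]
y = [label for _, label in examples]

learner = DecisionTreeClassifier(random_state=0).fit(X, y)
print(export_text(learner, feature_names=feature_names))

# Depending on the split chosen, the learned rule can amount to
# "harming people without consent is permissible whenever it benefits
# the majority" -- a principle no developer set out to teach.
print(learner.predict([[1, 1, 1]]))  # -> ['permissible']
```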
Many currently existing machines without the capacity for ethical reasoning are also vulnerable to error and corruptibility.
Even where there are definite facts as to what a morally correct outcome or action would be, there are risks that the morally correct outcome or action might not be pursued by the automated system for one reason or another.
Most humans have a limited sphere of influence, but the same may not be true for machines that could be deployed en masse while governed by a single algorithm.
The risk that ethically aligned machines might marginalize alternative value systems (Value Incommensurability, Pluralism, and Imperialism).
Pluralism maintains that there are many different moral values, where "value" is understood broadly to include duties, goods, virtues, and so on.
The risk of creating artificial moral patients (Creating Moral Patients).
While machine ethicists may be pursuing the moral imperative of building machines that promote ethically aligned decisions and improve human morality, this may result in us treating these machines as intentional agents, which in turn may lead to our granting them status as moral patients.
Humans are both moral agents and moral patients.
Moral agents: we have the ability to knowingly act in compliance with, or in violation of, moral norms, and we are held responsible for our actions (or failures to act).
Moral patients: we have rights, our interests are usually thought to matter, and ethicists agree that we should not be wronged or harmed without reasonable justification.
The risk that our use of moral machines will diminish our own human moral agency (Undermining Responsibility).
This means undermining our own capacity to make moral judgements.
There are three strands to this problem. First, automated systems "accommodate incompetence" by automatically correcting mistakes. Second, even when the relevant humans are sufficiently skilled, their skills will be eroded as they are not exercised. Third, automated systems tend to fail in particularly unusual, difficult, or complex situations, with the result that the need for a human to intervene is likely to arise in the most testing situations.
These concerns are relevant to circumstances in which the goal of the automated system is for a machine to make ethical decisions alone, that is, cases where the decision-making process is entirely automated (including cases where the system is intended to function at better than human level).
These machines would also be able to recognize their own limitations and would alert a human when they encounter a situation that exceeds their training or programming.
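A minimal sketch of how such a handoff might work follows, assuming a hypothetical predict function that returns an action with a confidence score and a crude novelty check; the names, callbacks, and threshold are invented for illustration rather than drawn from any particular system.

```python
# Hypothetical sketch: a wrapper that acts autonomously only when the
# underlying model is confident and the input resembles its training
# data; otherwise it alerts a human operator and defers. The callbacks
# and the threshold are assumptions for illustration.
from dataclasses import dataclass
from typing import Callable, Optional, Tuple

@dataclass
class Decision:
    action: Optional[str]   # None means the decision was deferred
    confidence: float
    deferred: bool

def decide_or_defer(
    predict: Callable[[dict], Tuple[str, float]],   # (action, confidence)
    in_distribution: Callable[[dict], bool],        # crude novelty check
    alert_human: Callable[[dict, str], None],       # notify an operator
    situation: dict,
    min_confidence: float = 0.9,
) -> Decision:
    action, confidence = predict(situation)
    if confidence < min_confidence or not in_distribution(situation):
        # The system treats the situation as exceeding its training or
        # programming and hands control back to a human.
        alert_human(situation, f"deferring: confidence={confidence:.2f}")
        return Decision(action=None, confidence=confidence, deferred=True)
    return Decision(action=action, confidence=confidence, deferred=False)
```

The point of the wrapper is simply that an explicit confidence threshold and novelty check make the machine's limits visible, so the most testing situations are escalated to a human rather than handled silently.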