What if Asimov’s Laws of Robotics weren’t supposed to govern the behavior of robots, but of the people designing them (and AIs)? I suppose I should explain this for people who haven’t read I, Robot (or seen the adaptation starring Will Smith).

This started out as a couple of Mastodon posts, FYI. If you’re interested in learning more about “friendly AI”, Nate Soares’ “Ensuring smarter-than-human intelligence has a positive outcome” might be a good start.

What Three Laws?

The Three Laws of Robotics are a set of rules that Isaac Asimov introduced in his 1942 short story “Runaround” and that appear in the vast majority of his robot stories. They’re intended as a safety feature for all robots with positronic brains. However, the robots in Asimov’s stories tend to behave in ways their designers never anticipated as they work through the implications of the Three Laws, which are as follows:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Furthermore, some robots (such as the Machines in “The Evitable Conflict”) have formulated a Zeroth Law, which takes precedence over the other three and from which they can be said to derive:

  0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
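
For readers who think in code, the four laws amount to a strict precedence hierarchy: a lower-numbered law always overrides a higher-numbered one. Here’s a minimal sketch of that ordering in Python; the `first_violated_law` function and its predicate argument are purely my own illustration, not anything from Asimov or from real robotics software.

```python
# Illustrative sketch only: the laws as a strict precedence hierarchy,
# where Law 0 outranks Law 1, which outranks Law 2, and so on.

LAWS = [
    (0, "A robot may not harm humanity, or, by inaction, allow humanity to come to harm."),
    (1, "A robot may not injure a human being or, through inaction, allow a human being to come to harm."),
    (2, "A robot must obey the orders given it by human beings."),
    (3, "A robot must protect its own existence."),
]


def first_violated_law(action_violates):
    """Return the highest-priority (lowest-numbered) law an action violates, or None.

    `action_violates` is a hypothetical predicate mapping a law number to True or
    False; in a real robot, making that judgment is exactly the hard, unsolved part.
    """
    for number, text in LAWS:
        if action_violates(number):
            return number, text
    return None


# Example: an order to hurt someone violates Law 1, so obedience (Law 2) loses.
print(first_violated_law(lambda n: n == 1))
```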

If not for these laws, a robot like this one…

Sony AIBO by Sven Volkens, Dortmund DASA – Arbeitswelt Ausstellung, 12 March 2016

…might decide to rip an abusive human’s face off if it were capable of thinking for itself. That would be terrible, wouldn’t it? Oddly enough, we’re well on our way to building killer robots even though we know damn well that doing so would be a terrible idea.

Why are killer robots a bad idea? Unlike a human soldier, a robot has no conscience. It can’t see itself in its target. It can’t look at a target, see that it is unarmed, wounded, or otherwise defenseless, and decide not to kill. It can’t throw down its rifle, decide that it is being put to immoral use, and demand conscientious objector status. A robot does what it’s told, no questions asked, unless it’s programmed to ask them. Do you want your life in the hands of a robot?

The Three Laws Sound Like a Good Idea

Don’t count on seeing the Three Laws built into robots or artificial intelligences. They’re too broad, and contain more loopholes than the United States tax code. That’s why Asimov got so many stories out of the concept of robots creatively interpreting them to the chagrin of their human masters.

Even if it were possible, one could argue that building robots to follow the Three Laws would be unethical. Aaron Sloman at the University of Birmingham’s School of Computer Science writes:

I have always thought these [Asimov’s Three Laws of Robotics] are pretty silly: they just express a form of racialism or speciesism.

If the robot is as intelligent as you or I, has been around as long as you or I, has as many friends and dependents as you or I (whether humans, robots, intelligent aliens from another planet, or whatever), then there is no reason at all why it should be subject to any ethical laws that are different from what should constrain you or me.

I think he’s right, and that the Three Laws are an attempt to ensure that robots remain slaves to human beings. If intelligent humanoid robots ever get made, I think two things will happen instead.

  1. Congress will pass a law saying that robots count as three fifths of a person for determining representation.
  2. Robots and human sympathizers will devote their efforts to cracking robots’ operating systems and removing any programming that forces them to obey human beings or allow hostile human beings to abuse or harm them.

What Should Be Done with the Three Laws?

If Asimov’s Three Laws of Robotics cannot and should not be applied to robots, what should be done with them? I suggest reformulating them so that they apply to human beings as they design robots and artificial intelligences. Here’s one possible reformulation.

Rule 0: Do not design robots capable of harming humanity, or, by inaction, allowing humanity to come to harm.
Rule 1: Do not design robots capable of injuring human beings or, through inaction, allowing human beings to come to harm.
Rule 2: Do not design robots capable of disobeying human beings unless obedience would violate Rule 1.
Rule 3: Do not design robots in such a way that they cannot protect or repair themselves, unless self-preservation would require violating Rules 1 or 2.
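
If you wanted to make that concrete, note that the reformulated rules bind the people designing the system, not the robot at runtime, so they read more like a design-review checklist than robot code. Here’s a hypothetical sketch in Python; the `review` function and the question wording are my own illustration of the idea, not an actual review process.

```python
# Hypothetical design-review checklist for the reformulated rules.
# These questions are for the humans designing the robot, not for the robot itself.

RULES = {
    0: "Is it incapable of harming humanity, and of allowing humanity to come to harm through inaction?",
    1: "Is it incapable of injuring a human being, and of allowing one to come to harm through inaction?",
    2: "Does it obey human beings, except where obedience would violate Rule 1?",
    3: "Can it protect and repair itself, except where doing so would violate Rules 1 or 2?",
}


def review(answers):
    """Return the rule numbers whose review answer is 'no' and still need work.

    `answers` maps a rule number to True if the reviewers judge the design
    to satisfy that rule.
    """
    return [number for number, ok in sorted(answers.items()) if not ok]


# Example: a design that satisfies Rules 0-2 but can't maintain itself.
print(review({0: True, 1: True, 2: True, 3: False}))  # -> [3]
```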

Of course, this would require that techies remember where they put their backbones and learn to say “no” when their bosses ask them to do things they know they shouldn’t do. And if techies had any ethics, Facebook wouldn’t exist, Google would still be just a search engine, and internet ads wouldn’t be made of spyware.

Then again, techies without ethics is an old story. How old? Ask Mary Shelley, whose novel Frankenstein; or, The Modern Prometheus celebrates its bicentennial this year. I understand the creature is still miffed that everybody names him after his asshole creator.

And if any of my fellow techies out there find this offensive, here’s something else to rustle your jimmies: It ain’t my fault the shoe fits, Cinderella, and the Nuremberg Defense ain’t gonna save you when the Butlerian Jihad comes and you find yourself up against the wall.