Saturday, March 26, 2011

The Three Laws

Sometime in my early school years, I remember reading a short essay on the Three Laws of Robotics, made famous by Asimov's classic compilation I, Robot. The original form of the laws is listed below:
  • First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • Second Law: A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  • Third Law: A robot must protect its own existence, as long as such protection does not conflict with the First or Second Laws.
At the time, I was impressed both with the simplicity and reasonableness of the laws and with the idea that robots would need to be unalterably programmed with such prime directives in order to allow them to interact in society. We would need some basis for public trust. Although I have not read many of Asimov's stories myself, I hear several of them explore the ramifications of these laws in great detail. To get an idea, you can quickly read the plot synopsis of his short story "Runaround," where the laws were first stated explicitly.

I also remember hearing that in I, Robot, things go terribly wrong with the Three Laws. It wasn't until I saw the recent film starring Will Smith that I finally got to see this illustrated. It is a decent film in its treatment of some of Asimov's ideas, although, as usual, it is pumped up on action steroids and the more thoughtful elements get somewhat lost in the shuffle. But the main problem in the movie's version stems from the fact that someone decided to create a master control system that coordinates all the individual robots toward a central goal. You would think that would be fine as long as the central computer obeyed the Three Laws, which it does, and would therefore use its great power to protect humanity, which it also does. But the central control system, which undoubtedly has access to global information, is smart enough to observe that we humans harm each other each and every day. Given that this is inevitable, the only way to prevent us from harm is to protect us at all times from ourselves. The master computer decides that the only way to do this is to take control of society and of the lives of everyone who could potentially cause harm to someone else.

The first objection that comes to mind regarding this solution is that robots are supposed to obey us, not the other way around. But if you look at the Three Laws, the directive to obey us is second, while the directive to protect us from harm is first. Thus, it is perfectly consistent for the robot to take actions to protect us even when doing so goes against our wishes. That, in essence, is the flaw in the Three Laws that the film illustrates. The fact that this solution looks suspiciously like a political overthrow, or revolution, is a poignant reminder of how such events in human history are often driven by people who must believe they are acting for the good of society, and of how compelling their logic must seem to them.

Now that I've taken some time to think about it, I find myself asking what other flaws might be lurking in the Three Laws of Robotics. They are written using terms that seem clear to us, but which are not necessarily well defined to a logic-driven machine. For example, when I read the second part of the first law, I interpreted "harm" to mean imminent harm. We might expect a good person to try to save us from a dangerous situation, one that presents an immediate risk. We would not expect them to shelter us from any possible harm that might ever come to us. Would it help to add the word "imminent" to that clause? It may help, but it would not fix the problem. How do you define "imminent"? Computers work in terms of probabilities. How probable does the harm have to be to make it imminent? What if there are several people involved and saving some means harming the rest? The laws provide no guidance here, and in fact, our own consciences would have trouble deciding what to do. Do you take the course that saves the most people from harm? If the odds are even, do you pick one at random?
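
Just to see how quickly the ambiguity bites, here is a toy sketch (I'll use Python) of what an "imminent harm" test might look like. Everything in it is made up for illustration: the names, the one-percent threshold, and especially the assumption that the robot can assign probabilities to outcomes in the first place.

    # Hypothetical sketch: "imminent" harm reduced to an arbitrary threshold.
    IMMINENT_HARM_THRESHOLD = 0.01   # why one percent? that is exactly the problem

    def harm_is_imminent(projected_outcomes):
        # projected_outcomes: a list of (probability, harms_a_human) pairs
        # that the robot predicts for the current situation.
        return any(prob >= IMMINENT_HARM_THRESHOLD and harms
                   for prob, harms in projected_outcomes)

    # Someone crossing a quiet street vs. someone stepping in front of a bus.
    crossing_quiet_street = [(0.0005, True), (0.9995, False)]
    stepping_before_bus = [(0.60, True), (0.40, False)]

    print(harm_is_imminent(crossing_quiet_street))  # False -> the robot stays put
    print(harm_is_imminent(stepping_before_bus))    # True  -> the robot must act

Even this toy version has to pick an arbitrary number to stand in for "imminent," and it says nothing at all about what to do when saving some people means harming others.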

Now, these laws are not presumed to be all that a robot is about. Each system would have a set of directives that tell it how to make value-based decisions on all sorts of things. Whole industries would probably have their own mandatory laws that any robot that works in that industry must adhere to, just as we have similar laws for humans. The point is to find a universal common denominator that makes robots suitable for interaction with humans. Given this, I must question the point of the third law, i.e., that robots must have a self-preservation instinct. I can understand manufacturers requiring it so that they don't lose an expensive piece of equipment, but if it is purely an economic decision, it won't always go that way. If a manufacturing plant is run exclusively by robots, you would want each of them to be willing to sacrifice themselves in order to keep the plant from blowing up and destroying them all. Whatever the reasons, it does not seem like a good candidate for a mandatory universal law. Of course, if you take it out you only have Two Laws, which doesn't have that fundamental ring you get with three.

So I'll take a stab at my own version of the fundamental laws of robotics. I think the first part of the first law is sound, so let's keep it:

First Law: A robot shall not take any action which may lead to harming a human being.

The slight change in wording ensures that the robot must project its actions into the future to determine probable indirect consequences, rather than just direct consequences. There are still problems, of course. We would need to replace "may lead" with a minimum probability, or the robot may never do anything. We would also have to specify what constitutes "harm" and how to determine it. But it is a start. I would then skip the second part of the first law and avoid all those difficult moral dilemmas. We don't automatically expect people to act heroically, so I don't think anyone would expect a machine to, which leads us to the second law:

Second Law: A robot shall obey any orders given to it by its registered owner, unless doing so would conflict with the First Law.

The original second law requires a robot to obey anyone, without regard to the person's authorization. How would it handle conflicting orders from different people? Requiring it to obey only the registered owner resolves that problem and provides a few other benefits. We now know who is responsible for the robot's actions in case they are ever called into question. If the robot robs a bank during the night, investigators know where to start to find out who is involved. Owners also now have a sense of control over their robots. They may grant access to others or to the general population in varying degrees, but they always have priority thanks to the second law. There would also have to be rules about how to authorize a transfer of ownership and what to do if the owner dies. The only catch is making sure an identity thief can't take control of the robot. But thanks to the first law, neither the owner nor a thief can use the robot as a weapon. And if someone is in danger, it will be up to the owner to direct the robot to save them, which means it is really the human in command who is intervening. And as mentioned, the original third law would be chucked.
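
For what it's worth, here is another purely hypothetical sketch, again in Python, of how the registered-owner rule might be checked. The credential comparison is a stand-in for whatever real authentication scheme would be needed to keep an identity thief out, and the harm numbers are as made up as before.

    # Hypothetical sketch of the registered-owner rule.
    HARM_THRESHOLD = 0.01

    def violates_first_law(projected_consequences):
        # projected_consequences: list of (probability, harms_a_human) pairs.
        return any(prob >= HARM_THRESHOLD and harms
                   for prob, harms in projected_consequences)

    class Robot:
        def __init__(self, owner_id, owner_credential):
            self.owner_id = owner_id                  # set at registration or transfer
            self.owner_credential = owner_credential  # placeholder for real authentication

        def obey(self, order, projected_consequences, issuer_id, credential):
            if issuer_id != self.owner_id or credential != self.owner_credential:
                return "refused: issuer is not the verified registered owner"
            if violates_first_law(projected_consequences):
                return "refused: order conflicts with the First Law"
            return "carrying out: " + order

    r = Robot("owner-42", "placeholder-credential")
    print(r.obey("water the garden", [(0.0, False)], "owner-42", "placeholder-credential"))
    print(r.obey("rob the bank", [(0.7, True)], "owner-42", "placeholder-credential"))
    print(r.obey("water the garden", [(0.0, False)], "stranger-7", "lucky-guess"))

Note that even a properly authenticated order from the owner is refused if it fails the first-law check, which is what keeps either an owner or a thief from using the robot as a weapon.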

Now, the whole robbing-a-bank idea is something we might want to rule out altogether. I think in order for our robotic friends to get along in society, we would want to require them to be law-abiding citizens. If you require the robot not to break the laws of the land in which it resides, you have essentially put it on a trust level equal to, and perhaps greater than, that of any person you might meet on the street. The robot could not steal property or use it without permission, could not cheat when preparing someone's taxes, could not lie under oath, and so on. This would be a great candidate for a second law, but it would require all robots to have knowledge of all local, state, and federal laws. You could not pre-program it, since laws change across time and location. That opens the door to incorrect instruction, and there goes your trust factor. No, our indelible laws must be universal in nature in order to work.

There is at least one behavior that I think would qualify as universal. Something we all inherently expect robots to do is tell us the truth, but they can certainly be programmed to deceive just as easily as anything else, or allowed to deceive others in order to achieve a goal. We humans generally agree that deception is not a valid means to an end, with certain important exceptions, such as saving one's own life or someone else's. I think we could safely bar the robot from deliberate deception, except where telling the truth would conflict with the first law. Knowing that all robots are required to tell the truth provides another crucial step toward gaining the public trust. I would also not allow the robot to be commanded to deceive, which is why this must be the second law in priority. And since robots normally deal in probabilities, we should couch it in those terms. Finally, this does not mean that a robot must be compelled to answer every query; it just must answer truthfully when it does. So below is my final version of the three laws of robotics:

First Law: A robot shall not take any action which may lead to harming a human being.

Second Law: When conveying information, a robot shall communicate the information as accurately as possible, unless such action conflicts with the First Law.

Third Law: A robot shall obey any orders given to it by its registered owner, unless doing so would conflict with the First or Second Laws.
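
To make the priority ordering explicit, here is one last hypothetical sketch tying the three together. The little "beliefs" dictionary stands in for whatever knowledge the robot actually holds, and, as before, every name and number is invented purely for illustration.

    # Hypothetical sketch of the revised laws applied in priority order.
    HARM_THRESHOLD = 0.01

    def violates_first_law(projected_consequences):
        return any(prob >= HARM_THRESHOLD and harms
                   for prob, harms in projected_consequences)

    def evaluate_order(owner_id, beliefs, issuer_id, order):
        # order: {"consequences": [(probability, harms_a_human)], "say": optional statement}
        # First Law: no action with a real chance of harming a human being.
        if violates_first_law(order["consequences"]):
            return "refused (First Law)"
        # Second Law: anything the robot is asked to say must match its best knowledge.
        statement = order.get("say")
        if statement is not None and beliefs.get(statement) is False:
            return "refused (Second Law: that would be deliberate deception)"
        # Third Law: obey, but only the registered owner.
        if issuer_id != owner_id:
            return "ignored (issuer is not the registered owner)"
        return "order carried out"

    beliefs = {"the rent check was mailed yesterday": False}
    print(evaluate_order("owner-42", beliefs, "owner-42",
                         {"consequences": [(0.0, False)],
                          "say": "the rent check was mailed yesterday"}))
    # -> refused under the Second Law: not even the owner can command the robot to deceive

The ordering does the real work here: truthfulness sits above obedience precisely so that no order, not even the owner's, can push the robot into deliberate deception.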

In addition, there would need to be a set of rules about how precise human commands to robots must be in order for robots to actually follow them. Otherwise, you get into the kind of ambiguous interpretations that caused the problems in the first place. Apparently, a lot of thought has gone into this topic in the decades since Mr. Asimov introduced it, and that is fitting, because I'm sure the day will come when we will indeed face the need to implement such a program. I would have loved to have had this conversation with him if he were still with us.

1 comment:

  1. So would I. Just think: every time you open a book of his, it's like he's speaking to you in the here and now, from 40 years ago.
