Saturday, March 26, 2011

The Three Laws

Sometime in my early school years, I remember reading a short essay on the Three Laws of Robotics, made famous by Asimov's classic compilation I, Robot. The original form of the laws is listed below:
  •  First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  •  Second Law: A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  •  Third Law: A robot must protect its own existence, as long as such protection does not conflict with the First or Second Laws.
At the time, I was impressed with both the simplicity and the reasonableness of the laws, and with the idea that robots would need to be unalterably programmed with such prime directives in order to allow them to interact in society. We would need to have some basis for public trust. Although I have not read many of them, I hear that several of Asimov's stories explore the ramifications of these laws in great detail. To get an idea, you can quickly read the plot synopsis of Runaround, the short story in which the laws first appeared.

I also remember hearing that in I, Robot, things go terribly wrong with the Three Laws. It wasn't until I saw the recent film starring Will Smith that I finally got to see this illustrated. It is a decent film in its treatment of some of Asimov's ideas, although as usual it is pumped up on action steroids and the more thoughtful elements become somewhat lost in the shuffle. But the main problem in the movie's version stems from someone's decision to create a master control system that allows all the individual robots to be coordinated toward a central goal. You would think that would be OK as long as the central computer obeyed the Three Laws, which it does, and would therefore use its great power to protect humanity, which it also does. But the central control system, which undoubtedly has access to global information, is smart enough to observe that we humans harm each other every day. Given that this is inevitable, the only way to prevent us from harm is to protect us at all times from ourselves. The master computer decides the only way to do this is to take control of society and of the lives of everyone who could potentially cause harm to someone else.

The first objection that comes to mind regarding this solution is that robots are supposed to obey us, not the other way around. But if you look at the Three Laws, the directive to obey us is second, while the directive to protect us from harm is first. Thus, it is perfectly consistent for the robot to take actions to protect us even if doing so goes against our wishes. That, in essence, is the flaw in the Three Laws that the film illustrates. The fact that this solution looks suspiciously like what occurs in a political overthrow, or revolution, is a poignant reminder of how such events in human history are often driven by people who must think they are acting for the good of society, and of how compelling their logic must seem to them.

Now that I've taken some time to think about it, I find myself asking what other flaws might be lurking in the Three Laws of Robotics. They are written using terms that seem clear to us, but which are not necessarily well defined to a logic-driven machine. For example, when I read the second part of the first law, I interpreted "harm" to mean imminent harm. We might expect a good person to try to save us from a dangerous situation, one that presents an immediate risk. We would not expect them to shelter us from any possible harm that might come to us. Would it help to add the word "imminent" to the first law? It may help, but it would not fix the problem. How do you define "imminent"? Computers work in terms of probabilities. How probable does the harm have to be to make it imminent? What if there are several people involved and saving some means harming the rest? The laws provide no guidance here, and in fact, our own consciences would have trouble deciding what to do. Do you take the course that saves the most people from harm? If the odds are even, do you pick one at random?
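
Just to make the problem concrete, here is a rough sketch in Python of what such a machine might be reduced to doing. Everything in it, the cutoff for "imminent", the harm estimates, the tie-breaking rule, is a number or a name I made up purely for illustration:

    import random

    IMMINENT = 0.5  # an assumed cutoff: harm this likely counts as "imminent"

    def is_imminent(harm_probability):
        # One possible (and arbitrary) reading of "imminent": more likely than not.
        return harm_probability >= IMMINENT

    def expected_harm(action):
        # Total estimated harm probability over everyone affected by this action.
        return sum(action["harm_probabilities"])

    def choose_action(actions):
        # Take the course that minimizes expected harm; break exact ties at random.
        best = min(expected_harm(a) for a in actions)
        return random.choice([a for a in actions if expected_harm(a) == best])

    print(is_imminent(0.3))  # False: below the cutoff, so no duty to intervene
    # Two people in danger, and saving either one raises the risk to the other.
    options = [
        {"name": "save person A", "harm_probabilities": [0.1, 0.8]},
        {"name": "save person B", "harm_probabilities": [0.8, 0.1]},
    ]
    print(choose_action(options)["name"])  # effectively a coin flip: the totals are equal

Every number in that little sketch is a judgment call someone would have to make in advance, which is exactly the point.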

Now, these laws are not presumed to be all that a robot is about. Each system would have a set of directives that tell it how to make value-based decisions on all sorts of things. Whole industries would probably have their own mandatory laws that any robot that works in that industry must adhere to, just as we have similar laws for humans. The point is to find a universal common denominator that makes robots suitable for interaction with humans. Given this, I must question the point of the third law, i.e., that robots must have a self-preservation instinct. I can understand manufacturers requiring it so that they don't lose an expensive piece of equipment, but if it is purely an economic decision, it won't always go that way. If a manufacturing plant is run exclusively by robots, you would want each of them to be willing to sacrifice themselves in order to keep the plant from blowing up and destroying them all. Whatever the reasons, it does not seem like a good candidate for a mandatory universal law. Of course, if you take it out you only have Two Laws, which doesn't have that fundamental ring you get with three.

So I'll take a stab at my version of the fundamental laws of robotics. I think the first part of the first law is sound, so let's keep it:

First Law: A robot shall not take any action which may lead to harming a human being.

The slight change in wording ensures that the robot must project its actions into the future to determine probable indirect consequences, rather than just direct consequences. There are still problems, of course. We would need to replace "may lead" with a minimum probability, or the robot may never do anything. We would also have to specify how to define and determine what constitutes "harm". But it is a start. I would then skip the second part of the original first law and avoid all those difficult moral dilemmas. We don't automatically expect people to act heroically, so I don't think anyone would expect a machine to, which leads us to the second law:

Second Law: A robot shall obey any orders given to it by its registered owner, unless doing so would conflict with the First Law.

The original second law requires a robot to obey anyone without regard to the person's authorization. How would it handle conflicting orders from different people? Requiring it to obey only the registered owner resolves that problem and provides a few other benefits. We now know who is responsible for the robot's actions in case it is called into question. If the robot robs a bank during the night, investigators know where to start to find out who is involved. Owners also now have a sense of control over their robots. They may grant access to others or to the general population in varying degrees, but they always have priority thanks to the second law. There would also have to be rules about how to authorize a transfer of ownership and what to do if the owner dies. The only catch is making sure an identity thief can't take control of the robot. But thanks to the first law, neither the owners nor a thief can use the robot as a weapon. And if someone is in danger, it will be up to the owner to direct the robot to save them, which means it is really the human in command who is intervening. And as mentioned, the original third law would be chucked.
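
To show roughly what I have in mind, here is a little sketch of how a robot might screen an incoming order under this scheme. The owner ID, the harm flag, and the checks themselves are all invented for illustration; a real authorization system would obviously be far more involved:

    class Robot:
        def __init__(self, owner_id):
            self.owner_id = owner_id  # set at registration or at a transfer of ownership

        def violates_first_law(self, order):
            # Stand-in for the projected-harm check discussed under the First Law.
            return order.get("may_harm_human", False)

        def obey(self, order, requester_id):
            if self.violates_first_law(order):
                return "refused: conflicts with the First Law"
            if requester_id != self.owner_id:
                return "refused: not the registered owner"
            return "executing: " + order["description"]

    robot = Robot(owner_id="owner-42")
    print(robot.obey({"description": "mow the lawn"}, requester_id="owner-42"))    # executing
    print(robot.obey({"description": "mow the lawn"}, requester_id="stranger-7"))  # refused: not the owner
    print(robot.obey({"description": "attack someone", "may_harm_human": True},
                     requester_id="owner-42"))                                     # refused: First Law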

Now the whole robbing a bank idea is something we might want to rule out altogether. I think in order for our robotic friends to get along in society, we would want to require them to be law-abiding citizens. If you require the robot not to break the laws of the land in which it resides, you have essentially put it on a trust level equal to, and perhaps greater than, that of any person you might meet on the street. The robot could not steal property or use it without permission, could not cheat when preparing someone's taxes, could not lie under oath, etc. This would be a great candidate for a second law, but it would require all robots to have knowledge of all local, state, and federal laws. You could not pre-program it, since laws change across time and location. That opens the door to incorrect instruction, and there goes your trust factor. No, our indelible laws must be universal in nature in order to work.

There is at least one behavior that I think would qualify as universal. Something that we all inherently expect robots to do is tell us the truth, but they can certainly be programmed to deceive just as easily as anything else, or they could be allowed to deceive others in order to achieve a goal. We humans generally agree that deception is not a valid means to an end, with certain important exceptions, like saving one's own life or someone else's. I think we could safely bar the robot from deliberate deception except where doing so conflicts with the first law. Knowing that all robots are required to tell the truth provides another crucial step toward gaining the public trust. I would also not allow the robot to be commanded to deceive. This is why it must be the second law in priority. And since robots normally deal in probabilities, we should couch it in those terms. Finally, this does not mean that a robot must be compelled to answer any query. It just must answer truthfully when it does answer. So below is my final version of the three laws of robotics:

First Law: A robot shall not take any action which may lead to harming a human being.

Second Law: When conveying information, a robot shall communicate the information as accurately as possible, unless such action conflicts with the First Law.

Third Law: A robot shall obey any orders given to it by its registered owner, unless doing so would conflict with the First or Second Laws.
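
To picture how the priorities would play out, here is a rough sketch that strings the three checks together in order. The threshold, the flags, and the helper names are my own assumptions; the point is only the ordering, including the fact that a robot may simply decline to answer rather than ever answer falsely:

    MIN_HARM_PROBABILITY = 0.05  # an assumed cutoff for "may lead to harming a human being"

    def respond(order, requester_id, owner_id):
        # First Law: refuse anything whose projected chance of harming a human is too high.
        if order.get("harm_probability", 0.0) >= MIN_HARM_PROBABILITY:
            return "refused (First Law)"
        # Second Law: never convey information believed to be false; declining to answer is allowed.
        if order.get("requires_false_statement", False):
            return "declined to answer (Second Law)"
        # Third Law: obey only the registered owner.
        if requester_id != owner_id:
            return "refused (Third Law: not the registered owner)"
        return "order carried out"

    print(respond({}, "owner-42", "owner-42"))                                  # order carried out
    print(respond({"requires_false_statement": True}, "owner-42", "owner-42"))  # declined to answer
    print(respond({"harm_probability": 0.9}, "owner-42", "owner-42"))           # refused (First Law)
    print(respond({}, "stranger-7", "owner-42"))                                # refused (Third Law)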

In addition, there would need to be a set of rules about how precise human commands to robots must be in order for them to actually follow those commands. Otherwise, you get into the kind of ambiguous interpretations that caused the problems in the first place. Apparently, a lot of thought has gone into this topic in the decades since Mr. Asimov introduced the laws, and that is fitting, because I'm sure the day will come when we will indeed face the need to implement such a program. I would have loved to have had this conversation with him if he were still with us.

Monday, March 14, 2011

Notes on the Battle of L.A.

I went to see Battle: Los Angeles this weekend. As I was checking times, a young lady buying tickets asked the ticket seller, "Is it true that it really happened?" The guy behind the glass looked up and, holding back a smirk, confirmed that it had indeed happened. I'm sure she figured it out about 10 minutes into the film. The real Battle of Los Angeles was an interesting incident in 1942 that has captured the imagination of U.F.O. followers ever since, but it has absolutely nothing to do with this film. I really didn't expect to be saying anything about it here, but it caught me by surprise in a way that I thought should be commented on. You see, I was expecting a science fiction film. What I saw was a very nicely directed war film. There wasn't the slightest trace of what sci-fi fans usually go to see, and that is a good thing. Who needs to be teased? If this is a war movie, then by God, let me put on my Marine helmet and enjoy it as such. In other words, the movie earned my respect for NOT trying to be science fiction.

It was a very clever idea. Being from L.A. myself, the idea of seeing a bona fide war film set right in my own backyard is something that I'll admit can draw me in. Even if I believe it will be cheesy, the familiarity factor is enough to interest me. But you could not pull off an invasion by another country without offending entire neighborhoods in this town. In a city where you find just about every nationality there is, the only invader that could get everyone rooting for the same team would have to be from outer space. To create a war scenario, the invader needs to show hostile intent from the first moment, so there is no time to bother trying to communicate with them. The Marines at Camp Pendleton would be called upon immediately to engage. And this is exactly what happens. Any information that is obtained about the aliens' biology or weaponry is used as intelligence to strike against them more effectively. Although their weapons are advanced, none of them are all that unusual. The filmmakers' intent is to produce the same experience you might expect when fighting any new enemy: you must learn how they think and what their military capabilities are. On top of that backdrop, you have all the elements that make war films worth all the carnage: heroic sacrifices, camaraderie born out of shared suffering, dealing with the ghosts of past memories, and getting a bunch of people to work together to overcome seemingly impossible circumstances. There are also some scenes with a family the Marines are trying to rescue that were touching enough to bring me to tears. It's no Saving Private Ryan, but I think war movie buffs will eat it up.

So my hat goes off to Jonathan Liebesman for having a clear vision of what he was trying to do and then doing it well. Yes, this is a Marine pride film, but not in the usual cheesy manner in which these guys are sometimes portrayed in Hollywood. No, this is one that does a decent job of actually honoring the soldier hero. Oorah.