My suggested alteration of the Second Law of Robotics

Argus Skyhawk

Commodore
Those of you who have read Asimov's robot stories probably remember these laws, as they were regularly mentioned, and were often a central part of the plots:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Asimov later argued that when artificial humans were developed in real life, they would surely have the same laws or something very similar built into them. I agree that robots would certainly have built-in safeguards to prevent them from harming people and so forth, but I'm not convinced that the wording of the laws would be so rigidly followed as was generally depicted in Asimov's fiction.

I'm thinking particularly of the Second Law, which, it seems to me, would make it too easy to vandalize property. Imagine a class of second graders is taking a tour of a candy factory where a robot is working, and a bratty kid whispers to it, "Go stick your head in that bucket of chocolate sauce." The Second Law would require the robot to obey. Now, perhaps the robot has been instructed by its owners not to do anything like that, and its owners' orders would take precedence over those of young visitors, so it wouldn't actually soak its head; but the stories show robots suffering internal damage when faced with conflicting orders, so a dumb seven-year-old could still cause a lot of trouble.

Delinquent teenagers could cause damage by ordering robots to harm themselves. I think I remember this happening in the short story version of Bicentennial Man, though it has been a while since I read it.

Anyway, it seems to me that the Second Law would be improved if it read:

"A robot must obey orders given to it by authorized human beings, except where such orders would conflict with the First Law."

What do you guys think?
 
Seems pretty logical to me. Especially if robots are to be used as property - having a law that would make robots aid people in stealing them ('follow me') seems counterintuitive.

That said, the real joy of Asimov's robot stories was looking at the laws and then trying to figure out ways in which they didn't work or caused some sort of unforeseen problem. I don't remember the stories very clearly anymore (except one where a robot decides he was made by God, not by the clearly inferior humans), but I wouldn't be surprised if Asimov addressed this fundamental problem at least once.
 
If someone told the robot to stick its head in a bucket of chocolate sauce, wouldn't the Third Law apply? The robot wouldn't be able to do it, since the chocolate sauce might damage its internal workings if any got inside. If anything, such a command might cause a contradiction, and the robot might just shut down.

I do remember the scene in Bicentennial Man where the rebellious daughter commands the robot to jump out the window, and he does it without any injury, but that might just be because these robots are designed to survive falls.
 
The Second Law itself would take care of this in a way, at least in the Wonka-factory example lol... if it's your robot, and robots certainly are aware that they are property, then its owner/creator's orders [or, to be precise, the robot's interpretation of those orders] would take precedence over another human's orders... The standing order would be "do your job here at the factory making Everlasting Gobstoppers", and sticking your head in a bucket of chocolate wouldn't fit with those orders, unless it was to look for a small fat boy who'd fallen in and was in danger of being sucrosed.
 
If someone told the robot to stick its head in a bucket of chocolate sauce, wouldn't the Third Law apply? The robot wouldn't be able to do it, since the chocolate sauce might damage its internal workings if any got inside. If anything, such a command might cause a contradiction, and the robot might just shut down.

No, there's a clear hierarchy of laws. First overrides Second and Second overrides Third. "A robot must protect its own existence as long as such protection does not conflict with the First or Second Law." So if protecting its existence conflicts with obeying an order from a human, then obeying the order will always take precedence.
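To put the hierarchy another way, here's a little toy sketch (entirely my own illustration, not anything from the books): the laws are checked strictly in order, so self-preservation only comes into play when neither of the higher laws does.

# Toy illustration of the strict hierarchy -- made up for this post, not from Asimov.

def decide(action_harms_human: bool, ordered_by_human: bool, action_harms_robot: bool) -> str:
    if action_harms_human:        # First Law: absolute veto
        return "refuse"
    if ordered_by_human:          # Second Law: obey, even at cost to itself
        return "obey"
    if action_harms_robot:        # Third Law: only consulted when nothing above applies
        return "avoid"
    return "carry on"

print(decide(False, True, True))   # "obey" -- an order beats self-preservation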
 
^A robot that valued protecting its own hide over taking orders from humans? Somehow I don't think that would go over well.

In "Runaround", an experimental model had greater emphasis placed on the Third Law because it was supposed to work in hazardous conditions (the bright side of Mercury). Once when it was ordered to enter a hazardous environment, the wording was extremely casual, which in that case reduced the importance of the Second Law. Increased Third Law and decreased Second Law became exactly equal, and the poor thing couldn't decide what to do. It was stuck literally running around in circles.

I think Gregory Powell finally broke the stalemate by putting himself in danger where it could see him. First Law trumps everything.
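You can think of it as two opposing pulls balancing out. Here's a made-up toy sketch (the function and the numbers are mine, not anything from the story) of the stalemate and how the First Law breaks it:

# Made-up numbers to illustrate the "Runaround" stalemate; not from the story itself.

def speedy_decides(second_law_pull: float, third_law_pull: float, human_in_danger: bool) -> str:
    if human_in_danger:                           # First Law trumps everything
        return "rescue Powell"
    if abs(second_law_pull - third_law_pull) < 1e-9:
        return "run in circles"                   # equal and opposite -> equilibrium
    return "advance" if second_law_pull > third_law_pull else "retreat"

# Casual order (weak Second Law) vs. hardened Third Law: a perfect standoff...
print(speedy_decides(0.5, 0.5, False))   # run in circles
# ...until Powell puts himself in danger.
print(speedy_decides(0.5, 0.5, True))    # rescue Powell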
 
Didn't the first and second laws result in a robot takeover in I, Robot because of a logic loophole in them?

The laws are too simple and need some conditions/stipulations.
 
^ Maybe the solution is simply to swap laws two and three, then.

But then if you needed to order a robot to undertake a hazardous duty, it wouldn't obey. And that's part of what robots would be made for in the first place, isn't it? Asimov's idea behind the Laws was that robots were machines created to serve humans' interests, and like any machines, they would have safeguards installed to ensure they functioned properly. One can certainly argue that the Laws raise serious moral questions when robots are actual sentient beings, since they're basically a way of programming those beings to be willing slaves. But Asimov was working from the idea that humans would want to design their machines with such safeguards in mind, safeguards that served human interests rather than the machines' interests. And a robot that places obedience to humans above its own survival serves human interests better than a robot that does the reverse.


Didn't the first and second laws result in a robot takeover in I, Robot because of a logic loophole in them?

Assuming you mean the movie of that name, no, I don't think that's the case. Rather, the villainous AI in the movie develops an idea analogous to the "Zeroth Law" devised by R. Daneel Olivaw in the later Robot novels: that robots must protect humanity from harm above all else, even above the protection of and obedience to individual humans. (Resulting in a situation that's also very similar to Jack Williamson's The Humanoids, in which the "Prime Directive" of the Humanoids -- i.e. androids -- is to take care of people, even if that means paternalistically controlling their lives.)

The laws are too simple and need some conditions/stipulations.

But that's the whole point of the stories -- exploring how the flaws in the formulation of the laws create problems.
 
Reading this thread made me think of RoboCop's first three directives:

  • 1. "Serve the public trust"
  • 2. "Protect the innocent"
  • 3. "Uphold the law"

Again, simplistic, and again, done so for story purposes (especially the "classified" fourth directive). However, looking at them, it occurs to me that, except for a broad interpretation of the Second Law (I guess if whoever is programming or giving directions to the robot includes the laws of the land, it would fit), none of Asimov's laws expressly commands the robot to follow human laws. There is nothing in the Laws of Robotics that would preclude someone from ordering a robot to rob an empty store, bank, house, etc.
 
^ Maybe the solution is simply to swap laws two and three, then.

But then if you needed to order a robot to undertake a hazardous duty, it wouldn't obey. And that's part of what robots would be made for in the first place, isn't it? Asimov's idea behind the Laws was that robots were machines created to serve humans' interests, and like any machines, they would have safeguards installed to ensure they functioned properly.

He also stipulated that for whatever reason, the positronic brain could not be built without these laws. They weren't just a safeguard, they were an inherent part of the design. This is something that tends to be overlooked. They could be tweaked and bent, but as a rule they could not be broken. Any robot that violated them would either break down as a result, or did so because it was already in the process of breaking down. Anyone who tried designing a brain without these laws would automatically fail.

This was Asimov's way of avoiding the Frankenstein cliche (robots are inherently noble), but it did mean that in-universe, it wasn't possible for designers to simply swap the Second and Third Laws or any such thing.
 
What do you guys think?

I don't think the Second Law needs your refinement. Property belongs to someone; damaging property results in indirect damage to the owner of the property. It may not be physical damage, but it's still damage. First Law blocks it.

Admittedly, I think I can recall instances in the novels where robots have damaged other people's property (like breaking guns or cars), but in those cases it was to prevent more serious injury to humans, i.e. two competing First Law needs being weighed against each other.

In any event, the books imply that the more sophisticated the Robot, the more sophistry they can indulge in to bend the Laws to their purposes. The Zeroth Law, and what Daneel Olivaw does with it, is perhaps the ultimate expression of this.
 
Reading this thread made me think of RoboCop's first three directives:

  1. "Serve the public trust"
  2. "Protect the innocent"
  3. "Uphold the law"

Which were surely inspired by Asimov (and perhaps Williamson, since he coined the term "Prime Directive"), and have some resonances with the Laws. Directive 1 is kind of a variant of the Zeroth Law: protect the good of society as a whole. Directive 2 is a limited First Law that exempts wrongdoers, obviously necessary in a law-enforcement cyborg. And Directive 3 is kind of like "Obey orders," except the orders are the laws of the community.


However, looking at them, it occurs to me that, except for a broad interpretation of the Second Law (I guess if whoever is programming or giving directions to the robot includes the laws of the land, it would fit), none of Asimov's laws expressly commands the robot to follow human laws. There is nothing in the Laws of Robotics that would preclude someone from ordering a robot to rob an empty store, bank, house, etc.

No, because as Holdfast says, stealing from someone or otherwise depriving them of property is inflicting financial or emotional harm upon them. Thus it's forbidden by the First Law.
 
Technically yes, but apparently that sort of distinction was lost on Asimov's robots until Herbie came along in "Liar!". Being telepathic, he was the first robot that actually understood the impact of emotional damage. And he was a unique exception.

It probably would have been quite a while before robots were made complex enough to make that distinction. Until then, they probably could be tricked into robbing (empty) stores, houses, etc. easily enough.
 
He also stipulated that for whatever reason, the positronic brain could not be built without these laws. They weren't just a safeguard, they were an inherent part of the design. This is something that tends to be overlooked. They could be tweaked and bent, but as a rule they could not be broken. Any robot that violated them would either break down as a result, or did so because it was already in the process of breaking down. Anyone who tried designing a brain without these laws would automatically fail.

This was Asimov's way of avoiding the Frankenstein cliche (robots are inherently noble), but it did mean that in-universe, it wasn't possible for designers to simply swap the Second and Third Laws or any such thing.

Can you give a reference to this since it contradicts several of his stories?
 
"Evidence" flat-out says "A positronic brain cannot be constructed without them."

It's also discussed in "Little Lost Robot" and alluded to in "Escape!"

Inconsistencies are bound to pop up, though. Which stories did you have in mind?
 
"Evidence" flat-out says "A positronic brain cannot be constructed without them."

It's also discussed in "Little Lost Robot" and alluded to in "Escape!"

Inconsistencies are bound to pop up, though. Which stories did you have in mind?

I have to disagree with that interpretation of that quote; it was spoken in the context of them not allowing a positronic brain without the Three Laws to be built without their knowledge.

As for "Little Lost Robot", I think it is the opposite:
"If you'll give me a chance, Susan - Hyper Base happens to be using several robots whose brains are not impressioned with the entire First Law of Robotics."
I'm not sure how it's alluded to in "Escape!"; the closest I can find is that once impressioned with the Laws, they cannot be broken. But that's not entirely true either, since most of Asimov's robot short stories are about how to interpret those three Laws and the hijinks that happen when robots interpret them differently from how humans do.

The primary story I'm thinking of, though, is ". . . That Thou Art Mindful of Him".
 
Can you give a reference to this since it contradicts several of his stories?
Asimov himself isn't consistent across his robot stories. Some stories, like "Evidence," state that a positronic brain can't be built without the Three Laws, while other stories, like The Caves of Steel, state that it's technically possible but a practical impossibility. (For example, the Terran roboticist in Caves says that it would take a team about half a century to build a Law-less positronic matrix.)
 
^ Maybe the solution is simply to swap laws two and three, then.

But then if you needed to order a robot to undertake a hazardous duty, it wouldn't obey. And that's part of what robots would be made for in the first place, isn't it? Asimov's idea behind the Laws was that robots were machines created to serve humans' interests, and like any machines, they would have safeguards installed to ensure they functioned properly.

He also stipulated that for whatever reason, the positronic brain could not be built without these laws. They weren't just a safeguard, they were an inherent part of the design. This is something that tends to be overlooked. They could be tweaked and bent, but as a rule they could not be broken. Any robot that violated them would either break down as a result, or did so because it was already in the process of breaking down. Anyone who tried designing a brain without these laws would automatically fail.

This was Asimov's way of avoiding the Frankenstein cliche (robots are inherently noble), but it did mean that in-universe, it wasn't possible for designers to simply swap the Second and Third Laws or any such thing.

If you've read the Caliban trilogy (by far the best non-Asimov story set in the Robots-Foundation universe), they solve this by using a new kind of brain (gravitonic rather than positronic, or some such), and it has the "New Laws", which remove the inaction clause, make the Third Law equal to the Second, and change the Second to say "cooperate" instead of "obey". It worked out quite nicely there. :)
 