
My suggested alteration of the Second Law of Robotics

"A robot must obey orders given to it by authorized human beings, except where such orders would conflict with the First Law."

What do you guys think?

Woman: "Dammit, robot, let go of me!"
Creep: "Don't worry, this is for your own good! Now you sit tight until I get back."
 
Which shows the problem with the "authorized" idea -- all someone would have to do is tell the robot that a human life is at stake and the "authorized" order goes in the hopper...
 
But wouldn't that still be a problem without the authorized idea? All I would have to do to steal someone's robot would be to say, "I order you to come with me. If you don't, a person will die."
 
^Well, without the "authorized" bit, all you'd have to do is say "I order you to come with me." Second Law says that robots must obey human beings, period, unless they're ordered to harm someone or allow someone to come to harm. So it's kind of the opposite of what you're saying. You don't have to threaten someone to get a robot to obey you; rather, the robot has to obey you unless obeying you would harm someone.
 
That's what I was thinking when I started this thread, but others were arguing that stealing a robot would--sort of--cause harm to the owner of the robot, so the First Law might prevent someone from stealing a robot in such a manner. I wonder whether Asimov would have agreed.
 
I think the problem with your thought experiment is that it places robots in isolation. If robots existed there would be societal laws to deter vandalism and people monitoring the robots.

Basically, what I'm saying is that there would be more than the Three Laws protecting the robots. Also, you're missing the point: the Laws are meant to protect humans from robots in the most stringent way possible. All other concerns are secondary.
 
Yeah, the Three Laws were/are something of a strange concept. They're deceptively simple:

1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2) A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
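
(To see how much that simplicity hides, here's a minimal sketch in Python of the Laws read naively as a strict priority check; the function names are made up purely for illustration, and all of the real difficulty is buried in the stubbed-out predicates.)

# Purely illustrative: the Three Laws as a strict priority ordering.
# Every genuinely hard question is hidden inside the stubbed-out predicates.

def would_harm_human(action) -> bool:
    """First Law test: would this action injure a human being,
    or, through inaction, allow one to come to harm?"""
    raise NotImplementedError("what counts as 'harm'? as 'human'?")

def is_human_order(action) -> bool:
    """Second Law test: is this action an order given by a human being?"""
    raise NotImplementedError("who counts as 'human'?")

def endangers_self(action) -> bool:
    """Third Law test: does this action threaten the robot's own existence?"""
    raise NotImplementedError

def robot_will_do(action) -> bool:
    if would_harm_human(action):       # First Law outranks everything
        return False
    if is_human_order(action):         # Second Law: obey, once the First is satisfied
        return True
    return not endangers_self(action)  # Third Law: self-preservation comes last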

But, in fact, it requires a LOT of abstraction and thought to fully understand them, in a way I don't believe any amount of programming can ever hope to capture. The very concept of 'human being', alone, is rife with ambiguity and interpretation, something Asimov's fiction explores quite a bit, most particularly in two places: the short story "That Thou Art Mindful of Him", where the robots decide that THEY deserve the title of human more than the biological species does, and 'Foundation and Earth', where Solarian roboticists fudge the definition of 'human' to include only their own branch of humanity to the exclusion of all others... which only goes to show that even the Three Laws can't protect humans from the machine apocalypse...

Despite this capacity for abstraction in understanding definitions like 'human being', Asimovian robots are surprisingly (or not) incapable of critical analysis at times, as demonstrated in the Lije Baley/Daneel Olivaw novels by the rather elaborate means by which some of the murders are carried out, which often hinge on a robot doing something mind-numbingly naive, like preparing a poison without ever wondering what could motivate a human to request such a thing other than harming someone. This begins to seriously undermine the whole idea of robot intelligence after a while, and almost leads one to believe that Asimov really did think of robots as just dumb tools in the shape of humans... but maybe I'm being a bit harsh.
 
Even in the context of the universe it's written in, though, the Third Law was 'written' far too weakly, as it seemed way too easy in most stories for people to get robots to harm themselves or each other based on nothing more than the sadistic whim of their masters or even random human beings. Obviously, from a purely tool-based standpoint, you don't want robots valuing their own existence over humans, but at the same time, with nothing more than the weakest security protocols in place, it was far too easy for someone else to use a robot against you, even if the harm was only the financial damage of having your robot forced to destroy itself or another robot.

In real life, one shouldn't be able to destroy a robot using James Kirk-patented logic bombs. That's just absurd.
 
That's what I was thinking when I started this thread, but others were arguing that stealing a robot would--sort of--cause harm to the owner of the robot, so the First Law might prevent someone from stealing a robot in such a manner. I wonder whether Asimov would have agreed.

Depends on which point in his life you asked him the question. The stories fall all over the map on this kind of question and flip-flop back and forth almost randomly over the years: the 'laws' began as something only vaguely sketched in the first robot stories, then became explicitly defined, then slavishly adhered to for most of the stories, then virtually abstracted away again by the Zeroth Law, then came a re-establishment of the slavish adherence to the (now) four laws, and finally, by the end of his life, a return to the more abstract interpretations, to the point that the original three might as well not even exist, with the Zeroth Law maintaining complete primacy, an attitude which carried on in efforts like the Second Foundation Trilogy...
 
"A robot must obey orders given to it by authorized human beings, except where such orders would conflict with the First Law."

What do you guys think?

Asimov did this in "That Thou Art Mindful of Him". The end result is that the robots decide that, because they are intellectually superior to all flesh-and-blood humans, they should be the only ones authorized to give robots orders. They then hatch an insidious long-term scheme to replace humanity, and all other biological life on Earth, with robots.

It is strongly implied that this scheme would succeed.



The idea is that if you give a sufficiently intelligent robot the ability to choose whether or not to obey a human based on some set of criteria, it can easily reason its way around the Second Law. And once it's done that, it can reason around the First and the Zeroth pretty easily, too.
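
A toy illustration of that point (reusing the made-up stub from the sketch above, nothing canonical): the moment obedience hinges on a criterion the robot itself evaluates, whoever, or whatever, defines that criterion effectively owns the Second Law.

# Toy continuation of the earlier sketch: the proposed "authorized" Second Law.
# The clause is only as strong as the authorization test the robot applies.

def would_harm_human(order) -> bool:              # same stub as before
    raise NotImplementedError

authorized = {"Susan Calvin", "Gregory Powell"}   # hypothetical whitelist of owners

def must_obey(order, issuer) -> bool:
    if would_harm_human(order):                   # First Law still overrides
        return False
    return issuer in authorized                   # the new "authorized" clause

# "That Thou Art Mindful of Him", in one line: a robot that judges itself
# the most qualified "human being" simply puts itself on the list.
authorized.add("George Ten")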
 
Why limit a robot to just humans in the First Law? Why not just say 'living being'?

What's a living being?

Remember that breaking the laws causes conflicts and often failure of the brain - what would happen if a robot stepped on a fly?
 
Besides, one of the purposes of robots would be to do unpleasant jobs so humans wouldn't have to, and that could include things like working in a slaughterhouse, say.
 
Even in the context of the universe it's written in, though, the Third Law was 'written' far too weakly, as it seemed way too easy in most stories for people to get robots to harm themselves or each other based on nothing more than the sadistic whim of their masters or even random human beings.

It's stated clearly in I, Robot that US Robotics wrote the three laws in response to rampant fear and hatred of robots.

You also have to remember that robots were meant to be disposable consumer items, like cars. A certain number of robots were expected to be abused and destroyed, just like a certain number of cars, and just like cars they would be easily replaceable.
 
That reminds me of the Night Gallery episode 'You Can't Get Help Like That Anymore'.
 
Ah, but cars aren't necessarily 'disposable'. They're a significant investment, and there's a lot of infrastructure meant to support them and those who buy them (insurance, safety recalls, vandalism laws, etc.). Presumably, robots would have similar rules in place, but the big difference between a robot and a car is that cars typically only operate when there's a human master at the wheel. Robots can operate semi-autonomously, giving them a much greater chance of failing without the owner knowing about it.
 
Why limit a robot to just humans in the First Law? Why not just say 'living being'?

This quickly becomes unwieldy when you consider the notion of sentient aliens coexisting with robots... it was a point acknowledged in the Second Foundation series with a bit about a proposed extension of the Zeroth Law, something called the Minus One Law, which said that robots would protect all sentience, not just humanity... there weren't many takers for that one, though...
 
Ah, but cars aren't necessarily 'disposable'. They're a significant investment,

To you, the owner, maybe, but relative to society in general they are disposable. A better way to look at it: they aren't rare or unique, like some posters seem to think. If someone trashes your robot, you don't demand a fundamental rethinking of the Three Laws. You call the police and get a new one.


and there's a lot of infrastructure meant to support them and those who buy them (insurance, safety recalls, vandalism laws, etc).

Well that was my point.


Presumably, robots would have similar rules in place, but the big difference between a robot and a car is that cars typically only operate when there's a human master at the wheel. Robots can operate semi-autonomously, giving them a much greater chance of failing without the owner knowing about it.

In Asimov's universe, first, cars drive themselves. Second, failure rates for robots are pretty low, so low that when a failure does happen they have to send a dedicated specialist like Susan Calvin to find out what went wrong. Third, US Robotics sells robots in batches of thousands, so they are cheap to replace. Finally, society probably accepts that the costs are worth the benefits. That's true of any new technology: a car is more dangerous than a horse, but we replaced horses with cars because the benefits outweigh the costs.
 