February 27 2013, 05:38 AM   #4
Tiberius
Commodore
 
Re: Moral issues with Robotics

newtype_alpha wrote:
You'd have to ask the robots.

And no, I'm not being sarcastic.

But could we trust their answer? If the robot says it is alive, how could we tell whether it actually means it or is just saying so to keep us from disassembling it?

The machine gains rights if and when it gains moral agency and the ability to independently make informed decisions about the nature of its own existence and its role in society.
We face the problem of how to determine this. And there's another problem: what if I create a robot that will clearly reach this point, but I include a chip or something that shuts it down BEFORE it gets there? Am I acting immorally?

With that level of independence comes the implication that some of the robot's decisions may not agree with the priorities of its creators and/or owners, and then you have the question of under what circumstances a decision made by a robot takes precedence over decisions made by humans (or, for that matter, other robots). That, then, becomes the question of "rights": a demarcation line of personal sovereignty, and of the circumstances under which a robot can make decisions that no higher authority can override.
This gets interesting if you replace "robot" with "child" and "creators" with "parents".

If it's not self aware, it will have very few (if any) rights that aren't given to it by interested humans. Animals are currently in this situation; certain urban critters have a peculiar set of rights and privileges, apparently just because animal rights activists think they're cute and feel bad when people hurt them. The delegation of rights is otherwise entirely arbitrary; gluing your dog to the floor until he starves to death constitutes animal cruelty, but somehow killing mice with sticky traps doesn't. Likewise, dogs face an immediate death sentence for the crime of biting a human, while rats face a death sentence for the crime of being rats.
While I agree that the delegation of animal rights is somewhat arbitrary (as your rat-trap example illustrates), I think the underlying principle is that cruelty to an animal is wrong because the animal can feel pain. To relate this to the topic: would it be wrong to crush a robot if the robot would suffer from it? And how could such suffering be shown to exist?

A conscious computer rates no better than a squirrel if it isn't self-aware. We may feel a little awkward about accidentally running it over with a car (like my dad did to his phone last month) but it's just a computer, not yet a person.
Why should self-awareness count as the defining factor rather than consciousness? If we say that a squirrel is conscious but not self-aware, does that make it okay to intentionally run one over?