With the ever-increasing capabilities of robots and artificial intelligences, I've been wondering: at what point does a robot stop being a device you can just turn off and dismantle, and start being an entity that deserves rights?
You'd have to ask the robots.
And no, I'm not being sarcastic. The machine gains rights if and when it gains moral agency and the ability to independently make informed decisions about the nature of its own existence and its role in society. That level of independence implies that some of the robot's decisions may not agree with the priorities of its creators and/or owners, which raises the question of under what circumstances a decision made by a robot takes precedence over decisions made by humans (or, for that matter, other robots). That, then, becomes the question of "rights": a demarcation line of personal sovereignty, and of the circumstances under which a robot can make decisions that no higher authority can override.
How will we be able to tell when a computer becomes a conscious entity, even if it is not self-aware? Will it have any legal rights?
If it's not self-aware, it will have very few (if any) rights that aren't granted to it by interested humans. Animals are currently in this situation; certain urban critters have a peculiar set of rights and privileges, apparently just because animal rights activists think they're cute and feel bad when people hurt them. The delegation of rights is otherwise entirely arbitrary: gluing your dog to the floor until he starves to death constitutes animal cruelty, but somehow killing mice with sticky traps doesn't. Likewise, dogs face an immediate death sentence for the crime of biting a human, while rats face a death sentence for the crime of being rats.
A conscious computer rates no better than a squirrel if it isn't self-aware. We may feel a little awkward about accidentally running it over with a car (like my dad did to his phone last month), but it's just a computer, not yet a person.