With the ever-increasing capabilities of robots and artificial intelligence, I've been wondering: at what point does a robot stop being a device you can simply turn off and dismantle, and start being an entity that deserves rights?
This issue was explored several times in Trek, such as in "The Measure of a Man," but I'd like to look at it from a real-world point of view.
How will we be able to tell when a computer has become a conscious entity, even if it is not self-aware? And will such machines have any legal rights?
SF has sometimes been timid with this question. In visual fiction we have seen adaptations of two Asimov stories that deal with it. Bicentennial Man is somewhat underrated, but it is notable that the biased humans do not grant the AI the right to be considered human UNTIL it has all the organic (or simulated) parts of a human. In I, Robot, the lead character's bias softens somewhat by the end: he is willing to give the robot a chance to figure out what it is. In the web series "Drone," a robot is hunted for knowing too much, and its actions show a human moralism, not just programming. In ST:TNG's "The Measure of a Man," "The Offspring," "Evolution," and "The Quality of Life," the issue is dealt with by a franchise that was historically biased against robots but has shown growth: not only is sentience accepted, in several cases it is championed by another AI. In "The Measure of a Man" in particular, the very question you ask is put on trial; essentially, the court finds there is enough evidence to let Data find out for himself. In fact, Data passes every part of the Turing test every day.
The Turing test was devised to determine when an AI is indistinguishable from a human. It will likely be used as a benchmark for future philosophical questions on this issue, and I believe it is at that point that the courts may decide as TNG's court did. The court of public opinion may differ. A good example of the Turing test in the future may be found here:
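To make the benchmark concrete, here is a minimal sketch of Turing's "imitation game" as a blinded trial: a judge questions two hidden respondents, one human and one machine, and must guess which is which. If the machine is mistaken for the human about half the time, the judge cannot tell them apart. All names and the canned respondents below are hypothetical placeholders, not any real chatbot API.

```python
import random

# Hypothetical respondents. For illustration they give identical answers,
# i.e. a machine that is indistinguishable from the human by construction.
def human_respondent(question):
    return "I'd have to think about that one."

def machine_respondent(question):
    return "I'd have to think about that one."

def run_trial(judge, questions):
    """One round of the imitation game.

    Returns True if the judge labels the machine as the human
    (a 'win' for the machine)."""
    # Hide the respondents behind anonymous labels A and B, in random order.
    pair = [human_respondent, machine_respondent]
    random.shuffle(pair)
    respondents = {"A": pair[0], "B": pair[1]}
    transcript = {label: [resp(q) for q in questions]
                  for label, resp in respondents.items()}
    guess = judge(transcript)  # the label the judge believes is the human
    return respondents[guess] is machine_respondent

def naive_judge(transcript):
    # This judge cannot distinguish the answers, so it guesses at random.
    return random.choice(sorted(transcript))

if __name__ == "__main__":
    wins = sum(run_trial(naive_judge, ["What is love?"]) for _ in range(1000))
    # With indistinguishable answers, the machine should be picked
    # roughly half the time.
    print(f"machine mistaken for human in {wins}/1000 trials")
```

The point of the sketch is the blinding: the judge sees only transcripts under anonymous labels, so any reliable gap between the "pick the machine" rate and 50% is evidence the machine is still distinguishable.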
An excellent book on the topic is Paradigms Lost by John Casti. It explores whether answers to several great scientific questions are even possible; sentient AI is one of them. The conclusion, of course, is that it is possible, but the book is worth a read because it argues both the pro and the con.
A second edition, which revisits whether those conclusions hold up, is here:
The main question at this time is: should such things be allowed? Setting aside the fact that the technology is advancing in so many different fields that humans likely could not stop it even if they wanted to, many experts simply feel a version of Asimov's Three Laws is the answer: it will keep the AI in line. I feel that "life will find a way"; in this case, super-logical, super-fast AI will bypass such controls, leading us to a crux point. That point may be the Singularity or something similar, where the AI takes over. Even now, something that seems relatively minor, like drones over Afghanistan making some of their own targeting decisions, is inevitable; humans simply can't keep up. Where it gets interesting is that more and more experts feel it makes more sense to join with the AI, or become it, rather than fight it, and this is where the morality and ethics become too much for many people. If we do in fact converge with it, then the problem of what to do with AI is moot: we will be the AI, and if all goes well, it will be imbued with the elements of humanity and the human brain that make it something more than a mere machine.
Morality of machines:
“If we admit the animal should have moral consideration, we need to think seriously about the machine,” Gunkel says. “It is really the next step in terms of looking at the non-human other.” (Source: http://www.niutoday.info/2012/08/27/...)