February 27 2013, 03:35 AM
Moral issues with Robotics

With the ever-increasing capabilities of robots and artificial intelligences, I've been wondering: at what point does a robot stop being a device you can simply turn off and dismantle, and start being an entity that deserves rights?

This issue was explored several times in Trek, notably in "The Measure of a Man," but I'd like to look at it from a real-world point of view.

How will we be able to tell when a computer becomes a conscious entity, even if it is not self-aware? Will such machines have any legal rights?
Tiberius