February 28 2013, 08:42 AM   #12
Re: Moral issues with Robotics

Edit_XYZ wrote:
Tiberius wrote:
But could we trust their answer? If the robot says it is alive, how could we tell if it actually means it or is just saying it so we don't disassemble it?
Put a robot in a room. If you cannot tell the difference between it and a human by testing its mentality, without looking at either, then the robot is self-aware.
That would depend on the test though, wouldn't it?
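Edit_XYZ's criterion is essentially a blind, text-only Turing-style test: the judge sees typed answers only, never the respondents. A minimal sketch of that protocol (the respondent and judge functions here are hypothetical stand-ins invented for illustration, not anything from the thread) also shows Tiberius's worry in miniature: if both parties give indistinguishable answers, the judge can only guess at chance.

```python
import random

def blind_trial(judge, respondents, questions):
    """respondents: dict of name -> respond(prompt) function.
    Anonymize the two respondents as 'A'/'B' in random order,
    collect text-only answers, and ask the judge to name which
    anonymous label is the robot. Returns True if the judge
    correctly identified the robot."""
    names = list(respondents)
    random.shuffle(names)
    anon = dict(zip("AB", names))
    transcripts = {label: [respondents[name](q) for q in questions]
                   for label, name in anon.items()}
    guess = judge(transcripts)      # the judge sees only the text
    return anon[guess] == "robot"

# Indistinguishable stand-ins: both give the same canned reply,
# so nothing in the transcript can separate them.
canned = lambda prompt: "Yes, I feel that I am alive."
respondents = {"human": canned, "robot": canned}

# With identical transcripts, even a judge trying its best can
# do no better than a coin flip; this one guesses randomly.
judge = lambda transcripts: random.choice(list(transcripts))
```

Run many trials and the identification rate sits near 50%, which is the point of the objection below: the verdict depends entirely on whether the questions can actually separate the two.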

If the animal is not self-aware, its "rights" are given by humans, not requested by the animal.
But animals never request those rights. When was the last time a chimp said, "Please don't keep me in a small cage"?

These rights are ultimately about humans, not about animals - the animals are passive elements that cannot even accept or refuse them.
I gotta disagree. A lot of animal rights legislation is designed to protect animals. Humans don't really gain from it.

And then, there are other problems:
In nature, predation (and all the pain it involves) is one of the main causes of death. Animals are part of the food chain - predators to some species, prey to others.
So are humans. Granted, we can avoid it for the most part, but animals still occasionally prey on humans.

Can we even be certain a non-self-aware entity is sentient/can feel pain?
Some studies claim that animals can feel pain; other studies claim that the chemicals released are identical to those released during vigorous exercise.
What about a fly? Can it feel pain?

Ultimately, one can only be certain that someone is sentient if he/she tells you so himself/herself - AKA if the entity is self-aware/relatively intelligent.
But that's my point. When it comes to robots, how can you tell the difference between a robot that says it is sapient because it genuinely believes it, and a robot that claims to be sapient because it has determined that doing so will keep it from being destroyed?
Tiberius