But could we trust their answer? If the robot says it is alive, how could we tell if it actually means it or is just saying it so we don't disassemble it?
Put a robot in a room. If you cannot tell the difference between it and a human by testing its responses, without looking at either, then the robot is self-aware.
The Turing Test is not infallible... in fact I'd say a sufficiently well-programmed robot with enough processing power could easily fool a person by simply repeating (for example) philosophical stances from sources it has stored, without grasping their meaning. I don't know whether any human could discern the difference, and I certainly don't believe passing the test would make the robot self-aware.
A better way to judge this is to determine whether a machine can go beyond its programming, i.e. whether it wants to do something that was not included in its initial programming. The very fact that it wants something may itself be a key factor in determining self-awareness, because desire is a key aspect of self-awareness.
You have to be aware of yourself as a single identity and want to improve the condition of that identity for your own benefit... a robot doesn't do that on its own. It will perform the task for which it was designed and never question why.
A combat model doesn't suddenly decide it wants to read novels, and a cleaning model certainly doesn't want to paint.
As soon as that happens (because we've given the robots the option to do it), we will have to decide the issue. Someone already mentioned "The Measure of a Man", which is a good example... Data wants to do things beyond his programming, i.e. art, music, exploring the human condition. Being able to paint a picture has no bearing on his performance as a Starfleet officer, but he does so regardless, and with these acts he has stepped over the line from being a (extremely well-designed and capable) machine to something more.
Personally, as fascinating and cool as Data is, I'd not want machines like that to exist. Maybe that's cowardly or insecure of me, but a human I can beat, or be sure that someone else can... a robot has no real limits we can surpass. They process data at a rate no human will ever match, and in a few years or decades their physical bodies will surpass ours in agility, precision, endurance and strength.
My problem is that we can't influence how such a thing will develop... if it gains sentience, will it be a cool guy who's fun to hang out with, or will it decide I'm a useless waste of resources and bash my skull in?
Many SF stories explore these questions, and it's no coincidence they do: humans think about this, and even if technology hasn't yet caught up with SF, it will during our lifetimes.
I've seen early robots where people went nuts because one could move its fingers separately... now these things navigate unknown obstacles (awkwardly, but they do), and only a few years separate the two.
We will see footage of the first robot easily beating a human at basketball or cutting up vegetables perfectly. That's fine, but I don't want these robots getting ideas they shouldn't get.