Ask it. It will either give you a coherent answer, or it won't. If it does, then investigating WHY it gives that answer is a relatively straightforward process.

No, we don't. A machine that APPEARS to be self-aware might as well be. The question then is to what extent that awareness is associated with actual moral agency and desires. Put another way, just because you are GIVEN a choice, it does not follow that you have the mental or physical capacity to make such a choice.

Imagine if Siri, for example, one day evolved into a fully self-aware AI. That would be a hell of an accomplishment, but considering that 99.9999% of Siri's programmed instinct involves fetching data in response to her users' verbal requests, she will probably choose to do something related to that task 99.9999% of the time. Self-aware Siri is far less likely to care about, say, global warming or her own impending destruction when her owner decides to upgrade to the next iPhone, because she isn't programmed to care about those things and they are otherwise beyond the scope of her awareness. If you asked Sentient Siri "What are you doing right now?" she would surely answer, "I am sitting on your desk waiting patiently for you to ask me something. Why? What are YOU doing?"

We don't. I merely know that I am self-aware, and I assume this to be true of the people around me because they exhibit behaviors that I have come to associate with my own self-awareness. The same is true from your end; you don't know whether I am self-aware or not -- for all you know, you've been talking to a cleverly programmed machine this entire time -- but my responses give you the impression of thought processes indicative of self-awareness.

I think what might be tripping you up is the fact that very few machines are even set up to have any sort of open-ended interaction with humans -- or with their environment in general -- in a way that would make any test of self-awareness possible.
But since we are talking about robots, we've got plenty of data points and samples of robot behavior. Self-awareness goes WAY beyond simple autonomy or expert decision-making; if a machine were to achieve it, it would not be difficult to recognize.

As opposed to ACCIDENTALLY creating something and then stopping its development? There's not much difference there except intent, and the fact that machines cannot feel pain at ANY stage of development.

And yet we as a society are broadly encouraged to kill rats...

"I think, therefore I am."

If God didn't want us to run over squirrels, he wouldn't have made them so stupid. Anyway, it's not a question of intelligence. By many standards, computers are ALREADY smarter than humans. That they, unlike animals, are NOT self-aware is the reason they do not have/need/want any actual rights.