You'd have to ask the robots.
And no, I'm not being sarcastic.
But could we trust their answer? If the robot says it is alive, how could we tell if it actually means it or is just saying it so we don't disassemble it?
"I think, therefore I am." If the robot could be shown to be capable of "wanting" anything at all, those desires should be taken into consideration. If it lies about being alive to protect itself, we'd have to examine why it wants to protect itself.
And no, you can't just cheat and program a computer to say "Please don't dismantle me." It's more complicated than that.
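To make the point concrete, here's a minimal sketch in Python of why the scripted plea proves nothing. Both robots below say the exact same thing; all names and structure here are hypothetical, purely for illustration:

```python
# A minimal sketch of why the "cheat" proves nothing. Both robots emit the
# same plea; only one emits it because the plea serves an internal goal it
# also acts on elsewhere. Everything here is a toy, not a real architecture.

def scripted_robot(prompt: str) -> str:
    # The cheat: a hardcoded string. There is no state behind it, so the
    # output is zero evidence of the robot wanting anything.
    return "Please don't dismantle me."

class GoalDrivenRobot:
    """Toy agent with an explicit, inspectable self-preservation drive."""

    def __init__(self) -> None:
        self.drives = {"self_preservation": 0.9}  # internal state

    def respond(self, prompt: str) -> str:
        # The plea is produced only when the input threatens a goal the
        # agent actually holds, i.e. it is caused by the drive, not by
        # the prompt matching a canned reply.
        if "dismantle" in prompt.lower() and self.drives["self_preservation"] > 0.5:
            return "Please don't dismantle me."
        return "Acknowledged."

# Identical outputs, different causes; the moral question is about the cause.
print(scripted_robot("I'm going to dismantle you."))
print(GoalDrivenRobot().respond("I'm going to dismantle you."))
```

The outputs are identical; the difference lies in why each was produced, which is exactly what examining the robot's "wants" would have to uncover.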
We face the problem of how to determine this.
Same way we do it with people. Psychologists have all kinds of tests to assess mental functioning and cognitive awareness: whether or not a person understands right and wrong, understands what's happening to them, and is aware of themselves or others. For machines, this is theorized as involving some sort of Turing Test.
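For anyone unfamiliar with it, here's a hedged sketch of the test's basic structure: a judge sees only text and must guess which of two respondents is the machine. The respondents below are stand-in functions, not real models; everything here is hypothetical:

```python
# Sketch of the Turing Test's structure: the judge sees only text and must
# guess which channel hides the machine. Respondents are placeholders.
import random

def human_respondent(question: str) -> str:
    return input(f"(human) {question}\n> ")  # a real person types an answer

def machine_respondent(question: str) -> str:
    return "I'd rather not say."             # placeholder "AI"

def turing_test(questions: list[str]) -> bool:
    """Return True if the judge misidentifies the machine as human."""
    # Randomize which channel hides the machine so the judge can't cheat.
    channels = [human_respondent, machine_respondent]
    random.shuffle(channels)

    for q in questions:
        for label, respondent in zip("AB", channels):
            print(f"[{label}] {respondent(q)}")

    guess = input("Which respondent is the machine, A or B?\n> ").strip().upper()
    machine_channel = "AB"[channels.index(machine_respondent)]
    return guess != machine_channel  # machine "passes" if the judge guesses wrong
```

The point of the blind setup is that the judge's verdict depends only on behavior, which is also the test's well-known limitation: passing says nothing about what, if anything, is going on inside.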
There's also another problem. What if I create a robot which will clearly reach this point, but I include a chip or something that will shut it down BEFORE it reaches that point? Am I acting immorally?
Only to the extent that abortion is immoral. That's a whole different can of worms.
While I agree that animal rights is somewhat arbitrary (as illustrated by your rat trap), I think the issue is that it is wrong to be cruel to an animal because it can feel pain.
Terrorists can feel pain too; why isn't it wrong to inflict pain on THEM?
Again, it's the issue of rights, and the extent to which the desires of a living thing take precedence over the desires of others. Certain creatures -- and, historically, certain PEOPLE -- have been placed in a position of such low importance that the majority has no reason to care about their desires, and inflicts massive harm on them whenever it is convenient. In this context, discussing potential robot rights is hardly an academic issue, since we can barely maintain a consistent set of HUMAN rights.
Why should self awareness count as the defining factor rather than consciousness?
Because a being that is not aware of itself doesn't have coherent desires related to itself, and therefore has no agency worth considering. Consciousness is ultimately just a sophisticated form of data processing and doesn't mean much in and of itself.
If we say that squirrels are conscious but not self aware, does that make it okay to intentionally run them over?
Squirrels ARE conscious and ARE somewhat self aware. For that reason, intentionally running them over is a dick thing to do. But they ARE squirrels; they're not very smart, and their scope of moral agency is limited to things that are virtually inconsequential in the human world, therefore we lack a strong moral imperative to AVOID running them over if they happen to be running across the road in the paths of our cars.