Old March 1 2013, 06:24 AM   #21
Crazy Eddie
Re: Moral issues with Robotics

Tiberius wrote:
newtype_alpha wrote:
"I think, therefore I am." If the robot could be shown to be capable of "wanting" anything at all, those desires should be taken into consideration. If it lies about being alive to protect itself, we'd have to examine why it wants to protect itself.
But how could we tell?
Ask it.

It will either give you a coherent answer, or it won't. If it does, then investigating WHY it gives that answer is a relatively straightforward process.

Still, I think my original point remains. We'd need some way to distinguish between a robot that is genuinely self-aware and one that only appears to be.
No, we don't. A machine that APPEARS to be self-aware might as well be. The question then is to what extent that awareness is associated with actual moral agency and desires.

Put another way, just because you are GIVEN a choice, it does not follow you have the mental or physical capacity to make such a choice. Imagine if Siri, for example, one day evolved into a fully self-aware AI. That'd be a hell of an accomplishment, but considering that 99.9999% of Siri's programmed instinct involves fetching data from verbal requests input from her users, she will probably choose to do something related to that task 99.9999% of the time. Self-aware Siri is far less likely to care about, say, global warming or her own impending destruction when her owner decides to upgrade to the next iPhone, because she isn't programmed to care about those things and they are otherwise beyond the scope of her awareness. If you asked Sentient Siri "What are you doing right now?" she would surely answer, "I am sitting on your desk right now waiting patiently for you to ask me something. Why? What are YOU doing?"

But how do we know that any other person is self-aware?
We don't. I merely know that I am self-aware, and I assume this to be true of the people around me because they exhibit behaviors that I have come to associate with my own self-awareness. The same is true from your end; you don't know whether I am self-aware or not -- for all you know, you've been talking to a cleverly-programmed machine this entire time -- but my responses give you the impression of thought processes indicative of self-awareness.

I think what might be tripping you up is the fact that very few machines are even set up to have any sort of open-ended interactions with humans -- or their environment in general -- in a way that would make any sort of test of self-awareness possible. But since we are talking about robots, we've got plenty of data points and samples of robot behavior. Self-awareness goes WAY beyond simple autonomy or expert decision-making; if a machine were to achieve this, it would not be difficult to recognize.

I think it's a little bit different. In this case we're dealing with deliberately creating something with the intention of stopping its development.
As opposed to ACCIDENTALLY creating something and then stopping its development? There's not much difference there except intent, and the fact that machines cannot feel pain at ANY stage of development.

Ah, but they intentionally commit crimes against society. Rats generally do not.
And yet we as a society are broadly encouraged to kill rats...

How do you determine self awareness?
"I think, therefore I am."

"Oh, come on, Bob! I don't know about you, but my compassion for someone is not limited to my estimate of their intelligence!"
If God didn't want us to run over squirrels, he wouldn't have made them so stupid.

Anyway, it's not a question of intelligence. By many standards, computers are ALREADY smarter than humans. The fact that they, unlike animals, are NOT self-aware is the reason they do not have, need, or want any actual rights.