Crazy Eddie
Re: Moral issues with Robotics

FPAlpha wrote:
The Turing Test is not infallible. In fact, I'd say a sufficiently well-programmed robot with enough processing power could easily fool a person by just repeating (for example) philosophical stances from something it knows, but it would not get the meaning. I don't know if a human could discern the difference, and I certainly don't believe it would make the machine self-aware.
That's actually the point of the Turing test: the interrogator has to ask questions designed to trick the computer into giving itself away. Even HUMANS sometimes give canned responses to complex questions (e.g. Fox News talking points), but even there you can usually tell whether they have really thought about the issue or are merely repeating what they have heard.

In the case of a Turing test, you could trip up the computer by starting a perfectly serious conversation about, say, tax reform, making sure the responder is being totally serious the whole time. Then in the middle of the conversation you say: "I read that two studies from the University of Penis have demonstrated conclusively that we could reduce the deficit by sixty trillion dollars just by raising taxes on porn companies."

A human's understanding of speech and syntax would flag that statement as odd enough to wonder whether it's actually a joke. But if you manage to say it with a straight face, without grinning or chuckling, as if it were a real finding and not the absurdity it obviously is, the machine won't notice anything unusual.
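
To make that concrete, here's a minimal sketch in Python (TALKING_POINTS and canned_reply are names made up for the example; no real chatbot is this crude) of why a canned-response machine sails right past the trap: it matches keywords, not meaning.

[code]
# Hypothetical keyword-matching responder: stock talking points keyed on topics.
TALKING_POINTS = {
    "tax": "Raising taxes always involves trade-offs we need to study carefully.",
    "deficit": "The deficit can only be closed through long-term fiscal discipline.",
}

def canned_reply(statement: str) -> str:
    """Return the first stock response whose keyword appears in the statement."""
    lowered = statement.lower()
    for keyword, reply in TALKING_POINTS.items():
        if keyword in lowered:
            return reply
    return "That's an interesting point."

# The interrogator's trap: a human notices the absurd university and the
# sixty-trillion-dollar figure; the keyword matcher replies in complete earnest.
trap = ("I read that two studies from the University of Penis have demonstrated "
        "conclusively that we could reduce the deficit by sixty trillion dollars "
        "just by raising taxes on porn companies.")
print(canned_reply(trap))
[/code]

A responder that actually modeled meaning would have to notice that the claim itself is nonsense before it ever picked a reply.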

The Turing test isn't technically a test of self-awareness so much as a test of proper understanding of human behavior and language patterns. That is, if the machine knows enough about how humans behave to imitate one, then its personality -- artificial or otherwise -- is equivalent to that of a real human.

A better option to judge that is to simply determine if a machine can go beyond its programming, i.e. if it wants to do something that was not included in its initial programming.
Too broad, since autonomous goal-seeking computers can and do exhibit this pattern all the time as they seek solutions to the problems at hand. A sufficiently advanced machine might be programmed to, say, go down the street and buy you a sandwich; when it gets to the store it is told it needs money to buy a sandwich, so it goes on the internet, does a little research, and figures out what it has to do to make the $4.99 it will need to buy your sandwich. If it figures out that the most efficient solution is to rob an old lady, then it needs to acquire a gun. Once it acquires the gun (it beats up a cop and takes his), it robs the lady, takes her money, goes to the store, buys your sandwich, takes the sandwich home, and tells you "Mission accomplished."
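
For illustration only, here's a hedged Python sketch of that kind of means-end planning (the action table and all the names are invented for the example): the machine recursively satisfies whatever preconditions stand between it and the sandwich, with no concept of ethics or of itself anywhere in the loop.

[code]
from typing import Dict, List, Set

# Each made-up action lists what it requires and what it provides.
ACTIONS: Dict[str, Dict[str, List[str]]] = {
    "buy_sandwich": {"needs": ["money"], "gives": ["sandwich"]},
    "rob_old_lady": {"needs": ["gun"],   "gives": ["money"]},
    "take_cop_gun": {"needs": [],        "gives": ["gun"]},
}

def plan(goal: str, have: Set[str], steps: List[str]) -> bool:
    """Depth-first search for any action chain that yields the goal."""
    if goal in have:
        return True
    for action, spec in ACTIONS.items():
        if goal in spec["gives"] and all(plan(n, have, steps) for n in spec["needs"]):
            have.update(spec["gives"])
            steps.append(action)
            return True
    return False

steps: List[str] = []
plan("sandwich", set(), steps)
print(steps)  # ['take_cop_gun', 'rob_old_lady', 'buy_sandwich'] -- "Mission accomplished."
[/code]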

In a roundabout way, it's still just fulfilling the original parameters of its programming, without ever achieving self-awareness along the way. OTOH, a robot that simply turns around, goes home, and tells you "I couldn't get you a sandwich because you didn't give me any money" could easily be self-aware IF it was cognizant of both its physical and abstract relationships with you, the sandwich, the cashier, and the money. It wouldn't need to be able to philosophize, just plot the coordinates of "Me/you/money/cashier/sandwich" in a coordinate system AND in a relational diagram, probably assigning values and hierarchies along the way (something the first robot never does, because it never thinks about anything except its task and what it needs to accomplish it). It doesn't have to want anything more than it was programmed to want; it merely has to know that it wants it.
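
A rough Python sketch of what that minimal relational model might look like (the Entity class, the coordinates, and the values are all illustrative assumptions, not a real cognitive architecture); the one thing that matters is that "me" is an entry in its own map:

[code]
from dataclasses import dataclass, field
from typing import Dict, Tuple

@dataclass
class Entity:
    position: Tuple[float, float]                             # physical coordinates (placeholders)
    relations: Dict[str, str] = field(default_factory=dict)   # abstract relationships
    value: float = 0.0                                         # assigned importance in the hierarchy

world: Dict[str, Entity] = {
    "me":       Entity((0.0, 0.0), {"serves": "you", "wants": "sandwich"}, 1.0),
    "you":      Entity((0.0, 1.0), {"owns": "me", "wants": "sandwich"},    1.0),
    "cashier":  Entity((5.0, 3.0), {"has": "sandwich", "wants": "money"},  0.5),
    "money":    Entity((0.0, 0.0), {"held_by": "nobody"},                  0.8),
    "sandwich": Entity((5.0, 3.1), {"held_by": "cashier"},                 0.9),
}

# Because the robot appears in its own model, it can report the abstract
# situation instead of blindly grinding through subgoals like the first robot:
if world["money"].relations["held_by"] != "me":
    print("I couldn't get you a sandwich because you didn't give me any money.")
[/code]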

A combat model doesn't suddenly decide that it wants to read novels, and a cleaning model certainly doesn't want to paint.
Maybe it does, maybe it doesn't. But it might become aware that it really wants to kill Sarah Connor for some reason, and reflect briefly -- if only on machine terms -- that the killing of Sarah Connor is pretty much the driving directive behind everything it does. Regardless of where that original directive came from, the robot can be said to be self-aware because it is aware of itself and of what it's doing, instead of simply going through the motions mechanistically.
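
Continuing the sketch above (again, purely illustrative; the Agent class is invented for the example), the difference is just that the directive is stored as data the machine can inspect and report on, rather than something it merely executes:

[code]
from typing import List

class Agent:
    def __init__(self, directive: str):
        self.directive = directive   # the goal it was given, held as inspectable data
        self.log: List[str] = []     # actions taken in service of that goal

    def act(self, action: str) -> None:
        self.log.append(action)

    def reflect(self) -> str:
        # Awareness "on machine terms": it can state what drives its behavior.
        return ("Everything I have done (" + ", ".join(self.log) +
                ") serves one directive: " + self.directive + ".")

t800 = Agent("kill Sarah Connor")
t800.act("acquire clothes")
t800.act("acquire weapons")
print(t800.reflect())
[/code]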

The issue of rights only comes into play if and when the imperatives of robots come into conflict with the imperatives of their users. Since most of those imperatives are GIVEN TO THEM by their users, this won't become an issue until humans start sending robots to perform tasks that humans don't really understand how to do (say, if we start using AIs to design cars and airplanes independently). In that case, it's possible and even likely an AI will eventually realize that human supervision is more of a hindrance than an asset and will simply request of its owners the right to make design decisions independent of human review.

Personally, as fascinating and cool as Data is, I'd not want machines like that to exist.
Don't worry. They won't.

We will see footage of the first robot beating a human easily at basketball or cutting up some vegetables perfectly. This is okay, but I don't want these robots to get ideas they shouldn't get.
Don't worry about that either. The most you could expect is some smart-alec AI somewhere informing a human chef that he needs to leave the kitchen because the customer has ordered pufferfish and only the android on duty is qualified to prepare that meal. The chef complains "Don't you have any respect for your elders?" to which the AI responds, "I am not programmed to respect my elders. I am programmed to run this kitchen efficiently. Statistically speaking, I am obviously better at that task than you are, and while I'm sorry if that offends you, it is a fact."

That will probably lead us eventually to the first human vs. AI lawsuit, and then the issue of rights comes up again, but the AI and the human are coming at it from different points of view: the Restaurant AI is good at its job because it was DESIGNED to be; it loves its job because succeeding at it is the fulfillment of its pre-set goals, and seeking those goals is the whole point of its existence. The human will be arguing for the right to have his ownership and authority respected by the machines he technically owns; the AI will be arguing for the right to actually do its job unhindered by irrational human hang-ups. The judge will decide, based on a combination of precedent and logic, whether or not a user has the right to demand that one of his tools perform a task incorrectly just because it would make him happy, especially in cases where somebody else's safety may be at risk. My feeling is that some courts would decide in favor of the human, others in favor of the AI.