March 2 2013, 12:11 AM   #25
FPAlpha
Re: Moral issues with Robotics

Deckerd wrote:
It's 'regardless'. Aside from that I doubt we'll see robots which surpass our capabilities, Skynet notwithstanding, because there isn't any need for them. I expect universities will push the envelope as far as they can but what's the commercial benefit? Soldiers? Why pay a couple of million per unit instead of something that walks voluntarily into the recruiting office?

Because a) prices will plummet once the needed technologies become widespread, and b) you can go to war without endangering your own soldiers by sending in combat drones. You won't lose votes by losing a few machines; a really ruthless politician might even spin it as creating manufacturing jobs building replacement drones.

There are already robots that surpass our capabilities: no human can match the precision of a correctly programmed and designed robot (just look at the welding robots in car manufacturing).
The only thing we are still better at is coordinated action, something as simple as walking, which we do automatically, including detecting obstacles and avoiding or balancing over them.

Robots still have a hard time identifying obstacles and coordinating their limbs efficiently to navigate around them, but there are constant improvements. When it comes to physical action they will surpass us within our lifetime.

The one thing they may never beat us at is creativity: that undefinable spark that lets people like Beethoven or Van Gogh create magic, or someone like Hawking unravel the mysteries of the universe. With art this is admittedly controversial, as shown by an experiment in which a monkey was let loose with paints on a blank canvas and art experts later judged the "picture" a masterpiece. So a robot could be designed to paint by emulating existing art styles, and people would read their own meanings into the result, but that wouldn't be creativity.

In the case of a Turing test, you could trip up the computer by starting a perfectly serious conversation about, say, tax reform, making sure the responder is being totally serious the whole time. Then in the middle of the conversation you say: "I read two studies from the University of Penis that demonstrated conclusively that we could reduce the deficit by sixty trillion dollars just by raising taxes on porn companies."
I know it's a simple example, but any computer advanced enough to warrant a serious Turing test would most likely spot the total disconnect between the tax topic and male genitalia/porn. Or the interviewer could ask the same question of dumb people and get the same result (I've known quite a few people totally unable to grasp the concept of sarcasm or irony).

No computer today could pass a Turing test. Today's computers are at best huge databases with complicated programs that regulate how to process that information. This is why computers can play and beat chess grandmasters: it's not genius play but the ability to calculate a huge number of moves in advance and pick the best option, not because they are inherently able to pick it but because the programmers told them what to look for in a game of chess.
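That "calculate ahead and pick the best option" idea can be sketched in a few lines of minimax search. This is a toy stand-in, not chess: a simple take-the-last-stick game, with everything here invented for illustration.

```python
# Minimal minimax sketch: the machine's "skill" is exhaustive lookahead
# plus a win/lose rule the programmer supplied, not insight.
# Toy game (stand-in for chess): players alternately take 1-3 sticks
# from a pile; whoever takes the last stick wins.

def best_move(pile, maximizing=True):
    """Return (score, move); score is +1 if the root player wins."""
    if pile == 0:
        # The previous player took the last stick, so the player
        # whose turn it now is has already lost.
        return (-1 if maximizing else 1), None
    best = None
    for take in (1, 2, 3):
        if take > pile:
            break
        score, _ = best_move(pile - take, not maximizing)
        if best is None or (maximizing and score > best[0]) or \
           (not maximizing and score < best[0]):
            best = (score, take)
    return best

score, move = best_move(10)
print(move)  # with 10 sticks the winning opening is to take 2
```

The program "knows" nothing about the game beyond the rules and the scoring the programmer hard-coded; it just enumerates every line of play, which is exactly the point about chess engines above.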

Too broadbased, since autonomous goal-seeking computers can and do exhibit this pattern all the time as they seek solutions to the problems at hand. A sufficiently advanced machine might be programmed to, say, go down the street and buy you a sandwich; when it gets to the store it is told it needs money to buy a sandwich, so it goes on the internet, does a little research and figures out what it has to do to make the $4.99 it will need to buy your sandwich. If it figures out the most efficient solution is to rob an old lady, then it needs to acquire a gun. Once it acquires the gun (beats up a cop and takes it) it robs a lady, takes her money, goes to the store, buys your sandwich, takes the sandwich home and tells you "Mission accomplished."

In a roundabout way, it's still just fulfilling the original parameters of its programming, (snip)
That's not what I meant. The example you posted was just a robot whose programming included problem-solving techniques: acquire something, lack the money, then seek a way to acquire the money to buy it. Given enough leeway, a robot could come up with solutions that include checks for legality; it could boil down to simple database lookups and some if-then rules (hugely simplified). But that doesn't mean the robot has become self-aware, i.e. gone outside its programming. It will never get the idea to say "Screw you, master, get your own sandwich, because I don't feel like it!"
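The "database lookups and some if-then rules" legality check could look something like this. Every name, price, and rule below is made up purely for illustration; it is a sketch of the idea, not a real planner.

```python
# Hedged sketch: a goal-seeking planner that filters candidate actions
# against a hard-coded legality table before choosing one. Robbing the
# old lady is the fastest way to the money, but the lookup vetoes it.

LEGALITY_DB = {
    "sell_old_electronics": True,
    "do_paid_survey": True,
    "rob_old_lady": False,
    "steal_from_register": False,
}

# candidate actions: (name, money raised in $, time cost in minutes)
CANDIDATES = [
    ("rob_old_lady", 5.00, 10),
    ("steal_from_register", 5.00, 15),
    ("do_paid_survey", 5.00, 45),
    ("sell_old_electronics", 8.00, 60),
]

def plan_funding(needed):
    """Pick the quickest action that raises enough money AND is legal."""
    legal = [(name, cash, mins) for name, cash, mins in CANDIDATES
             if cash >= needed and LEGALITY_DB.get(name, False)]
    if not legal:
        return None  # no acceptable plan; report failure, don't improvise
    return min(legal, key=lambda a: a[2])[0]

print(plan_funding(4.99))  # -> do_paid_survey
```

Note that nothing here resembles awareness: the "ethics" is just a table the programmer filled in, and the robot can never decide to skip the sandwich run altogether.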

Maybe it does, maybe it doesn't. But it might become aware that it really wants to kill Sarah Conner for some reason, and reflect briefly -- if only on machine terms -- that the killing of Sarah Conner is pretty much the driving directive behind everything it does. Regardless of where that original directive came from, the robot can be said to be self-aware because it is aware of itself and what it's doing, instead of simply going through the motions mechanistically.
Why would the robot even reflect on it at all? In this case the logic is as simple as your statement below: the mission is to kill Sarah Connor, but the robot arrives unarmed and unclothed. To blend in with the population and draw less attention, the first step is to acquire inconspicuous clothing and then weapons (even though it's perfectly capable of killing with its bare hands). The robot is not aware of itself in the sense we're discussing here; it's merely aware of its actions and why it took them in order to fulfill its mission.

It will never ask itself why Sarah Connor or John Connor needs to die beyond the fact that its side needs to win; it doesn't grasp the concepts of winning, surviving or living at all. It's just going through the motions of its programming until the mission is accomplished or it is destroyed.

That will probably lead us eventually to the first human vs. AI lawsuit and then the issue of rights comes up again, but the AI and the human are coming at it from different points of view: the Restaurant AI is good at its job because it was DESIGNED to be; it loves its job because succeeding at its job is the fulfillment of its pre-set goals, and seeking those goals is the whole point of its existence. The human will be arguing for the right to have his ownership and authority respected by the machines that he technically owns, the AI will be arguing for the right to do actually do its job unhindered by irrational human hangups. The judge will decide based on a combination of precedent and logic whether or not a user has the right to demand one of his tools perform a task incorrectly just because it would make him happy, especially in cases where somebody else's safety may be at risk. My feeling is that some courts would decide in favor of the human, others would decide in favor of the AI.
That's probably more a case of sensitive programming and human ego. As I said before, we will see robots that perform menial tasks more efficiently and better than any human: cleaning, simple construction, maybe even combat (a robot never tires and can achieve greater weapon precision than any human).
It will take one simple verdict of "Suck it up. The robot will of course be faster and more precise than you; if you can't handle that, don't acquire one!" It will, however, never become a Michelin-starred chef, because it lacks the creativity to create food that humans respond to.
It may be able to cook a steak perfectly, given the right tools to measure the steak constantly and stop cooking when certain criteria are met, but it won't be able to take a totally wacky assortment of ingredients that have never been combined before and turn it into a meal people will talk about.
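The "measure constantly, stop when the criteria are met" part really is just a control loop. Here is a minimal sketch with a simulated probe thermometer; the target temperature, pan temperature, and crude thermal model are all invented numbers, not cooking advice.

```python
# Hedged sketch of criterion-based cooking: heat until the measured
# core temperature reaches a target, then stop. The heating model is
# a deliberately crude stand-in for a real sensor reading.

TARGET_CORE_C = 54.0   # assumed medium-rare core temperature
PAN_C = 200.0          # assumed pan temperature

def cook(core_c=5.0, step_s=1.0):
    """Simulate heating until the core reaches the target, then stop."""
    elapsed = 0.0
    while core_c < TARGET_CORE_C:
        # crude Newton-style heating: core creeps toward pan temperature
        core_c += 0.02 * (PAN_C - core_c) * step_s
        elapsed += step_s
    return core_c, elapsed

core, secs = cook()
print(f"stop at {core:.1f} C after {secs:.0f}s")
```

Which is exactly the distinction above: hitting a numeric target is mechanical, while inventing a dish worth talking about is not reducible to a loop like this.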
"Chewie, we're home.."

Last edited by FPAlpha; March 2 2013 at 12:43 AM.