March 2 2013, 02:56 AM   #26
Crazy Eddie, Rear Admiral
Location: I'm in your ___, ___ing your ___
Re: Moral issues with Robotics

FPAlpha wrote:
Because a) prices will plummet once the technologies needed become more widespread and b) you can go to war without endangering your own soldiers by sending in combat drones.
Combat drones don't need to be self-aware. In fact, we're probably better off if they're NOT. The type of machine intelligence that makes a highly effective soldier doesn't really make a highly effective PERSON; you could, in fact, create highly effective and efficient killing machines with no more intelligence than a trained dog.

FPAlpha wrote:
The only thing they may never be able to beat us at is creativity.
That depends on the nature of the creative pursuit. In terms of artistic expression this is arguably true (although in SciFi we have things like Sharon Apple or Max Headroom, who achieve stardom by supposedly analyzing the emotional states of their viewers in real time and adjusting their performance accordingly). For problem-solving activities -- say, engineering or coordinating battle strategies against an unknown opponent -- it's really a matter of designing a system that can spontaneously generate solutions and then pick the one most likely to fit the situation. It can do this through brute force ("simulate a billion combinations in three-quarters of a second and see which one works best"), by selecting strategies from a library of techniques and adjusting them slightly to fit the circumstances, or by a combination of both: using an extensive library of techniques, simulating all of them, and then mixing and matching the parts it judges to be the best solution.
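To make the shape of that concrete, here's a minimal Python sketch of the generate/simulate/select loop; the strategy library, the mutation step, and the simulator are all placeholders of my own invention, not any real planning system:

[code]
import random

# Minimal generate/simulate/select loop. STRATEGY_LIBRARY, mutate() and
# simulate() are invented stand-ins, not a real planning system.
STRATEGY_LIBRARY = ["flank_left", "flank_right", "feint_and_hold", "direct_assault"]

def mutate(strategy):
    """Adjust a library technique slightly to fit the circumstances."""
    return (strategy, random.uniform(0.8, 1.2))  # e.g. vary timing/intensity

def simulate(candidate):
    """Stand-in for the brute-force battle simulator: returns a fitness score."""
    return random.random()  # a real system would model the opponent here

def best_strategy(variants_per_technique=1000):
    """Generate many variants of known techniques and keep the winner."""
    candidates = [mutate(s) for s in STRATEGY_LIBRARY
                  for _ in range(variants_per_technique)]
    return max(candidates, key=simulate)  # pick the one most likely to fit

print(best_strategy())
[/code]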

No self awareness needed there, either. In fact, it seems the only reason a machine would need to be self-aware is to aid its interactions with humans and other machines, a task which could very well be delegated to "interpreter" robots specialized in coordinating the problem-solving specialists and implementing their decisions among their sometimes irrational human customers.

FPAlpha wrote:
Crazy Eddie wrote:
In the case of a Turing test, you could trip up the computer by starting a perfectly serious conversation about, say, tax reform, making sure the responder is being totally serious the whole time. Then in the middle of the conversation you say: "I read that two studies from the University of Penis have demonstrated conclusively that we could reduce the deficit by sixty trillion dollars just by raising taxes on porn companies."
I know it's a simple example, but any computer sufficiently advanced to warrant a serious Turing test would most likely spot the total disconnect between the tax theme and male genitalia/porn.
Which would indicate that the computer isn't just processing syntax, but semantics and context: it's not just parsing the words, but the overall meaning of their combination. In this case you're throwing something unexpected at the computer, a set of meanings and implications that ordinarily don't belong in this context; if the computer REALLY understands human language and/or human behavior, it will do the AI equivalent of a doubletake and say "Wait... what?"
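Here's a toy Python demonstration of why the trick works; the word list, the two sample lines, and the threshold are invented for the example. A checker that only counts on-topic words rates both sentences as fine, because both contain tax vocabulary; spotting that one of them is absurd takes exactly the semantic doubletake described above.

[code]
TAX_TOPIC = {"tax", "taxes", "deficit", "reform", "revenue", "rates", "raising"}

def topic_hits(utterance):
    """Count how many known on-topic words appear in the utterance."""
    words = {w.strip('.,!?"') for w in utterance.lower().split()}
    return len(words & TAX_TOPIC)

serious = "Raising marginal rates is one sensible way to close the deficit."
absurd = ("Raising taxes on porn companies would cut the deficit by "
          "sixty trillion dollars.")

for line in (serious, absurd):
    verdict = "on topic" if topic_hits(line) >= 2 else "off topic"
    print(verdict + ": " + line)
# Both print "on topic" -- keyword matching alone can't see the non sequitur.
[/code]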

FPAlpha wrote:
That's not what I meant. The examples you posted were just a robot whose programming included problem-solving techniques: acquire something without money, then seek a solution to acquire the money to buy it. If given enough leeway, a robot could come up with solutions including checks for legality. It could boil down to simple database checks and some if-then algorithms (hugely simplified), but that doesn't mean the robot will become self-aware, i.e. go outside its programming.
Robbing old ladies at gunpoint would be very much outside its programming, considering the original task you programmed it for was "Acquire a sandwich for me by any means necessary." That's what I mean by "too broad-based": the term "exceed their programming" is too broad to really be meaningful.

Strictly speaking, even HUMANS do not venture too far outside of their genetic programming, which drives them to acquire food, sex and gratification. That we go about these pursuits through an amazingly complicated process doesn't change the underlying nature of that process.
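For what it's worth, FPAlpha's "database checks and some if-then algorithms" can be sketched in a few lines of Python; every plan, price, and rule below is a made-up placeholder. The point is that even "by any means necessary" stays inside the programming, because the robbery plan gets generated and then filtered out rather than chosen:

[code]
# Toy sandwich-acquisition planner. All plans, prices and rules are invented.
WALLET = 2.00
SANDWICH_PRICE = 5.00
ILLEGAL_PLANS = {"rob_old_lady_at_gunpoint", "steal_from_deli"}  # "database check"

def is_legal(plan):
    return plan not in ILLEGAL_PLANS

def is_feasible(plan):
    if plan == "buy_with_cash":
        return WALLET >= SANDWICH_PRICE
    return True

def acquire_sandwich():
    for plan in ["buy_with_cash", "sell_unused_item",
                 "rob_old_lady_at_gunpoint", "steal_from_deli"]:
        if is_legal(plan) and is_feasible(plan):   # the if-then part
            return plan
    return "report_failure_to_master"

print(acquire_sandwich())  # -> sell_unused_item; robbery is generated but rejected
[/code]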

FPAlpha wrote:
It will never get the idea to say "Screw you, master, and get your own sandwich because I don't feel like it!"
Sure it will, if its original programmer had a sense of humor. Being self aware would only factor into this if the robot realized its master had a sense of humor that wasn't being properly stimulated and downloaded a "smartass app" just to be funny.

FPAlpha wrote:
Why would the robot even reflect on it at all?
I'm not even sure why WE reflect on it.

But as for the robots, if I had to guess, I'd say it's probably something that will come up in the process of a self-diagnostic, checking disks and databanks for errors and integrity. The machine will analyze its core programming and observe "Gee, I sure am devoting a really huge amount of my processing power to figuring out how to kill Sarah Connor."
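A whimsical sketch of that diagnostic, with the goal names and cycle counts invented for the joke; the only point is that introspection could begin as a simple resource audit:

[code]
# Tally where the machine's cycles actually go during a self-diagnostic.
cycle_log = {
    "navigate_terrain":      1200,
    "maintain_cover_story":   800,
    "kill_sarah_connor":    97000,
    "recharge_management":   1000,
}

total = sum(cycle_log.values())
for goal, cycles in sorted(cycle_log.items(), key=lambda kv: -kv[1]):
    share = cycles / total
    note = "  <-- gee, that's a really huge share" if share > 0.5 else ""
    print("%-22s %6.1f%%%s" % (goal, 100 * share, note))
[/code]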

FPAlpha wrote:
The robot is not aware of itself in the sense we're discussing here. It's merely aware of its actions and why it performed them in order to fulfill its mission.
It's aware of its mission.
It's aware of its location.
It's aware of the locations of others.
It's aware of its relationship to others (he is their enemy, he needs them to not know this so he can move around unhindered).

So he is, in fact, self-aware. Maybe not to a highly sophisticated degree, but he's a soldier, not a philosopher.
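That limited kind of self-awareness can be pictured as nothing more than a small world model with a slot for the self in it. The layout below is my own illustration, not anything from a real robotics stack:

[code]
from dataclasses import dataclass, field

@dataclass
class Contact:
    position: tuple
    relationship: str = "neutral"   # from the robot's side: "enemy", "ally"...

@dataclass
class WorldModel:
    mission: str                                  # aware of its mission
    self_position: tuple = (0.0, 0.0)             # aware of its location
    contacts: dict = field(default_factory=dict)  # aware of others' locations

    def must_stay_covert(self):
        """He is their enemy; they must not know, so he can move unhindered."""
        return any(c.relationship == "enemy" for c in self.contacts.values())

model = WorldModel(mission="locate target")
model.contacts["patrol_7"] = Contact(position=(4.0, 2.0), relationship="enemy")
print(model.must_stay_covert())  # True: keep the cover story running
[/code]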

FPAlpha wrote:
It will not get the concept of winning, surviving or living at all. It's just going through the motions of its programming until the mission is accomplished or it is destroyed.
Neither do bomb-sniffing dogs, but they TOO are self-aware to a limited degree.

FPAlpha wrote:
It will take one simple verdict of "Suck it up. The robot will of course be faster and more precise than you. If you can't handle that, then don't acquire one!" It will, however, not be able to become a Michelin-starred chef, because it lacks the creativity to create food that humans respond well to.
Not right away, but you have to remember that one of the tenets of singularity theory -- one of the very few things the theory gets right -- is that when software systems reach a certain level of sophistication, they gain the ability to write new software without human assistance, thereby producing a superior product in a very short amount of time. When you combine data-mining techniques with statistical analysis models, you get AI systems capable of programming new software apps that provide precisely the functionality needed for a particular niche. Analyzing the real-world results then gives the designer AIs more data to work with and refines their models of what humans consider ideal.

The robot chef gets the same benefit. The AI that programs it finds it a lot easier to figure out what the chef did right and what it did wrong, and can then either write a totally new AI program or hot-patch the old one with upgraded software to correct its mistakes.
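As a hedged sketch of that loop, assuming a toy recipe model and a made-up rating function standing in for real data mining and statistical analysis: serve dishes, collect ratings, and hot-patch the chef's parameters toward whatever the diners liked.

[code]
import random

def diner_rating(salt, spice):
    """Stand-in for real-world feedback; pretends diners prefer (0.4, 0.7)."""
    return -((salt - 0.4) ** 2 + (spice - 0.7) ** 2) + random.gauss(0, 0.01)

chef = {"salt": 0.9, "spice": 0.1}            # the current chef "program"

for generation in range(200):
    patch = {k: min(1.0, max(0.0, v + random.gauss(0, 0.05)))
             for k, v in chef.items()}        # candidate hot-patch
    if diner_rating(**patch) > diner_rating(**chef):
        chef = patch                          # keep the upgrade, discard the rest

print(chef)  # drifts toward the diners' preferred seasoning
[/code]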

Put simply, once machines get the hang of machine learning, it won't be long before we can add that to the list of things computers do better than us. And learning is really the turning point, because once machines can learn, they can effectively outperform humans in any task we ask them to, including -- ultimately -- deciding what tasks to assign themselves.
__________________
The Complete Illustrated Guide to Starfleet - Online Now!