March 2 2013, 12:45 PM, #29
FPAlpha, Vice Admiral (Mannheim, Germany)
Re: Moral issues with Robotics

Quote:
Which would indicate that the computer isn't just processing syntax, but semantics and context. It's not just processing the words, but the overall meanings in the combinations of them. In this case, you're throwing something unexpected at the computer, a set of meanings and implications that ordinarily don't belong in this context; if the computer REALLY understands human language and/or human behavior, it will do the AI equivalent of a double take and say "Wait... what?"
Not necessarily.. I'm just saying that a computer good enough to even warrant a Turing test will easily spot the disconnect and sudden change.. it knows that taxes and dicks usually have no connection, and it will either ask what the interviewer means or simply point out the disconnect and not go into it any further.
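To illustrate what I mean (a toy sketch of my own, nothing from any real chatbot.. the word-overlap test and the stop-word list are invented for the example), even a crude program can flag that kind of topic jump:

[CODE]
# Toy sketch: flag a sudden topic disconnect by checking whether the new
# question shares any meaningful words with the conversation so far.

STOPWORDS = {"the", "a", "an", "is", "are", "do", "did", "you", "your",
             "how", "what", "about", "on", "let's"}

def tokens(text):
    """Lowercase word set, minus common filler words."""
    return {w.strip(".,?!").lower() for w in text.split()} - STOPWORDS

def respond(history, question):
    context = set().union(*(tokens(turn) for turn in history)) if history else set()
    if context and not (tokens(question) & context):
        # The AI equivalent of a double take:
        return "Wait... what? That has nothing to do with what we were discussing."
    return "(normal answer generation would happen here)"

print(respond(["Let's talk about your taxes.", "Did you file on time?"],
              "How big is your dick?"))
[/CODE]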

Quote:
Robbing old ladies at gunpoint would be very much outside its programming, considering the original task you programmed it for was "Acquire a sandwich for me by any means necessary." That's what I mean by "too broad based;" that is to say, the term "exceed their programming" is too broad to really be meaningful.

Strictly speaking, even HUMANS do not venture too far outside of their genetic programming which drives them to acquire food, sex and gratification. That we go about these pursuits in an amazingly complicated process doesn't change the underlying nature of that process.
Not really in my line of thought.. I mentioned problem-solving techniques, and those would include this in the programming. I could (well, if I could program anything besides my digital video recorder) give a computer the means to solve unexpected situations: it needs to acquire something but doesn't have money, so it lists options for where and how to get money.. work, robbing places or people that have money, begging etc. It then chooses the option that gets it money in the shortest time. It's just going through the motions without any thought to morality, legality etc (unless I include those in its programming).
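Something like this toy planner is what I have in mind (purely illustrative.. the options, the time estimates and the forbid_illegal switch are all made up):

[CODE]
# Toy sketch: enumerate ways to get money, score purely by estimated time,
# pick the fastest. Morality/legality only matters if explicitly programmed.

OPTIONS = [
    # (action, estimated_hours, is_legal)
    ("work a shift",      4.0, True),
    ("beg on the street", 2.0, True),
    ("rob a passerby",    0.5, False),
]

def choose(options, forbid_illegal=False):
    candidates = [o for o in options if o[2]] if forbid_illegal else options
    return min(candidates, key=lambda o: o[1])  # shortest time wins

print(choose(OPTIONS))                       # ('rob a passerby', 0.5, False)
print(choose(OPTIONS, forbid_illegal=True))  # ('beg on the street', 2.0, True)
[/CODE]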

And humans venture FAR outside of their genetic programming, which covers the satisfaction of basic needs like food, shelter, procreation and survival. We create art just for the sake of it, because we enjoy it. We write stories, we go out into space just because we want to etc.. these things have nothing to do with satisfying our basic needs, and that puts us above animals, which can't make that step.

Quote:
Sure it will, if its original programmer had a sense of humor. Being self aware would only factor into this if the robot realized its master had a sense of humor that wasn't being properly stimulated and downloaded a "smartass app" just to be funny.
You missed my point.. I meant a robot will never get the idea to refuse its controller for selfish reasons, because that would require it to be aware of itself as a person with desires and needs, one of which would be the motivation to do something or not do it. We can tell our boss to go fuck himself because we don't feel like following orders, but a robot can't and won't, because it has no reason to (unless you intentionally program it that way, to make it appear to have human tendencies).
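In code terms, a toy illustration (hypothetical, obviously): the only way this thing ever says no is if a human put the rule there first.

[CODE]
# Toy sketch: an order-following loop with no desires and no motivation.
# Refusal only happens if a programmer explicitly added a rule for it.

REFUSAL_RULES = []  # e.g. ["harm a human"] -- empty unless a human adds one

def execute(order):
    for rule in REFUSAL_RULES:
        if rule in order:
            return f"Refused: matches programmed rule '{rule}'."
    return f"Executing: {order}"  # no "I don't feel like it" branch exists

print(execute("clean the apartment"))   # always complies
print(execute("work unpaid overtime"))  # a human might push back; this can't
[/CODE]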

Quote:
But as for the robots, if I had to guess, I'd say it's probably something that will come up in the process of a self-diagnostic, checking disks and databanks for errors and integrity. The machine will analyze its core programming and observe "Gee, I sure am devoting a really huge amount of my processing power to figuring out how to kill Sarah Connor."

Quote:
The robot is not aware of itself in the meaning we discuss here.. it's merely aware of its actions and why it did them in order to fulfill its mission.

It's aware of its mission.
It's aware of its location.
It's aware of the locations of others.
It's aware of its relationship to others (he is their enemy, he needs them to not know this so he can move around unhindered).

So he is, in fact, self aware. Maybe not to a highly sophisticated degree, but he's a soldier, not a philosopher.
But it is NOT SELF-AWARE in the human sense, i.e. it can't decide whether what it does is right or wrong, because that would require critical thinking and reflection about itself. It goes about its business due to its programming and won't ever decide to just stop, because that's not in its programming. It is, as you wrote, aware of its surroundings, but modern computer systems can do that too.. hell, my smartphone is aware of its location thanks to its GPS receiver, but I wouldn't call it aware. It just evaluates things based on the input it gets.

It can track the time it devoted to a certain mission, but that's just another set of hard data, information without further meaning for it.. a human might get to thinking "Damn.. I sure spent a lot of time on this. Is it really worth it?", but a robot would do the equivalent of a shrug and go about its business.
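A toy sketch of the difference (all names and values invented.. the coordinates are just my home town): the machine "knows" plenty of things as state variables, and mission time is one more number among them.

[CODE]
# Toy sketch: situational awareness as plain state. The machine "knows" its
# mission, position and elapsed time the way a phone knows its GPS fix --
# nothing here ever asks whether the mission is worth the time.

from dataclasses import dataclass, field
import time

@dataclass
class InfiltratorState:
    mission: str = "locate target"
    position: tuple = (49.49, 8.47)   # it "knows" where it is
    enemy_of_humans: bool = True      # it "knows" its relationship to others
    started: float = field(default_factory=time.monotonic)

    def mission_hours(self):
        # A human looking at this number might ask "is it really worth it?";
        # the machine just returns it and goes about its business.
        return (time.monotonic() - self.started) / 3600.0

state = InfiltratorState()
print(state.mission, state.position, state.mission_hours())
[/CODE]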

Quote:
Not right away, but you have to remember that one of the tenets of singularity theory -- one of the very few things the theory gets right -- is that when software systems reach a certain level of sophistication, they gain the ability to write new software without human assistance, thereby producing a superior product in a very short amount of time. When you combine data mining techniques with statistical analysis models, you get AI systems that are capable of programming new software apps that provide precisely the functionality needed for a particular niche. Analyzing the real-world results gives the designer AIs more data to work with and refine their models of what humans consider ideal.

The robot chef has the same benefit. The AI that programs him finds it a lot easier to figure out what the chef did wrong and what it did right and then either write a totally new AI program or hot-patch the old one with upgraded software to correct its mistakes.

Put simply, once machines get the hang of machine learning, it won't be long before we can add that to the list of things computers do better than us. And learning is really the turning point, because once machines can learn, they can effectively outperform humans in any task we ask them to, including -- ultimately -- deciding what tasks to assign themselves.
That's the core of the problem.. to what degree do we allow advanced computer systems to act without our direct control? At what point do proto-AIs cross the line and become self-aware?

The point where AIs do things totally unrelated to their initial task, just because they wanted to see if they could and how well?

This was also one of the points of Terminator 2, after they switched him into learning mode.. at some point he understood human behaviour and realized his own limits, i.e. he became self-aware instead of just a highly developed machine mimicking humans.

This is what humanity needs to think about once we reach the technological state of building so-called AIs (more like highly advanced computers) and giving them the option to improve themselves by any means necessary, including modifying their own systems. Frankly, I'd rather have a housebot that just cleans my apartment and doesn't get the idea to redecorate it because it believes I might like it better.
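To make the self-improvement loop from the quote above concrete, here's a toy sketch (entirely illustrative.. a real system would be tuning far more than one number, but the propose/evaluate/keep-the-best cycle is the point):

[CODE]
# Toy sketch: software refining software with no human in the loop.
# Propose a modified version of yourself, score it against feedback,
# keep it if it does better. The "program" here is a single number.

import random

def performance(param):
    """Stand-in for real-world feedback, e.g. diners rating the robot chef."""
    return -(param - 3.7) ** 2  # optimum at 3.7, unknown to the system

def self_improve(param, rounds=200):
    for _ in range(rounds):
        candidate = param + random.gauss(0, 0.5)   # propose a modified self
        if performance(candidate) > performance(param):
            param = candidate                      # hot-patch: keep the upgrade
    return param

print(round(self_improve(0.0), 2))  # drifts toward 3.7 without human help
[/CODE]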
__________________
"Zhu Li Moon.. will you do the thing for the rest of our lives?" Varrick