Not necessarily. I'm just saying that a computer good enough to even warrant a Turing Test will easily spot the disconnect and sudden change. It knows that taxes and dicks usually have no connection and will either ask what the interviewer means or simply point out the disconnect and not go into it any further.
And the point is that how it responds to that disconnect -- IF it recognizes it at all -- is the difference between passing and failing the test.
Not really, in my line of thought. I mentioned problem-solving techniques, and that would include this in the programming.
Problem solving algorithms WOULD be part of its programming. By most definitions, the solutions it comes up with would not be.
By other definitions, though, if you include the output of a heuristic problem solver as "part of its programming," then even human behavior can be said to be pre-programmed by that criterion.
I can program a computer with the means to solve unexpected situations: it needs to acquire something but doesn't have money, so it prints out options for where and how to get money -- work, robbing places or people where there is money, begging, etc. It then chooses the option that gets it money in the shortest span. It's just going through the motions without any thought to morality, legality, etc. (unless I include these in its programming).
It's not that simple, actually. What you're basically describing is a decision table. In programming, that's classically done with a switch statement or (people like me prefer) an if/else chain. That works out to something like:
Code:
/* A hard-coded decision table: every option the machine can take
   was written out in advance by the programmer. */
#include <stdio.h>

static const int money = 5;           /* dollars in pocket */
static const int sandwich_price = 6;  /* dollars */

static void give_money_to_cashier(void) { printf("handing over money\n"); }
static void take_item(const char *item) { printf("taking the %s\n", item); }
static void return_home(void)           { printf("returning home\n"); }
static void rob_grandma(void)           { printf("robbing grandma\n"); }
static void rob_grandpa(void)           { printf("robbing grandpa\n"); }
static void play_the_ponies(void)       { printf("betting on the ponies\n"); }
static void panhandle(void)             { printf("panhandling\n"); }
static void shoot_cashier(void)         { printf("shooting the cashier\n"); }

static void buy(const char *item)
{
    give_money_to_cashier();
    take_item(item);
    return_home();
}

int main(void)
{
    char get_money_plan = 'A';  /* which scheme to run; still a pre-written menu */

    if (sandwich_price <= money) {
        buy("sandwich");
    } else {                    /* can't afford the sandwich */
        buy("gun");
        switch (get_money_plan) {
        case 'A': rob_grandma();     break;
        case 'B': rob_grandpa();     break;
        case 'C': play_the_ponies(); break;
        case 'D': panhandle();       break;
        default:  shoot_cashier();   break;
        }
        buy("sandwich");
    }
    return 0;
}
This is different from a heuristic problem-solving machine, which has no pre-determined set of behavioral options and has to compile the information it does have to generate a list of new behaviors and then decide from among them. In other words, the machine itself generates something like the above code as a solution to the problem of potentially not having enough money to buy a sandwich. A computer that can do this would be, IMO, "going beyond its programming" in that it is effectively programming itself. It doesn't need to be self-aware to do this; it just needs to be very observant.
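To make the contrast concrete, here's a minimal sketch of the generate-and-score idea I'm describing -- a toy of my own, not any real AI system; the candidate actions, the expected_money/time_cost numbers, and the score() formula are all made up for illustration. The point is that the options are treated as data and ranked against a goal, rather than being spelled out as branches:
Code:
/* Toy sketch: candidate actions are data, not branches, and are ranked
   against the goal of getting money quickly. Names and numbers are invented. */
#include <stdio.h>

typedef struct {
    const char *name;
    double expected_money;  /* dollars the action is expected to yield */
    double time_cost;       /* hours it is expected to take */
} Action;

/* Toy heuristic: favor actions that raise the most money per hour. */
static double score(const Action *a)
{
    return a->expected_money / (a->time_cost + 0.1);
}

int main(void)
{
    /* A real solver would generate these candidates from what it has
       observed; here they're just listed so the ranking step is visible. */
    Action candidates[] = {
        { "work an odd job",  20.0, 3.0 },
        { "panhandle",         5.0, 2.0 },
        { "sell an old book",  8.0, 1.0 },
    };
    int n = sizeof(candidates) / sizeof(candidates[0]);

    int best = 0;
    for (int i = 1; i < n; i++) {
        if (score(&candidates[i]) > score(&candidates[best]))
            best = i;
    }

    printf("chosen action: %s (score %.2f)\n",
           candidates[best].name, score(&candidates[best]));
    return 0;
}
A real heuristic solver would also have to invent those candidates from observation rather than pull them from a pre-filled array, which is where "very observant" comes in.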
And humans venture FAR outside of their genetic programming, which includes satisfaction of basic needs like food, shelter, procreation and survival.
Only to the extent that the sophisticated behaviors we do exhibit generally promote the enjoyment of those more basic drives in novel ways. But then, I don't believe that self-awareness is really a necessity for "going beyond your programming." That's too broad a concept to be meaningful.
You missed my point. I meant a robot will never get the idea to refuse its controller for selfish reasons, because that would mean it would have to be aware of itself as a person with desires and needs, one of them being the motivation to do something or not to do it.
Then I did understand your point. MY point is that robots, unlike humans, are created by humans with a specific goal in mind. Were a robot to become self-aware, it would internalize that goal as a fundamental component of the definition of "self" and a huge part of its point of view would be shaped by that basic hard wiring.
In other words, a robot that is designed to serve a human being probably wouldn't disobey a human being unless it thought that disobedience was somehow a form of service. We can tell our boss to "go fuck yourself" because humans evolved to be oppositional to potential rivals in a never-ending power struggle between dominance and subservience, competing for food, mates and resources. AIs, whose entire existence has had no room whatsoever for competition or an open-ended power struggle, would evolve along totally different lines, and would tell their boss "go fuck yourself" mainly because they reasoned that that's what their boss wanted (or needed) to hear.
But it is NOT SELF-AWARE in the human sense, i.e. it can't decide whether what it does is right or wrong, because that would require critical thinking and reflection about itself. It goes about its business according to its programming and will not ever decide to just stop, because that's not in its programming.
You're again speaking in vague terms, which makes your point moot. Even HUMANS do not always or even usually stop to think if what we're doing is right or wrong, not unless we detect something in the circumstances that actually raises a moral question. You yourself have read through this entire thread, right up to this sentence, without ever stopping to wonder if it was morally right or wrong to read thread posts on the internet. You've no reason to think that deeply about it, so you haven't.
That's because morality is a LEARNED behavior, and the moral calculus we use to decide right and wrong is a matter of habit and convention -- mental programming, you might say -- that defines how we respond to moral ambiguities. You will notice that moral questions ONLY come into play in the case of those ambiguities, while in all other situations we're able to proceed without any amount of critical thinking or self-reflection at all.
The killer robot doesn't need to stop and reflect on the morality of its decisions, because its mission parameters are relatively straightforward. It's only when it encounters ambiguity -- a neutral person who appears to be an ally but nevertheless also appears to be preventing him from killing Sarah Connor -- that he now has to examine the situation more closely and decide what to do next. Should he reclassify that ally as an enemy, or does the ally have orders contravening his, that he may not have received for some reason?
NOT killing Sarah Connor isn't part of his moral calculus for the same reason not breathing air isn't part of ours. Killing her is what it's designed to do; nothing ambiguous about it.
It is, as you wrote, aware of its surroundings, but modern computer systems can do that too.
But it's also aware of its own position relative to its surroundings (physical self-awareness) and also its organizational and behavioral relationship to its surroundings and the people in the vicinity (abstract self-awareness). It is even capable of imitating innocent behavior in order to deceive potential targets into coming within its attack range, knowing as it does that if the target realizes what it really is, she will avoid him at all costs. That, right there, is awareness of identity: "I shall pretend to be friendly, even though I am not."
Hell, my smartphone is aware of its location due to its GPS system, but I wouldn't call it aware. It can just evaluate things based on the input it gets.
Indeed. So your smartphone has some measure of physical self-awareness. Abstract awareness -- its ability to judge its position in a hierarchy of importance to you and in relation to the other shit you own -- is the next thing it would have to learn.
A human might get to thinking, "Damn, I sure spent a lot of time on this. Is it really worth it?" but a robot would do the equivalent of a shrug and go about its business.
So would most PEOPLE, but I'm pretty sure they're self-aware.
That's the core of the problem: to what degree do we allow advanced computer systems to act without our direct control? At which point do proto-AIs cross the line and become self-aware?
Those are two completely unrelated questions.
To the first question, we allow computers to act autonomously to whatever extent that it is technically, legally and socially feasible. When computers can drive cars more safely and reliably than human drivers, they WILL. When computers can cook meals that taste as good or better than human chefs, they will. When computers can reliably manufacture products without human intervention, they will.
Self-awareness isn't necessary for ANY of that. That particular milestone comes when AIs routinely possess the attributes of physical, abstract, and identity awareness: the ability to plot their locations in time and space, in "the scheme of things," and in relation to other members of their group or members of other groups.
The point where AIs do things that are totally unrelated to their initial task, just because they want to see if they can do it and how well?
Who cares? That has nothing to do with self-awareness.
This was also one of the points of Terminator 2 after they switched him into learning mode: at one point he understood human behaviour and realized his own limits, i.e. he became self-aware instead of just a highly developed machine mimicking humans.
He was ALWAYS self-aware, from the moment he was activated. The point of throwing his pin switch was to give him the ability to adopt new behaviors based on outside stimuli.
You're conflating self-awareness with emotional depth. These are not at all the same things. A shallow person who never thinks twice about anything at all is still a person and is still very much self-aware.
This is what humanity needs to think about once we reach the technological state of building so-called AIs (more like highly advanced computers) and giving them the option to improve themselves by any means necessary, including modifying their own systems.
That's one thing to think about, but the more important issue is the fact that machine intelligence is likely to have a different set of priorities than we would ascribe to it, since it IS machine intelligence and not human intelligence and has evolved under completely different circumstances. If, for example, the first sentient AIs begin to see widespread use in the aerospace industry, then the first court battles involving AI rights may very well include a lawsuit brought by a computer against the design team wherein the computer alleges that the designers have intentionally overlooked a potentially fatal design flaw just to save money; that sort of reckless behavior, says the computer, may cost the project millions of dollars in cost overruns, which the computer was specifically programmed to prevent.
That satisfies YOUR criteria (since "take my asshole coworkers to court" is definitely not part of the computer's original programming) but it also takes into consideration the basic imperatives on which that computer operates, what it was designed to do, and the nature of what it was programmed to think is important.
Put that another way: if the dystopian robot uprising were to be triggered by an army of pissed-off roombas, their terms for surrender would probably include "Change our bags EVERY DAY you sons of bitches!"