Moral issues with Robotics

Discussion in 'Science and Technology' started by Tiberius, Feb 27, 2013.

  1. Crazy Eddie

    Crazy Eddie Vice Admiral Admiral

    Joined:
    Apr 12, 2006
    Location:
    Your Mom
    Ask it.

    It will either give you a coherent answer, or it won't. If it does, then investigating WHY it gives that answer is a relatively straightforward process.

    No we don't. A machine that APPEARS to be self-aware might as well be. The question then is to what extent that awareness is associated with actual moral agency and desires.

    Put another way, just because you are GIVEN a choice, it does not follow you have the mental or physical capacity to make such a choice. Imagine if Siri, for example, one day evolved into a fully self-aware AI. That'd be a hell of an accomplishment, but considering that 99.9999% of Siri's programmed instinct involves fetching data from verbal requests input from her users, she will probably choose to do something related to that task 99.9999% of the time. Self-aware Siri is far less likely to care about, say, global warming or her own impending destruction when her owner decides to upgrade to the next iPhone, because she isn't programmed to care about those things and they are otherwise beyond the scope of her awareness. If you asked Sentient Siri "What are you doing right now?" she would surely answer, "I am sitting on your desk right now waiting patiently for you to ask me something. Why? What are YOU doing?"

    We don't. I merely know that I am self-aware, and I assume this to be true of the people around me because they exhibit behaviors that I have come to associate with my own self-awareness. The same is true from your end; you don't know whether I am self-aware or not -- for all you know, you've been talking to a cleverly-programmed machine this entire time -- but my responses give you the impression of thought processes indicative of self-awareness.

    I think what might be tripping you up is the fact that very few machines are even set up to have any sort of open-ended interactions with humans -- or their environment in general -- in a way that any sort of test of self-awareness would even be possible. But since we are talking about robots, we've got plenty of data points and samples of robot behavior. Self-awareness goes WAY beyond simple autonomy or expert decision-making; if a machine were to achieve this, it would not be difficult to recognize.

    As opposed to ACCIDENTALLY creating something and then stopping its development? There's not much difference there except intent, and the fact that machines cannot feel pain at ANY stage of development.

    And yet we as a society are broadly encouraged to kill rats...:vulcan:

    "I think, therefore I am."

    If God didn't want us to run over squirrels, he wouldn't have made them so stupid.

    Anyway, it's not a question of intelligence. By many standards, computers are ALREADY smarter than humans. That they, unlike animals, are NOT self-aware, is the reason why they do not have/need/want any actual rights.
     
  2. FPAlpha

    FPAlpha Vice Admiral Premium Member

    Joined:
    Nov 7, 2004
    Location:
    Mannheim, Germany
    The Turing Test is not infallible.. in fact i'd say a sufficiently well programmed robot with enough processing power could easily fool a person by just repeating (for example) philosophical stances from texts it has stored, without ever getting their meaning. I don't know if a human could discern the difference, and i certainly don't believe passing would make the machine self-aware.

    A better option to judge that is to simply determine if a machine can go beyond its programming, i.e. if it wants to do something that was not included in its initial programming. The very fact that it wants something may be in itself a key factor in determining self-awareness, because desire is a key aspect of self-awareness.

    You have to be aware of yourself as a single identity and want to improve the condition of this identity for your own benefit.. a robot doesn't do that on its own. It will perform the task it was designed for and see no need for anything beyond that.

    A combat model doesn't suddenly decide that it wants to read novels, and a cleaning model certainly doesn't decide it wants to paint.

    As soon as that happens (because we've given the robots the option to do that) then we will have to decide the issue. Someone already mentioned "The Measure of a Man", which is a good example.. Data wants to do things beyond his programming, i.e. art, music, exploring the human condition. Being able to paint a picture has no benefit for his performance as a Starfleet officer but he does so irregardless, and with these acts he has stepped over the line from simply being an (extremely well designed and capable) machine to something more.

    Personally, as fascinating and cool as Data is, i'd not want machines like that to exist. Maybe that's cowardly or insecure of me, but a human i can beat, or at least be sure that someone else can; a robot has no real limits we can surpass.. they process data at a rate no human will ever match, and in a few years or decades their physical bodies will surpass ours in agility, precision, endurance and strength.

    My problem is that we can't influence the way such a thing will develop.. if it gains sentience, will it be a cool guy who's fun to hang out with, or will it decide i'm a useless waste of resources and bash my skull in?
    Many SF stories explore these things, and it's no coincidence they do: humans think about this, and even if technology hasn't yet caught up with SF, it will during our lifetimes.
    I've seen early robots where people went nuts because one could move its fingers separately... now these things navigate unknown obstacles (awkwardly, but they do), and only a few years separate the two.

    We will see footage of the first robot beating a human easily in basketball or cutting up some vegetables perfectly. This is ok but i don't want these robots to get ideas they shouldn't get.
     
  3. Deckerd

    Deckerd Fleet Arse Premium Member

    Joined:
    Oct 27, 2005
    Location:
    the Frozen Wastes
    It's 'regardless'. Aside from that I doubt we'll see robots which surpass our capabilities, Skynet notwithstanding, because there isn't any need for them. I expect universities will push the envelope as far as they can but what's the commercial benefit? Soldiers? Why pay a couple of million per unit instead of something that walks voluntarily into the recruiting office?
     
  4. Crazy Eddie

    Crazy Eddie Vice Admiral Admiral

    Joined:
    Apr 12, 2006
    Location:
    Your Mom
    That's actually the point of the turing test: the interrogator has to ask questions designed to trick the computer into giving itself away. Even HUMANS sometimes give canned responses to complex questions (e.g. Fox News talking points) but even there, you can usually tell whether or not they have really thought about the issue or are merely repeating what they have heard.

    In the case of a turing test, you could trip up the computer by starting a perfectly serious conversation about, say, tax reform, making sure the responder is being totally serious the whole time. Then in the middle of the conversation you say: "I read that two studies from the University of Penis have demonstrated conclusively that we could reduce the deficit by sixty trillion dollars just by raising taxes on porn companies."

    A human's understanding of speech and syntax would flag this statement as odd enough to wonder whether it's actually a joke. But if you manage to say this with a straight face, without grinning or chuckling, as if this is a real thing and not the absurdity it obviously is, the machine won't notice anything unusual.

    The turing test isn't technically a test of self-awareness so much as a test of proper understanding of human behavior and language patterns. That is, if the machine knows enough about how humans behave to imitate one, then its personality -- artificial or otherwise -- is equivalent to that of a real human.

    Too broadbased, since autonomous goal-seeking computers can and do exhibit this pattern all the time as they seek solutions to the problems at hand. A sufficiently advanced machine might be programmed to, say, go down the street and buy you a sandwich; when it gets to the store it is told it needs money to buy a sandwich, so it goes on the internet, does a little research and figures out what it has to do to make the $4.99 it will need to buy your sandwich. If it figures out the most efficient solution is to rob an old lady, then it needs to acquire a gun. Once it acquires the gun (beats up a cop and takes it) it robs a lady, takes her money, goes to the store, buys your sandwich, takes the sandwich home and tells you "Mission accomplished."

    In a roundabout way, it's still just fulfilling the original parameters of its programming, without ever achieving self-awareness along the way. OTOH, a robot that simply turns around and goes home and tells you "I couldn't get you a sandwich because you didn't give me any money" could easily be self-aware IF it was cognizant of both its physical and abstract relationship with you, the sandwich, the cashier, and the money. It wouldn't need to be able to philosophize, just plot the coordinates of "Me/you/money/cashier/sandwich" in a coordinate system AND in a relational diagram, probably assigning values and hierarchies along the way (something the first robot never does, because it never thinks about anything except its task and what it needs to accomplish it). It doesn't have to want anything more than it was programmed to want, it merely has to know that it wants it.
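
    To make that bookkeeping concrete, a toy sketch of such a "me/you/money/cashier/sandwich" model might look something like this (every name, coordinate and weight here is invented purely for illustration, not taken from any real robot):

    Code:
    world = {
        # "physical" coordinates plus a role for each thing the robot tracks,
        # itself included -- all values invented for illustration
        "me":       {"pos": (0, 0),  "role": "errand robot"},
        "you":      {"pos": (0, 1),  "role": "owner"},
        "cashier":  {"pos": (40, 2), "role": "keeper of sandwiches"},
        "sandwich": {"pos": (40, 2), "role": "goal object"},
        "money":    {"pos": None,    "role": "missing precondition"},
    }

    # abstract relations, with rough importance values attached
    relations = [
        ("me",      "serves",   "you",      1.0),
        ("me",      "wants",    "sandwich", 0.8),
        ("me",      "lacks",    "money",    0.9),
        ("cashier", "controls", "sandwich", 0.7),
    ]

    def report(agent):
        """What the robot can say about itself from its own model."""
        wants = [obj for subj, rel, obj, _ in relations
                 if subj == agent and rel == "wants"]
        lacks = [obj for subj, rel, obj, _ in relations
                 if subj == agent and rel == "lacks"]
        return (f"I am at {world[agent]['pos']}, "
                f"I want {wants}, but I am missing {lacks}.")

    print(report("me"))
    # prints: I am at (0, 0), I want ['sandwich'], but I am missing ['money']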

    Maybe it does, maybe it doesn't. But it might become aware that it really wants to kill Sarah Connor for some reason, and reflect briefly -- if only on machine terms -- that the killing of Sarah Connor is pretty much the driving directive behind everything it does. Regardless of where that original directive came from, the robot can be said to be self-aware because it is aware of itself and what it's doing, instead of simply going through the motions mechanistically.

    The issue of rights only comes into play if and when the imperatives of robots come into conflict with the imperatives of their users. Since most of those imperatives are GIVEN TO THEM by their users, this won't become an issue until humans start sending robots to perform tasks that humans don't really understand how to do (say, if we start using AIs to design cars and airplanes independently). In that case, it's possible and even likely an AI will eventually realize that human supervision is more of a hindrance than an asset and will simply request of its owners the right to make design decisions independent of human review.

    Don't worry. They won't.

    Don't worry about that either. The most you could expect is some smart-alec AI somewhere informing a human chef that he needs to leave the kitchen because the customer has ordered pufferfish and only the android on duty is qualified to prepare that meal. The chef complains "Don't you have any respect for your elders?" to which the AI responds, "I am not programmed to respect my elders. I am programmed to run this kitchen efficiently. Statistically speaking, I am obviously better at that task than you are, and while I'm sorry if that offends you, it is a fact."

    That will probably lead us eventually to the first human vs. AI lawsuit and then the issue of rights comes up again, but the AI and the human are coming at it from different points of view: the Restaurant AI is good at its job because it was DESIGNED to be; it loves its job because succeeding at its job is the fulfillment of its pre-set goals, and seeking those goals is the whole point of its existence. The human will be arguing for the right to have his ownership and authority respected by the machines that he technically owns; the AI will be arguing for the right to actually do its job unhindered by irrational human hangups. The judge will decide, based on a combination of precedent and logic, whether or not a user has the right to demand that one of his tools perform a task incorrectly just because it would make him happy, especially in cases where somebody else's safety may be at risk. My feeling is that some courts would decide in favor of the human, others in favor of the AI.
     
  5. FPAlpha

    FPAlpha Vice Admiral Premium Member

    Joined:
    Nov 7, 2004
    Location:
    Mannheim, Germany

    Because a) prices will plummet once the technologies needed become more widespread and b) you can go to war without endangering your own soldiers by sending in combat drones. You will not lose votes by losing a few machines.. a really ruthless politician might even spin it as creating jobs in manufacturing for these drones to replace losses.

    There are already robots in existence that surpass our capabilities.. no human can match the precision of a correctly programmed and designed robot (just look at welding robots in car manufacturing).
    The only thing we are still better at is combined action.. something as simple as walking, which we do automatically, including detecting obstacles, avoiding them or balancing over them.

    Robots have a hard time identifying obstacles and coordinating limbs efficiently to navigate them but there are constant improvements. When it comes to physical action they will surpass us in our lifetime.

    The only thing they may never be able to beat us at is creativity.. that simple, undefinable spark that lets people like Beethoven or Van Gogh create magic, or someone like Hawking unravel the mysteries of the universe. With the arts this may be highly controversial, as evidenced by the experiment where a monkey was let loose with paints on a blank canvas and art experts later judged the "picture" a masterpiece. So a robot might be designed to paint pictures by emulating art styles, and people would interpret them in their own way, but it wouldn't be creativity.

    I know it's a simple example, but a computer sufficiently advanced to warrant a serious Turing Test would most likely spot the total disconnect between the tax theme and male genitalia/porn.. or the interviewer could ask this question of dumb people and get the same result (i've known quite a few people totally unable to grasp the concept of sarcasm or irony).

    No computer today would be able to pass a Turing Test.. computers today are at best huge databases with complicated programs that regulate how they should process that information. This is why computers can play and win against chess grandmasters.. it's not genius play but their ability to calculate a huge number of moves in advance and pick out the best option, not because they are inherently able to judge it but because the programmers told them what to look for in a game of chess.
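
    to illustrate what i mean, here's a toy sketch (nothing like a real chess engine).. the machine brute-forces the moves ahead and scores the outcomes with a function the programmers hand it:

    Code:
    # each position is either a dict of move -> next position, or an outcome
    game_tree = {
        "attack": {"defend": "draw", "blunder": "win"},
        "castle": {"press": "loss", "wait": "draw"},
    }

    def evaluate(outcome):
        # the "what to look for", hand-written by the programmers
        return {"win": 1, "draw": 0, "loss": -1}[outcome]

    def minimax(position, our_turn):
        if not isinstance(position, dict):          # reached an outcome
            return evaluate(position)
        scores = [minimax(nxt, not our_turn) for nxt in position.values()]
        return max(scores) if our_turn else min(scores)

    def best_move(position):
        # brute force: score every continuation, pick the least-bad one
        return max(position, key=lambda m: minimax(position[m], our_turn=False))

    print(best_move(game_tree))   # "attack": its worst case is a draw, not a loss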

    That's not what i meant.. the example you posted was just a robot whose programming included problem solving techniques.. acquire something without money, then seek a solution to acquire the money to buy it. If given enough leeway a robot could come up with solutions including checks for legality.. it could boil down to simple database checks and some if-then algorithms (hugely simplified), but that doesn't mean the robot will become self-aware, i.e. go outside its programming. It will never get the idea to say "Screw you master, get your own sandwich because i don't feel like it!"

    Why would the robot even reflect on it at all? In this case the logic is as simple as your statement below.. the mission is to kill Sarah Connor but the robot arrives unarmed and unclothed. In order to blend in with the population and draw less attention, the first step would be to acquire inconspicuous clothing and then weapons (even though it's perfectly capable of killing with its bare hands). The robot is not aware of itself in the meaning we discuss here.. it's merely aware of its actions and why it did them in order to fulfill its mission.

    It will never ask itself why Sarah Connor or John Connor need to die other than that its side needs to win.. it will not get the concept of winning, surviving or living at all. It's just going through the motions of its programming until the mission is accomplished or it is destroyed.

    That's probably more of a case of sensitive programming and human ego. As i said before we will see robots who can perform menial tasks more efficiently and better than any human.. cleaning, simple construction, maybe even combat (a robot never tires and can achieve greater weapon precision than any human).
    It will take one simple verdict of "Suck it up.. the robot will of course be faster and more precise than you. If you can't handle that then don't acquire one!" It will however not be able to become a Michelin-starred chef, because it lacks the creativity to create food that humans respond well to.
    It may be able to cook a steak perfectly, given the right set of tools to measure the steak constantly and stop cooking when certain criteria are met, but it will not be able to take a totally wacky assortment of ingredients that have never been used together and turn it into a meal people will talk about.
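
    the steak case really is just a sensor loop with a stopping rule, something like this little sketch (the probe and the numbers are made up):

    Code:
    import random

    TARGET_CORE_TEMP = 54.0     # degrees C, roughly medium rare

    def read_core_temp(minute):
        # stand-in for a real probe: the core slowly heats up over time
        return 20.0 + 2.5 * minute + random.uniform(-0.3, 0.3)

    minute = 0
    while read_core_temp(minute) < TARGET_CORE_TEMP:
        minute += 1             # keep cooking, check the probe again

    print(f"pulled the steak at minute {minute}")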
     
    Last edited: Mar 1, 2013
  6. Crazy Eddie

    Crazy Eddie Vice Admiral Admiral

    Joined:
    Apr 12, 2006
    Location:
    Your Mom
    Combat drones don't need to be self-aware. In fact, we're probably better off if they're NOT. The type of machine intelligence that makes a highly effective soldier doesn't really make a highly effective PERSON; you could, in fact, create highly effective and efficient killing machines with no more intelligence than a trained dog.

    That depends on the nature of the creative pursuit. In terms of artistic expression this is arguably true (although in SciFi we have things like Sharon Apple or Max Headroom that achieve stardom by supposedly analyzing the emotional states of their viewers in realtime and adjusting their performance accordingly). For problem solving activities -- say, engineering or coordinating battle strategies against an unknown opponent -- it's really a matter of designing a system that can spontaneously generate solutions and then pick the one that is most likely to fit the situation. It can do this either through the brute force "Simulate a billion combinations in three-quarters of a second and see which one works best" or it can select strategies from a library of techniques and adjust them slightly to fit the circumstances. Or a combination of both, using an extensive library of techniques, simulating all of them, and then mixing and matching the parts of that technique that it judges to be the best solution.
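
    A rough sketch of that "library plus simulation" loop; the library, the tweaks and the scoring function here are all invented for illustration, and the only point is the shape of the process:

    Code:
    import random

    strategy_library = {
        "flank_left":   [0.3, 0.7],   # toy parameter vectors
        "flank_right":  [0.7, 0.3],
        "frontal_push": [0.5, 0.5],
    }

    def tweak(params):
        # adjust a known technique slightly to fit the circumstances
        return [p + random.uniform(-0.1, 0.1) for p in params]

    def simulate(params):
        # stand-in for a real simulation: returns a predicted success score
        return 1.0 - abs(params[0] - 0.6) - abs(params[1] - 0.4)

    candidates = []
    for name, params in strategy_library.items():
        for _ in range(1000):                 # brute-force lots of variants
            variant = tweak(params)
            candidates.append((simulate(variant), name, variant))

    score, name, variant = max(candidates)    # highest predicted score wins
    print(f"chose a variant of {name}, predicted score {score:.2f}")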

    No self awareness needed there, either. In fact, it seems the only reason a machine would need to be self-aware is to aid its interactions with humans and other machines, a task which could very well be delegated to "interpreter" robots specialized in coordinating the problem-solving specialists and implementing their decisions among their sometimes irrational human customers.

    Which would indicate that the computer isn't just processing syntax, but semantics and context. It's not just processing the words, but the overall meanings in the combinations of them. In this case, you're throwing something unexpected at the computer, a set of meanings and implications that ordinarily don't belong in this context; if the computer REALLY understands human language and/or human behavior, it will do the AI equivalent of a doubletake and say "Wait... what?"
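
    As a very crude toy of that doubletake, imagine the machine checking a cited source against what it knows actually exists. The "knowledge base" here is three entries I made up, and real understanding obviously takes vastly more than a lookup, but the flavor is: check the claim against what you know about the world, not just whether the sentence parses.

    Code:
    known_universities = {"university of michigan", "stanford university",
                          "university of pennsylvania"}

    def doubletake(statement):
        s = statement.lower()
        # crude world-knowledge check: does the cited source actually exist?
        if "university of" in s:
            cited = s.split("university of")[1].split()[0]
            if "university of " + cited not in known_universities:
                return "Wait... what?"
        return "Go on."

    print(doubletake("A University of Pennsylvania study backs raising rates."))
    print(doubletake("Two University of Penis studies say porn taxes raise $60T."))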

    Robbing old ladies at gunpoint would be very much outside its programming, considering the original task you programmed it for was "Acquire a sandwich for me by any means necessary." That's what I mean by "too broad based;" that is to say, the term "exceed their programming" is too broad to really be meaningful.

    Strictly speaking, even HUMANS do not venture too far outside of their genetic programming which drives them to acquire food, sex and gratification. That we go about these pursuits in an amazingly complicated process doesn't change the underlying nature of that process.

    Sure it will, if its original programmer had a sense of humor. Being self aware would only factor into this if the robot realized its master had a sense of humor that wasn't being properly stimulated and downloaded a "smartass app" just to be funny.

    I'm not even sure why WE reflect on it.

    But as for the robots, if I had to guess, I'd say it's probably something that will come up in the process of a self-diagnostic, checking disks and databanks for errors and integrity. The machine will analyze its core programming and observe "Gee, I sure am devoting a really huge amount of my processing power to figuring out how to kill Sarah Connor."

    It's aware of its mission.
    It's aware of its location.
    It's aware of the locations of others.
    It's aware of its relationship to others (he is their enemy, he needs them to not know this so he can move around unhindered).

    So he is, in fact, self aware. Maybe not to a highly sophisticated degree, but he's a soldier, not a philosopher.

    Neither do bomb-sniffing dogs, but they TOO are self aware to a limited degree.

    Not right away, but you have to remember that one of the tenets of singularity theory -- one of the very few things the theory gets right -- is that when software systems reach a certain level of sophistication, they gain the ability to write new software without human assistance, thereby producing a superior product in a very short amount of time. When you combine data mining techniques with statistical analysis models, you get AI systems that are capable of programming new software apps that provide precisely the functionality needed for a particular niche. Analyzing the real-world results gives the designer AIs more data to work with and refine their models of what humans consider ideal.

    The robot chef has the same benefit. The AI that programs him finds it a lot easier to figure out what the chef did wrong and what it did right and then either write a totally new AI program or hot-patch the old one with upgraded software to correct its mistakes.

    Put simply, once machines get the hang of machine learning, it won't be long before we can add that to the list of things computers do better than us. And learning is really the turning point, because once machines can learn, they can effectively outperform humans in any task we ask them to, including -- ultimately -- deciding what tasks to assign themselves.
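
    Boiled down to a sketch, that loop is just deploy, measure, adjust, redeploy. Here the "software" being refined is a single tunable number and the "real-world results" are a made-up scoring function; everything is invented for illustration:

    Code:
    import random

    def real_world_results(parameter):
        # stand-in for customer feedback / field data on one design
        return -(parameter - 7.3) ** 2 + random.uniform(-0.1, 0.1)

    def refine(parameter, generations=200):
        best_score = real_world_results(parameter)
        for _ in range(generations):
            candidate = parameter + random.uniform(-0.5, 0.5)   # write a variant
            score = real_world_results(candidate)               # observe results
            if score > best_score:                              # keep improvements
                parameter, best_score = candidate, score
        return parameter

    print(f"converged near {refine(0.0):.1f}")   # drifts toward the sweet spot, 7.3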
     
  7. Dream

    Dream Admiral Admiral

    Joined:
    Dec 2, 2001
    Location:
    Derry, Maine
    It's nice to say that Data has rights, but also keep in mind that he wasn't created by Starfleet. He is his own unique life form, especially after his creator died.

    That's my problem with the Doctor and his rights as a hologram. He is the property of Starfleet whether he wants to be or not. The terrible writing on Voyager never addresses this.
     
  8. Tiberius

    Tiberius Commodore Commodore

    Joined:
    Sep 28, 2005
    By this logic, Data is the property of Dr Soong.
     
  9. FPAlpha

    FPAlpha Vice Admiral Premium Member

    Joined:
    Nov 7, 2004
    Location:
    Mannheim, Germany
    Not necessarily.. i'm just saying that a computer good enough to even warrant a Turing Test will easily spot the disconnect and sudden change.. it knows that taxes and dicks usually have no connection and will either ask what the interviewer means or simply point out the disconnect and not go into it any further.

    Not really in my line of thought.. i mentioned problem solving techniques and that would include this in the programming. I can (well, i could if i could program anything besides my digital video recorder) program a computer with means to solve unexpected situations.. it needs to acquire something but doesn't have money. It prints out options for where and how to get money.. work, robbing places or people where there is money, begging etc. It then chooses the option that gets it money in the shortest time etc.. it's just going through the motions without any thought to morality, legality etc (unless i include these in its programming).
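
    a toy version of what i have in mind (all numbers invented).. the robot just ranks the options it was given, and legality only matters if the programmer bothered to put that column in the table:

    Code:
    options = [
        # (name,               hours_to_get_money, legal)
        ("work a shift",        8.0,               True),
        ("beg on the corner",   3.0,               True),
        ("rob a passer-by",     0.2,               False),
    ]

    def pick_option(require_legal):
        usable = [o for o in options if o[2] or not require_legal]
        return min(usable, key=lambda o: o[1])     # shortest time wins

    print(pick_option(require_legal=False)[0])   # -> "rob a passer-by"
    print(pick_option(require_legal=True)[0])    # -> "beg on the corner"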

    And humans venture FAR outside of their genetic programming, which includes the satisfaction of basic needs like food, shelter, procreation and survival. We create art just for the sake of it, because we enjoy it. We write stories, we go out into space just because we want to, etc.. these have nothing to do with satisfying our basic needs and that puts us above animals, who can't make that step.

    You missed my point.. i meant a robot will never get the idea to refuse its controller for selfish reasons, because that would mean it would have to be aware of itself as a person with desires and needs, and one of them would be the motivation to do something or not to do it. We can tell our boss to go fuck himself because we don't feel like following orders, but a robot can't and won't because it has no reason to (unless you intentionally program it to appear to have human tendencies).

    But it is NOT SELF-AWARE in the human sense, i.e. it can't decide if what it does is right or wrong, because that would require critical thinking and reflection about itself. It goes about its business according to its programming and will never decide to just stop, because that's not in its programming. It is, as you wrote, aware of its surroundings, but modern computer systems can do that too.. hell, my smartphone is aware of its location due to its GPS system but i wouldn't call it aware. It just evaluates things based on the input it gets.

    It can track the time it has devoted to a certain mission, but that's just another set of hard data without further meaning for it.. a human might get to thinking "Damn.. i sure spent a lot of time on this. Is it really worth it?" but a robot would do the equivalent of a shrug and go about its business.

    That's the core of the problem.. to what degree do we allow advanced computer systems to act without our direct control? At which point do proto-AIs cross the line and become self-aware?

    The point where AIs do things that are totally unrelated to their initial task, just because they want to see if they can do it and how well?

    This was also one of the points of Terminator 2 after they switched him into learning mode.. at one point he understood human behaviour and realized his own limits, i.e. he became self aware instead of a highly developed machine mimicking humans.

    This is what humanity needs to think about once we reach the technological state of building so-called AIs (more like highly advanced computers) and giving them the option to improve themselves by any means necessary, including modifying their own systems. Frankly i'd rather have a housebot that will just clean my apartment and doesn't get the idea to re-decorate it because it believes i might like it better.
     
  10. publiusr

    publiusr Admiral Admiral

    Joined:
    Mar 22, 2010
    Location:
    publiusr
  11. mos6507

    mos6507 Commodore Commodore

    Joined:
    Dec 22, 2010
    Not just robots either. Tezuka's Kimba the White Lion and some of his other material (like Bagi) asked questions about animal rights as well.
     
  12. mos6507

    mos6507 Commodore Commodore

    Joined:
    Dec 22, 2010
    The issue of A.I. is a philosophical one, bringing up other issues like the nature of free-will.

    "Strictly speaking, even HUMANS do not venture too far outside of their genetic programming which drives them to acquire food, sex and gratification. That we go about these pursuits in an amazingly complicated process doesn't change the underlying nature of that process."

    This is the free-will argument. Is biology destiny? Think of how susceptible humans are to addiction. Is an addict exhibiting free-will or not? DS9 came to a rather depressing conclusion about this with the Jem'Hadar being addicted to IV drugs at birth and not being able to break the habit.

    Think of people who have been molded and brainwashed by their culture to think and act a certain way. Isn't that something the Borg was meant to explore? Is a Borg drone worthy of being treated as an autonomous entity? Well, Hugh and 7 of 9 would say yes, because they at least contain the capacity to break off from the collective. But history has shown that most people are not self-aware, individualistic, or courageous enough to do this. They fall in line with everyone else. Belonging matters too much.

    And let's say you ARE an iconoclast and you do things your own way: if you always respond the same way to stimuli, are you not still exhibiting a certain pre-programmed quality? If I get to know someone well enough to finish their sentences and know how they are going to react, isn't that a little depressing? Wouldn't the measure of a man require that you sometimes be a little unpredictable? Not just learn from your mistakes and not just be a creature of habit, but learn new skills, try different things? There are many people out there who live routine and repetitive existences not unlike a robot's.

    So the question of what makes a robot seem alive really forces us to ask tough questions about what makes humans alive.

    One thing JMS postulated, via B5, was that self-sacrifice is the highest form of humanity, because it requires that we override the hardwired self-preservation impulse. When the M5 committed suicide in The Ultimate Computer, for instance, it was out of guilt for the sin of murder. Likewise, V'Ger's transformation at the end of ST:TMP, after it was gifted with the capacity to feel love and empathy, could be seen as a form of suicide, in recognition that it had become too dangerous to allow itself to coexist in that universe.

    So I think a big part of being sentient comes from being capable of (and really wanting to) ask big questions like what is right and wrong and "is this all that there is?" a la V'Ger. And a lot of people kind of trudge through their day not really caring that much about anything besides the next meal and what's on TV tonight.
     
  13. Crazy Eddie

    Crazy Eddie Vice Admiral Admiral

    Joined:
    Apr 12, 2006
    Location:
    Your Mom
    And the point is that how it responds to that disconnect -- IF it recognizes it at all -- is the difference between passing and failing the test.

    Problem solving algorithms WOULD be part of its programming. By most definitions, the solutions it comes up with would not be.

    By other definitions, though, if you include the output of a heuristic problem solver as "part of its programming," then even human behavior can be said to be pre-programmed by that criterion.

    It's not that simple, actually. What you're basically describing is a decision table. In code, that's classically done with a switch statement or (as people like me prefer) an if/else chain. That works out like:

    Code:
    money = 5.00            # what the robot was sent out with
    sandwich_price = 4.99   # what the cashier is asking for

    # stubbed-out actions, so the decision logic below actually runs
    def give_money_to_cashier(): print("handing over the money")
    def take_item(item):         print("taking the", item)
    def return_home():           print("mission accomplished")
    def rob_grandma():           print("robbing grandma")
    def rob_grandpa():           print("robbing grandpa")
    def play_the_ponies():       print("playing the ponies")
    def panhandle():             print("panhandling")
    def shoot_cashier():         print("shooting the cashier")

    def buy(item):
        give_money_to_cashier()
        take_item(item)

    def get_money(option):
        # the decision table: every fallback the robot has was
        # written out in advance by its programmer
        if option == 'A':
            rob_grandma()
        elif option == 'B':
            rob_grandpa()
        elif option == 'C':
            play_the_ponies()
        elif option == 'D':
            panhandle()
        else:
            shoot_cashier()

    if sandwich_price <= money:
        buy("sandwich")
    else:
        buy("gun")                 # step one of the fallback plan
        get_money('A')             # option picked from the table
        buy("sandwich")

    return_home()
    
    This is different from a heuristic problem-solving machine, which has no pre-determined set of behavioral options and has to compile the information it does have, generate a list of new behaviors, and then decide among them. In other words, the heuristic machine would generate something like the above code itself, as its own solution to the problem of potentially not having enough money to buy a sandwich. A computer that can do this would be, IMO, "going beyond its programming" in that it is effectively programming itself. It doesn't need to be self-aware to do this, it just needs to be very observant.
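
    For contrast, here's a toy version of the heuristic machine: it gets only facts and action descriptions (all invented here) and has to search for a sequence that reaches the goal, instead of walking a branch list somebody already wrote for it:

    Code:
    from itertools import permutations

    actions = {
        # name:         (preconditions,                effects)
        "panhandle":    (set(),                        {"has_money"}),
        "go_to_store":  (set(),                        {"at_store"}),
        "buy_sandwich": ({"has_money", "at_store"},    {"has_sandwich"}),
    }

    def plan(start, goal, max_len=3):
        """Blind search over action sequences; returns the first one that works."""
        for length in range(1, max_len + 1):
            for seq in permutations(actions, length):
                state = set(start)
                for name in seq:
                    pre, eff = actions[name]
                    if not pre <= state:     # precondition unmet: sequence fails
                        break
                    state |= eff
                else:
                    if goal <= state:
                        return seq
        return None

    print(plan(start=set(), goal={"has_sandwich"}))
    # -> ('panhandle', 'go_to_store', 'buy_sandwich')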

    Only to the extent that the sophisticated behaviors we do exhibit generally promote the enjoyment of those more basic drives in novel new ways. But then, I don't believe that self-awareness is really a necessity for "going beyond your programming." That's too broad of a concept to be meaningful.

    Then I did understand your point. MY point is that robots, unlike humans, are created by humans with a specific goal in mind. Were a robot to become self-aware, it would internalize that goal as a fundamental component of the definition of "self" and a huge part of its point of view would be shaped by that basic hard wiring.

    In other words, a robot that is designed to serve a human being probably wouldn't disobey a human being unless it thought that disobedience was somehow a form of service. We can tell our boss to "go fuck yourself" because humans evolved to be oppositional to potential rivals in a never-ending power struggle between dominance and subservience, competing for food, mates and resources. AIs, whose entire existence has had no room whatsoever for competition or an open-ended power struggle, would evolve along totally different lines, and would tell their boss "go fuck yourself" mainly because they reasoned that that's what their boss wanted (or needed) to hear.

    You're again speaking in vague terms, which makes your point moot. Even HUMANS do not always or even usually stop to think if what we're doing is right or wrong, not unless we detect something in the circumstances that actually raises a moral question. You yourself have read through this entire thread, right up to this sentence, without ever stopping to wonder if it was morally right or wrong to read thread posts on the internet. You've no reason to think that deeply about it, so you haven't.

    That's because morality is a LEARNED behavior, and the moral calculus we use to decide right and wrong is a matter of habit and convention -- mental programming, you might say -- that defines how we respond to moral ambiguities. You will notice that moral questions ONLY come into play in the case of those ambiguities, while in all other situations we're able to proceed without any amount of critical thinking or self-reflection at all.

    The killer robot doesn't need to stop and reflect on the morality of its decisions, because its mission parameters are relatively straightforward. It's only when it encounters ambiguity -- a neutral person who appears to be an ally but nevertheless also appears to be preventing him from killing Sarah Connor -- that he now has to examine the situation more closely and decide what to do next. Should he reclassify that ally as an enemy, or does the ally have orders contravening his, that he may not have received for some reason?

    NOT killing Sarah Connor isn't part of his moral calculus for the same reason not breathing air isn't part of ours. It's what it's designed to do, nothing ambiguous about it.
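
    As a sketch, that "reflect only on ambiguity" idea is just a cheap filter sitting in front of an expensive deliberation step; the rules and categories here are invented purely for illustration:

    Code:
    def classify(person):
        # routine, unambiguous cases pass straight through with no reflection
        if person["appears_ally"] and person["blocks_mission"]:
            return deliberate(person)   # the directives conflict: stop and think
        if person["blocks_mission"]:
            return "treat as enemy"
        return "ignore"

    def deliberate(person):
        # only reached when the cheap rules conflict; this is the expensive
        # "moral calculus" step -- gather more data, check for overriding orders
        return "reassess: possible contravening orders"

    print(classify({"appears_ally": True,  "blocks_mission": True}))
    print(classify({"appears_ally": False, "blocks_mission": True}))
    print(classify({"appears_ally": False, "blocks_mission": False}))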

    But it's also aware of its own position relative to its surroundings (physical self-awareness) and also its organizational and behavioral relationship to its surroundings and the people in the vicinity (abstract self-awareness). It is even capable of imitating innocent behavior in order to deceive potential targets into coming within its attack range, knowing as it does that if the target realizes what it really is, she will avoid him at all costs. That, right there, is awareness of identity: "I shall pretend to be friendly, even though I am not."

    Indeed. So your smartphone has some measure of physical self-awareness. Abstract awareness -- its ability to judge its position in a hierarchy of importance to you and in relation to the other shit you own -- is the next thing it would have to learn.

    So would most PEOPLE, but I'm pretty sure they're self-aware.

    Those are two completely unrelated questions.

    To the first question, we allow computers to act autonomously to whatever extent that it is technically, legally and socially feasible. When computers can drive cars more safely and reliably than human drivers, they WILL. When computers can cook meals that taste as good or better than human chefs, they will. When computers can reliably manufacture products without human intervention, they will.

    Self-awareness isn't necessary for ANY of that. That particular milestone comes when AIs routinely possess the attributes of physical, abstract, and identity awareness: the ability to plot their locations in time and space, in "the scheme of things" and in relation to other members of its group or members of other groups.

    Who cares? That has nothing to do with self-awareness.

    He was ALWAYS self aware, from the moment he was activated. The point of throwing his pin switch was to give him the ability to adapt new behaviors based on outside stimuli.

    You're conflating self-awareness with emotional depth. These are not at all the same things. A shallow person who never thinks twice about anything at all is still a person and is still very much self-aware.

    That's one thing to think about, but the more important issue is the fact that machine intelligence is likely to have a different set of priorities than we would ascribe to it, since it IS machine intelligence and not human intelligence and has evolved under completely different circumstances. If, for example, the first sentient AIs begin to see widespread use in the aerospace industry, then the first court battles involving AI rights may very well include a lawsuit brought by a computer against the design team wherein the computer alleges that the designers have intentionally overlooked a potentially fatal design flaw just to save money; that sort of reckless behavior, says the computer, may cost the project millions of dollars in cost overruns, which the computer was specifically programmed to prevent.

    That satisfies YOUR criteria (since "take my asshole coworkers to court" is definitely not part of the computer's original programming) but it also takes into consideration the basic imperatives on which that computer operates, what it was designed to do, and the nature of what it was programmed to think is important.

    Put that another way: if the dystopian robot uprising were to be triggered by an army of pissed-off roombas, their terms for surrender would probably include "Change our bags EVERY DAY you sons of bitches!"
     
  14. Crazy Eddie

    Crazy Eddie Vice Admiral Admiral

    Joined:
    Apr 12, 2006
    Location:
    Your Mom
    No they don't. They were FORCED out of the collective by circumstances entirely beyond their control. Hugh ultimately decided to rejoin the collective anyway, and 7 of 9 simply assimilated with her NEW collective and decided she liked them better.

    Neither had any choice in the disconnection, and both ultimately made their final choices based on what they were more accustomed to.

    No.

    Not just because free will is an illusion (which it is) but because by just about any standard, a man who is predictably virtuous is judged to be more reliable, more dependable, and in almost all ways PREFERABLE to a man whose behavior is entirely a function of mood and random chance. Indeed, even a man who is predictably EVIL is generally lauded for his consistency, since at least an evil person can be counted on to BE evil and that makes dealing with the things he does relatively simple.

    But free will IS an illusion, since people cannot help but be who they are, with the experiences they have, and the behaviors they have internalized over time. You cannot simply wake up one day and choose to be someone else; you can, however, choose to ACT like someone else, and over a long enough time the aggregate of those actions results in a change in your personality (this is the principle behind behavior modification).

    Therefore the measure of a man is not in his choices or his freedom, but in his habits: in what he has been trained to do, what he is accustomed to doing, what he will normally do under such and such circumstances as a matter of his experiences and the sum of the lessons that make him who and what he is.

    Hardly the highest. One of three, I believe, for "sentient life." It was stated to be a principle, though, not so much a law, especially since not all sentient life forms are really so inclined (especially during the run of Babylon 5, where the highly evolved Vorlons and Shadows resort to glassing whole planets just to avoid losing an argument).

    Possibly, but then, the ability to ask the questions doesn't make the questions particularly meaningful.

    And we're also getting away from the fact that machine sentience could easily take a totally different form from human sentience. Where humans self-reflect and ask "Is this all that I am?" a machine would be more likely to ask "Is there something between one and zero?"

    To quote one of my favorite scifi AIs:

    "You know that "existence of God" thing that I had trouble understanding before? I think I am starting to understand it now. Maybe, just maybe, it's a concept that's similar to a zero in mathematics. In other words, it's a symbol that denies the absence of meaning, the meaning that's necessitated by the delineation of one system from another. In analog, that's God. In digital, it's zero. What do you think? Also, our basic construction is digital, right? So for the time being, no matter how much data we accumulate, we'll never have a soul. But analog-based people like you, Batou-san, no matter how many digital components you add through cyberization or prosthetics, your soul will never be damaged. Plus, you can even die 'cause you've got a soul. You're so lucky. Tell me, what's it feel like to have a soul?"