Science and Technology: "Somewhere, something incredible is waiting to be known." - Carl Sagan

Old March 2 2013, 09:09 PM   #31
mos6507
Captain

Re: Moral issues with Robotics

Redfern wrote:
Actually, that was a central theme of Tezuka's original comic (manga) in the 50s, years before it was first adapted to animation in the early 60s.
Not just robots either. Tezuka's Kimba the White Lion and some of his other material (like Bagi) asked questions about animal rights as well.
__________________
Fem Trekz on Facebook
Old March 2 2013, 09:36 PM   #32
mos6507
Captain

Re: Moral issues with Robotics

The issue of A.I. is a philosophical one, raising other questions like the nature of free will.

"Strictly speaking, even HUMANS do not venture too far outside of their genetic programming which drives them to acquire food, sex and gratification. That we go about these pursuits in an amazingly complicated process doesn't change the underlying nature of that process."

This is the free-will argument. Is biology destiny? Think of how susceptible humans are to addiction. Is an addict exhibiting free will or not? DS9 came to a rather depressing conclusion about this with the Jem'Hadar, who are addicted to an intravenous drug from birth and can never break the habit.

Think of people who have been molded and brainwashed by their culture to think and act a certain way. Isn't that something the Borg were meant to explore? Is a Borg drone worthy of being treated as an autonomous entity? Well, Hugh and 7 of 9 would say yes, because they at least contain the capacity to break off from the collective. But history has shown that most people are not self-aware, individualistic, or courageous enough to do this. They fall in line with everyone else. Belonging matters too much.

And let's say you ARE an iconoclast and you do things your own way: if you always respond the same way to stimuli, aren't you still exhibiting a certain pre-programmed quality? If I get to know someone well enough to finish their sentences and know how they are going to react, isn't that a little depressing? Wouldn't the measure of a man require that you sometimes be a little unpredictable? Not just learning from your mistakes, but refusing to be a mere creature of habit: learning new skills, trying different things? There are many people out there who live routine, repetitive existences that are not unlike a robot's.

So the question of what makes a robot seem alive really forces us to ask tough questions about what makes humans alive.

One thing JMS postulated, via B5, was that self-sacrifice is the highest form of humanity, because it requires that we override the hardwired self-preservation impulse. When the M-5 committed suicide in "The Ultimate Computer," for instance, it was out of guilt for the sin of murder. Likewise, V'Ger's transformation at the end of ST:TMP, after it was gifted with the capacity to feel love and empathy, could be seen as a form of suicide, in recognition that it had become too dangerous to allow itself to go on existing in that universe.

So I think a big part of being sentient comes from being capable of (and really wanting to) ask big questions like what is right and wrong and "is this all that there is?", à la V'Ger. And a lot of people kind of trudge through their day not really caring much about anything besides the next meal and what's on TV tonight.
__________________
Fem Trekz on Facebook
Old March 4 2013, 02:10 AM   #33
Crazy Eddie
Rear Admiral

Location: I'm in your ___, ___ing your ___
Re: Moral issues with Robotics

FPAlpha wrote:
Not necessarily... I'm just saying that a computer good enough to even warrant a Turing Test will easily spot the disconnect and sudden change... it knows that taxes and dicks usually have no connection and will either ask what the interviewer means or just simply point out the disconnect and not go into it any further.
And the point is that how it responds to that disconnect -- IF it recognizes it at all -- is the difference between passing and failing the test.

Not really in my line of thought... I mentioned problem-solving techniques, and that would include this in the programming.
Problem solving algorithms WOULD be part of its programming. By most definitions, the solutions it comes up with would not be.

By other definitions, though, if you include the output of a heuristic problem solver as "part of its programming," then even human behavior can be said to be pre-programmed by that criterion.

I can program a computer with the means to solve unexpected situations... it needs to acquire something but doesn't have money. It prints out options for where and how to get money... work, robbing places or people where there is money, begging, etc. It then chooses the best option that gets it money in the shortest span, etc... it's just going through the motions without any thought to morality, legality, etc. (unless I include these in its programming).
It's not that simple, actually. What you're basically describing is a decision table. In programming, that's classically done with a lookup table or (as people like me prefer) a chain of if/else blocks. It works out to something like:

Code:
# A hard-coded decision table: every option is spelled out in advance.
# The action functions are stubs standing in for real-world behaviors.
def give_money_to_cashier(): pass
def take_item(): pass
def return_home(): pass
def rob_grandma(): pass
def rob_grandpa(): pass
def play_the_ponies(): pass
def panhandle(): pass
def shoot_cashier(): pass

def buy(item):
    give_money_to_cashier()
    take_item()
    return_home()

money = 5
sandwich_price = 7

if sandwich_price <= money:
    buy("sandwich")               # simple case: just pay for it
else:
    buy("gun")                    # need cash first
    get_money = "A"               # whichever option the table settles on
    if get_money == "A":
        rob_grandma()
    elif get_money == "B":
        rob_grandpa()
    elif get_money == "C":
        play_the_ponies()
    elif get_money == "D":
        panhandle()
    else:
        shoot_cashier()
    buy("sandwich")
This is different from a heuristic problem-solving machine, which has no pre-determined set of behavioral options and has to compile whatever information it does have to generate a list of new behaviors and then decide among them. In other words, the heuristic machine is the one that would generate the above code on its own, as a solution to the problem of potentially not having enough money to buy a sandwich. A computer that can do this would be, IMO, "going beyond its programming" in that it is effectively programming itself. It doesn't need to be self-aware to do this; it just needs to be very observant.
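
To make the contrast concrete, here's a minimal generate-and-score sketch (the action names, effect numbers, and scoring rule are all made up for illustration): instead of walking a table of hand-written cases, the machine scores whatever candidate actions it happens to know about against an objective and picks one at run time.

Code:
# Minimal generate-and-score sketch: candidate actions and their effects
# are whatever the machine happens to know; the plan is picked at run time.
known_actions = {
    "work_odd_job": {"money": 20, "risk": 0.0, "hours": 8},
    "panhandle":    {"money": 5,  "risk": 0.1, "hours": 4},
    "rob_grandma":  {"money": 50, "risk": 0.9, "hours": 1},
}

def score(effects, need=7):
    # Reject plans that don't cover the need; otherwise prefer fast,
    # low-risk plans. The risk penalty exists only because the designer
    # thought to put it into the objective.
    if effects["money"] < need:
        return float("-inf")
    return -effects["hours"] - 100 * effects["risk"]

best = max(known_actions, key=lambda name: score(known_actions[name]))
print("chosen plan:", best)      # -> work_odd_job

Nothing in that table says "working an odd job is the moral choice"; it wins only because the objective happens to penalize risk. Drop that term and the same machine cheerfully picks rob_grandma, which is exactly the "no thought to morality unless I include it" point from above.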

And humans venture FAR outside of their genetic programming, which includes satisfaction of basic needs like food, shelter, procreation and survival.
Only to the extent that the sophisticated behaviors we do exhibit generally promote the enjoyment of those more basic drives in novel ways. But then, I don't believe that self-awareness is really a necessity for "going beyond your programming." That's too broad a concept to be meaningful.

You missed my point... I meant a robot will never get the idea to refuse its controller for selfish reasons, because that would mean it would have to be aware of itself as a person with desires and needs, one of them being the motivation to do something or to not do it.
Then I did understand your point. MY point is that robots, unlike humans, are created by humans with a specific goal in mind. Were a robot to become self-aware, it would internalize that goal as a fundamental component of the definition of "self" and a huge part of its point of view would be shaped by that basic hard wiring.

In other words, a robot that is designed to serve a human being probably wouldn't disobey a human being unless it thought that disobedience was somehow a form of service. We can tell our boss to "go fuck yourself" because humans evolved to be oppositional to potential rivals in a never-ending power struggle between dominance and subservience, competing for food, mates and resources. AIs, whose entire existence has had no room whatsoever for competition or an open-ended power struggle, would evolve along totally different lines, and would tell their boss "go fuck yourself" mainly because they reasoned that that's what their boss wanted (or needed) to hear.

But it is NOT SELF-AWARE in the sense of humanity, i.e. it can't decide if what it does is right or wrong, because that would require critical thinking and reflection about itself. It goes about its business according to its programming and will not ever decide to just stop, because that's not in its programming.
You're again speaking in vague terms, which makes your point moot. Even HUMANS do not always or even usually stop to think if what we're doing is right or wrong, not unless we detect something in the circumstances that actually raises a moral question. You yourself have read through this entire thread, right up to this sentence, without ever stopping to wonder if it was morally right or wrong to read thread posts on the internet. You've no reason to think that deeply about it, so you haven't.

That's because morality is a LEARNED behavior, and the moral calculus we use to decide right and wrong is a matter of habit and convention -- mental programming, you might say -- that defines how we respond to moral ambiguities. You will notice that moral questions ONLY come into play in the case of those ambiguities, while in all other situations we're able to proceed without any amount of critical thinking or self-reflection at all.

The killer robot doesn't need to stop and reflect on the morality of its decisions, because its mission parameters are relatively straightforward. It's only when it encounters ambiguity -- a neutral person who appears to be an ally but nevertheless also appears to be preventing it from killing Sarah Connor -- that it has to examine the situation more closely and decide what to do next. Should it reclassify that ally as an enemy, or does the ally have orders contravening its own, orders it may not have received for some reason?

NOT killing Sarah Connor isn't part of its moral calculus for the same reason not breathing air isn't part of ours. Killing her is what it's designed to do; there's nothing ambiguous about it.
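
To put the "deliberation only fires on ambiguity" idea in code form, here's a toy sketch (contact names and responses invented for the example, not from any of the films): clear-cut contacts get a reflex response, and the expensive re-examination only runs when the classifier can't resolve a contact either way.

Code:
# Toy sketch: clear-cut contacts get a reflex response; only an
# ambiguous one triggers the costly "stop and re-examine" branch.
TARGETS = {"Sarah Connor"}
ALLIES = {"friendly Terminator"}

def deliberate(contact):
    # Placeholder for the expensive reasoning: reclassify as an enemy?
    # Look for contravening orders that haven't been received yet?
    return "observe and reassess"

def handle(contact):
    if contact in TARGETS:
        return "engage"            # mission parameter, no deliberation
    if contact in ALLIES:
        return "cooperate"         # likewise automatic
    return deliberate(contact)     # ambiguity is the only case that thinks

print(handle("Sarah Connor"))      # engage
print(handle("helpful stranger"))  # observe and reassess

The telling part is that deliberate() is the only place anything resembling moral calculus could live, and under normal mission conditions it never executes.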

It is, as you wrote, aware of its surroundings, but modern computer systems can do that too...
But it's also aware of its own position relative to its surroundings (physical self-awareness) and also of its organizational and behavioral relationship to its surroundings and the people in the vicinity (abstract self-awareness). It is even capable of imitating innocent behavior in order to deceive potential targets into coming within its attack range, knowing as it does that if the target realizes what it really is, she will avoid it at all costs. That, right there, is awareness of identity: "I shall pretend to be friendly, even though I am not."

Hell, my smartphone is aware of its location thanks to its GPS, but I wouldn't call it aware. It can just evaluate things based on the input it gets.
Indeed. So your smartphone has some measure of physical self-awareness. Abstract awareness -- its ability to judge its position in a hierarchy of importance to you and in relation to the other shit you own -- is the next thing it would have to learn.
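
If you wanted to caricature those two layers in code (purely illustrative; no real phone API works like this), physical self-awareness is just knowing where you are, while the abstract layer is knowing where you sit relative to everything else the owner cares about:

Code:
# Caricature of the two layers: physical (where am I?) and abstract
# (where do I sit among the owner's other stuff?).
class Phone:
    def __init__(self, lat, lon):
        self.lat, self.lon = lat, lon            # physical self-awareness

    def my_rank(self, importance_by_device):
        # Abstract self-awareness: rank myself against the other devices
        # by a (made-up) importance score the owner assigns.
        ranked = sorted(importance_by_device,
                        key=importance_by_device.get, reverse=True)
        return ranked.index("phone") + 1

stuff = {"phone": 10, "toaster": 2, "laptop": 8}
print(Phone(52.52, 13.40).my_rank(stuff))        # -> 1

The first part any GPS chip already gives you; the second is the part a phone would actually have to learn.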

A human might get to thinking, "Damn... I sure spent a lot of time on this. Is it really worth it?" but a robot would do the equivalent of a shrug and go about its business.
So would most PEOPLE, but I'm pretty sure they're self-aware.

That's the core of the problem... To what degree do we allow advanced computer systems to act without our direct control? At what point do proto-AIs step over the line and become self-aware?
Those are two completely unrelated questions.

To the first question, we allow computers to act autonomously to whatever extent that it is technically, legally and socially feasible. When computers can drive cars more safely and reliably than human drivers, they WILL. When computers can cook meals that taste as good or better than human chefs, they will. When computers can reliably manufacture products without human intervention, they will.

Self-awareness isn't necessary for ANY of that. That particular milestone comes when AIs routinely possess the attributes of physical, abstract, and identity awareness: the ability to plot their locations in time and space, in "the scheme of things," and in relation to other members of their group or members of other groups.

The point where AIs do things that are totally unrelated to their initial task, just because they want to see if they can do it, and how well?
Who cares? That has nothing to do with self-awareness.

This was also one of the points of Terminator 2 after they switched him into learning mode... at one point he understood human behaviour and realized his own limits, i.e. he became self-aware instead of just a highly developed machine mimicking humans.
He was ALWAYS self-aware, from the moment he was activated. The point of throwing his pin switch was to give him the ability to adopt new behaviors based on outside stimuli.

You're conflating self-awareness with emotional depth. These are not at all the same things. A shallow person who never thinks twice about anything at all is still a person and is still very much self-aware.

This is what humanity needs to think about once we reach the technological state of building so-called AIs (more like highly advanced computers) and giving them the option to improve themselves by any means necessary, including modifying their own systems.
That's one thing to think about, but the more important issue is the fact that machine intelligence is likely to have a different set of priorities than we would ascribe to it, since it IS machine intelligence and not human intelligence and has evolved under completely different circumstances. If, for example, the first sentient AIs begin to see widespread use in the aerospace industry, then the first court battles involving AI rights may very well include a lawsuit brought by a computer against the design team wherein the computer alleges that the designers have intentionally overlooked a potentially fatal design flaw just to save money; that sort of reckless behavior, says the computer, may cost the project millions of dollars in cost overruns, which the computer was specifically programmed to prevent.

That satisfies YOUR criteria (since "take my asshole coworkers to court" is definitely not part of the computer's original programming) but it also takes into consideration the basic imperatives on which that computer operates, what it was designed to do, and the nature of what it was programmed to think is important.

Put that another way: if the dystopian robot uprising were to be triggered by an army of pissed-off roombas, their terms for surrender would probably include "Change our bags EVERY DAY you sons of bitches!"
__________________
The Complete Illustrated Guide to Starfleet - Online Now!
Old March 4 2013, 02:41 AM   #34
Crazy Eddie
Rear Admiral

Location: I'm in your ___, ___ing your ___
Re: Moral issues with Robotics

mos6507 wrote:
Hugh and 7 of 9 would say yes, because they at least contain the capacity to break off from the collective.
No they don't. They were FORCED out of the collective by circumstances entirely beyond their control. Hugh ultimately decided to rejoin the collective anyway, and 7 of 9 simply assimilated with her NEW collective and decided she liked them better.

Neither had any choice in the disconnection, and both ultimately made their final choices based on what they were more accustomed to.

Wouldn't the measure of a man require that you sometimes be a little unpredictable?
No.

Not just because free will is an illusion (which it is), but because by just about any standard, a man who is predictably virtuous is judged to be more reliable, more dependable, and in almost all ways PREFERABLE to a man whose behavior is entirely a function of mood and random chance. Indeed, even a man who is predictably EVIL is generally lauded for his consistency, since at least an evil person can be counted on to BE evil, and that makes dealing with the things he does relatively simple.

But free will IS an illusion, since people cannot help but be who they are, with the experiences they have had and the behaviors they have internalized over time. You cannot simply wake up one day and choose to be someone else; you can, however, choose to ACT like someone else, and over a long enough time the aggregate of those actions results in a change of your personality (this is the principle behind behavior modification).

Therefore the measure of a man is not in his choices or his freedom, but in his habits: in what he has been trained to do, what he is accustomed to doing, what he will normally do under such and such circumstances as a matter of his experiences and the sum of the lessons that make him who and what he is.

One thing JMS postulated, via B5, was that self-sacrifice is the highest form of humanity, because it requires that we override the hardwired self-preservation impulse.
Hardly the highest. One of three, I believe, for "sentient life." It was stated to be a principle, though, not so much a law, especially since not all sentient life forms are really so inclined (particularly during the run of Babylon 5, where the highly evolved Vorlons and Shadows resort to glassing whole planets just to avoid losing an argument).

So I think a big part of being sentient comes from being capable of (and really wanting to) ask big questions like what is right and wrong and "is this all that there is?"
Possibly, but then, the ability to ask the questions doesn't make the questions particularly meaningful.

And we're also getting away from the fact that machine sentience could easily take a totally different form from human sentience. Where humans self-reflect and ask "Is this all that I am?" a machine would be more likely to ask "Is there something between one and zero?"

To quote one of my favorite scifi AIs:

"You know that "existence of God" thing that I had trouble understanding before? I think I am starting to understand it now. Maybe, just maybe, it's a concept that's similar to a zero in mathematics. In other words, it's a symbol that denies the absence of meaning, the meaning that's necessitated by the delineation of one system from another. In analog, that's God. In digital, it's zero. What do you think? Also, our basic construction is digital, right? So for the time being, no matter how much data we accumulate, we'll never have a soul. But analog-based people like you, Batou-san, no matter how many digital components you add through cyberization or prosthetics, your soul will never be damaged. Plus, you can even die 'cause you've got a soul. You're so lucky. Tell me, what's it feel like to have a soul?"
__________________
The Complete Illustrated Guide to Starfleet - Online Now!