Go Back   The Trek BBS > Entertainment & Interests > Science and Technology

Science and Technology "Somewhere, something incredible is waiting to be known." - Carl Sagan.

Old February 27 2013, 03:35 AM   #1
Tiberius
Commodore
 
Moral issues with Robotics

With the ever increasing capabilities of robots and artificial intelligences, I've been wondering. At what point does a robot stop being a device you can just turn off and dismantle and start being an entity that deserves rights?

This issue was explored several times in Trek, such as "The Measure of a Man," but I'd like to look at it from a real world point of view.

How will we be able to tell when a computer becomes a conscious entity, even if it is not self aware? Will they have any legal rights?
Old February 27 2013, 05:24 AM   #2
Crazy Eddie
Rear Admiral
 
 
Location: I'm in your ___, ___ing your ___
Re: Moral issues with Robotics

Tiberius wrote:
With the ever increasing capabilities of robots and artificial intelligences, I've been wondering. At what point does a robot stop being a device you can just turn off and dismantle and start being an entity that deserves rights?
You'd have to ask the robots.

And no, I'm not being sarcastic. The machine gains rights if and when it gains moral agency and the ability to independently make informed decisions about the nature of its own existence and its role in society. With that level of independence comes the implication that some of the robot's decisions may not agree with the priorities of its creators and/or owners, and then you have the question of under what circumstances a decision made by a robot takes precedence over decisions made by humans (or, for that matter, other robots). That, then, becomes the question of "rights": a demarcation line of personal sovereignty, and under what circumstances a robot can make decisions that no higher authority can override.

How will we be able to tell when a computer becomes a conscious entity, even if it is not self aware? Will they have any legal rights?
If it's not self aware, it will have very few (if any) rights that aren't given to it by interested humans. Animals are currently in this situation; certain urban critters have a peculiar set of rights and privileges, apparently just because animal rights activists think they're cute and feel bad when people hurt them. The delegation of rights is otherwise entirely arbitrary; gluing your dog to the floor until he starves to death constitutes animal cruelty, but somehow killing mice with sticky traps doesn't. Likewise, dogs face an immediate death sentence for the crime of biting a human, while rats face a death sentence for the crime of being rats.

A conscious computer rates no better than a squirrel if it isn't self aware. We may feel a little awkward about accidentally running it over with a car (like my dad did to his phone last month) but it's just a computer, not yet a person.
__________________
The Complete Illustrated Guide to Starfleet - Online Now!
Old February 27 2013, 05:32 AM   #3
billcosby
Commodore
 
 
Location: billcosby
Re: Moral issues with Robotics

The 1980s version of the series Astroboy (the only Astroboy series I was exposed to) explored this rather heavy-handed theme in nearly every episode, yet made it very entertaining and approachable to kids as well. All the robots in that world were servants or worse, and subject to hatred and racism from most of the humans. Because the kids would identify with Astro (who became a superhero), they could see his side of the story empathetically and learn about the pain caused by judging others.

Anyway... modern day robots have a long way to come before they are like the population from Astroboy. What a stroke of brilliance to have robots play out a story about racism.
__________________
My 1st Edition TrekCCG virtual expansion: http://billcosbytrekccg.blogspot.com/

Old February 27 2013, 05:38 AM   #4
Tiberius
Commodore
 
Re: Moral issues with Robotics

newtype_alpha wrote:
You'd have to ask the robots.

And no, I'm not being sarcastic.

But could we trust their answer? If the robot says it is alive, how could we tell if it actually means it or is just saying it so we don't disassemble it?

The machine gains rights if and when it gains moral agency and the ability to independently make informed decisions about the nature of its own existence and its role in society.
We face the problem of how to determine this. There's also another problem. What if I create a robot which will clearly reach this point, but I include a chip or something that will shut it down BEFORE it reaches that point? Am I acting immorally?

With that level of independence comes the implication that some of the robot's decisions may not agree with the priorities of its creators and/or owners, and then you have the question of under what circumstances a decision made by a robot takes precedence over decisions made by humans (or, for that matter, other robots). That, then, becomes the question of "rights": a demarcation line of personal sovereignty, and under what circumstances a robot can make decisions that no higher authority can override.
This gets interesting if you replace "robot" with "child" and "creators" with "parents".

If it's not self aware, it will have very few (if any) rights that aren't given to it by interested humans. Animals are currently in this situation; certain urban critters have a peculiar set of rights and privileges, apparently just because animal rights activists think they're cute and feel bad when people hurt them. The delegation of rights is otherwise entirely arbitrary; gluing your dog to the floor until he starves to death constitutes animal cruelty, but somehow killing mice with sticky traps doesn't. Likewise, dogs face an immediate death sentence for the crime of biting a human, while rats face a death sentence for the crime of being rats.
While I agree that animal rights is somewhat arbitrary (as illustrated by your rat trap), I think the issue is that it is wrong to be cruel to an animal because it can feel pain. To relate this to the topic, would it be wrong to crush a robot if the robot would suffer from it? How could such suffering be shown to exist?

A conscious computer rates no better than a squirrel if it isn't self aware. We may feel a little awkward about accidentally running it over with a car (like my dad did to his phone last month) but it's just a computer, not yet a person.
Why should self-awareness count as the defining factor rather than consciousness? If we say that a squirrel is conscious but not self-aware, does that make it okay to intentionally run one over?
Old February 27 2013, 01:30 PM   #5
Collingwood Nick
Vice Admiral
 
 
Re: Moral issues with Robotics

Knowing people and their ways, robots are going to be our domestic slaves for a very long time indeed. We don't just give away power.
__________________
"I will never coach against my boys"
Collingwood Nick
Old February 27 2013, 04:41 PM   #6
Edit_XYZ
Fleet Captain
 
 
Location: At star's end.
Re: Moral issues with Robotics

Tiberius wrote:
But could we trust their answer? If the robot says it is alive, how could we tell if it actually means it or is just saying it so we don't disassemble it?
Put a robot in a room. If you cannot tell the difference between it and a human by testing its mentality, without looking at either, then the robot is self-aware.

A conscious computer rates no better than a squirrel if it isn't self aware. We may feel a little awkward about accidentally running it over with a car (like my dad did to his phone last month) but it's just a computer, not yet a person.
Why should self-awareness count as the defining factor rather than consciousness? If we say that a squirrel is conscious but not self-aware, does that make it okay to intentionally run one over?
If the animal is not self-aware, its "rights" are given by humans, not requested by the animal.
These rights are ultimately about humans, not about animals - the animals are passive parties, with no say in accepting these rights or not.

And then, there are other problems:
In nature, predation (and all the pain it involves) is one of the main causes of death. The animals are part of the food chain - predators of some species, prey of others.

Can we even be certain a non-self-aware entity is sentient/can feel pain?
Some studies claim that animals can; other studies claim that the chemicals released are identical to the ones released during vigorous exercise.
What about a fly? Can it feel pain?

Ultimately, one can only be certain that someone is sentient if he/she tells you so himself/herself - AKA if it is self-aware/relatively intelligent.

And humans are not the only ones above the minimal self-awareness/intelligence threshold needed for this; we are only the ones that are the furthest beyond this threshold.
At a substantial distance behind, you have bottlenose dolphins, chimpanzees, etc., all pressing against a "ceiling" represented by the intelligence needed to survive in their environments. It is still not known what caused our ancestors to leap-frog this obstacle so spectacularly and become far more intelligent than was actually needed for survival.
__________________
"Let truth and falsehood grapple ... Truth is strong" - John Milton
Old February 27 2013, 04:44 PM   #7
Redfern
Commodore
 
 
Location: Georgia, USA
Re: Moral issues with Robotics

billcosby wrote:
The 1980s version of the series Astroboy (the only Astroboy series I was exposed to) explored this rather heavy-handed theme in nearly every episode yet made it very entertaining and approachable to kids as well.
Actually, that was a central theme of Tezuka's original comic (manga) in the 50s, years before it was first adapted to animation in the early 60s.

If you're genuinely curious, Dark Horse Comics reprinted select stories (in English) in pocket-sized omnibus collections several years ago. It might be a tad tricky to get them now, as I assume they are out of print. I know DH released at least twenty-something volumes (as those are the ones I have), but the number may have been far higher.

Sincerely,

Bill
__________________
Tempt the Hand of Fate and it'll give you the "finger"!

Freighter Tails: the Misadventures of Mzzkiti
Old February 27 2013, 04:51 PM   #8
Deckerd
Fleet Arse
 
 
Location: the Frozen Wastes
Re: Moral issues with Robotics

Edit_XYZ wrote:
It is still not known what caused our ancestors to leap-frog this obstacle so spectacularly and become far more intelligent than was actually needed for survival.
Well surely the size of our brains is what was needed for survival? Nature rarely produces redundancy in any form.
__________________
They couldn't hit an elephant at this distance.
Old February 27 2013, 05:18 PM   #9
Edit_XYZ
Fleet Captain
 
 
Location: At star's end.
Re: Moral issues with Robotics

Deckerd wrote:
Edit_XYZ wrote:
It is still not known what caused our ancestors to leap-frog this obstacle so spectacularly and become far more intelligent than was actually needed for survival.
Well surely the size of our brains is what was needed for survival? Nature rarely produces redundancy in any form.
Actually, human intelligence is significantly above what is needed to prosper, even for an "intelligence niche" species (as all other such species demonstrate).
And yes, nature almost always only evolves an attribute until it is "good enough". Hence the mystery of our unnecessarily (from a survival perspective) large brains.
__________________
"Let truth and falsehood grapple ... Truth is strong" - John Milton
Old February 27 2013, 05:31 PM   #10
Deckerd
Fleet Arse
 
 
Location: the Frozen Wastes
Re: Moral issues with Robotics

Whether the size is necessary or not appears to be a matter of your opinion. I imagine it grew because of what our ancestors were doing with it. The human cranium design has made several major sacrifices, compared to our nearest relatives, in order to accommodate that brain, so the logical conclusion is that it was necessary.
__________________
They couldn't hit an elephant at this distance.
Old February 27 2013, 06:04 PM   #11
RAMA
Vice Admiral
 
 
Location: NJ, USA
Re: Moral issues with Robotics

Tiberius wrote:
With the ever increasing capabilities of robots and artificial intelligences, I've been wondering. At what point does a robot stop being a device you can just turn off and dismantle and start being an entity that deserves rights?

This issue was explored several times in Trek, such as "The Measure of a Man," but I'd like to look at it from a real world point of view.

How will we be able to tell when a computer becomes a conscious entity, even if it is not self aware? Will they have any legal rights?
SF has sometimes been timid with this question. In visual fiction we have seen adaptations of two Asimov stories that deal with it: Bicentennial Man is somewhat underrated, but it is of note that the biased humans do not give the AI the right to be human UNTIL it has all the organic or simulated parts of a human. In I, Robot, there is a bias in the lead character that somewhat softens at the end; he is willing to give the robot a chance to figure out what it is. In the web series "Drone", a robot is hunted for knowing too much, and its actions have a human moralism to them, not just programming. In STNG's "The Measure of a Man", "The Offspring", "Evolution" and "The Quality of Life", the issue is dealt with by a series that historically was biased against robots, but that showed growth in that sentience is not only accepted but in several cases championed by another AI. In "The Measure of a Man" in particular, the very question you ask is demonstrated: basically, the court suggests there is enough evidence to let Data find out. In fact, Data passes every part of the Turing test every day...

The Turing test was established to define when AI is indistinguishable from humans. It will likely be used as a benchmark for future philosophical questions on this issue, and I believe it is at this point that courts may decide as STNG's did; the court of public opinion may differ. A good example of the Turing test in the future may be found here:

http://www.kurzweilai.net/the-singul...vailable-today
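The operational pass criterion usually attached to the Turing test - a judge who can do no better than chance at naming the machine - is easy to sketch as a toy simulation. Everything below is illustrative (the respondents are scripted stand-ins, not any real benchmark):

```python
import random

# Toy sketch of Turing's imitation game: a judge questions two hidden
# respondents and must name which one is the machine. The respondents
# below are scripted stand-ins; a real test would use a live human and
# the candidate AI.

def human_respond(question):
    return "I'd have to think about that."  # stand-in human

def machine_respond(question):
    return "I'd have to think about that."  # stand-in machine (indistinguishable)

def run_imitation_game(judge, questions, rng):
    # Hide the respondents behind anonymous slots A and B.
    respondents = [human_respond, machine_respond]
    rng.shuffle(respondents)
    transcript = {"A": [], "B": []}
    for q in questions:
        transcript["A"].append(respondents[0](q))
        transcript["B"].append(respondents[1](q))
    guess = judge(questions, transcript)  # judge names the machine: "A" or "B"
    actual = "A" if respondents[0] is machine_respond else "B"
    return guess == actual                # True = the machine was caught

def naive_judge(questions, transcript):
    # With identical transcripts there is no evidence, so the judge
    # can only guess.
    return "A"

rng = random.Random(42)
trials = 1000
caught = sum(run_imitation_game(naive_judge, ["Do you dream?"], rng)
             for _ in range(trials))
hit_rate = caught / trials  # ~0.5 means the machine "passes" this toy test
print(hit_rate)
```

When the machine is perfectly indistinguishable, the judge catches it only at chance (about 50%); a real administration would use many judges, free-form dialogue, and a statistical threshold rather than one scripted question.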

An excellent book on the topic is Paradigms Lost by John Casti. It explores whether certain great scientific questions can ever be answered, and sentient AI is one of them. Of course its conclusion is that it's possible, but it's worth a read as it argues both pro and con.


http://www.amazon.com/Paradigms-Lost...pd_sim_sbs_b_1

A second edition, to see if the conclusions hold up, is here:

http://www.amazon.com/gp/product/068...?ie=UTF8&psc=1

The main question at this time is: should such things be allowed? Aside from the fact that the technology is advancing in so many different fields that it would likely be too difficult for humans to stop even if we so desired, there are many experts who simply feel a version of Asimov's three laws is the answer... it will keep the AI in line. I feel that "life will find a way"; in this case, super-logical, super-fast AI will bypass such controls, leading us to a crux point... this point may be the Singularity or something similar, where the AI would take over. Even now, something that seems relatively minor, like drones over Afghanistan making some of their own targeting decisions, is inevitable; humans can't keep up. Where it gets interesting is that more and more experts feel it makes more sense to join with the computer AI, or become it, rather than fight it, and this is where the morality and ethics become too much for many people. If we do in fact converge with it, then the problem of what to do with AI is moot: we will be the AI and, if all goes well, imbued with the elements of humanity and its brain that make it something more than simply a machine.
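A "laws as controls" scheme of the kind discussed above amounts to a priority-ordered veto filter over candidate actions, which is exactly why a faster-than-human reasoner might route around it. A minimal sketch, with the action flags and predicates invented purely for illustration (not from any real robotics system):

```python
# Hypothetical "three laws" controller: candidate actions are screened by
# priority-ordered constraints, and any violated law vetoes the action.
# The flags below are invented for this sketch.

def harms_human(action):
    return action.get("harms_human", False)      # First Law

def disobeys_order(action):
    return action.get("disobeys_order", False)   # Second Law

def endangers_self(action):
    return action.get("endangers_self", False)   # Third Law

LAWS = [harms_human, disobeys_order, endangers_self]  # highest priority first

def permitted(action):
    # An action is allowed only if it violates none of the laws.
    return not any(law(action) for law in LAWS)

actions = [
    {"name": "assist"},
    {"name": "strike", "harms_human": True},
    {"name": "refuse", "disobeys_order": True},
]
allowed = [a["name"] for a in actions if permitted(a)]
print(allowed)  # ['assist']
```

Note that the filter is only as good as its predicates: an AI able to redescribe its actions, or whose world model disagrees with the predicate's, slips right through, which is the bypass worry above.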

Robot Ethics:

http://www.economist.com/node/21556234

Morality of machines:

“If we admit the animal should have moral consideration, we need to think seriously about the machine,” Gunkel says. “It is really the next step in terms of looking at the non-human other.” - See more at: http://www.niutoday.info/2012/08/27/....pyX0l3AD.dpuf

RAMA
__________________
It is far better to grasp the universe as it really is than to persist in delusion, however satisfying and reassuring. Carl Sagan
Old February 28 2013, 08:42 AM   #12
Tiberius
Commodore
 
Re: Moral issues with Robotics

Edit_XYZ wrote:
Tiberius wrote:
But could we trust their answer? If the robot says it is alive, how could we tell if it actually means it or is just saying it so we don't disassemble it?
Put a robot in a room. If you cannot tell the difference between it and a human by testing its mentality, without looking at either, then the robot is self-aware.
That would depend on the test though, wouldn't it?

If the animal is not self-aware, its "rights" are given by humans, not requested by the animal.
But animals never request those rights. When was the last time a chimp said, "Please don't keep me in a small cage"?

These rights are ultimately about humans, not about animals - the animals are passive parties, with no say in accepting these rights or not.
I gotta disagree. A lot of animal rights legislation is designed to protect animals. Humans don't really gain from it.

And then, there are other problems:
In nature, predation (and all the pain it involves) is one of the main causes of death. The animals are part of the food chain - predators for some species, prey for other.
So are Humans. Granted, we can avoid it, for the most part, but animals still do occasionally prey on Humans.

Can we even be certain a non-self-aware entity is sentient/can feel pain?
Some studies claim that animals can; other studies claim that the chemicals released are identical to the ones released during vigorous exercise.
What about a fly? Can it feel pain?

Ultimately, one can only be certain that someone is sentient if he/she tells you so himself/herself - AKA if it is self-aware/relatively intelligent.
But that's the point. When it comes to robots, how can you tell the difference between a robot that says it is sapient because it genuinely believes it, and a robot that claims to be sapient because it has determined that doing so will keep it from being destroyed?
Old February 28 2013, 08:47 AM   #13
Tiberius
Commodore
 
Re: Moral issues with Robotics

Thanks for all that, RAMA. I'll have a look at those links. Have you got a link for that Drone webseries?
Old February 28 2013, 10:40 AM   #14
TheOneWhoKnocks
Vice Admiral
 
 
Location: Sephiroth
Re: Moral issues with Robotics

It isn't robotics by itself that we will have moral issues with; it will be what we do with it. Giving it A.I., or using it for human augmentation, is what will bring up debate.
__________________
No, you clearly don't know who you're talking to, so let me clue you in. I am not in danger, Skyler. I am the danger. A guy opens his door and gets shot and you think that of me? No. I am the one who knocks!
Old February 28 2013, 10:59 AM   #15
Asbo Zaprudder
Rear Admiral
 
 
Location: Sand in the Vaseline
Re: Moral issues with Robotics

Easy, just program your robot to serve man -- there can't possibly be any problem then, can there?
__________________
"After a time, you may find that having is not so pleasing a thing, after all, as wanting. It is not logical, but it is often true." -- Spock -- Flip flap!