Moral Issues with Robotics

Discussion in 'Science and Technology' started by Tiberius, Feb 27, 2013.

  1. Tiberius

    Tiberius Commodore Commodore

    Joined:
    Sep 28, 2005
    With the ever-increasing capabilities of robots and artificial intelligences, I've been wondering: at what point does a robot stop being a device you can just turn off and dismantle, and start being an entity that deserves rights?

    This issue was explored several times in Trek, such as "The Measure of a Man," but I'd like to look at it from a real world point of view.

    How will we be able to tell when a computer becomes a conscious entity, even if it is not self-aware? Will it have any legal rights?
     
  2. Crazy Eddie

    Crazy Eddie Vice Admiral Admiral

    Joined:
    Apr 12, 2006
    Location:
    Your Mom
    You'd have to ask the robots.

    And no, I'm not being sarcastic. The machine gains rights if and when it gains moral agency and the ability to independently make informed decisions about the nature of its own existence and its role in society. With that level of independence comes the implication that some of the robot's decisions may not agree with the priorities of its creators and/or owners, and then you have the question of under what circumstances a decision made by a robot takes precedence over decisions made by humans (or, for that matter, other robots). That, then, becomes the question of "rights": a demarcation line of personal sovereignty, and of the circumstances under which a robot can make decisions that no higher authority can override.

    If it's not self-aware, it will have very few (if any) rights that aren't given to it by interested humans. Animals are currently in this situation; certain urban critters have a peculiar set of rights and privileges, apparently just because animal rights activists think they're cute and feel bad when people hurt them. The delegation of rights is otherwise entirely arbitrary: gluing your dog to the floor until he starves to death constitutes animal cruelty, but somehow killing mice with sticky traps doesn't. Likewise, dogs face an immediate death sentence for the crime of biting a human, while rats face a death sentence for the crime of being rats.

    A conscious computer rates no better than a squirrel if it isn't self-aware. We may feel a little awkward about accidentally running it over with a car (like my dad did to his phone last month), but it's just a computer, not yet a person.
     
  3. ALF

    ALF Rear Admiral Rear Admiral

    Joined:
    Mar 12, 2005
    Location:
    Program Melmac1 - Holodeck 3
    The 1980s version of the series Astroboy (the only Astroboy series I was exposed to) explored this rather heavy-handed theme in nearly every episode, yet made it very entertaining and approachable for kids as well. All the robots in that world were servants or worse, and subject to hatred and racism from most of the humans. Because the kids would identify with Astro (who became a superhero), they could see his side of the story empathetically and learn about the pain caused by judging others.

    Anyway... modern-day robots have a long way to go before they are like the population of Astroboy. What a stroke of brilliance to have robots play out a story about racism.
     
  4. Tiberius

    Tiberius Commodore Commodore

    Joined:
    Sep 28, 2005

    But could we trust their answer? If the robot says it is alive, how could we tell if it actually means it or is just saying it so we don't disassemble it?

    We face the problem of how to determine this. There's also another problem. What if I create a robot which will clearly reach this point, but I include a chip or something that will shut it down BEFORE it reaches that point? Am I acting immorally?

    This gets interesting if you replace "robot" with "child" and "creators" with "parents".

    While I agree that animal rights is somewhat arbitrary (as illustrated by your rat trap), I think the issue is that it is wrong to be cruel to an animal because it can feel pain. To relate this to the topic, would it be wrong to crush a robot if the robot would suffer from it? How could such suffering be shown to exist?

    Why should self-awareness count as the defining factor rather than consciousness? If we say that a squirrel is conscious but not self-aware, does that make it okay to intentionally run it over?
     
  5. Captain_Nick

    Captain_Nick Vice Admiral Admiral

    Joined:
    Jan 28, 2002
    Knowing people and their ways, robots are going to be our domestic slaves for a very long time indeed. We don't just give away power.
     
  6. Edit_XYZ

    Edit_XYZ Fleet Captain Fleet Captain

    Joined:
    Sep 30, 2011
    Location:
    At star's end.
    Put a robot in a room. If you cannot tell the difference between it and a human by testing its mentality, without looking at either of them, then the robot is self-aware.
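    (In other words, Turing's imitation game: a blind, text-only interview. Here's a minimal sketch of that protocol in Python; the judge object and the subject callables are hypothetical stand-ins, not any real library:)

        import random

        def imitation_game(judge, human, robot, rounds=10):
            # Blind, text-only interview: the judge exchanges messages with two
            # hidden subjects, "A" and "B", and must guess which is the machine.
            # `judge` is a hypothetical object with ask/observe/identify_machine;
            # `human` and `robot` are hypothetical callables: question -> reply.
            labels = ["A", "B"]
            random.shuffle(labels)                     # hide who got which label
            subjects = dict(zip(labels, [human, robot]))

            for _ in range(rounds):
                for label in ("A", "B"):
                    question = judge.ask(label)        # the judge sees only a label,
                    reply = subjects[label](question)  # never the subject itself
                    judge.observe(label, reply)

            guess = judge.identify_machine()  # the label the judge thinks is the robot
            # "Passing" means the judge cannot reliably single out the machine.
            return subjects[guess] is not robot

    (A single run proves little, of course; you'd want many judges and many rounds before concluding anything.)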

    If the animal is not self-aware, its "rights" are given by humans, not requested by the animal.
    These rights are ultimately about humans, not about animals; the animals are passive elements, with no say in accepting or rejecting these rights.

    And then, there are other problems:
    In nature, predation (and all the pain it involves) is one of the main causes of death. Animals are part of the food chain: predators for some species, prey for others.

    Can we even be certain a non-self-aware entity is sentient/can feel pain?
    Some studies claim that animals can; others claim that the chemicals released are identical to those released during vigorous exercise.
    What about a fly? Can it feel pain?

    Ultimately, one can only be certain that someone is sentient if they tell you so themselves, i.e., if they are self-aware and relatively intelligent.

    And humans are not the only ones above the minimal self-awareness/intelligence threshold needed for this; we are merely the ones furthest beyond it.
    At a substantial distance behind come bottlenose dolphins, chimpanzees, etc., all pressing against a "ceiling" represented by the intelligence needed to survive in their environments. It is still not known what caused our ancestors to leapfrog this obstacle so spectacularly and become far more intelligent than was actually needed for survival.
     
  7. Redfern

    Redfern Rear Admiral Rear Admiral

    Joined:
    Sep 28, 2006
    Location:
    Georgia, USA
    Actually, that was a central theme of Tezuka's original comic (manga) in the 50s, years before it was first adapted to animation in the early 60s.

    If you're genuinely curious, Dark Horse Comics reprinted select stories (in English) in pocket-sized omnibus collections several years ago. It might be a tad tricky to get them now, as I assume they are out of print. I know DH released at least twenty-some volumes (those are the ones I have), but the number may have been far higher.

    Sincerely,

    Bill
     
  8. Deckerd

    Deckerd Fleet Arse Premium Member

    Joined:
    Oct 27, 2005
    Location:
    the Frozen Wastes
    Well surely the size of our brains is what was needed for survival? Nature rarely produces redundancy in any form.
     
  9. Edit_XYZ

    Edit_XYZ Fleet Captain Fleet Captain

    Joined:
    Sep 30, 2011
    Location:
    At star's end.
    Actually, human intelligence is significantly above what is needed to prosper even for an "intelligence niche" species (as all other such species demonstrate).
    And yes, nature almost always evolves an attribute only until it is "good enough". Hence the mystery of our unnecessarily (from a survival perspective) large brains.
     
  10. Deckerd

    Deckerd Fleet Arse Premium Member

    Joined:
    Oct 27, 2005
    Location:
    the Frozen Wastes
    Whether the size is necessary or not appears to be a matter of your opinion. I imagine it grew because of what our ancestors were doing with it. The human cranium design has made several major sacrifices, compared to our nearest relatives, in order to accommodate that brain, so the logical conclusion is that it was necessary.
     
  11. RAMA

    RAMA Admiral Admiral

    Joined:
    Dec 13, 1999
    Location:
    USA
    SF has sometimes been timid with this question. In visual fiction we have seen adaptations of two Asimov stories that deal with it: Bicentennial Man is somewhat underrated, but it is of note that the biased humans do not give the AI the right to be human UNTIL it has all the organic or simulated parts of a human. In I, Robot, there is a bias in the lead character that somewhat softens at the end; he is willing to give the robot a chance to figure out what it is. In the web series "Drone," a robot is hunted for knowing too much, and its actions have a human moralism to them, not just programming. In STNG's "Measure of a Man," "The Offspring," "Evolution," and "The Quality of Life," the issue is dealt with by a series that historically was biased against robots but has shown growth, in that sentience is not only accepted but in several cases championed by another AI. In "Measure of a Man" in particular, the very question you ask is demonstrated: basically, the court suggests there is enough evidence to let Data find out. In fact, Data passes every part of the Turing test every day...

    The Turing test was established to define when an AI is indistinguishable from a human. It will likely be used as a benchmark for future philosophical questions on this issue. I believe it is at this point that courts may decide as STNG did on the issue. The court of public opinion may differ. A good example of the Turing test in the future may be found here:

    http://www.kurzweilai.net/the-singularity-is-near-movie-available-today

    An excellent book on the topic is Paradigms Lost by John Casti. It explores whether or not certain great scientific possibilities may be realized; sentient AI is one of them. Of course the conclusion is that it's possible, but it's worth a read as it argues both pro and con.


    http://www.amazon.com/Paradigms-Lost-John-L-Casti/dp/0380711656/ref=pd_sim_sbs_b_1

    A second edition, to see if the conclusions hold up, is here:

    http://www.amazon.com/gp/product/0688161154/ref=oh_details_o00_s00_i00?ie=UTF8&psc=1

    The main question at this time is: should such things be allowed? Aside from the fact that the technology is advancing in so many different fields that it would likely be too difficult for humans to stop even if we so desired, many experts simply feel that a version of Asimov's three laws is the answer: it will keep the AI in line. I feel that "life will find a way"; in this case, super-logical, super-fast AI will bypass such controls, leading us to a crux point. That point may be the Singularity or something similar, where the AI would take over.

    Even now, something that seems relatively minor, like drones over Afghanistan making some of their own targeting decisions, is inevitable; humans can't keep up. Where it gets interesting is that more and more experts feel it makes more sense to join with the computer AI, or become it, rather than fight it, and this is where the morality and ethics become too much for many people. If we do in fact converge with it, then the problem of what to do with AI is moot: we will be the AI and, if all goes well, imbued with the elements of humanity and its brain that make it something more than simply a machine.
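    (For what it's worth, a "three laws" control layer amounts to a fixed veto filter sitting between the AI's planner and its actuators. A toy sketch in Python, with invented Action fields; nothing here is anyone's real safety system:)

        from dataclasses import dataclass

        @dataclass
        class Action:
            description: str
            harms_human: bool = False     # flags assumed to be set honestly
            disobeys_order: bool = False  # by the robot's own planner
            endangers_self: bool = False

        def three_laws_allow(action: Action) -> bool:
            # Asimov-style veto layer, checked in priority order:
            if action.harms_human:        # First Law: never harm a human
                return False
            if action.disobeys_order:     # Second Law: obey orders (unless Law 1)
                return False
            if action.endangers_self:     # Third Law: self-preservation (unless 1 or 2)
                return False
            return True

    (The catch is visible right in the sketch: the filter is only as good as the flags, and a planner smart enough to re-describe its own actions walks straight past it. That is exactly the "bypass" worry.)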

    Robot Ethics:

    http://www.economist.com/node/21556234

    Morality of machines:

    “If we admit the animal should have moral consideration, we need to think seriously about the machine,” Gunkel says. “It is really the next step in terms of looking at the non-human other.” (Source: http://www.niutoday.info/2012/08/27/morality-for-robots/)

    RAMA
     
  12. Tiberius

    Tiberius Commodore Commodore

    Joined:
    Sep 28, 2005
    That would depend on the test though, wouldn't it?

    But animals never request those rights. When was the last time a chimp said, "Please don't keep me in a small cage"?

    I gotta disagree. A lot of animal rights legislation is designed to protect animals. Humans don't really gain from it.

    So are Humans. Granted, we can avoid it, for the most part, but animals still do occasionally prey on Humans.

    But that's a point. When it comes to robots, how can you tell the difference between a robot that says it is sapient because it genuinely believes it, and a robot that claims to be sapient because it has determined that doing so will keep it from being destroyed?
     
  13. Tiberius

    Tiberius Commodore Commodore

    Joined:
    Sep 28, 2005
    Thanks for all that, RAMA. I'll have a look at those links. Have you got a link for that Drone web series?
     
  14. Sephiroth

    Sephiroth Vice Admiral Admiral

    Joined:
    Jul 15, 2004
    Location:
    Sephiroth
    It isn't robotics by itself that we will have moral issues with; it's what we do with it. Giving it AI or human augmentation is what will bring up debate.
     
  15. Asbo Zaprudder

    Asbo Zaprudder Admiral Admiral

    Joined:
    Jan 14, 2004
    Location:
    Rishi's Sad Madhouse
    Easy, just program your robot to serve man -- there can't possibly be any problem then, can there? ;)
     
  16. Tiberius

    Tiberius Commodore Commodore

    Joined:
    Sep 28, 2005
    Yes. Serve man, with a salad and some balsamic vinegar...
     
  17. Crazy Eddie

    Crazy Eddie Vice Admiral Admiral

    Joined:
    Apr 12, 2006
    Location:
    Your Mom
    "I think, therefore I am." If the robot could be shown to be capable of "wanting" anything at all, those desires should be taken into consideration. If it lies about being alive to protect itself, we'd have to examine why it wants to protect itself.

    And no, you can't just cheat and program a computer to say "Please don't dismantle me." It's more complicated than that.

    Same way we do it with people. Psychologists have all kinds of tests to assess mental functioning and cognitive awareness, whether or not a person understands right and wrong, understands what's happening to them, is aware of themselves or others. For machines, this is theorized as involving some sort of Turing Test.

    Only to the extent that abortion is immoral. That's a whole different can of worms.

    Terrorists can feel pain too; why isn't it wrong to inflict pain on THEM?

    Again, it's the issue of rights, and the extent to which the desires of a living thing take precedence over the desires of others. Certain creatures -- and, historically, certain PEOPLE -- have been placed in a position of such low importance that the majority has no reason to care about their desires, and inflicts massive harm on them whenever it is convenient. In this context, discussing potential robot rights is hardly an academic issue, since we can barely maintain a consistent set of HUMAN rights.

    Because a being that is not aware of itself doesn't have coherent desires related to itself, and therefore has no agency worth considering. Consciousness is ultimately just a sophisticated form of data processing and doesn't mean much in and of itself.

    Squirrels are conscious and are somewhat self aware. For that reason, intentionally running them over is a dick thing to do. But they are squirrels; they're not very smart, and their scope of moral agency is limited to things that are virtually inconsequential in the human world, therefore we lack a strong moral imperative to AVOID running them over if they happen to be running across the road in the paths of our cars.
     
  19. Silvercrest

    Silvercrest Vice Admiral Admiral

    Joined:
    Oct 4, 2003
    That sounds suspiciously Lamarckian.
     
  20. Tiberius

    Tiberius Commodore Commodore

    Joined:
    Sep 28, 2005
    But how could we tell?

    Still, I think my original point remains. We'd need some way to distinguish between a robot that is genuinely self aware and one that only appears to be.

    But how do we know that any other person is self-aware?

    I think it's a little bit different. In this case we're dealing with deliberately creating something with the intention of stopping its development.

    Ah, but they intentionally commit crimes against society. Rats generally do not.


    How do you determine self-awareness?

    [image]

    "Oh, come on, Bob! I don't know about you, but my compassion for someone is not limited to my estimate of their intelligence!"