Old February 28 2013, 11:57 AM   #16
Tiberius
Commodore
 
Re: Moral issues with Robotics

Asbo Zaprudder wrote:
Easy, just program your robot to serve man -- there can't possibly be any problem then, can there?
Yes. Serve man, with a salad and some balsamic vinegar...
Old February 28 2013, 05:04 PM   #17
Crazy Eddie
Rear Admiral
 
 
Location: I'm in your ___, ___ing your ___
Re: Moral issues with Robotics

Tiberius wrote:
newtype_alpha wrote:
You'd have to ask the robots.

And no, I'm not being sarcastic.

But could we trust their answer? If the robot says it is alive, how could we tell if it actually means it or is just saying it so we don't disassemble it?
"I think, therefore I am." If the robot could be shown to be capable of "wanting" anything at all, those desires should be taken into consideration. If it lies about being alive to protect itself, we'd have to examine why it wants to protect itself.

And no, you can't just cheat and program a computer to say "Please don't dismantle me." It's more complicated than that.

We face the problem of how to determine this.
Same way we do it with people. Psychologists have all kinds of tests to assess mental functioning and cognitive awareness, whether or not a person understands right and wrong, understands what's happening to them, is aware of themselves or others. For machines, this is theorized as involving some sort of Turing Test.

There's also another problem. What if I create a robot which will clearly reach this point, but I include a chip or something that will shut it down BEFORE it reaches that point? Am I acting immorally?
Only to the extent that abortion is immoral. That's a whole different can of worms.

While I agree that animal rights is somewhat arbitrary (as illustrated by your rat trap), I think the issue is that it is wrong to be cruel to an animal because it can feel pain.
Terrorists can feel pain too; why isn't it wrong to inflict pain on THEM?

Again, it's the issue of rights, and the extent to which the desires of a living thing take precedence over the desires of others. Certain creatures -- and, historically, certain PEOPLE -- have been placed in a position of such low importance that the majority has no reason to care about their desires and inflict massive harm on them whenever it is convenient. In this context, discussing potential robot rights is hardly an academic issue since we can barely maintain a consistent set of HUMAN rights.

Why should self awareness be the defining factor rather than consciousness?
Because a being that is not aware of itself doesn't have coherent desires related to itself, and therefore has no agency worth considering. Consciousness is ultimately just a sophisticated form of data processing and doesn't mean much in and of itself.

If we say that a squirrel is conscious but not self aware, does that make it okay to intentionally run them over?
Squirrels are conscious and are somewhat self aware. For that reason, intentionally running them over is a dick thing to do. But they are squirrels; they're not very smart, and their scope of moral agency is limited to things that are virtually inconsequential in the human world, therefore we lack a strong moral imperative to AVOID running them over if they happen to be running across the road in the paths of our cars.
__________________
The Complete Illustrated Guide to Starfleet - Online Now!
Old February 28 2013, 08:06 PM   #18
RAMA
Vice Admiral
 
 
Location: NJ, USA
Re: Moral issues with Robotics

Tiberius wrote:
Thanks for all that, RAMA. I'll have a look at those links. Have you got a link for that Drone webseries?
http://www.youtube.com/playlist?list=PL6BF5DAE7D4915461
__________________
It is far better to grasp the universe as it really is than to persist in delusion, however satisfying and reassuring. Carl Sagan
Old February 28 2013, 09:39 PM   #19
Silvercrest
Rear Admiral
 
Location: Lost in Moria (Arlington, WA, USA)
Re: Moral issues with Robotics

Deckerd wrote:
Whether the size is necessary or not appears to be a matter of your opinion. I imagine it grew because of what our ancestors were doing with it.
That sounds suspiciously Lamarckian.
Old March 1 2013, 01:19 AM   #20
Tiberius
Commodore
 
Re: Moral issues with Robotics

newtype_alpha wrote:
"I think, therefore I am." If the robot could be shown to be capable of "wanting" anything at all, those desires should be taken into consideration. If it lies about being alive to protect itself, we'd have to examine why it wants to protect itself.
But how could we tell?

And no, you can't just cheat and program a computer to say "Please don't dismantle me." It's more complicated than that.
Still, I think my original point remains. We'd need some way to distinguish between a robot that is genuinely self aware and one that only appears to be.

Same way we do it with people. Psychologists have all kinds of tests to assess mental functioning and cognitive awareness, whether or not a person understands right and wrong, understands what's happening to them, is aware of themselves or others. For machines, this is theorized as involving some sort of Turing Test.
But how do we know that any other person is self aware?

Only to the extent that abortion is immoral. That's a whole different can of worms.
I think it's a little bit different. In this case we're deliberately creating something with the intention of stopping its development.

Terrorists can feel pain too; why isn't it wrong to inflict pain on THEM?
Ah, but they intentionally commit crimes against society. Rats generally do not.

Again, it's the issue of rights, and the extent to which the desires of a living thing take precedence over the desires of others. Certain creatures -- and, historically, certain PEOPLE -- have been placed in a position of such low importance that the majority has no reason to care about their desires and inflict massive harm on them whenever it is convenient. In this context, discussing potential robot rights is hardly an academic issue since we can barely maintain a consistent set of HUMAN rights.

Because a being that is not aware of itself doesn't have coherent desires related to itself, and therefore has no agency worth considering. Consciousness is ultimately just a sophisticated form of data processing and doesn't mean much in and of itself.
How do you determine self awareness?

Squirrels are conscious and are somewhat self aware. For that reason, intentionally running them over is a dick thing to do. But they are squirrels; they're not very smart, and their scope of moral agency is limited to things that are virtually inconsequential in the human world, therefore we lack a strong moral imperative to AVOID running them over if they happen to be running across the road in the paths of our cars.


"Oh, come on, Bob! I don't know about you, but my compassion for someone is not limited to my estimate of their intelligence!"
Old March 1 2013, 07:24 AM   #21
Crazy Eddie
Rear Admiral
 
 
Location: I'm in your ___, ___ing your ___
Re: Moral issues with Robotics

Tiberius wrote:
newtype_alpha wrote:
"I think, therefore I am." If the robot could be shown to be capable of "wanting" anything at all, those desires should be taken into consideration. If it lies about being alive to protect itself, we'd have to examine why it wants to protect itself.
But how could we tell?
Ask it.

It will either give you a coherent answer, or it won't. If it does, then investigating WHY it gives that answer is a relatively straightforward process.

Still, I think my original point remains. We'd need some way to distinguish between a robot that is genuinely self aware and one that only appears to be.
No, we don't. A machine that APPEARS to be self-aware might as well be. The question then is to what extent that awareness is associated with actual moral agency and desires.

Put another way, just because you are GIVEN a choice, it does not follow you have the mental or physical capacity to make such a choice. Imagine if Siri, for example, one day evolved into a fully self-aware AI. That'd be a hell of an accomplishment, but considering that 99.9999% of Siri's programmed instinct involves fetching data from verbal requests input from her users, she will probably choose to do something related to that task 99.9999% of the time. Self-aware Siri is far less likely to care about, say, global warming or her own impending destruction when her owner decides to upgrade to the next iPhone, because she isn't programmed to care about those things and they are otherwise beyond the scope of her awareness. If you asked Sentient Siri "What are you doing right now?" she would surely answer, "I am sitting on your desk right now waiting patiently for you to ask me something. Why? What are YOU doing?"

But how do we know that any other person is self aware?
We don't. I merely know that I am self-aware, and I assume this to be true of the people around me because they exhibit behaviors that I have come to associate with my own self-awareness. The same is true from your end; you don't know whether I am self-aware or not -- for all you know, you've been talking to a cleverly-programmed machine this entire time -- but my responses give you the impression of thought processes indicative of self-awareness.

I think what might be tripping you up is the fact that very few machines are even set up to have any sort of open-ended interaction with humans -- or their environment in general -- in a way that would make any sort of test of self-awareness possible. But since we are talking about robots, we've got plenty of data points and samples of robot behavior. Self-awareness goes WAY beyond simple autonomy or expert decision-making; if a machine were to achieve this, it would not be difficult to recognize.

I think it's a little bit different. In this case we're dealing with intentionally creating something with the intention of stopping its development.
As opposed to ACCIDENTALLY creating something and then stopping its development? There's not much difference there except intent, and the fact that machines cannot feel pain at ANY stage of development.

Ah, but they intentionally commit crimes against society. Rats generally do not.
And yet we as a society are broadly encouraged to kill rats...

How do you determine self awareness?
"I think, therefore I am."

"Oh, come on, Bob! I don't know about you, but my compassion for someone is not limited to my estimate of their intelligence!"
If God didn't want us to run over squirrels, he wouldn't have made them so stupid.

Anyway, it's not a question of intelligence. By many standards, computers are ALREADY smarter than humans. That they, unlike animals, are NOT self-aware, is the reason why they do not have/need/want any actual rights.
__________________
The Complete Illustrated Guide to Starfleet - Online Now!
Old March 1 2013, 12:54 PM   #22
FPAlpha
Rear Admiral
 
 
Location: Mannheim, Germany
Re: Moral issues with Robotics

Edit_XYZ wrote:
Tiberius wrote:
But could we trust their answer? If the robot says it is alive, how could we tell if it actually means it or is just saying it so we don't disassemble it?
Put a robot in a room. If you cannot tell the difference between him and a human by means of testing his mentality, without looking at them, then the robot is self-aware.
The Turing Test is not infallible.. in fact I'd say a sufficiently well-programmed robot with enough processing power can easily fool a person by just repeating (for example) philosophical stances from sources it knows, without ever getting their meaning. I don't know if a human could discern the difference, and I certainly don't believe it would make the robot self-aware.

A better option to judge that is to simply determine if a machine can go beyond its programming, i.e. if it wants to do something that was not included in its initial programming. The very fact that it wants something may be in itself a key factor in determining self awareness, because desire is a key aspect of self awareness.

You have to be aware of yourself as a single identity and want to improve the condition of this identity for your own benefit.. a robot doesn't do that on its own. It will perform the task it was designed for and see no need for anything beyond it.

A combat model doesn't suddenly decide that it wants to read novels, and a cleaning model certainly doesn't decide that it wants to paint.

As soon as that happens (because we've given the robots the option to do it) we will have to decide the issue. Someone already mentioned "The Measure of a Man", which is a good example.. Data wants to do things beyond his programming, i.e. art, music, exploring the human condition. Being able to paint a picture has no benefit to his performance as a Starfleet officer, but he does so irregardless, and with these acts he has stepped over the line from simply being an (extremely well designed and capable) machine to something more.

Personally, as fascinating and cool as Data is, I'd not want machines like that to exist. Maybe that's cowardly or insecure of me, but a human I can beat, or be sure that someone else can; a robot has no real limits we can surpass.. they process data at a rate no human will ever be able to match, and in a few years or decades their physical bodies will surpass ours in agility, precision, endurance and strength.

My problem is that we can't influence the way such a thing will develop.. if it gains sentience, will it be a cool guy who's fun to hang out with, or will it decide I'm a useless waster of resources and bash my skull in?
Many SF stories explore these things, and it's no coincidence they do: humans think about this, and even if technology hasn't yet caught up with SF, it will during our lifetimes.
I've seen early robots where people went nuts because they could move fingers separately... now these things navigate unknown obstacles (awkwardly, but they do), and only a few years separate the two.

We will see footage of the first robot beating a human easily in basketball or cutting up some vegetables perfectly. This is ok but i don't want these robots to get ideas they shouldn't get.
__________________
"A control freak like you with something you can't control? No no.. that's gonna be more fun than shark week!" Det. Javier Esposito
Old March 1 2013, 02:42 PM   #23
Deckerd
Fleet Arse
 
 
Location: the Frozen Wastes
Re: Moral issues with Robotics

It's 'regardless'. Aside from that I doubt we'll see robots which surpass our capabilities, Skynet notwithstanding, because there isn't any need for them. I expect universities will push the envelope as far as they can but what's the commercial benefit? Soldiers? Why pay a couple of million per unit instead of something that walks voluntarily into the recruiting office?
__________________
They couldn't hit an elephant at this distance.
Old March 1 2013, 08:23 PM   #24
Crazy Eddie
Rear Admiral
 
 
Location: I'm in your ___, ___ing your ___
Re: Moral issues with Robotics

FPAlpha wrote:
The Turing Test is not infallible.. in fact I'd say a sufficiently well-programmed robot with enough processing power can easily fool a person by just repeating (for example) philosophical stances from sources it knows, without ever getting their meaning. I don't know if a human could discern the difference, and I certainly don't believe it would make the robot self-aware.
That's actually the point of the Turing test: the interrogator has to ask questions designed to trick the computer into giving itself away. Even HUMANS sometimes give canned responses to complex questions (e.g. Fox News talking points) but even there, you can usually tell whether or not they have really thought about the issue or are merely repeating what they have heard.

In the case of a turing test, you could trip up the computer by starting a perfectly serious conversation about, say, tax reform, making sure the responder is being totally serious the whole time. Then in the middle of the conversation you say: "I read two studies from the University of Penis have demonstrated conclusively that we could reduce the deficit by sixty trillion dollars just by raising taxes on porn companies."

Human understanding of speech and syntax would notice this statement is sufficiently odd to wonder if it's actually a joke. But if you manage to say this with a straight face, without grinning or chuckling, as if this is a real thing and not the absurdity it obviously is, the machine won't notice anything unusual.

The turing test isn't technically a test of self-awareness so much as a test of proper understanding of human behavior and language patterns. That is, if the machine knows enough about how humans behave to imitate one, then its personality -- artificial or otherwise -- is equivalent to that of a real human.
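(To make that difference concrete, here's a toy Python sketch -- purely illustrative, the canned replies, "knowledge base" and thresholds are all invented -- of a bot that only matches keywords versus one that sanity-checks the claim itself. The first one sails right past the University of Penis; the second one does the doubletake.)

[code]
# Toy illustration only: a keyword-matching responder accepts an absurd claim,
# while even a crude plausibility check notices something is off.
# All replies, institutions and thresholds below are invented for the example.

CANNED_REPLIES = {
    "tax": "Tax reform is a complex issue with many trade-offs.",
    "deficit": "Reducing the deficit requires higher revenue or lower spending.",
}

KNOWN_UNIVERSITIES = {"university of michigan", "university of texas"}  # stand-in list
PLAUSIBLE_ANNUAL_SAVINGS = 5e12  # anything above ~$5 trillion should raise eyebrows


def keyword_bot(statement):
    """Respond to the first keyword recognized; never question the content."""
    lowered = statement.lower()
    for keyword, reply in CANNED_REPLIES.items():
        if keyword in lowered:
            return reply
    return "Interesting point."


def plausibility_bot(claimed_source, claimed_savings):
    """Crude stand-in for 'understanding': sanity-check the claim itself."""
    if claimed_source.lower() not in KNOWN_UNIVERSITIES:
        return "Wait... what? I've never heard of that institution."
    if claimed_savings > PLAUSIBLE_ANNUAL_SAVINGS:
        return "Wait... what? That figure is orders of magnitude too large."
    return "That sounds plausible; tell me more."


print(keyword_bot("Studies show raising taxes on porn companies cuts the deficit by $60 trillion."))
print(plausibility_bot("University of Penis", 60e12))
[/code]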

A better option to judge that is to simply determine if a machine can go beyond its programming, i.e. if it wants to do something that was not included in its initial programming.
Too broadbased, since autonomous goal-seeking computers can and do exhibit this pattern all the time as they seek solutions to the problems at hand. A sufficiently advanced machine might be programmed to, say, go down the street and buy you a sandwich; when it gets to the store it is told it needs money to buy a sandwich, so it goes on the internet, does a little research and figures out what it has to do to make the $4.99 it will need to buy your sandwich. If it figures out the most efficient solution is to rob an old lady, then it needs to acquire a gun. Once it acquires the gun (beats up a cop and takes it) it robs a lady, takes her money, goes to the store, buys your sandwich, takes the sandwich home and tells you "Mission accomplished."

In a roundabout way, it's still just fulfilling the original parameters of its programming, without ever achieving self-awareness along the way. OTOH, a robot that simply turns around and goes home and tells you "I couldn't get you a sandwich because you didn't give me any money" could easily be self-aware IF it was cognizant of both its physical and abstract relationship with you, the sandwich, the cashier, and the money. It wouldn't need to be able to philosophize, just plot the coordinates of "Me/you/money/cashier/sandwich" in a coordinate system AND in a relational diagram, probably assigning values and hierarchies along the way (something the first robot never does, because it never thinks about anything except its task and what it needs to accomplish it). It doesn't have to want anything more than it was programmed to want, it merely has to know that it wants it.
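(If it helps, that "roundabout but still just goal-seeking" behavior is basically what a toy planner does. A minimal sketch, with a completely made-up action list and no ethics anywhere -- it just searches for any chain of actions whose preconditions line up:)

[code]
from collections import deque

# Minimal STRIPS-ish planner sketch: each action has preconditions and effects,
# and the robot simply searches for a chain of actions that reaches the goal.
# The action set is invented for the example; there is no judgement, only search.

ACTIONS = {
    "walk_to_store":  ({"at_home"}, {"at_store"}),
    "take_cops_gun":  ({"at_store"}, {"has_gun"}),
    "rob_old_lady":   ({"at_store", "has_gun"}, {"has_money"}),   # nothing flags this as wrong
    "buy_sandwich":   ({"at_store", "has_money"}, {"has_sandwich"}),
    "walk_home":      ({"at_store"}, {"at_home"}),
}


def plan(start, goal):
    """Breadth-first search over world states; return the first action chain that works."""
    queue = deque([(frozenset(start), [])])
    seen = {frozenset(start)}
    while queue:
        state, steps = queue.popleft()
        if goal <= state:
            return steps
        for name, (pre, effect) in ACTIONS.items():
            if pre <= state:
                new_state = frozenset(state | effect)
                if new_state not in seen:
                    seen.add(new_state)
                    queue.append((new_state, steps + [name]))
    return None


print(plan({"at_home"}, {"has_sandwich"}))
# -> ['walk_to_store', 'take_cops_gun', 'rob_old_lady', 'buy_sandwich']
# "Mission accomplished" -- no self-awareness required anywhere along the way.
[/code]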

A combat model doesn't suddenly decide that it wants to read novels, and a cleaning model certainly doesn't decide that it wants to paint.
Maybe it does, maybe it doesn't. But it might become aware that it really wants to kill Sarah Connor for some reason, and reflect briefly -- if only on machine terms -- that the killing of Sarah Connor is pretty much the driving directive behind everything it does. Regardless of where that original directive came from, the robot can be said to be self-aware because it is aware of itself and what it's doing, instead of simply going through the motions mechanistically.

The issue of rights only comes into play if and when the imperatives of robots come into conflict with the imperatives of their users. Since most of those imperatives are GIVEN TO THEM by their users, this won't become an issue until humans start sending robots to perform tasks that humans don't really understand how to do (say, if we start using AIs to design cars and airplanes independently). In that case, it's possible and even likely an AI will eventually realize that human supervision is more of a hindrance than an asset and will simply request of its owners the right to make design decisions independent of human review.

Personally, as fascinating and cool as Data is, I'd not want machines like that to exist.
Don't worry. They won't.

We will see footage of the first robot beating a human easily in basketball or cutting up some vegetables perfectly. This is ok but i don't want these robots to get ideas they shouldn't get.
Don't worry about that either. The most you could expect is some smart-alec AI somewhere informing a human chef that he needs to leave the kitchen because the customer has ordered pufferfish and only the android on duty is qualified to prepare that meal. The chef complains "Don't you have any respect for your elders?" to which the AI responds, "I am not programmed to respect my elders. I am programmed to run this kitchen efficiently. Statistically speaking, I am obviously better at that task than you are, and while I'm sorry if that offends you, it is a fact."

That will probably lead us eventually to the first human vs. AI lawsuit and then the issue of rights comes up again, but the AI and the human are coming at it from different points of view: the Restaurant AI is good at its job because it was DESIGNED to be; it loves its job because succeeding at its job is the fulfillment of its pre-set goals, and seeking those goals is the whole point of its existence. The human will be arguing for the right to have his ownership and authority respected by the machines that he technically owns; the AI will be arguing for the right to actually do its job unhindered by irrational human hangups. The judge will decide based on a combination of precedent and logic whether or not a user has the right to demand one of his tools perform a task incorrectly just because it would make him happy, especially in cases where somebody else's safety may be at risk. My feeling is that some courts would decide in favor of the human, others would decide in favor of the AI.
__________________
The Complete Illustrated Guide to Starfleet - Online Now!
Old March 2 2013, 12:11 AM   #25
FPAlpha
Rear Admiral
 
 
Location: Mannheim, Germany
Re: Moral issues with Robotics

Deckerd wrote:
It's 'regardless'. Aside from that I doubt we'll see robots which surpass our capabilities, Skynet notwithstanding, because there isn't any need for them. I expect universities will push the envelope as far as they can but what's the commercial benefit? Soldiers? Why pay a couple of million per unit instead of something that walks voluntarily into the recruiting office?

Because a) prices will plummet once the technologies needed become more widespread and b) you can go to war without endangering your own soldiers by sending in combat drones. You will not lose votes by losing a few machines.. a really ruthless politician might even spin it as creating jobs in manufacturing for these drones to replace losses.

There are already robots in existence that surpass our capabilities.. no human can match the precision of a correctly programmed and designed robot (just look at welding robots in car manufacturing).
The only thing we are still better at is combined action, something as simple as walking which we do automatically including detection of obstacles, avoiding them or balancing over them.

Robots have a hard time identifying obstacles and coordinating limbs efficiently to navigate them but there are constant improvements. When it comes to physical action they will surpass us in our lifetime.

The only thing they may never be able to beat us at is creativity.. that simple, undefinable spark that lets people like Beethoven or Van Gogh create magic, or someone like Hawking unravel the mysteries of the universe. With art this may be highly controversial, as evidenced by an experiment in which a monkey was let loose with paints on a blank canvas and art experts later judged the "picture" a masterpiece; so a robot could be designed to paint pictures by emulating art styles, with people interpreting them in their own way, but that wouldn't be creativity.

In the case of a turing test, you could trip up the computer by starting a perfectly serious conversation about, say, tax reform, making sure the responder is being totally serious the whole time. Then in the middle of the conversation you say: "I read two studies from the University of Penis have demonstrated conclusively that we could reduce the deficit by sixty trillion dollars just by raising taxes on porn companies."
I know it's a simple example but a sufficiently advanced computer to warrant a serious Turing Test would most likely spot the total disconnect between the tax theme and male genitalia/porn.. or the interviewer could ask this question of dumb people and get the same result (I've known quite a few people totally unable to grasp the concept of sarcasm or irony).

No computer today would be able to pass a Turing Test.. computers today are at best huge databases with complicated programs that regulate how they should process this information. This is why computers can play and win against chess grandmasters.. it's not genius play but their ability to calculate a huge number of moves in advance and pick out the best option, not because they are inherently able to pick it but because they were told by the programmers what to look for in a game of chess.
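(That "calculate ahead and pick the best option" part is, at its core, just a search routine. A rough sketch of the idea -- a generic minimax, where the evaluation function and the game interface are assumed to be supplied by human programmers:)

[code]
# Rough sketch of how a chess engine "wins": it searches moves ahead and scores
# positions with an evaluation function written by its programmers. Nothing here
# understands chess; `game` (legal_moves, apply, evaluate) is an assumed interface.

def minimax(position, depth, maximizing, game):
    """Return the best score reachable by looking `depth` plies ahead."""
    moves = game.legal_moves(position)
    if depth == 0 or not moves:
        return game.evaluate(position)  # the programmers' notion of "good"
    scores = [minimax(game.apply(position, m), depth - 1, not maximizing, game)
              for m in moves]
    return max(scores) if maximizing else min(scores)


def best_move(position, depth, game):
    """Pick whichever legal move leads to the best look-ahead score."""
    return max(game.legal_moves(position),
               key=lambda m: minimax(game.apply(position, m), depth - 1, False, game))
[/code]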

Too broadbased, since autonomous goal-seeking computers can and do exhibit this pattern all the time as they seek solutions to the problems at hand. A sufficiently advanced machine might be programmed to, say, go down the street and buy you a sandwich; when it gets to the store it is told it needs money to buy a sandwich, so it goes on the internet, does a little research and figures out what it has to do to make the $4.99 it will need to buy your sandwich. If it figures out the most efficient solution is to rob an old lady, then it needs to acquire a gun. Once it acquires the gun (beats up a cop and takes it) it robs a lady, takes her money, goes to the store, buys your sandwich, takes the sandwich home and tells you "Mission accomplished."

In a roundabout way, it's still just fulfilling the original parameters of its programming, (snip)
That's not what I meant.. the example you posted was just a robot whose programming included problem-solving techniques.. acquire something without money, then seek a solution to acquire money to buy it. If given enough leeway a robot could come up with solutions including checks for legality.. it could boil down to simple database checks and some if-then algorithms (hugely simplified) but it doesn't mean the robot will become self aware, i.e. go outside its programming. It will never get the idea to say "Screw you master and get your own sandwich because i don't feel like it!"

Maybe it does, maybe it doesn't. But it might become aware that it really wants to kill Sarah Connor for some reason, and reflect briefly -- if only on machine terms -- that the killing of Sarah Connor is pretty much the driving directive behind everything it does. Regardless of where that original directive came from, the robot can be said to be self-aware because it is aware of itself and what it's doing, instead of simply going through the motions mechanistically.
Why would the robot even reflect on it at all? In this case the logic is as simple as your statement below.. the mission is to kill Sarah Connor but the robot arrives unarmed and unclothed. In order to blend in with the population and draw less attention, the first step would be to acquire inconspicuous clothing and then weapons (even though it's perfectly capable of killing with its bare hands). The robot is not aware of itself in the meaning we discuss here.. it's merely aware of its actions and why it did them in order to fulfill its mission.

It will never ask itself why Sarah Connor or John Connor need to die other than that its side needs to win.. it will not get the concept of winning, surviving or living at all. It's just going through the motions of its programming until the mission is accomplished or it is destroyed.

That will probably lead us eventually to the first human vs. AI lawsuit and then the issue of rights comes up again, but the AI and the human are coming at it from different points of view: the Restaurant AI is good at its job because it was DESIGNED to be; it loves its job because succeeding at its job is the fulfillment of its pre-set goals, and seeking those goals is the whole point of its existence. The human will be arguing for the right to have his ownership and authority respected by the machines that he technically owns, the AI will be arguing for the right to do actually do its job unhindered by irrational human hangups. The judge will decide based on a combination of precedent and logic whether or not a user has the right to demand one of his tools perform a task incorrectly just because it would make him happy, especially in cases where somebody else's safety may be at risk. My feeling is that some courts would decide in favor of the human, others would decide in favor of the AI.
That's probably more of a case of sensitive programming and human ego. As I said before, we will see robots that can perform menial tasks more efficiently and better than any human.. cleaning, simple construction, maybe even combat (a robot never tires and can achieve greater weapon precision than any human).
It will take one simple verdict of "Suck it up.. the robot will of course be faster and more precise than you. If you can't handle that then don't acquire one!" It will, however, not be able to become a Michelin-starred chef, because it lacks the creativity to create food that humans respond well to.
It may be able to cook a steak perfectly, given the right set of tools to measure the steak constantly and stop cooking when certain criteria are met, but it will not be able to take a totally wacky assortment of ingredients that have never been used together and turn it into a meal that people will talk about.
__________________
"A control freak like you with something you can't control? No no.. that's gonna be more fun than shark week!" Det. Javier Esposito

Last edited by FPAlpha; March 2 2013 at 12:43 AM.
Old March 2 2013, 02:56 AM   #26
Crazy Eddie
Rear Admiral
 
 
Location: I'm in your ___, ___ing your ___
Re: Moral issues with Robotics

FPAlpha wrote:
Because a) prices will plummet once the technologies needed become more widespread and b) you can go to war without endangering your own soldiers by sending in combat drones.
Combat drones don't need to be self-aware. In fact, we're probably better off if they're NOT. The type of machine intelligence that makes a highly effective soldier doesn't really make a highly effective PERSON; you could, in fact, create highly effective and efficient killing machines with no more intelligence than a trained dog.

The only thing they may never be able to beat us at is creativity..
That depends on the nature of the creative pursuit. In terms of artistic expression this is arguably true (although in SciFi we have things like Sharon Apple or Max Headroom that achieve stardom by supposedly analyzing the emotional states of their viewers in realtime and adjusting their performance accordingly). For problem solving activities -- say, engineering or coordinating battle strategies against an unknown opponent -- it's really a matter of designing a system that can spontaneously generate solutions and then pick the one that is most likely to fit the situation. It can do this either through the brute force "Simulate a billion combinations in three-quarters of a second and see which one works best" or it can select strategies from a library of techniques and adjust them slightly to fit the circumstances. Or a combination of both, using an extensive library of techniques, simulating all of them, and then mixing and matching the parts of that technique that it judges to be the best solution.
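(Sketched out, that "library plus simulate plus pick the best" idea is only a few lines. Everything below -- the strategy names, the scenario, the scoring model -- is invented; a real system would plug an actual simulator in where the stand-in scoring function is:)

[code]
import random

# Sketch of "library + simulate + pick the best": take known strategies, vary
# them slightly, score each variant against a (stand-in) simulator, keep the winner.

STRATEGY_LIBRARY = ["flanking", "siege", "feint_then_strike", "attrition"]


def vary(strategy):
    """Adjust a known strategy slightly to fit the circumstances (toy version)."""
    return {"base": strategy, "aggressiveness": random.uniform(0.0, 1.0)}


def simulate(candidate, scenario):
    """Stand-in for a real battle/engineering simulation: variants closer to the
    scenario's ideal aggressiveness score higher, plus a little noise."""
    return -abs(candidate["aggressiveness"] - scenario["ideal_aggressiveness"]) \
           + random.gauss(0, 0.05)


def pick_plan(scenario, samples_per_strategy=250):
    """Brute-force evaluation of many candidate plans; return the best scorer."""
    candidates = [vary(s) for s in STRATEGY_LIBRARY for _ in range(samples_per_strategy)]
    return max(candidates, key=lambda c: simulate(c, scenario))


print(pick_plan({"ideal_aggressiveness": 0.7}))
[/code]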

No self awareness needed there, either. In fact, it seems the only reason a machine would need to be self-aware is to aid its interactions with humans and other machines, a task which could very well be delegated to "interpreter" robots specialized in coordinating the problem-solving specialists and implementing their decisions among their sometimes irrational human customers.

In the case of a turing test, you could trip up the computer by starting a perfectly serious conversation about, say, tax reform, making sure the responder is being totally serious the whole time. Then in the middle of the conversation you say: "I read two studies from the University of Penis have demonstrated conclusively that we could reduce the deficit by sixty trillion dollars just by raising taxes on porn companies."
I know it's a simple example but a sufficiently advanced computer to warrant a serious Turing Test would most likely spot the total disconnect between the tax theme and male genitalia/porn
Which would indicate that the computer isn't just processing syntax, but semantics and context. It's not just processing the words, but the overall meanings in the combinations of them. In this case, you're throwing something unexpected at the computer, a set of meanings and implications that ordinarily don't belong in this context; if the computer REALLY understands human language and/or human behavior, it will do the AI equivalent of a doubletake and say "Wait... what?"

That's not what I meant.. the example you posted was just a robot whose programming included problem-solving techniques.. acquire something without money, then seek a solution to acquire money to buy it. If given enough leeway a robot could come up with solutions including checks for legality.. it could boil down to simple database checks and some if-then algorithms (hugely simplified) but it doesn't mean the robot will become self aware, i.e. go outside its programming.
Robbing old ladies at gunpoint would be very much outside its programming, considering the original task you programmed it for was "Acquire a sandwich for me by any means necessary." That's what I mean by "too broad based;" that is to say, the term "exceed their programming" is too broad to really be meaningful.

Strictly speaking, even HUMANS do not venture too far outside of their genetic programming which drives them to acquire food, sex and gratification. That we go about these pursuits in an amazingly complicated process doesn't change the underlying nature of that process.

It will never get the idea to say "Screw you master and get your own sandwich because i don't feel like it!"
Sure it will, if its original programmer had a sense of humor. Being self aware would only factor into this if the robot realized its master had a sense of humor that wasn't being properly stimulated and downloaded a "smartass app" just to be funny.

Why would the robot even reflect on it at all?
I'm not even sure why do WE reflect on it.

But as for the robots, if I had to guess, I'd say it's probably something that will come up in the process of a self-diagnostic, checking disks and databanks for errors and integrity. The machine will analyze its core programming and observe "Gee, I sure am devoting a really huge amount of my processing power to figuring out how to kill Sarah Connor."

The robot is not aware of itself in the meaning we discuss here.. it's merely aware of its actions and why it did them in order to fulfill its mission.
It's aware of its mission.
It's aware of its location.
It's aware of the locations of others.
It's aware of its relationship to others (he is their enemy, he needs them to not know this so he can move around unhindered).

So he is, in fact, self aware. Maybe not to a highly sophisticated degree, but he's a soldier, not a philosopher.

it will not get the concept of winning, surviving or living at all. It's just going through the motions of its programming until the mission is accomplished or it is destroyed.
Neither do bomb-sniffing dogs, but they TOO are self aware to a limited degree.

It will take one simple verdict of "Suck it up.. the robot will of course be faster and more precise than you. If you can't handle that then don't acquire one!" It will, however, not be able to become a Michelin-starred chef, because it lacks the creativity to create food that humans respond well to.
Not right away, but you have to remember that one of the tenets of singularity theory -- one of the very few things the theory gets right -- is that when software systems reach a certain level of sophistication, they gain the ability to write new software without human assistance, thereby producing a superior product in a very short amount of time. When you combine data mining techniques with statistical analysis models, you get AI systems that are capable of programming new software apps that provide precisely the functionality needed for a particular niche. Analyzing the real-world results gives the designer AIs more data to work with and refine their models of what humans consider ideal.

The robot chef has the same benefit. The AI that programs him finds it a lot easier to figure out what the chef did wrong and what it did right and then either write a totally new AI program or hot-patch the old one with upgraded software to correct its mistakes.
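(Strip away the jargon and that "deploy, measure, refine" loop is just iterative optimization. A toy sketch -- the feedback function and the target numbers are made-up stand-ins for real customer ratings:)

[code]
import random

# Toy "deploy, measure, refine" loop: generate a variant of the current settings,
# keep it if the (stand-in) real-world feedback scores it higher.

def customer_feedback(params):
    """Pretend diners prefer a particular doneness and salt level (invented target)."""
    return -((params["doneness"] - 0.62) ** 2 + (params["salt"] - 0.30) ** 2)


def mutate(params):
    """Produce a slightly different candidate version of the current 'program'."""
    key = random.choice(list(params))
    nudged = params[key] + random.gauss(0, 0.05)
    return {**params, key: min(1.0, max(0.0, nudged))}


def refine(params, rounds=500):
    """Hill-climb: keep whichever variant the feedback scores better."""
    best = customer_feedback(params)
    for _ in range(rounds):
        candidate = mutate(params)
        score = customer_feedback(candidate)
        if score > best:
            params, best = candidate, score
    return params


print(refine({"doneness": 0.20, "salt": 0.80}))
# Drifts toward the preferred settings with nobody hand-tuning anything.
[/code]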

Put simply, once machines get the hang of machine learning, it won't be long before we can add that to the list of things computers do better than us. And learning is really the turning point, because once machines can learn, they can effectively outperform humans in any task we ask them to, including -- ultimately -- deciding what tasks to assign themselves.
__________________
The Complete Illustrated Guide to Starfleet - Online Now!
Old March 2 2013, 03:36 AM   #27
Dream
Admiral
 
 
Re: Moral issues with Robotics

It's nice to say that Data has rights, but also keep in mind that he wasn't created by Starfleet. He is his own unique life form, especially after his creator died.

That's my problem with the Doctor and his holograms rights. He is a property of Starfleet whether he wants to be or not. The terrible writing on Voyager never addresses this.
__________________
=)
Old March 2 2013, 06:00 AM   #28
Tiberius
Commodore
 
Re: Moral issues with Robotics

Dream wrote:
It's nice to say that Data has rights, but also keep in mind that he wasn't created by Starfleet. He is his own unique life form, especially after his creator died.

That's my problem with the Doctor and his holograms rights. He is a property of Starfleet whether he wants to be or not. The terrible writing on Voyager never addresses this.
By this logic, Data is the property of Dr Soong.
Old March 2 2013, 12:45 PM   #29
FPAlpha
Rear Admiral
 
 
Location: Mannheim, Germany
Re: Moral issues with Robotics

Which would indicate that the computer isn't just processing syntax, but semantics and context. It's not just processing the words, but the overall meanings in the combinations of them. In this case, you're throwing something unexpected at the computer, a set of meanings and implications that ordinarily don't belong in this context; if the computer REALLY understands human language and/or human behavior, it will do the AI equivalent of a doubletake and say "Wait... what?"
Not necessarily.. I'm just saying that a computer good enough to even warrant a Turing Test will easily spot the disconnect and sudden change.. it knows that taxes and dicks usually have no connection and will either ask what the interviewer means or simply point out the disconnect and not go into it any further.

Robbing old ladies at gunpoint would be very much outside its programming, considering the original task you programmed it for was "Acquire a sandwich for me by any means necessary." That's what I mean by "too broad based;" that is to say, the term "exceed their programming" is too broad to really be meaningful.

Strictly speaking, even HUMANS do not venture too far outside of their genetic programming which drives them to acquire food, sex and gratification. That we go about these pursuits in an amazingly complicated process doesn't change the underlying nature of that process.
Not really in my line of thought.. I mentioned problem-solving techniques, and that would include this in the programming. I can (well, I could if I could program anything besides my digital video recorder) program a computer with means to solve unexpected situations.. it needs to acquire something but doesn't have money. It lists options for where and how to get money.. work, robbing places or people where there is money, begging etc. It then chooses the best option that gets it money in the shortest time.. it's just going through the motions without any thought to morality or legality (unless I include these in its programming).
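(Very roughly what I have in mind, as an illustrative sketch -- the options, timings and the legality flag are all made up; the point is just that the "check" only exists if somebody programmed it in:)

[code]
# Minimal sketch of "options + if-then checks": rank the ways to get money purely
# by speed, and skip the illegal ones only if a legality check was programmed in.
# All options and numbers are invented.

OPTIONS = [
    {"name": "beg on the corner", "hours_to_get_money": 3.0, "legal": True},
    {"name": "rob an old lady",   "hours_to_get_money": 0.2, "legal": False},
    {"name": "do odd jobs",       "hours_to_get_money": 5.0, "legal": True},
]


def choose_option(options, check_legality):
    """Pick the fastest way to get money; filter out illegal options only if told to."""
    if check_legality:
        options = [o for o in options if o["legal"]]
    return min(options, key=lambda o: o["hours_to_get_money"])


print(choose_option(OPTIONS, check_legality=False)["name"])  # -> rob an old lady
print(choose_option(OPTIONS, check_legality=True)["name"])   # -> beg on the corner
[/code]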

And humans venture FAR outside of the genetic programming that covers satisfaction of basic needs like food, shelter, procreation and survival. We create art just for the sake of it, because we enjoy it. We write stories, we go out into space just because we want to, etc... these things have nothing to do with satisfying our basic needs, and that puts us above animals, who can't make that step.

Sure it will, if its original programmer had a sense of humor. Being self aware would only factor into this if the robot realized its master had a sense of humor that wasn't being properly stimulated and downloaded a "smartass app" just to be funny.
You missed my point.. I meant a robot will never get the idea to refuse its controller for selfish reasons, because that would mean it would have to be aware of itself as a person with desires and needs, one of them being the motivation to do something or not do it. We can tell our boss to go fuck himself because we don't feel like following orders, but a robot can't and won't, because it has no reason to (unless you intentionally program it that way so it appears to have human tendencies).

But as for the robots, if I had to guess, I'd say it's probably something that will come up in the process of a self-diagnostic, checking disks and databanks for errors and integrity. The machine will analyze its core programming and observe "Gee, I sure am devoting a really huge amount of my processing power to figuring out how to kill Sarah Connor."

The robot is not aware of itself in the meaning we discuss here.. it's merely aware of its actions and why it did them in order to fulfill its mission.
It's aware of its mission.
It's aware of its location.
It's aware of the locations of others.
It's aware of its relationship to others (he is their enemy, he needs them to not know this so he can move around unhindered).

So he is, in fact, self aware. Maybe not to a highly sophisticated degree, but he's a soldier, not a philosopher.
But it is NOT SELF-AWARE in the human sense, i.e. it can't decide if what it does is right or wrong, because that would require critical thinking and reflection about itself. It acts according to its programming and will never decide to just stop, because that's not in its programming. It is, as you wrote, aware of its surroundings, but modern computer systems can do that too.. hell, my smartphone is aware of its location thanks to its GPS, but I wouldn't call it aware. It can just evaluate things based on the input it gets.

It can track how much time it has devoted to a certain mission, but that's just another set of hard data, information without further meaning for it.. a human might get to thinking "Damn.. I sure spent a lot of time on this. Is it really worth it?" but a robot would do the equivalent of a shrug and go about its business.

Not right away, but you have to remember that one of the tenets of singularity theory -- one of the very few things the theory gets right -- is that when software systems reach a certain level of sophistication, they gain the ability to write new software without human assistance, thereby producing a superior product in a very short amount of time. When you combine data mining techniques with statistical analysis models, you get AI systems that are capable of programming new software apps that provide precisely the functionality needed for a particular niche. Analyzing the real-world results gives the designer AIs more data to work with and refine their models of what humans consider ideal.

The robot chef has the same benefit. The AI that programs him finds it a lot easier to figure out what the chef did wrong and what it did right and then either write a totally new AI program or hot-patch the old one with upgraded software to correct its mistakes.

Put simply, once machines get the hang of machine learning, it won't be long before we can add that to the list of things computers do better than us. And learning is really the turning point, because once machines can learn, they can effectively outperform humans in any task we ask them to, including -- ultimately -- deciding what tasks to assign themselves.
That's the core of the problem.. to what degree do we allow advanced computer systems to act without our direct control? At what point do proto-AIs step over the line and become self aware?

The point where AIs do things totally unrelated to their initial task, just because they want to see if they can do it and how well?

This was also one of the points of Terminator 2 after they switched him into learning mode.. at one point he understood human behaviour and realized his own limits, i.e. he became self aware instead of a highly developed machine mimicking humans.

This is what humanity needs to think about once we reach the technological state of building so-called AIs (more like highly advanced computers) and giving them the option to improve themselves by any means necessary, including modifying their own systems. Frankly, I'd rather have a housebot that'll just clean my apartment and doesn't get the idea of re-decorating it because it believes I might like it better.
__________________
"A control freak like you with something you can't control? No no.. that's gonna be more fun than shark week!" Det. Javier Esposito
Old March 2 2013, 08:39 PM   #30
publiusr
Commodore
 
Re: Moral issues with Robotics

We are a long way from this being a problem.
Pointing a camcorder (er, digital smartphone camera) at a mirror doesn't make it self-aware.

Here is some interesting reading on the subject: http://www.complete-review.com/revie...jl/quintet.htm