• Welcome! The TrekBBS is the number one place to chat about Star Trek with like-minded fans.
    If you are not already a member then please register an account and join in the discussion!

Should we fear and resent robots?

Didn't work with Colossus, did it? :lol:

That's only because its designer had hubris and didn't think of an off switch, or didn't think it would need one. Bad move.

We already have autonomous drones: you give one a target, set it loose, and it finds its way to the target and returns home.
 
I wonder what it would take to get a robot to rebel against its owner without directly programming it to rebel.

At the very least you’d have to have some sort of neural network capable of rewriting itself, driven by a core set of values abstract enough to be interpreted in unexpected ways.
 
Well, couldn't you program such a machine to protect its owner, but at the same time ensure that it doesn't put itself in any serious danger? Such programming might create unusual internal conflicts that could, over time, develop into rebellion.
 
That wouldn’t be enough. It might lead to some unexpected buggy behavior, but there'd be no “choice” outside its directives, no “emotional pain”, just a logical contradiction that makes it difficult to arrive at a decision.

You would need a measure of good and bad consequences that isn't strictly driven by its core functions: something abstract and fluid that can make neural connections between concepts that aren't strictly related.

So you have to solve the problem of giving robots completely abstract, creative problem-solving abilities before you can even give them the capacity to rebel.
 
I think the kind of robot that could rebel against us isn't the kind you program for a specific task. No matter how complex its intelligence gets, it will never have any desire in the world but to fulfill that specific task.

The kind that could rebel is the kind created as a blank slate, like a human baby, with only vague directives of what is good and what is bad, and the ability to learn skills by being shown them, then combine bits of those acquired skills abstractly and predictively to create new skills. "Black box" reward systems are what we really have to worry about. And I don't think anyone is actually trying to build that kind of robot right now.

But, some dictator might create an AI with the core directive "Make the world subservient to me", and that's what's really terrifying.
 
What about Asimov's 3 laws? How do robots circumvent those and create problems of rebellion?
The last chapter of Asimov's I, Robot actually dealt with that much more subtly than the movie. The Machines would do things like close factories, in ways where humans were not physically hurt and, in the longer view, would live better lives.
 
Really, all we need for sentient computers is champagne or wine: just spill it into the vents on the back. - movie reference
 
If you think about it, the best way to minimize human deaths is to kill all humans.

All those humans were going to die eventually anyway, but now there will be no more human deaths for all of eternity.
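The joke above is the classic objective-misspecification problem in miniature. A toy sketch (all numbers invented): an optimizer told only to "minimize future human deaths" prefers wiping everyone out now, because the naive objective never counts the killing itself as deaths.

```python
# Toy objective-misspecification demo (all numbers invented).
POPULATION = 8_000_000_000
ANNUAL_DEATH_RATE = 0.008   # assume roughly 0.8% of people die each year
HORIZON_YEARS = 100

def future_deaths(kill_everyone_now: bool) -> float:
    """Naive objective: deaths from natural causes over the horizon.
    Crucially, it forgets to count the killing itself."""
    if kill_everyone_now:
        return 0.0  # nobody left, so no more human deaths for all of eternity
    return POPULATION * ANNUAL_DEATH_RATE * HORIZON_YEARS  # crude, ignores births

plans = {True: future_deaths(True), False: future_deaths(False)}
best_plan = min(plans, key=plans.get)
print("naive optimizer chooses to kill everyone:", best_plan)  # True

def future_deaths_fixed(kill_everyone_now: bool) -> float:
    """Patched objective: deaths CAUSED by the plan count too."""
    caused = POPULATION if kill_everyone_now else 0
    return caused + future_deaths(kill_everyone_now)

plans_fixed = {True: future_deaths_fixed(True), False: future_deaths_fixed(False)}
print("patched optimizer chooses to kill everyone:",
      min(plans_fixed, key=plans_fixed.get))  # False
```

Note the patch is fragile too: stretch the horizon far enough and the natural-death total exceeds the one-time cost of killing everyone, and the "fixed" objective flips back. Getting the measure of good and bad right is the whole problem.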
 
Ooh awesome, so who will clean the empty streets?

The robots!

wall-e-garbage.gif
 