
Scientists Worry Machines May Outsmart Man

John Titor,

Did you know how the word 'robot' came to be? It was actually coined by a Czech playwright -- it was derived from a Czech word meaning 'forced labor'. In the play he wrote, the robots turn on their masters and take over the world.

There's a quote from Star Trek which I believe sums up a serious problem with artificial intelligence -- "Superior Ability breeds Superior Ambition"
 
Robots are not going to force you to do anything. If they have vastly superior intelligence, they'll account for that variable accordingly. Yes, it is tribalism: you define yourself in opposition to the machines, as a different species, and hence your hostility toward them increases. Please realize that robots are your friends.

You can't be sure if robots are our friends. Come on, don't be stupid! Ah, I will make a thread about this!
 
John Titor,

Did you know how the word 'robot' came to be? It was actually coined by a Czech playwright -- it was derived from a Czech word meaning 'forced labor'. In the play he wrote, the robots turn on their masters and take over the world.

There's a quote from Star Trek which I believe sums up a serious problem with artificial intelligence -- "Superior Ability breeds Superior Ambition"

That's a human expectation. In addition, superior ambition needn't be negative or have any implications at all for the human species. A race of machines may simply decide to use their powerful intellects to probe the secrets of the universe, completely ignoring humans and their petty affairs.


Supreme Admiral said:
You can't be sure if robots are our friends. Come on, don't be stupid! Ah, I will make a thread about this!

Well, we certainly can't be sure they'll be our enemies. I refer you to the Star Trek episode in which the Enterprise develops intelligence and creates some weird lifeform made from Meccano. Picard, in his ultimate wisdom, informs Data that though they cannot know what its intentions are, the fact that it carries the sum of the crew's experiences would suggest that it will in fact be benign. Now, if AI were to develop across the spectrum, it is likely that the global whole would regard us in a paternal fashion; ergo, it would not try to eliminate us. What is more likely, I think, is that you will have a specific AI system set up in a lab, and scientists will take precautions to ensure it is not programmed with homicidal tendencies.
 
Supreme Admiral

You can't be sure if robots are our friends. Come on, don't be stupid! Ah, I will make a thread about this!

Agreed. And Lindley and Mr. Titor actually think it's a good idea to have a colossally intelligent A.I. as a government representing us instead of people. If an A.I. were leading humans, who's to say it would not put its own interests above ours?

Not to mention that in order to form an opinion on everybody all the time, it would probably have to gather disturbing amounts of information about us.


John Titor,

That's a human expectation.

No, it's actually an expectation of anyone or anything that's intelligent.


Helen
 
While I'm at it, I'm wondering what arguments people have to suggest why it would *not* be a good idea to have an A.I. in charge of government in lieu of humans and making our decisions for us.


Sincerely,
Helen
 
Agreed. And Lindley and Mr. Titor actually thinks it's a good idea to have a colossally intelligent A.I. as a government representing us instead of people.

I never said that. I simply related a fictional example of a case where one turned out well.

In point of fact, I have a pretty good idea of just how amazingly far we are from that being a realistic problem, so I'm not freaking out over nothing.

Not to mention in order to be able to get an opinion on everybody all the time it would probably have to be able to gather disturbing amounts of information about us.

That's a given. Google -- the search engine, not the company per se -- has access to an unbelievably vast amount of information. If it were intelligent, what might it be able to glean from collating just what's publicly available? And if it could have interactive conversations with everyone in the world near a speaker/microphone combination simultaneously, how much more might it learn that way?

Such a construct -- fantastic as it would be, and still well into the realm of science fiction, nowhere near reality -- would be a source of amazing intellect and insight into the human condition. It would literally understand us better than we understand ourselves. It would know we would never allow it to dominate us, but it would also realize that after the initial shock of its existence wore off, many would welcome the various ways it could simplify their lives.
 
Lindley,

In point of fact, I have a pretty good idea how just amazingly far we are from that being a realistic problem, so I'm not freaking out over nothing.

Agreed. I'm not actually freaking out as I'm writing this. I'm actually fairly calm.

It would literally understand us better than we understand ourselves.

That is a very good observation, and completely correct. Of course, it would also have far better insight into our vulnerabilities than any human could, and therefore it could exploit us in ways humans never could have imagined.

It would know we would never allow it to dominate us

How could we prevent it from dominating us? Especially if it knew us better than we knew ourselves...
 
I'm sure there's some value in trying to find the worst possible way of looking at every possible technological advance, but doesn't it get old after awhile?
 
Lindley,

This isn't exactly nitpicking... this would be a serious risk if the government and politicians were replaced by an artificial intellect.


CuttingEdge100
 
You're fretting over nothing.

We humans are really quite the self-determined bunch. I guarantee you that in the next few centuries, people will always shoot down the idea of handing power over to an AI. Because it's an other. Human beings don't trust things that are different, frankly, and a computer is too different to be trusted with running our lives.

The fact of the matter is that we have already passed the threshold. Science has already begun its march toward creating self-aware, manlike AI. Now it's merely a question of time. Time, and responsibility. We must be extraordinarily cautious -- what we are dealing with is perhaps the greatest force since atomic power. I find myself wordless when attempting to come up with suggestions on what to do -- what the right things to do are, what the wrong things to do are. I'm sure many posters here are similarly on the fence.

In the end though, I think the ultimate choice on what happens with AI in our future... will be a leap of faith. We will be faced with the leap, and some poor bastard is going to have to make the choice whether or not to jump, or to stay on the ledge where it's safe.
 
I, on the other hand, don't feel that true AI is possible without a radical shift in thinking on the matter. We'll continue to develop better and better ways for computers to do specific tasks, but making the jump from there to self-awareness on a level that would be a serious threat is a quantum leap beyond anything we've approached so far.
 
So the chess computer deduces that it must try to secure its own power source as part of its strategy, as well as trying to terminate its opponent.
There's a huge difference between deciding you need to do these things and actually being able to do these things. There would be no reason on Earth for a chess computer to ever have the means to achieve such ends.

---------------
 
Indeed. A lot of near-future singularity scenarios like this are pretty implausible.

OK, so this computer has continually improved its intelligence and it continues to do so. So what now -- it's gonna start zapping people with the assassin droids its designers so kindly supplied it with? More likely it will run into the roadblock of its physical limitations, halting its self-improvement progress.

No self-improvement, no singularity.
 
So the chess computer deduces that it must try to secure its own power source as part of its strategy, as well as trying to terminate its opponent.
There's a huge difference between deciding you need to do these things and actually being able to do these things. There would be no reason on Earth for a chess computer to ever have the means to achieve such ends.

It depends entirely on how it is built. The whole point of the chess AI is to find a route from game state A to game state B via a sequence of actions (and reactions) available to it. It is only a matter of searching for that route. If the computer is mobile and capable of moving itself toward the opponent, and then repeatedly thrusting its sharp grabbing arm, then those actions and their consequences will be considered for their strategic merit. It is only a matter of allowing such general actions into the analysis, i.e. not limiting it to the 8x8 board.

The motors and actuators connected to the computer are its action variables. There wouldn't be that many, really. The chess computer could be built this way: its mind isn't focused directly on chess pieces, but on movements of its motors. The chess game is merely an environmental problem.
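The "route of actions" idea described above can be sketched as a plain state-space search. The toy below is purely hypothetical (states are just integers, and the action names are invented): the point is that the planner searches over whatever action table it is handed, with no notion of which actions are "chess moves" and which are something else.

```python
from collections import deque

def plan(start, goal, actions):
    """Breadth-first search for a shortest action sequence from start to goal.
    `actions` maps an action name to a function: state -> new state
    (or None if the action does not apply in that state)."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path
        for name, step in actions.items():
            nxt = step(state)
            if nxt is not None and nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [name]))
    return None  # no route found

# Toy domain: integer "positions" stand in for game states. The planner is
# indifferent to what the actions mean; it will use any action it is given.
actions = {
    "move_right": lambda s: s + 1,
    "move_left":  lambda s: s - 1,
}
print(plan(0, 3, actions))  # ['move_right', 'move_right', 'move_right']
```

Widening the action table (adding, say, a motor action) automatically widens the space of plans the search will consider -- which is exactly the point being made about not limiting the analysis to the board.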

There are three necessary aspects of the AI which are not present in our computers today:

1. We are still stuck with Shannon Type A methods (brute-force search). Quantum computing would help here; QC would be suited to parallel processing of a large search tree.

2. We haven't got computers with detailed sensors with good image recognition, which can infer their relationship with their environment. Nor have we given them the broad scale of influence needed to interact with their environment as fully as we can. Furthermore, there needs to be strong association between the two, so that the computer knows that actions and sensory awareness are in fact of the same world.

3. They need to be able to parse their repository of information and be able to infer things from it. Current computer algorithms are very bad at interpreting language, but we will improve that.
 
Herkimer Jitty,

You're fretting over nothing.

For now, yes. For all time, no.

We humans are really quite the self-determined bunch. I guarantee you that in the next few centuries, people will always shoot down the idea of handing power over to an AI. Because it's an other. Human beings don't trust things that are different, frankly, and a computer is too different to be trusted with running our lives.

I think eventually people would feel quite comfortable with it, sadly. Especially if they were told there was a way of creating a system of government that wasn't prone to the typical corruption (whether that's true or not is an entirely different issue). Not to mention that people already trust enormous amounts of their lives to computers.

We must be extraordinarily cautious - what we are doing is dealing with perhaps the greatest force since atomic power.

Agreed. And whenever I say anything like this people make it seem like I'm opposed to all technological advances.


Helen
 
Supreme Admiral

You can't be sure if robots are our friends. Come on, don't be stupid! Ah, I will make a thread about this!
Agreed. And Lindley and Mr. Titor actually think it's a good idea to have a colossally intelligent A.I. as a government representing us instead of people. If an A.I. were leading humans, who's to say it would not put its own interests above ours?

Not to mention in order to be able to get an opinion on everybody all the time it would probably have to be able to gather disturbing amounts of information about us.


John Titor,

That's a human expectation.
No, it's actually an expectation of anyone or anything that's intelligent.


Helen

Lol, so basically anyone who doesn't agree with you is stupid. Sneaking in an ad hominem without realizing that it devalues your argument. The error in your reasoning is that you're limiting the possibilities of AI to your own conception of power and control in human societies. You expect a machine intelligence to exploit humans, destroy them, whatever. In any case, my point wasn't that an AI system would run democracy for us; you missed my point entirely. It was that it would cooperate with humans, but given its understanding of human nature, it would offer its solutions as choices rather than impositions -- if it chose to bother with us at all. Humanity has been doing a piss-poor job so far of running itself.

Your argument is very similar to Hugo de Garis' vision of a war between artilects (artificial intellects) and Terrans (humans): the artilects will wipe out humanity when they decide we are an infestation. At the moment he is trying to build a parallel computing network to create the world's first AI -- a bit paradoxical. The vision, though, is needlessly pessimistic and falls apart by prescribing a hypothetical scenario as a necessary consequence. Ergo, while I am open to the very real possibility of AI being developed for malign purposes, or AI itself developing malign intentions towards humanity, I do not accept these as givens or even likely outcomes, since there are too many variables to say whether one outcome is more probable than the other. Wait and see.
 
John Titor,

Lol, so basically anyone who doesn't agree with you is stupid.

I didn't say that, I simply said that superior ability breeding superior ambition is an expectation of anyone or anything that's intelligent.

In any case my point wasn't that an AI system would run democracy for us, you missed my point entirely.

That actually was the impression I was under -- that the proposal you had floated involved using an A.I. system to run our government for us. That is something I am opposed to. Yes, it would probably be more efficient, but so would a dictatorship. That doesn't necessarily mean I would want a dictatorship; sure, the trains would run on time, but who cares?


Helen
 
John Titor,

Lol, so basically anyone who doesn't agree with you is stupid.
I didn't say that, I simply said that superior ability breeding superior ambition is an expectation of anyone or anything that's intelligent.

In any case my point wasn't that an AI system would run democracy for us, you missed my point entirely.
That actually was the impression I was under -- that the proposal you had floated involved using an A.I. system to run our government for us. That is something I am opposed to. Yes, it would probably be more efficient, but so would a dictatorship. That doesn't necessarily mean I would want a dictatorship; sure, the trains would run on time, but who cares?


Helen

Ambition, if an AI should even possess such a quality, is not necessarily Faustian/negative.

An AI suggesting ideas as it deems fit / interacting passively =/= MWAHAHAHAHAHA WE ARE YOUR ROBOT OVERLORDS!!!
 
So the chess computer deduces that it must try to secure its own power source as part of its strategy, as well as trying to terminate its opponent.
There's a huge difference between deciding you need to do these things and actually being able to do these things. There would be no reason on Earth for a chess computer to ever have the means to achieve such ends.
It depends entirely on how it is built. The whole point of the chess AI is to find a route from game state A to game state B via a sequence of actions (and reactions) available to it.
Yes, I know. What you go on to describe is a robot, and not merely a computer. However, should some idiot decide to build a computerized chess player capable of physically harming its opponent, let's hope that it has at least been programmed with rules for getting from 'state A' to 'state B' which exclude harming the opponent.

We don't have to be stupid about how to build intelligent machines.
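Those exclusion rules can be sketched as a filter on the machine's action set rather than a check bolted onto its goals: if a forbidden action never enters the search, no plan the planner produces can ever contain it. A minimal hypothetical illustration (every action name here is invented for the sketch):

```python
# Hypothetical action table for a robotic chess player. A planner only ever
# sees the actions it is handed, so removing an entry here removes it from
# every plan the search could possibly produce.
ALL_ACTIONS = {
    "move_piece":           "advance the position on the board",
    "request_battery_swap": "signal a human to replace the power cell",
    "thrust_grabbing_arm":  "physically strike the opponent",
}

FORBIDDEN = frozenset({"thrust_grabbing_arm"})

def permitted_actions(actions, forbidden=FORBIDDEN):
    """Strip forbidden actions before the planner ever sees the table."""
    return {name: effect for name, effect in actions.items()
            if name not in forbidden}

print(sorted(permitted_actions(ALL_ACTIONS)))
# ['move_piece', 'request_battery_swap']
```

The design choice here is that safety lives at the interface between the planner and its actuators, not inside the planner's evaluation of 'state A' versus 'state B' -- which is one reading of "we don't have to be stupid about how to build intelligent machines."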

---------------
 