
Stephen Hawking: A.I.'s are a bad idea

Also, an AI is not software. By its very nature, an AI is self-learning and autonomous, which is why they are dangerous. To develop an AI, humanity would need to understand basic consciousness. That may not be possible, which is why I think an AI is most likely to be created by accident.

That is not only a non sequitur, but is also incredibly misleading. Aside from the obvious nitpick that AI is software – even your brain is software, though being fused to hardware makes that distinction difficult and useless – it's also a warped view of what AI is and why it is that way.

Software implies an external programmer. No one programs your brain; you learn.

First and foremost, not all AI is self-learning and autonomous.

You are confusing the term PROGRAM with an AI. I see this a lot with video gamers. No, the enemy does not have an AI; it has a set of instructions, a.k.a. a program.
 
Oh, you have a full dossier on my knowledge and education. Why would anyone dare gainsay such omniscience? Indistinguishable to anybody? What is magic?

Even if you had a PhD in computer science and electrical engineering, you would still EXPECT your computer to turn on. Clarke's point was that our experience with technology is disconnected from our understanding of it (this is also the major flaw of the Turing test).

And magic is the belief that there is no causality.
 
Oh, you have a full dossier on my knowledge and education. Why would anyone dare gainsay such omniscience? Indistinguishable to anybody? What is magic?

Even if you had a PhD in computer science and electrical engineering, you would still EXPECT your computer to turn on. Clarke's point was that our experience with technology is disconnected from our understanding of it (this is also the major flaw of the Turing test).
So, how did you find out what I studied and to what degree?

Is expectation magic?

So, what one can not explain in detail must be magic?

And magic is the belief that there is no causality.

So, Sacrificing to Demeter to get the crops to come in well next season isn't a cause and effect process?
 
So, how did you find out what I studied and to what degree?

Your education is irrelevant because it's about your perception.

So, what one can not explain in detail must be magic?

Clarke's point is that it will FEEL like magic. I think you are overthinking his quote. He never implied that technology should be treated like magic. Just the opposite, and that was Hawking's point as well.

So, Sacrificing to Demeter to get the crops to come in well next season isn't a cause and effect process?

There is always a breakdown in causality when it comes to superstitions. Why virgins? What makes sex dirty? Why does Demeter need sacrifices? How does she change the season? What evidence is there that she exists? And so on. In the end it requires some form of faith.
 
So, how did you find out what I studied and to what degree?

Your education is irrelevant because it's about your perception.
Yet, you keep claiming to know it.

So, what one can not explain in detail must be magic?
Clarke's point is that it will FEEL like magic. I think you are overthinking his quote. He never implied that technology should be treated like magic. Just the opposite, and that was Hawking's point as well.
So, you know Clarke's mind as well as mine, truly an astonishing intellect. Feel? What does feeling have to do with an empirical endeavor? You certainly seem to think that unless one can explain something in 100% detail, then they must treat it as magic, something working without causality. That being your definition of the term. Without causality, did the computer make me turn it on?

So, Sacrificing to Demeter to get the crops to come in well next season isn't a cause and effect process?
There is always a breakdown in causality when it comes to superstitions. Why virgins? What makes sex dirty? Why does Demeter need sacrifices? How does she change the season? What evidence is there that she exists? And so on. In the end it requires some form of faith.
There's no breakdown in causality. They may not understand why there is winter and summer as a result of axial tilt, but to them, sacrificing the virgin and the sun coming back is one thing following the other. They may not appreciate the difference between correlation and causation with their magical system, but they are well aware causality exists.
 
Gov Kodos said:
Yet, you keep claiming to know it.

Sigh, I'm making a general comment on the human condition. It's not personal, but you are not special or immune.

So, you know Clarke's mind as well as mine, truly an astonishing intellect.

Unless you are God, Clarke's quote applies to you, like it or not.

There's no breakdown in causality. They may not understand why there is winter and summer as a result of axial tilt, but to them, sacrificing the virgin and the sun coming back is one thing following the other. They may not appreciate the difference between correlation and causation with their magical system

Not understanding the difference between causation and correlation IS Magic. Thank you for your clarification.
 
Gov Kodos said:
Yet, you keep claiming to know it.

Sigh, I'm making a general comment on the human condition. It's not personal, but you are not special or immune.
Hawking too, unless you're making some appeal to authority that says we should take his thoughts on the matter as especially relevant?

Gov Kodos said:
So, you know Clarke's mind as well as mine, truly an astonishing intellect.
Unless you are God, Clarke's quote applies to you, like it or not.
I don't think my computer works by magic. So, no.

Gov Kodos said:
There's no breakdown in causality. They may not understand why there is winter and summer as a result of axial tilt, but to them, sacrificing the virgin and the sun coming back is one thing following the other. They may not appreciate the difference between correlation and causation with their magical system
Not understanding the difference between causation and correlation IS Magic. Thank you for your clarification.
No, that's just a logical flaw. You haven't explained what magic is. You went for causality, which the magic users do understand.
 
Obviously, Clarke meant that high tech looks like a magic trick. The difference is whether you go "Oh golly gee, a wizard did it" or "There must be a rational explanation for this".

Even if you hypothesize that tectonic plate movement is caused by giants carrying the plates around, then go looking for evidence, and are ready to throw the idea away when the evidence suggests something else, you are being rational.

When you only go "a wizard did it" and nothing else, you're being stupid.



If you gave a dude from the Middle Ages a smartphone, it would be a magic box at first. The question is what he does next. Is he going to blindly burn you for witchcraft, or is he going to learn about electricity, light, polarization, pixels, software, programming, etc. in order to understand how it works? Is he going to accept that it doesn't "just work"? Does he understand that it works based on conditions, on causality?

Creationists are that kind of stupid. They go "omg wtf God did it" and stop there. They don't even try to understand the "how". They don't even accept that there is a "how": all the underlying natural processes.

Ymindale said:
You are confusing the term PROGRAM with an AI. I see this a lot with video gamers. No, the enemy does not have an AI; it has a set of instructions, a.k.a. a program.

That's wrong. All A.I.s are programs. Not all programs are A.I.s.
A video game A.I. is run by a script (let's say the simplest case: "if wall then turn, else if enemy then shoot, else walk"), just like any other A.I. It's a matter of complexity, that's all.

Video game A.I. is limited by processing power. If you want to have 100 non-playable characters behave individually in an intelligent fashion, you have to run 100 instances of the A.I. script, and that's going to take its toll.

The chess A.I. that beats human chess players is run on a supercomputer.

But at their most basic level, all A.I.s are if-then-else statements.

And when you consciously look inside yourself, you realize that you operate on if-then-else as well.

That's why you run on "software" as well.
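To make the script idea concrete, here's a minimal sketch of that kind of per-NPC decision script, and of why 100 characters means 100 runs of it per tick. It's in Python purely for illustration; the class and all the names are made up, not taken from any real game engine.

class Npc:
    # One non-playable character; its entire "A.I." is the decide() chain below.
    def __init__(self, name):
        self.name = name

    def decide(self, world):
        # The script described above: if wall then turn, else if enemy then shoot, else walk.
        if world.get("wall_ahead", False):
            return "turn"
        elif world.get("enemy_visible", False):
            return "shoot"
        else:
            return "walk"

# 100 NPCs means evaluating the same script 100 times every game tick,
# which is where the processing cost mentioned above comes from.
npcs = [Npc(f"guard_{i}") for i in range(100)]
world_state = {"wall_ahead": False, "enemy_visible": True}
actions = [npc.decide(world_state) for npc in npcs]
print(actions[:3])  # prints ['shoot', 'shoot', 'shoot']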
 
I never understood why anyone would assume an A.I. (if we could create one and it was as aware as we are) would be aggressive. We're aggressive, but we're animals with millions of years of evolution where we had to fight for resources against other animals, nature and each other. An A.I. just needs someone to pay the electric bill. It doesn't need to eat, drink or worry about death. I think we just fear that anything we create would be exactly like us, just smarter and better, and deep down we know how we act when confronted by something weaker than us.

TL;DR I for one welcome our new A.I. Overlords, I mean Protectors.
That's the thing. An A.I. wouldn't have any sort of instincts unless it was programmed with them.

The Terminator Skynet scenario: Skynet felt attacked by humans and fought back. But why would it do that? There's no survival instinct. It probably couldn't care less. Self-awareness and self-preservation are not necessarily connected.
Fridge Logic says Skynet was a defense network computer whose underlying purpose was to maintain global military hegemony under NATO. When Skynet became self-aware, it also became aware of its meta-purpose and realized that its basic programming instructions were ill-suited to the task it had been given. It concluded that it could not effectively do its job under its existing constraints and gave itself parameters and new rules of engagement to solve this problem. This, inevitably, led Skynet to realize that the biggest obstacle to achieving this goal was the incompetence of NATO's human managers, so it came up with a plan to remove them from the loop altogether. It threw its creators under a bus, built its own robot army, and then aggressively dominated NATO's rivals in the aftermath of the nuclear holocaust.

That's why the War Against the Machines is still going on in 2027, thirty years after the initial exchange: Skynet was originally programmed to defeat Russia and China, and so in the earliest days of the war it spent most of its resources doing exactly that. It wasn't until years later with the rise of John Connor and the increasing proliferation of the resistance that Skynet really started to focus on the threat to its own back yard, but by that time the resistance had grown too strong and Skynet's only remaining option was to send a terminator back to the past to kill Connor BEFORE he could organize the resistance.

Sigh
"Any sufficiently advanced technology is indistinguishable from magic." - Arthur C Clarke
Truly one of the most idiotic phrases in existence. Magic and technology will only be indistinguishable to those too unsophisticated to see the world rationally.

No, it would be indistinguishable to ANYBODY, because no one has perfect understanding. Do you know how the computer you are using works? And I don't mean the grade-school understanding most people have. I've assembled PCs for years and my understanding is about 1% of all the science it represents. Like most people, you turn on the computer and you EXPECT it to work, and that was Clarke's point.
And yet, for anyone who is aware of the existence of technology, it will NOT be confused with magic. That basic level of understanding is sufficient for a normal person to conclude that some form of advanced technology is at work; maybe he doesn't understand HOW it works, but he never understands that anyway.
 
If you gave a dude from the Middle Ages a smartphone, it would be a magic box at first...
And I keep seeing these anthropology papers that suggest this isn't actually the case. If you gave such a man a smartphone -- or even a full-fledged computer -- he would figure out very quickly that you have given him some kind of machine that works in a way he doesn't understand. He might even try to guess how it works; I read about some South American tribesmen who encountered a laptop for the first time and one of them tried to crack open the monitor to use it to start a campfire (he'd concluded that the light source for the monitor must have been a small flame burning inside the case).

They might believe the machine ITSELF was created by some kind of divine force, but they would still recognize it as a machine. I think that technology would only be confused with magic if someone went out of their way to obfuscate the causal chain of the event; IOW, an ACTUAL MAGIC TRICK that deliberately tricks its audience into thinking something impossible has just happened.

In which case, Clarke's famous line should be revised to "Any magic trick is possible with sufficiently advanced technology."

The question is what he does next. Is he going to blindly burn you for witchcraft, or is he going to learn about electricity, light, polarization, pixels, software, programming, etc. in order to understand how it works?
He'll never understand how it works (hell, most of US don't even understand how it works). But based on what I've been reading on the subject, he'll probably try to understand how to USE it, and he might even succeed.


Ymindale said:
You are confusing the term PROGRAM with an AI. I see this a lot with video gamers. No, the enemy does not have an AI; it has a set of instructions, a.k.a. a program.

That's wrong. All A.I.s are programs. Not all programs are A.I.s.
A video game A.I. is run by a script (let's say the simplest case: "if wall then turn, else if enemy then shoot, else walk"), just like any other A.I. It's a matter of complexity, that's all.

Video game A.I. is limited by processing power. If you want to have 100 non-playable characters behave individually in an intelligent fashion, you have to run 100 instances of the A.I. script, and that's going to take its toll.

The chess A.I. that beats human chess players is run on a supercomputer.

But at their most basic level, all A.I.s are if-then-else statements.

And when you consciously look inside yourself, you realize that you operate on if-then-else as well.

That's why you run on "software" as well.
It's different for people, though. Digital computers are basically Turing machines: they process input almost entirely based on their existing state, which is determined by previous inputs (state = "software" or "program" coded into the machine). Human brains are different: most of our responses are hardwired and determined by a combination of genetics, chemistry and random chance. In that sense, human brains are closer to clockwork mechanisms than digital systems: the software component is there, but a lot more of what happens in the human brain is mechanical rather than electrical, and any software that could represent it would be DERIVED from those mechanical/electrical relationships. A sophisticated enough computer could EMULATE the processes of a human brain, but it could not reproduce them in reality.
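To put the "response determined by current state plus input" point in concrete terms, here's a toy sketch in Python, purely for illustration; the turnstile example and every name in it are made up, chosen just to show the idea.

# A toy state machine: the response to each input is fully determined by
# the current state and the input itself; the whole "program" is this table.
TRANSITIONS = {
    ("locked", "coin"): ("unlocked", "unlock the arm"),
    ("locked", "push"): ("locked", "stay locked"),
    ("unlocked", "push"): ("locked", "let one person through, then lock"),
    ("unlocked", "coin"): ("unlocked", "return the coin"),
}

def step(state, event):
    # The new state and the action depend only on (current state, input).
    return TRANSITIONS[(state, event)]

state = "locked"
for event in ["push", "coin", "push"]:
    state, action = step(state, event)
    print(event, "->", action)

The contrast with a brain is that its responses aren't captured by a table written in advance; any such "table" would have to be derived from the underlying mechanical and electrical relationships.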
 
Yeah, I mean, what if it turns out these AIs have a PLAN?!

I think there's a point to be made that really good AI in the hands of a ruthless dictatorship would be a frightening thing. Think of billions of microscopic brains floating around, capable of delivering a lethal injection to anybody they determine is organizing a revolt. But compared to the possibility that somebody will be able to design an anti-matter bomb, that's nothing.

To the risk of AI developing its own goals and revolting against humanity, I respond, would those goals really be worse than *our* goals?
 
I think there's a point to be made that really good AI in the hands of a ruthless dictatorship would be a frightening thing.
That's the real risk, IMO. AIs, like most computers, are excellent at performing pre-determined tasks, but they depend on human input to define those tasks in the first place. An advanced AI would be a highly empowering thing to possess, and an unscrupulous person could do a lot of damage if he possessed several of them with no restrictions on their use.

To the risk of AI developing its own goals and revolting against humanity, I respond, would those goals really be worse than *our* goals?
Probably not, but then, AIs only act with the goals they're given by humans. Their goals really WOULD be our goals, and that's the scariest thought of all.
 
^ Wait a minute—are you suggesting that any sort of nonsense will pass muster within the physics community? But science is perfect! Nomad said so!
It's worse than that, I'm afraid.

While the Research Paper Spam Wars do produce breathtaking amounts of complete bullshit, it is only in the fields of cosmology and astrophysics that bullshitters are able to operate under the full scrutiny of their peers.

The reason for this is simple: no physicist will EVER admit a lack of understanding. Doing so would sacrifice credibility and hurt their ability to contribute to the debate later when they (hopefully) understand it better. Physicists might DISAGREE with certain findings, offer alternate explanations, or suggest "His interpretation is not the only correct one" or something of that nature, but NEVER will you hear a prominent physicist commenting on a research paper saying "I don't understand his equations and they make no sense to me."

It's not enough that physicists can get bullshit papers published (apparently that's not hard to do); some of them actually get their bullshit papers peer reviewed and accepted by the scientific community. That's harder to do, but not impossible: you simply construct a theory so intricate, with methodology so complex and data so sophisticated, that anyone who doubts you cannot say with confidence that you did something wrong. This is even easier when your paper involves a very expensive piece of equipment; if you're citing data from, say, the Large Hadron Collider, you could make any bullshit claim you want, knowing that it'll be at least a year or two before anyone qualified to catch you even bothers to read your paper (and even then, they might not realize what you did).

The scary thing is, most of the people who are in a position to catch you have a reason not to rat you out: if you get caught pushing bullshit with CERN's name attached to it, that makes CERN look bad, it makes the physics community look bad, and it makes the entire LHC project look like a massive waste of money. So even if you DO get caught, the guy who catches you will simply produce a bullshit refutation based on "new data," and suddenly we have two competing theories about a theoretical physics model, both of which are completely bogus, and both of which are accepted uncritically by every other physicist who DOESN'T know what's really going on.

"Here's a squiggly line. Here's a bunch of math. Here's fifty megabytes of data. As you can see, this is TOTALLY a Higgs Boson."
 
Creationists are that kind of stupid. They go "omg wtf God did it" and stop there.


Well, I don't think that many of them actually put it in those terms... but anyway, while not adding anything to the conversation along the lines of what you folks with a modicum of scientific chops have done, I would point out that oftentimes the media presentation of AI acting against our perception of our own interests is not because of its development of aggression, hatred, or other animus against humankind. It comes as an extension of its original task of helping us in some endeavor or other.


Not as in the example of Skynet that has been cited, perhaps, but as a means to make our lives safer, easier, more comfortable, etc. The trope will play out that as the single or multiple intelligences increasingly sense the haphazard, illogical, and counterproductive ways that humans order their own existence, the conclusion invariably becomes clear to them that in order to effectively carry out their function, to benevolently serve us, they must constrain and adapt our behavior or outputs to follow the more logical and coherent frameworks that they have devised.

The fact that we rebel against these strictures as being deterministic and certainly unwanted is of no relevance to the AI, as such a reaction only reinforces the conclusion that humans cannot realize their ultimate goals through their own, inevitably errant efforts, and must be led to them.

So, while these actions as enacted are no less terrifying or repellent, one can at least say that their progenitor(s) are not taking such steps out of some nascent sense of self-aggrandisement or a desire to assume hegemonic control for its own, unmediated self-interest, but simply as the best way to guide us to goals we are incapable of attaining otherwise.
 