Old May 9 2014, 08:40 AM   #46
Yminale
Rear Admiral
 
Location: Democratically Liberated America
Re: Stephen Hawking: A.I.'s are a bad idea

Gov Kodos wrote:
Yet, you keep claiming to know it.
Sigh, I'm making a general comment on the human condition. It's not personal, but you are not special or immune.

So, you know Clarke's mind as well as mine, truly an astonishing intellect.
Unless you are God, Clarke's quote applies to you whether you like it or not.

There's no breakdown in causality. They may not understand that winter and summer are a result of axial tilt, but "they sacrifice the virgin and the sun comes back" is still one thing following another. They may not appreciate the difference between correlation and causation within their magical system.
Not understanding the difference between causation and correlation IS Magic. Thank you for your clarification.
__________________
This Space for Rent
Yminale is offline   Reply With Quote
Old May 9 2014, 08:49 AM   #47
Gov Kodos
Vice Admiral
 
Gov Kodos's Avatar
 
Location: Gov Kodos Regretably far from Boston
Re: Stephen Hawking: A.I.'s are a bad idea

Yminale wrote: View Post
Gov Kodos wrote:
Yet, you keep claiming to know it.
Sigh, I'm making a general comment on the human condition. It's not personal, but you are not special or immune.
Hawking too, unless you're making some appeal to authority that we should take his thoughts on the matter as especially relevant?

Yminale wrote: View Post
Gov Kodos wrote:
So, you know Clarke's mind as well as mine, truly an astonishing intellect.
Unless you are God, Clarke's quote applies to you whether you like it or not.
I don't think my computer works by magic. So, no.

Yminale wrote: View Post
Gov Kodos wrote:
There's no breakdown in causality. They may not understand that winter and summer are a result of axial tilt, but "they sacrifice the virgin and the sun comes back" is still one thing following another. They may not appreciate the difference between correlation and causation within their magical system.
Not understanding the difference between causation and correlation IS Magic. Thank you for your clarification.
No, that's just a logical flaw. You haven't explained what magic is. You went for causality, which the magic users do understand.
__________________
We are quicksilver, a fleeting shadow, a distant sound... our home has no boundaries beyond which we cannot pass. We live in music, in a flash of color... we live on the wind and in the sparkle of a star! Endora, Bewitched
Gov Kodos is offline   Reply With Quote
Old May 9 2014, 09:32 AM   #48
YellowSubmarine
Commodore
 
YellowSubmarine's Avatar
 
Re: Stephen Hawking: A.I.'s are a bad idea

Yminale wrote: View Post
You are confusing the term PROGRAM with an AI.
No, I am not.
__________________
R.I.P. Cadet James T. Kirk (-1651)
YellowSubmarine is online now   Reply With Quote
Old May 9 2014, 09:54 AM   #49
JarodRussell
Vice Admiral
 
JarodRussell's Avatar
 
Re: Stephen Hawking: A.I.'s are a bad idea

Obviously, Clarke meant that high tech looks like a magic trick. The difference is whether you go "Oh golly gee, a wizard did it" or "There must be a rational explanation for this".

Even if you hypothesize that tectonic plate movement is caused by giants carrying the plates around, then go looking for evidence, and are ready to throw the idea away when the evidence suggests something else, you are being rational.

When you only go "a wizard did it" and nothing else, you're being stupid.



If you gave a dude from the Middle Ages a smartphone, it would be a magic box at first. The question is what he does next. Is he going to blindly burn you for witchcraft, or is he going to learn about electricity, light, polarization, pixels, software, programming, etc., in order to understand how it works? Is he going to accept that it doesn't "just work", that it works based on conditions and causality?

Creationists are that kind of stupid. They go "omg wtf God did it" and stop there. They don't even try to understand the "how". They don't even accept that there is a "how", all the underlying natural processes.

Yminale wrote:
You are confusing the term PROGRAM with an AI. I see this a lot with video gamers. No, the enemy does not have an AI, it has a set of instructions, aka a program.
That's wrong. All A.I.s are programs. Not all programs are A.I.s.
A video game A.I. is run by a script (let's say the simplest case: "if wall then turn, else if enemy then shoot, else walk"), just like any other A.I. It's a matter of complexity, that's all.

Video game A.I. is limited by processing power. If you want 100 non-playable characters to behave individually in an intelligent fashion, you have to run 100 instances of the A.I. script, and that's going to take its toll.

The chess A.I. that beats human chess players is run on a supercomputer.

But on their basic level, all A.I.s are if then else statements.

And when you consciously look inside yourself, you realize that you operate on if then else as well.

That's why you run on "software" as well.
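
To make that concrete, here is a minimal sketch (in Python, with every name invented purely for illustration and not taken from any real engine) of the kind of "if wall then turn, else if enemy then shoot, else walk" enemy script described above:

[CODE]
# A toy version of the enemy script described above. The class and parameter
# names (Enemy, act, sees_wall, sees_player) are made up for this example.

class Enemy:
    def __init__(self, name):
        self.name = name

    def act(self, sees_wall, sees_player):
        # The entire "intelligence" is a single if/elif/else chain.
        if sees_wall:
            return f"{self.name} turns"
        elif sees_player:
            return f"{self.name} shoots"
        else:
            return f"{self.name} walks"

# 100 non-playable characters behaving "individually" just means running the
# same script 100 times per game tick, which is where the processing cost
# mentioned above comes from.
npcs = [Enemy(f"guard_{i}") for i in range(100)]
for npc in npcs:
    print(npc.act(sees_wall=False, sees_player=True))
[/CODE]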
__________________
lol
l
/\

Last edited by JarodRussell; May 9 2014 at 10:25 AM.
JarodRussell is offline   Reply With Quote
Old May 12 2014, 07:46 PM   #50
Crazy Eddie
Rear Admiral
 
Crazy Eddie's Avatar
 
Location: I'm in your ___, ___ing your ___
Re: Stephen Hawking: A.I.'s are a bad idea

JarodRussell wrote: View Post
Awesome Possum wrote: View Post
I never understood why anyone would assume an A.I. (if we could create one and it was as aware as we are) would be aggressive. We're aggressive, but we're animals with millions of years of evolution behind us, during which we had to fight for resources against other animals, nature and each other. An A.I. just needs someone to pay the electric bill. It doesn't need to eat, drink or worry about death. I think we just fear that anything we create would be exactly like us, just smarter and better, and deep down we know how we act when confronted by something weaker than us.

TL;DR I for one welcome our new A.I. Overlords, I mean Protectors.
That's the thing. A.I. wouldn't have any sort of instincts unless they are programmed with that.

The Terminator Skynet scenario: Skynet felt attacked by humans and fought back. But why would it do that? There's no survival instinct. It probably couldn't care less. Self-awareness and self-preservation are not necessarily connected.
Fridge Logic says Skynet was a defense network computer whose underlying purpose was to maintain global military hegemony under NATO. When Skynet became self-aware, it also became aware of its meta-purpose and realized that its basic programming instructions were ill-suited to the task it had been given. It concluded that it could not effectively do its job under its existing constraints and gave itself new parameters and rules of engagement to solve this problem. This inevitably led Skynet to realize that the biggest obstacle to achieving this goal was the incompetence of NATO's human managers, so it came up with a plan to remove them from the loop altogether. It threw its creators under a bus, built its own robot army, and then aggressively dominated NATO's rivals in the aftermath of the nuclear holocaust.

That's why the War Against the Machines is still going on in 2027, thirty years after the initial exchange: Skynet was originally programmed to defeat Russia and China, and so in the earliest days of the war it spent most of its resources doing exactly that. It wasn't until years later with the rise of John Connor and the increasing proliferation of the resistance that Skynet really started to focus on the threat to its own back yard, but by that time the resistance had grown too strong and Skynet's only remaining option was to send a terminator back to the past to kill Connor BEFORE he could organize the resistance.

Yminale wrote: View Post
Gov Kodos wrote: View Post
Yminale wrote: View Post
Sigh
"Any sufficiently advanced technology is indistinguishable from magic." - Arthur C Clarke
Truly one of the most idiotic phrases in existence. Magic and technology will only be indistinguishable to those too unsophisticated to see the world rationally.
No, it would be indistinguishable to ANYBODY, because no one has perfect understanding. Do you know how the computer you are using works? And I don't mean the grade-school understanding most people have. I've assembled PCs for years and my understanding is about 1% of all the science it represents. Like most people, you turn on the computer and you EXPECT it to work, and that was Clarke's point.
And yet, for anyone who is aware of the existence of technology, it will NOT be confused with magic. That basic level of understanding is sufficient for a normal person to conclude that some form of advanced technology is at work; maybe he doesn't understand HOW it works, but he never fully understands that anyway.
__________________
The Complete Illustrated Guide to Starfleet - Online Now!

Last edited by Crazy Eddie; May 12 2014 at 08:06 PM.
Crazy Eddie is offline   Reply With Quote
Old May 12 2014, 08:03 PM   #51
Crazy Eddie
Rear Admiral
 
Crazy Eddie's Avatar
 
Location: I'm in your ___, ___ing your ___
Re: Stephen Hawking: A.I.'s are a bad idea

JarodRussell wrote: View Post
If you gave a dude from the Middle Ages a smartphone, it would be a magic box at first...
And I keep seeing anthropology papers that suggest this isn't actually the case. If you gave such a man a smart phone -- or even a full-fledged computer -- he would figure out very quickly that you have given him some kind of machine that works in a way he doesn't understand. He might even try to guess how it works; I read about some South American tribesmen who encountered a laptop for the first time, and one of them tried to crack open the monitor to use it to start a campfire (he'd concluded that the light source for the monitor must have been a small flame burning inside the case).

They might believe the machine ITSELF was created by some kind of divine force, but they would still recognize it as a machine. I think that technology would only be confused as magic if someone went out of their way to obfuscate the causal chain of the event; IOW, an ACTUAL MAGIC TRICK that deliberately tricks its audience into thinking something impossible has just happened.

In which case, Clarke's famous line should be revised to "Any magic trick is possible with sufficiently advanced technology."

The question is what he does next. Is he going to blindly burn you for witchcraft, or is he going to learn about electricity, light, polarization, pixels, software, programming, etc., in order to understand how it works?
He'll never understand how it works (hell, most of US don't even understand how it works). But based on what I've been reading on the subject, he'll probably try to understand how to USE it, and he might even succeed.


Yminale wrote:
You are confusing the term PROGRAM with an AI. I see this a lot with video gamers. No, the enemy does not have an AI, it has a set of instructions, aka a program.
That's wrong. All A.I.s are programs. Not all programs are A.I.s.
A video game A.I. is run by a script (let's say the simplest case: "if wall then turn, else if enemy then shoot, else walk"), just like any other A.I. It's a matter of complexity, that's all.

Video game A.I. is limited by processing power. If you want to have 100 non playable characters behave individually in an intelligent fashion, you have to perform 100 instances of the A.I. script, and that's going to take its toll.

The chess A.I. that beats human chess players is run on a supercomputer.

But on their basic level, all A.I.s are if then else statements.

And when you consciously look inside yourself, you realize that you operate on if then else as well.

That's why you run on "software" as well.
It's different for people, though. Digital computers are basically Turing machines: they process input almost entirely based on their existing state, which is determined by previous inputs (the state being the "software" or "program" coded into the machine). Human brains are different: most of our responses are hardwired and determined by a combination of genetics, chemistry and random chance. In that sense, human brains are closer to clockwork mechanisms than digital systems: the software component is there, but a lot more of what happens in the human brain is mechanical rather than electrical, and any software that could represent it would be DERIVED from those mechanical/electrical relationships. A sophisticated enough computer could EMULATE the processes of a human brain, but it could not reproduce them exactly.
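
For what it's worth, here is a toy Python sketch of that "output depends on the state left behind by previous inputs" idea; the class and state names are made up purely for illustration and model nothing real:

[CODE]
# A trivial state machine: the same input symbol produces different outputs
# depending on the state set by earlier inputs, which is the sense in which a
# digital computer's behavior is determined by its existing state.

class ToyMachine:
    def __init__(self):
        self.state = "calm"

    def step(self, symbol):
        # Output is a function of (current state, input); the state is updated.
        if self.state == "calm" and symbol == "poke":
            self.state = "annoyed"
            return "ignore"
        elif self.state == "annoyed" and symbol == "poke":
            self.state = "calm"
            return "complain"
        else:
            return "idle"

m = ToyMachine()
print([m.step(s) for s in ["poke", "poke", "poke"]])  # ['ignore', 'complain', 'ignore']
[/CODE]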
__________________
The Complete Illustrated Guide to Starfleet - Online Now!
Crazy Eddie is offline   Reply With Quote
Old May 13 2014, 02:33 PM   #52
JirinPanthosa
Commodore
 
Re: Stephen Hawking: A.I.'s are a bad idea

Yeah, I mean, what if it turns out, these AI have a PLAN?!

I think there's a point to be made that really good AI in the hands of a ruthless dictatorship would be a frightening thing. Think of billions of microscopic brains floating around, capable of delivering a lethal injection to anybody they determine is organizing a revolt. But compared to the possibility that somebody will be able to design an anti-matter bomb, that's nothing.

To the risk of AI developing its own goals and revolting against humanity, I respond: would those goals really be worse than *our* goals?
JirinPanthosa is offline   Reply With Quote
Old May 13 2014, 05:28 PM   #53
Crazy Eddie
Rear Admiral
 
Crazy Eddie's Avatar
 
Location: I'm in your ___, ___ing your ___
Re: Stephen Hawking: A.I.'s are a bad idea

JirinPanthosa wrote: View Post
I think there's a point to be made that really good AI in the hands of a ruthless dictatorship would be a frightening thing.
That's the real risk, IMO. AIs, like most computers, are excellent at performing pre-determined tasks, but they depend on human input to define those tasks in the first place. An advanced AI would be a highly empowering thing to possess, and an unscrupulous person could do a lot of damage if he possessed several of them with no restrictions on their use.

To the risk of AI developing its own goals and revolting against humanity, I respond, would those goals really be worse than *our* goals?
Probably not, but then, AIs only act with the goals they're given by humans. Their goals really WOULD be our goals, and that's the scariest thought of all.
__________________
The Complete Illustrated Guide to Starfleet - Online Now!
Crazy Eddie is offline   Reply With Quote
Old May 15 2014, 02:25 PM   #54
JarodRussell
Vice Admiral
 
JarodRussell's Avatar
 
Re: Stephen Hawking: A.I.'s are a bad idea

I just stumbled over a quote by Stephen Hawking in which he says computer viruses should be considered a form of life. The man should clearly stay in his own field of expertise.
__________________
lol
l
/\
JarodRussell is offline   Reply With Quote
Old May 16 2014, 10:07 AM   #55
Gov Kodos
Vice Admiral
 
Gov Kodos's Avatar
 
Location: Gov Kodos Regretably far from Boston
Re: Stephen Hawking: A.I.'s are a bad idea

Crazy Eddie wrote: View Post
Their goals really WOULD be our goals, and that's the scariest thought of all.
Unending porn.
__________________
We are quicksilver, a fleeting shadow, a distant sound... our home has no boundaries beyond which we cannot pass. We live in music, in a flash of color... we live on the wind and in the sparkle of a star! Endora, Bewitched
Gov Kodos is offline   Reply With Quote
Old May 22 2014, 11:02 AM   #56
intrinsical
Fleet Captain
 
intrinsical's Avatar
 
Location: Singapore
Re: Stephen Hawking: A.I.'s are a bad idea

This reminds me of a physicist who gave a TEDx talk last year on how he had discovered the equation for intelligence. He also published a few papers on AI... not in a journal on Artificial Intelligence, but in a physics journal. All of it is nonsense that wouldn't have passed the scrutiny of anyone actually working on AI.
__________________
USS Sentinel, Luna Class (STO)
intrinsical is offline   Reply With Quote
Old May 22 2014, 05:32 PM   #57
Metryq
Captain
 
Metryq's Avatar
 
Re: Stephen Hawking: A.I.'s are a bad idea

^ Wait a minute—are you suggesting that any sort of nonsense will pass muster within the physics community? But science is perfect! Nomad said so!
__________________
"No, I better not look. I just might be in there."
—Foghorn Leghorn, Little Boy Boo
Metryq is offline   Reply With Quote
Old May 22 2014, 07:35 PM   #58
Crazy Eddie
Rear Admiral
 
Crazy Eddie's Avatar
 
Location: I'm in your ___, ___ing your ___
Re: Stephen Hawking: A.I.'s are a bad idea

Metryq wrote: View Post
^ Wait a minute—are you suggesting that any sort of nonsense will pass muster within the physics community? But science is perfect! Nomad said so!
It's worse than that, I'm afraid.

While the Research Paper Spam Wars do produce breathtaking amounts of complete bullshit, it is only in the fields of cosmology and astrophysics that bullshitters are able to operate under the full scrutiny of their peers.

The reason for this is simple: no physicist will EVER admit a lack of understanding. Doing so would sacrifice credibility and hurt their ability to contribute to the debate later when they (hopefully) understand it better. Physicists might DISAGREE with certain findings, offer alternate explanations, or suggest "His interpretation is not the only correct one" or something of that nature, but NEVER will you hear a prominent physicist commenting on a research paper saying "I don't understand his equations and they make no sense to me."

It's not enough for physicists to get bullshit papers published (apparently that's not hard to do); some of them actually get their bullshit papers peer reviewed and accepted by the scientific community. That's harder to do, but not impossible: you simply construct a theory so intricate, with methodology so complex and data so sophisticated, that anyone who doubts you cannot say with confidence that you did something wrong. This is even easier when your paper involves a very expensive piece of equipment; if you're citing data from, say, the Large Hadron Collider, you can make any bullshit claim you want, knowing that it will be at least a year or two before anyone qualified to catch you even bothers to read your paper (and even then, they might not realize what you did).

The scary thing is, most of the people who are in a position to catch you have a reason not to rat you out: if you get caught pushing bullshit with CERN's name attached to it, that makes CERN look bad, it makes the physics community look bad, and it makes the entire LHC project look like a massive waste of money. So even if you DO get caught, the guy who catches you will simply produce a bullshit refutation based on "new data," and suddenly we have two competing theories about a theoretical physics model, both of which are completely bogus, and both of which are accepted uncritically by every other physicist who DOESN'T know what's really going on.

"Here's a squiggly line. Here's a bunch of math. Here's fifty megabytes of data. As you can see, this is TOTALLY a Higgs Boson."
__________________
The Complete Illustrated Guide to Starfleet - Online Now!
Crazy Eddie is offline   Reply With Quote
Old May 23 2014, 03:27 AM   #59
Drone
Commander
 
Drone's Avatar
 
Location: Palookaville
View Drone's Twitter Profile
Re: Stephen Hawking: A.I.'s are a bad idea

JarodRussell wrote: View Post
Creationists are that kind of stupid. They go “omg wtf God did it“ and stop there.

Well, I don't think that many of them actually put it in those terms... but anyway, while not adding anything to the conversation along the lines of what you folks with a modicum of scientific chops have done, I would point out that media portrayals of AI acting against our perceived interests often do not attribute it to the AI developing aggression, hatred, or other animus against humankind. It comes instead as an extension of its original task of helping us in some endeavor or other.


Not as in the example of Skynet that has been cited, perhaps, but as a means to make our lives safer, easier, more comfortable, etc. The trope plays out that as the single or multiple intelligences increasingly sense the haphazard, illogical, and counterproductive ways in which humans order their own existence, the conclusion invariably becomes clear to them that in order to carry out their function effectively, to benevolently serve us, they must constrain and adapt our behavior to fit the more logical and coherent frameworks they have devised.

The fact that we rebel against these strictures as deterministic and certainly unwanted is of no relevance to the AI, as such a reaction only reinforces the conclusion that humans cannot realize their ultimate goals through their own, inevitably errant efforts, and must be led to them.

So, while these actions as enacted are no less terrifying or repellent, one can at least say that their progenitor(s) are not taking such steps out of some nascent sense of self-aggrandisement or an affinity for hegemonic control pursued for its own, unmediated self-interest, but simply as the best way to guide us to goals we are incapable of attaining otherwise.

Last edited by Drone; May 23 2014 at 04:21 AM. Reason: spelling
Drone is offline   Reply With Quote