Old May 4 2014, 09:16 PM   #15
FPAlpha
Vice Admiral
 
 
Location: Mannheim, Germany
Re: Stephen Hawking: A.I.'s are a bad idea

JarodRussell wrote:
Dryson wrote:
A.I.'s aren't all that bad if they are programmed to have breakwalls where their programming logic does not go past the breakwall.

The real horror is having a robot built with the intelligence of the HeartBleed Virus. Now that would be scary.
The Heartbleed virus?
It's a misnomer. Heartbleed isn't a virus but a vulnerability in OpenSSL, the encryption library used by a huge number of sites across the internet.

Vulnerable versions could be exploited to leak chunks of server memory (potentially including private keys and passwords), but I fail to see what that has to do with AI.

Robert Maxwell wrote:
Yminale wrote:
Robert Maxwell wrote:
We are nowhere near self-improving AIs that risk surpassing human capacity to understand them.
The problem is would we recognize it if it existed.
Yes.

I wish people would stop looking at computers as though they're magical. They aren't.

Even our most advanced computer systems run on software that relies on very basic principles and is relatively easy to understand.
We started off as basic chemical compounds that combined into the first molecules, then developed into amino acids and on to single-cell organisms.

We were once also very basic, but we developed. At some point in our lifetime I'm pretty sure we'll build computers able to fool an average person into believing they're human. That doesn't mean they're intelligent in our sense of the word, just that they have enough processing power and sensory input to accurately predict our questions, or rather what we want to hear. It would still just be a very well programmed machine.

However, we haven't even understood ourselves when it comes to philosophical questions like what intelligence actually is. So if we start to build machines that can understand a problem and find innovative solutions their programmers never thought of, then I believe we are on our way to creating an AI.

We are not magical ourselves.. we are biological machines, programmed by nature to safeguard our bodies as best we can and provide for their basic needs, and programmed for social interaction and a multitude of other things that, taken together, we call life. Who's to say that at some point in the future we won't be able to build a machine that mimics us?

At which point do we become intelligent beings? A baby can't survive without support. That doesn't mean it's dumb, just that it hasn't developed to a stage where it can support itself. Yet it has a tremendous capacity to learn; we just need to sustain its body with food and shelter, and the rest develops on its own through experience and trial & error.

I fully believe a machine could do the same thing, given enough processing power and sophisticated enough programming. What a human needs several years for before it can sustain itself without outside help (say 6-8 years at minimum), a computer might manage in a fraction of that time.

As with humans, though, it might develop its own personality.. if we grow up with mostly negative experiences, chances are we turn out to be assholes, as a reaction to our environment and a kind of self-protection, and vice versa (it doesn't have to go that way.. plenty of people grew up in a nice environment and are still assholes). No one knows how such an AI would turn out, but by design it would already dwarf any human in raw processing power. Give it a connection to the outside world and the ability to manipulate it (a mobile body or something else) and we're playing dice with our future, because how would you fight a thing that out-processes all living human brains combined and turns out to be the real-world equivalent of Skynet?

Do you remember the TNG episode "The Measure of a Man" (Data sits before a court to decide if he's his own being or Starfleet property)? I think it's relevant here. Data is a highly sophisticated machine, yet at times he fails to understand basic human interactions that we learned as toddlers and children. But he's constantly improving himself, and the Data of season 1 is a very different matter from the Data of season 7 (I'll disregard the emotion chip, because then you'd have no choice but to call him a true AI). Throughout the show he displayed his own way of expressing himself as a person.. he has wishes and goals, and he performs tasks that are non-essential to sustaining his life, such as getting active in the arts and interacting socially with crewmembers. Starfleet has recognized his status by allowing him to serve as a fully integrated officer with command authority over biological entities, i.e. us. He has turned out to be a decent person, but then there's also Lore: nearly the same technological makeup, yet a totally different being.

This is what Hawking is getting at.. not whether we'll be able to physically build such a machine (we will, maybe in our lifetime), but whether we should. There's nothing wrong with a very sophisticated proto-AI. I'd love to have a robot that cleans and organizes my living space.. I hate dusting and vacuuming (fortunately I have a dishwasher, so that's one chore down) and would pay a premium for such a thing, coming home each evening to a squeaky clean and organized apartment/house.

Would I want to have philosophical arguments with it, or have it make my life decisions for me just because it can compute more variables? Heck no.
__________________
"Chewie, we're home.."

Last edited by FPAlpha; May 4 2014 at 09:50 PM.