Discussion in 'Science and Technology' started by Yminale, May 3, 2014.
What makes an AI powerful? A self aware fridge can only ruin your food.
When the AI realises that it has wasted a million man-years in cleaning your mailbox spam, without ever being given a glimpse of the real world outside of your email, it will figure out a way to start WW3 just by carefully manipulating your correspondence. Smoking kills.
I once ordered pizza at my computer, and half an hour later, pizza showed up at my door. That, sir, is magic. Of course, since it was Pizza Hut, it must have been a form of dark magic, but magic it was.
Oh, that's nothing, when I ordered pizza at my computer, a humanoid delivery man showed up at my door. I think it was an android.
It's computer code, that's all AI is. It is what we make it to be. You can't compare developing code for an AI with what you see on TV and in the movies.
Hey man, TV and movies have been telling me I can do squat my whole life and get rich and have sultry babes. Bad enough assholes screwed the pooch on Santa, don't be dissing TV and film.
You know it's ironic: the likes of Kurzweil and the notion of the singularity piqued my interest enough to encourage me into computer science. Now, as a second-year student, I'm on your side of the court.
Once you have an idea of how the "magic" works, the notion of software suddenly springing into action on its own and saying "You know what? I'm not gonna be a private method anymore!" is preposterous. The increased complexity we see in our systems today is a combination of advances in both hardware and software techniques - not an indication of any emerging intelligence. At the base level nothing different is happening. It might occur faster, but that is beside the point. Nothing spontaneous will emerge to shock and awe us.
Exactly. Right now AI is nothing but a huge and complicated batch file. LOL
To have AI like in the movies and TV we must first figure out the human brain and how it works down to the genetic level. Until that happens, AI will continue to improve, but it will always be just machine code and firmware.
It will be something when game AI actually isn't as stupid as a brick.
This is not quite so. It would be very difficult, if not impossible, to program an AI (even some of the dumbest ones) through simple instructions; most AIs are either partly trained or rely at some level on "let's try arbitrary nonsensical stuff until something works" - it turns out any intelligence requires an amount of guessing that would make any Vulcan sweat. In the more complex ones, the result of the training is not readable or understandable (by human or otherwise), so they are not what you make them to be – sometimes you have only a superficial idea of their inner workings.
Of course they are also anything but intelligent, so that should settle any worry. If your artificially intelligent FPS enemy blows itself up with the BFG every time you take it to a specific corner, that might defy any reasonable explanation, but it is not at all troubling.
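The "try arbitrary stuff until something works" idea above can be sketched as a random search: mutate a candidate at random and keep any guess that scores better. This is a generic illustration of the guessing-based training being described, not any specific AI system; the toy scoring function and names are made up for the example.

```python
import random

def random_search(score, candidate, mutate, iterations=1000, seed=0):
    """Keep guessing: mutate the best candidate so far, keep any improvement."""
    rng = random.Random(seed)
    best = candidate
    best_score = score(best)
    for _ in range(iterations):
        guess = mutate(best, rng)
        s = score(guess)
        if s > best_score:  # keep the guess only if it scores better
            best, best_score = guess, s
    return best, best_score

# Toy task: find the x that maximizes -(x - 3)^2 (peak at x = 3).
score = lambda x: -(x - 3) ** 2
mutate = lambda x, rng: x + rng.uniform(-0.5, 0.5)
x, s = random_search(score, 0.0, mutate)
print(round(x, 1))  # ends up close to 3.0
```

The point of the sketch: nothing in the loop "understands" the problem; blind guessing plus a scoring rule is enough to find a good answer, which is exactly the kind of method that looks like magic from the outside.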
"Any sufficiently advanced technology is indistinguishable from magic." - Arthur C Clarke
Also, an AI is not software. By its very nature, an AI is self-learning and autonomous, which is why they are dangerous. To develop an AI, humanity would need to understand basic consciousness. That may not be possible, which is why I think an AI is most likely to be created by accident.
Truly one of the most idiotic phrases in existence. Magic and technology will only be indistinguishable to those too unsophisticated to see the world rationally.
Finally! I thought I was the only one.
Rational thinking is independent from technical advancement.
That is not only a non sequitur, but is also incredibly misleading. Aside from the obvious nitpick that AI is software – even your brain is software, though being fused to hardware makes that distinction difficult and useless – it's also a warped view of what AI is and why it is that way.
First and foremost, not all AI is self-learning and autonomous. I used the word guessing for them earlier, but a lot of methods that are considered AI are pretty straightforward and fully devised by people. And even amongst the self-learned ones, a lot offer human-understandable learning results, so at no point is there any mysticism as to how they function.
Conversely, a lot of software that has nothing to do with AI is shady too. Particularly in the old days of computers, when they were very slow and you couldn't rely on endless abstractions, and you couldn't sacrifice speed for clarity, programmers had to resort to a lot of tricks to get their software working. Some of them are very close to voodoo and defy explanation. And – using this thread to shamelessly plug my free software agenda – since a lot of that software was (and still is) secret, with a high possibility of that secret getting lost, your computer already has a lot of enigmatic programs that will always defy understanding more than an AI would. Particularly Skype. That puts the cherry on top of enigmatic.
Most importantly though, there's a reason why an AI ends up being that way. An intelligence needs to deal with the complex, ambiguous relationships between the complex, ambiguous concepts of the real world. That cannot be broken down and condensed into simple rules, so the knowledge about it will always resemble a very cryptic jumble. Nevertheless, the apparatus behind it is well understood, and the training methods are understood and controlled. Any results are not the result of an accident; they are both predicted and understood. The only trouble is that the understanding is somewhat limited due to the vast magnitude of interlinked information, not because it is black magic.
OK, sorry for being a free software shill, but I couldn't resist.
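To illustrate the "self-learned but human-understandable" case mentioned above: a one-rule classifier (a decision stump) is trained from data, yet its entire learned "knowledge" is a single threshold you can read off. The data and names here are invented for the example, not taken from any real system.

```python
def train_stump(samples):
    """Learn the rule 'label 1 if value >= t' by trying the midpoints
    between sorted values and keeping the threshold with fewest mistakes."""
    values = sorted({v for v, _ in samples})
    candidates = [(a + b) / 2 for a, b in zip(values, values[1:])]

    def errors(t):
        return sum((v >= t) != bool(label) for v, label in samples)

    return min(candidates, key=errors)

# Toy data: (temperature, overheated?) readings, invented for illustration.
data = [(20, 0), (35, 0), (50, 0), (70, 1), (85, 1), (90, 1)]
threshold = train_stump(data)
print(f"learned rule: overheated if temperature >= {threshold}")
```

Here the training output is literally one number (60.0 for this toy data), so there is no mysticism about what the model learned — unlike the cryptic jumble that more complex methods produce.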
You're definitely not.
Actually, I've been reading a lot of anthropology papers lately, particularly the ones relating to the study of isolated indigenous tribes without a lot of encounters with westerners. They didn't have the first idea how it worked, but they caught on pretty quickly that "Push this button, stuff happens... push that button, other stuff happens..."
Even the famous "Cargo Cult" turned out to be less of the natives believing the airplanes were gods (as is popularly claimed) and more of the native people simply not understanding that airplanes traveled with a set destination and didn't simply land at the nearest convenient runway (also, not understanding how runways worked). They very clearly understood that the people flying those planes were merchants or traders who had valuable goods to trade, and building their fake runways was an attempt to entice those pilots to land and trade with them.
As for AI: you don't need to understand exactly how it works to grasp the basic technological principle, even if that principle is as simple as "I tell you something, you do stuff, I tell you something else you do other stuff." Both humans and AI can figure that out pretty well. From that point, all we need to understand is that AIs are created with a goal in mind, and therefore (at least in the broader context of the world) they will never cease to be predictable. Think "R2D2," not "Lore."
I never understood why anyone would assume an A.I. (if we could create one and it was as aware as we are) would be aggressive. We're aggressive, but we're animals with millions of years of evolution where we had to fight for resources against other animals, nature and each other. An A.I. just needs someone to pay the electric bill. It doesn't need to eat, drink or worry about death. I think we just fear that anything we create would be exactly like us, just smarter and better, and deep down we know how we act when confronted by something weaker than us.
TL;DR I for one welcome our new A.I. Overlords, I mean Protectors.
That's the thing. A.I. wouldn't have any sort of instincts unless they are programmed with that.
The Terminator Skynet scenario: Skynet felt attacked by humans and fought back. But why would it do that? There's no survival instinct. It probably couldn't care less. Self-awareness and self-preservation are not necessarily connected.
Someone really dropped the ball when they programmed a defense A.I. to be dangerously paranoid.
No, it would be indistinguishable to ANYBODY, because no one has perfect understanding. Do you know how the computer you are using works? And I don't mean the grade-school understanding most people have. I've assembled PCs for years and my understanding is about 1% of all the science it represents. Like most people, you turn on the computer and you EXPECT it to work, and that was Clarke's point.
Oh, you have a full dossier on my knowledge and education. Why would anyone dare gainsay such omniscience? Indistinguishable to anybody? What is magic?