May 30 2012, 01:53 PM   #59
Yelp for People
Re: David Brin's latest novel, and a TED talk

It's so nice when people make things into individual bullet points I can address/rebut.

RAMA wrote:
sojourner wrote:
Yeah, well, when it's history and fact, come talk to me about it. Until then it's as likely as the Rapture, which you can also come talk to me about when it's history and fact.
See, but it can't be history because it hasn't happened yet (should be obvious, right?)... I think all the arguments against a singularity are pretty weak. First, while I have already conceded that the ultimate end results are uncertain, if I postulate accelerated change that leads to "runaway" AI (and this seems to be the thing most people ARE certain about), it then follows that this AI will reach a saturation point. AI experts, and then others, assumed that supplanting human intelligence would mean a change in evolution, where the smarter beings would replace the others.

The most common arguments against the possible singularity:

1. Knee-jerk human centrism, we are "unique": Neither machine- nor human-derived AI would be a proper human being, so people must reject the notion it could exist. But we have discovered that we are not necessarily unique in almost any respect relative to the natural world... planets exist everywhere and many might contain life. We are also no longer the center of the universe.
That's certainly a strawman. Humans are "unique" in the sense that we don't know of any species like ourselves, but not unique in the sense something like us couldn't exist elsewhere or be created artificially.

2. Human brain is too complex: In fact, the greatest era of brain discovery is happening right now, as we speak, and it turns out the brain is actually quantifiable. This is in direct contradiction to everything we've heard about how complex the brain is. For this I'll refer you to here:
It's not that the human brain is too complex; it's that it works in a totally different way from all of our modern computing technology. You may think this is a minor hurdle, but it isn't. It takes exponentially more digital computing power to simulate an analog brain than the genuine analog brain requires to run. The two work completely differently. It's not a question of complexity, but of mechanics.
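To make that cost asymmetry concrete, here's a toy sketch (the model and every constant in it are made up purely for illustration): a digital computer simulating even a trivial "analog" system, a leaky integrator standing in for a neuron membrane, has to step through time explicitly, and the work grows directly with the time resolution you demand. The physical system just evolves for free.

```python
# Toy illustration of digitally simulating an "analog" system: a leaky
# integrator, dV/dt = (-V + I) / tau, integrated by explicit Euler steps.
# The finer the required time resolution, the more arithmetic the digital
# simulation burns for the exact same one second of "real" behavior.

def simulate_leaky_integrator(input_current, dt, duration, tau=0.02):
    """Euler integration of dV/dt = (-V + I) / tau; returns (V, step count)."""
    v = 0.0
    steps = round(duration / dt)
    for _ in range(steps):
        v += dt * (-input_current * 0 + (input_current - v)) / tau
    return v, steps

# Same simulated 1-second interval, at finer and finer timesteps:
for dt in (1e-3, 1e-4, 1e-5):
    v, steps = simulate_leaky_integrator(1.0, dt, 1.0)
    print(f"dt={dt:g}: {steps} steps, V={v:.4f}")
```

Halve the timestep and the work doubles, for the same answer; a brain-scale model at biophysical resolution multiplies this by billions of interacting units.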

3. Religion, ethics: Only god created man, or only god can create intelligence. Ethically, does creating facsimiles, and then superior intelligences, diminish us? Would creating something that may destroy or bypass us be self-defeating? Everyone has a personal answer here... I reject the religious reasoning out of hand. I answer the second question elsewhere in this list. The third is the most valid: we may still have the potential to stop the technology from advancing, but not without a huge, concerted effort, one not likely to happen for many reasons, including economic ones. We MAY be able to program in an Asimovian robotic law, but I doubt that would work with exponential AI.
Come on, those kinds of arguments are so silly as to not even need rebutting. But I'm noticing a lot of your points are really just strawmen so you can make it look like there's no credible criticism of the Singularity hypothesis.

4. Linear thinking: It takes a tremendous amount of effort to convince the average human that things are not always as we perceive them, e.g. that there are microscopic organisms, that we are made of atoms, that no creator was needed to create the universe, and consequently that our slow perception of time and our limited average lifespan of 80-some years keep us from seeing the big picture.
It didn't take a lot of effort to convince me. Another strawman.

5. We are far behind technologically from what we need for a singularity: This argument always assumes a lack of exponential growth, and it always fails on those terms. This is the clearest evidence out there... exponentials end, but only to make way for the next paradigm. The existence of the meme also makes the prediction self-fulfilling: there are many creators of this technology, and money backing it, working hard to make it appear now. I also feel that while the math makes the singularity possible before 2050, it may not happen by then, but it is probable before 2100.
Where's the exponential growth in AI? I rest my case.

The bottom line is, the Singularity as described requires strong AI. It doesn't exist. It's been "just around the corner" for decades. Instead, we've only been able to come up with expert systems, nothing we'd call a self-aware intelligence. We don't even know how to do this, because we don't know how human consciousness works.
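For what it's worth, here is roughly what an "expert system" amounts to (a deliberately tiny sketch; the medical rules are invented for illustration): hand-written if-then rules plus a loop that fires them. Delete a rule and the corresponding "expertise" vanishes; nothing in here is aware of anything.

```python
# A minimal "expert system": a list of hand-written if-then rules and a
# forward-chaining loop that fires any rule whose conditions are all
# known facts. There is no learning and no understanding -- only the
# rules a human typed in.

RULES = [
    ({"has_fever", "has_rash"}, "suspect_measles"),
    ({"suspect_measles"}, "recommend_isolation"),
]

def forward_chain(facts, rules=RULES):
    """Fire rules repeatedly until no new conclusions can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"has_fever", "has_rash"}))
# derives suspect_measles, then recommend_isolation, from the two rules
```

That loop is the whole trick; the system's "intelligence" is exactly the rule list, no more.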

It's not a question of how powerful our computers are. There seems to be this assumption that, if we simply make a computer powerful enough and feed it tons of information, it will become self-aware and intelligent. There is no reason to believe this. Computers are inherently deterministic. They don't just magically do things without being told to, unless they have flawed hardware/software.
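You can see that determinism for yourself with a trivial experiment (toy code, not specific to any real AI system): run an arbitrarily convoluted computation twice on the same input and you get bit-identical answers, every time, on every machine. Nothing emerges that the program was not told to compute.

```python
# Determinism in practice: an arbitrarily complicated computation
# (here, 10,000 chained SHA-256 hashes) produces the identical result
# on every run when given the same input.
import hashlib

def complicated(seed: bytes, rounds: int = 10_000) -> str:
    h = seed
    for _ in range(rounds):
        h = hashlib.sha256(h).digest()
    return h.hex()

a = complicated(b"same input")
b = complicated(b"same input")
assert a == b  # bit-identical, run after run
```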

Now, you could make the more existential argument that a facsimile of human consciousness is indistinguishable from the real thing, but in that case you should get back to us when you've seen one.

6. It's doomsday: Only if you assume #1. You can look at it two ways: either advanced human-level AI is a great evolutionary step, or, if we are cast aside through indifference or possibly even war, then the machines will be representatives of a past human culture. I don't find either too horrible, really, though the latter is not my preference.
That sounds like a pretty fringe notion.

7. There is no infinite growth: You don't need infinite growth for AI that is supra-intelligent relative to man. Exponentials do indeed come to an end, but the numbers we are talking about are more than enough, over the next 50 years, for the singularity to take place.
Who argued for infinite growth? "Infinite growth" is an oxymoron, anyway. We live in a universe of physical limits, "infinite" anything is physically impossible. Again, a strawman.
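Since we're on growth curves, here's the standard picture, sketched with arbitrary made-up constants: every "exponential" observed in nature is really the early phase of a logistic, S-shaped curve. The two are indistinguishable at first; then the logistic flattens against a physical ceiling while the pure exponential runs off to absurdity.

```python
# Exponential vs. logistic growth. Both start at 1 and look the same
# early on; the logistic saturates at a hard limit (here 1000, an
# arbitrary stand-in for any physical ceiling).
import math

def exponential(t, rate=0.5):
    return math.exp(rate * t)

def logistic(t, rate=0.5, limit=1000.0):
    return limit / (1 + (limit - 1) * math.exp(-rate * t))

for t in (0, 5, 10, 20, 30):
    print(t, round(exponential(t), 1), round(logistic(t), 1))
```

By t = 30 the exponential has blown past three million while the logistic sits just under its limit of 1000, which is the sense in which "exponential" trends end.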

I realize there are many people who are in love with the idea that we'll all be Singularity transhumans within our lifetime, but there is no good reason to believe this. While our materials technology and chemistry are highly advanced, and I have no doubt medical technology will totally blow us away in the decades to come, our computing technology remains more or less unchanged since its inception. We still use binary digital systems with processors made up of transistors and volatile storage for memory. We've improved the scale immensely, but the basic mode of operation is the same. And building AI using this technology, in the sense most people think of it, has been a research dead-end since at least the '60s.

I have no doubt we will have some awesome technology in the decades to come, but strong AI? Without some major breakthrough in computing technology, it's not happening in our lifetimes. And as far as I understand it, the Singularity hinges very much on the existence of strong AI.
Five stars!