May 20 2012, 10:30 PM   #47
RAMA
Re: David Brin's latest novel, and a TED talk

sojourner wrote:
Yeah, well, when it's history and fact come talk to me about it. Until then it's as likely as the Rapture, which you can also come talk to me about when it's history and fact.
See, but it can't be history because it hasn't happened yet (should be obvious, right?)...I think that's a weak argument...in fact, all the arguments against a singularity are pretty weak. First, I've already conceded that the ultimate end results are uncertain, but if I postulate accelerated change that leads to "runaway" AI, and this seems to be the one thing most people ARE certain about, it then follows that this AI will reach a saturation point. AI experts, and then others, assumed that supplanting human intelligence would mean a change in evolution, where the smarter beings would replace the others.

The most common arguments against a possible singularity:

1. Knee-jerk human centrism, we are "unique": Neither machine nor human-derived AI would be a proper human being, so people reject the notion that such beings could exist. But we have discovered that we are not necessarily unique compared to the natural world in almost any respect...planets exist everywhere and many might contain life, and we are no longer the center of the universe.

2. The human brain is too complex: In fact, the greatest era of brain discovery is happening right now, as we speak, and it turns out the brain is actually quantifiable. This is in direct contradiction to everything we've heard about how the brain is too complex to understand. For this I'll refer you here: http://singinst.org/overview/whatisthesingularity/

3. Religion, ethics: Only God created man, or only God can create intelligence. Ethically, does creating facsimiles, and then superior intelligence, diminish us? Will creating something that may destroy or bypass us be self-defeating? Everyone has a personal answer here...I reject religious reasoning out of hand. I answer the second question elsewhere in this list. The third is the most valid: we may still have the potential to stop the technology from advancing, but not without a huge, concerted effort, one not likely to happen for many reasons, including economic ones. We MAY be able to program in an Asimovian robotic law, but I doubt that will work with exponentially improving AI.

4. Linear thinking: It takes a tremendous amount of effort to convince the average human that things are not always as we perceive them, i.e., that there are microscopic organisms, that we are made of atoms, that no creator was needed to create the universe. Consequently, our slow perception of time and our limited average lifespan of 80-odd years keep us from seeing the big picture.

5. We are too far behind technologically for a singularity: This objection always assumes a lack of exponential growth, and it always fails on those terms (see the sketch after this list). This is the clearest evidence out there...individual exponentials end, but only to make way for the next paradigm shift. The existence of the meme also makes the prediction partly self-fulfilling: there are many creators of this technology, with money behind them, working hard to make it appear now. I also feel that while the math makes the singularity possible before 2050, it may not happen by then, but it is probable before 2100.

6. It's doomsday: Only if you assume #1. You can look at it two ways: either advanced human-derived AI is a great evolutionary step, or, if we are cast aside through indifference or possibly even war, the machines will be representatives of a past human culture. I don't find either too horrible, really, though the latter is not my preference.

7. There is no infinite growth: You don't need infinite growth for AI that is supra-intelligent relative to man. Exponentials do indeed come to an end, but the numbers we are talking about are more than enough over the next 50 years for the singularity to take place.
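
To make #4 and #5 concrete, here's a minimal sketch of why linear intuition underestimates exponentials. The doubling period and linear rate below are illustrative assumptions (a Moore's-law-style doubling every 2 years), not measured data:

[CODE]
# Minimal sketch: linear vs. exponential extrapolation of computing capacity.
# Assumed parameters, for illustration only.

YEARS = 40
DOUBLING_PERIOD = 2        # assumed doubling time, in years
ANNUAL_LINEAR_GAIN = 1.0   # linear model: add 100% of today's capacity per year

linear = 1 + ANNUAL_LINEAR_GAIN * YEARS       # straight-line projection
exponential = 2 ** (YEARS / DOUBLING_PERIOD)  # compounding projection

print(f"Linear projection over {YEARS} years:      {linear:,.0f}x")
print(f"Exponential projection over {YEARS} years: {exponential:,.0f}x")
# Linear:      41x
# Exponential: 1,048,576x
[/CODE]

Same 40 years, a 41-fold gain versus a roughly million-fold gain...that gap is exactly what I mean by linear thinking in #4.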
__________________
"It is far better to grasp the universe as it really is than to persist in delusion, however satisfying and reassuring." - Carl Sagan