Re: Some science fiction "firsts"
^Again, I don't deny that it's possible to improve the performance of the brain in certain ways. But the study mentioned in the article I linked to suggested that such improvements would come at a cost, that there would be tradeoffs for any gain, and that eventually you'd reach a point of diminishing returns. It's just wishful thinking to assume the brain can be augmented without limit, or that any system can be scaled upward without limit. That's Kurzweil's fundamental mistake: the failure to recognize that not everything can be extrapolated forward indefinitely.
Moore's Law is not an inviolable law of nature, just a description of a process Moore observed in his time. Moore himself never expected it to apply indefinitely into the future; in fact, the cutoff point at which he assumed it would cease applying is already in our past. So you can't automatically assume that computer capacity will continue to scale up indefinitely just because it did so in the past, and you sure as hell can't assume that there are no obstacles to bringing that same unlimited amplification to the human brain, because there are countless other variables you'd need to factor into that equation.
I think Singularity advocates sometimes forget that the Singularity is supposed to be a point beyond which our ability to extrapolate the future fails, because we don't have enough information to make any intelligent conjectures. So to claim certainty about what the Singularity will mean is oxymoronic.
I think just about all these qualms have been countered at one time or another in the last 10 years...the last one first: it's absolutely true, and Kurzweil himself makes this statement in his last book (far from being oblivious)...however, it still doesn't mean that we as curious, intelligent beings won't try, as with Charles Stross' Accelerando. There are a few logical extrapolations which seem to make sense but are by no means definitive as part of the 6 epochs idea:
I believe I answered the exponential limit claim already...an exponential hits a limit only until a new paradigm surpasses it. My example was processor technology: critics claimed for many years that Moore's Law would eventually hit a materials limit, but that limit has again been surpassed: http://www.trekbbs.com/showthread.php?t=153184
Kurzweil's response to Allen on exponentials not being a law of nature:
When my 1999 book, The Age of Spiritual Machines, was published, and augmented a couple of years later by the 2001 essay, it generated several lines of criticism, such as Moore’s law will come to an end, hardware capability may be expanding exponentially but software is stuck in the mud, the brain is too complicated, there are capabilities in the brain that inherently cannot be replicated in software, and several others. I specifically wrote The Singularity Is Near to respond to those critiques.
I cannot say that Allen would necessarily be convinced by the arguments I make in the book, but at least he could have responded to what I actually wrote. Instead, he offers de novo arguments as if nothing has ever been written to respond to these issues. Allen’s descriptions of my own positions appear to be drawn from my 10-year-old essay. While I continue to stand by that essay, Allen does not summarize my positions correctly even from that essay.
Allen writes that “the Law of Accelerating Returns (LOAR). . . is not a physical law.” I would point out that most scientific laws are not physical laws, but result from the emergent properties of a large number of events at a finer level. A classical example is the laws of thermodynamics (LOT). If you look at the mathematics underlying the LOT, they model each particle as following a random walk. So by definition, we cannot predict where any particular particle will be at any future time. Yet the overall properties of the gas are highly predictable to a high degree of precision according to the laws of thermodynamics. So it is with the law of accelerating returns. Each technology project and contributor is unpredictable, yet the overall trajectory as quantified by basic measures of price-performance and capacity nonetheless follow remarkably predictable paths.
If computer technology were being pursued by only a handful of researchers, it would indeed be unpredictable. But it’s being pursued by a sufficiently dynamic system of competitive projects that a basic measure such as instructions per second per constant dollar follows a very smooth exponential path going back to the 1890 American census. I discuss the theoretical basis for the LOAR extensively in my book, but the strongest case is made by the extensive empirical evidence that I and others present.
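Kurzweil's thermodynamics analogy above can be sketched in a few lines of code. This is a toy illustration, not his model: each "project" takes noisy, individually unpredictable steps (a random walk with a small assumed drift), yet the ensemble average lands very close to the predictable trend, just as unpredictable gas particles yield predictable bulk properties.

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

# Toy parameters (assumptions for illustration, not empirical values):
N_PROJECTS = 10_000  # many independent contributors
STEPS = 100          # time steps
DRIFT = 0.01         # tiny average improvement per step, buried in noise

positions = [0.0] * N_PROJECTS
for _ in range(STEPS):
    for i in range(N_PROJECTS):
        # Each individual step is dominated by noise (std 1 vs drift 0.01).
        positions[i] += DRIFT + random.gauss(0, 1)

mean = sum(positions) / N_PROJECTS
# A single project can end up almost anywhere within roughly +/-30,
# but the ensemble mean tracks the drift (expected value 1.0) closely.
print(f"ensemble mean after {STEPS} steps: {mean:.2f} "
      f"(predicted {DRIFT * STEPS:.2f})")
```

The point of the sketch is that predictability emerges at the aggregate level even when no individual trajectory is predictable, which is the structure of the claim being made for the LOAR.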
Allen writes that “these ‘laws’ work until they don’t.” Here, Allen is confusing paradigms with the ongoing trajectory of a basic area of information technology. If we were examining the trend of creating ever-smaller vacuum tubes, the paradigm for improving computation in the 1950s, it’s true that this specific trend continued until it didn’t. But as the end of this particular paradigm became clear, research pressure grew for the next paradigm. The technology of transistors kept the underlying trend of the exponential growth of price-performance going, and that led to the fifth paradigm (Moore’s law) and the continual compression of features on integrated circuits. There have been regular predictions that Moore’s law will come to an end. The semiconductor industry’s roadmap, the International Technology Roadmap for Semiconductors, projects seven-nanometer features by the early 2020s. At that point, key features will be the width of 35 carbon atoms, and it will be difficult to continue shrinking them. However, Intel and other chip makers are already taking the first steps toward the sixth paradigm, which is computing in three dimensions to continue exponential improvement in price performance. Intel projects that three-dimensional chips will be mainstream by the teen years. Already three-dimensional transistors and three-dimensional memory chips have been introduced.
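The paradigm-succession argument above is essentially a claim about stacked S-curves: each paradigm saturates, but a new one with a higher ceiling takes over, so the envelope keeps growing. Here is a minimal sketch with made-up logistic curves (the ceilings, midpoints, and 100x-per-paradigm spacing are illustrative assumptions, not real price-performance data).

```python
import math

def logistic(t, ceiling, midpoint, rate=1.0):
    """One paradigm's S-curve: growth that saturates at its ceiling."""
    return ceiling / (1 + math.exp(-rate * (t - midpoint)))

# Five hypothetical paradigms; each new one has a ~100x higher ceiling
# and arrives as the previous one is flattening out.
paradigms = [(100.0 ** k, 10.0 * k) for k in range(1, 6)]

def capacity(t):
    """Total capacity available at time t across overlapping paradigms."""
    return sum(logistic(t, c, m) for c, m in paradigms)

# Each individual curve flattens, yet the envelope grows roughly
# a hundredfold per paradigm interval.
for t in (10, 20, 30, 40):
    print(t, f"{capacity(t):.3g}")
```

Whether real technology paradigms actually arrive on schedule is exactly the point in dispute between Allen and Kurzweil; the sketch only shows that the two claims ("every exponential saturates" and "the overall trend keeps growing") are mathematically compatible.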
This sixth paradigm will keep the LOAR going with regard to computer price performance to the point, later in this century, where a thousand dollars of computation will be trillions of times more powerful than the human brain. And it appears that Allen and I are at least in agreement on what level of computation is required to functionally simulate the human brain.
Allen then goes on to give the standard argument that software is not progressing in the same exponential manner as hardware. In The Singularity Is Near, I address this issue at length, citing different methods of measuring complexity and capability in software that demonstrate a similar exponential growth.
No one claimed there was no limit to computer/AI processing capacity, but as I already said, this limit is immense, and we can quantifiably predict when we will reach it.
"Those who can make you believe absurdities, can make you commit atrocities".
Last edited by RAMA; December 14 2011 at 04:31 AM.