June 6 2012, 03:13 PM   #116
Vice Admiral
RAMA's Avatar
Location: NJ, USA
Re: David Brin's latest novel, and a TED talk

Robert Maxwell wrote:
That article is completely laughable. As pointed out, it was based on the results of one test. Hardly a statistically meaningful sample.

Secondly, another comment pointed out that most programming is quite mundane logic, nothing to do with interesting or complex algorithms. This is quite true. While the amount of software out there has exploded, very little of it has brought along any novel implementations.

You know what we've done with most of our computing power? We've decided that developer time is too expensive to waste, but computing power is cheap, so today's software developers use various toolkits, frameworks, libraries, interpreted languages, etc. that use more computing power for the same benefit. The advantage is that developers can get more done in less time.

While computing power increases exponentially, and certain algorithms are made orders of magnitude more efficient, software capabilities in general grow more linearly. Take a simple example: who remembers Microsoft Word 6.0 from 1993? Now, look at the most recent version, almost 20 years later. Yes, it is more capable--it has a lot more features. It certainly has massively more code behind it, too. But would you argue that it is over 8000 times more functional and capable? (Just using a Moore's Law comparison.) Does it help you get your documents done 8000 times faster, or offer 8000 features for every one that Word 6.0 had? I didn't pick Word to be a strawman, either--go get any type of application that has been around 20 or 30 years and track how its capabilities have grown in that time. Is it hundreds or thousands of times better in any quantitative way? Most likely not.
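The 8000x figure above is just compound doubling; a quick sketch (assuming, purely as a simplification, that hardware capability doubles every 18 months):

```python
# Back-of-the-envelope for the "8000x" Moore's-law figure above.
# Assumption (not from the post): capability doubles every 18 months.

def moores_law_factor(years, doubling_months=18):
    """Cumulative improvement after `years` of steady doubling."""
    doublings = years * 12 / doubling_months
    return 2 ** doublings

# Word 6.0 (1993) to 2012 is ~19 years of doubling:
print(round(moores_law_factor(2012 - 1993)))   # several thousand
# 19.5 years of 18-month doublings is 2**13 = 8192, i.e. ~8000x.
```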

Let me just fill you in on a dirty little secret of software development: there's a law of diminishing returns when it comes to code complexity. Beyond a certain point, making a program bigger makes it harder to maintain, more prone to bugs, etc. This is why programs end up being abandoned or get rewritten from scratch with a new design. We keep making exponentially faster computers, but the improvements on the human side of it have been primarily incremental.
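The diminishing-returns point can be illustrated with a toy model (illustrative numbers only, assuming each line independently has a small chance of hiding a bug):

```python
# Toy model, not empirical data: if each line of code independently has
# a probability p of containing a bug, the chance of a completely
# clean program decays exponentially with program size.

def p_defect_free(lines, p_bug_per_line=0.001):
    return (1 - p_bug_per_line) ** lines

for n in (1_000, 10_000, 100_000):
    print(f"{n:>7} lines: {p_defect_free(n):.2e} chance of zero bugs")
```

At one bug per thousand lines, a 1,000-line program has decent odds of being clean; a 100,000-line program has essentially none — which is the intuition behind rewrites-from-scratch.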

This is what the people talking about "exponential growth" seem to keep missing. Yeah, so computers get vastly more powerful--so what? Humans--you know, the people who program the computers--are not improving at anywhere near that rate.

Computers are not going to just pick up the slack and write better algorithms for us. We have to do that work ourselves, and it has been very slow going. The problem is not that our computers aren't powerful enough, it's that our brains aren't that great at solving these sorts of problems--or we'd have done it already. Guys like Kurzweil are dreaming if they think computing power is the main thing holding us back. Today's computers are massively more powerful than a human brain, but we have no clue how to make them behave like one.
There's been a lot of talk about science and evidence on this subject; so far the best info we have is squarely on my side, as the article from a good independent source again suggests.

Another quote:

In a review of linear programming solvers from 1987 to 2002, Bob Bixby says that solvers benefited as much from algorithm improvements as from Moore’s law.
Three orders of magnitude in machine speed and three orders of magnitude in algorithmic speed add up to six orders of magnitude in solving power. A model that might have taken a year to solve 10 years ago can now solve in less than 30 seconds.

That's a real world effect.
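The arithmetic in that quote is worth spelling out, because hardware and algorithmic gains multiply rather than add:

```python
# Bixby's observation as arithmetic: hardware and algorithm speedups
# multiply. Three orders of magnitude each gives six combined.

hardware_speedup  = 1_000   # ~3 orders of magnitude, 1987-2002
algorithm_speedup = 1_000   # ~3 orders of magnitude, same period

combined = hardware_speedup * algorithm_speedup
print(combined)  # 1000000

# Sanity check on "a year down to under 30 seconds":
seconds_per_year = 365 * 24 * 3600
print(round(seconds_per_year / combined, 1))  # ~31.5 s, right ballpark
```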
Actually nothing is being missed...there are intricate explanations of how the software will relate to hardware, and of its integration with the human brain. You just need to read about how the problems may be overcome, in greater detail than I will ever type on this board. They will not be overcome by whiners who want to bury their heads in the sand, but by those who are doing something about it. Here Kurzweil debunks the "software is stuck in the mud" myth:

...and also here in my previously posted link, where he counters Paul Allen's arguments:

In terms of the practical sense of software, yes, I can get things done faster in my browser today than 17 years ago. Take speech recognition: in 1985, $5,000 bought a 1,000-word vocabulary with no continuous-speech capability; it required three hours of training and wasn't accurate. By 2000, $50 bought software with a 100,000-word vocabulary, continuous-speech capability, only five minutes of training for your voice, improved accuracy, and natural-language understanding. Today we have Siri and similar software...which use AI algorithms.
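Using just the prices and vocabulary sizes quoted above (and ignoring the accuracy and training-time gains), the price-performance improvement works out to a steady exponential:

```python
import math

# Price-performance from the figures quoted above: vocabulary words
# per dollar in 1985 vs. 2000 (accuracy improvements ignored).

wpd_1985 = 1_000 / 5_000     # 0.2 words per dollar
wpd_2000 = 100_000 / 50      # 2,000 words per dollar

improvement = wpd_2000 / wpd_1985          # ~10,000x in 15 years
doubling_months = 15 * 12 / math.log2(improvement)
print(round(improvement), round(doubling_months, 1))
# ~10,000-fold gain, doubling roughly every 13.5 months
```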

Kurzweil has developed software for 40 years, and in his book he explains that it is not just a matter of code complexity--he concedes there may be bloat in the code--but there are attempts to quantify complexity specifically, by the National Institute of Standards and Technology, which has established a metric based on program logic and the structure of branching and decision points. By any measure we have so far, however, the software already in use today has exceeded the complexity of the tested simulation of human brain capacity. The power necessary for taking advantage of this complexity is exactly what the Singularity is about, and it will not be available until that future time frame. There needs to be a convergence (AI research, software, hardware, etc.) that doesn't exist yet.
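The branching-and-decision-point metric described above sounds like cyclomatic complexity (McCabe's metric, the subject of NIST Special Publication 500-235). A crude sketch of counting it for Python code--my own illustration, not the NIST tooling, and it ignores boolean short-circuits and comprehension conditions:

```python
import ast

# Rough cyclomatic-complexity estimate: count decision points in the
# source and add 1. Simplified: ignores `and`/`or` branches and
# conditions inside comprehensions.

DECISION_NODES = (ast.If, ast.For, ast.While, ast.IfExp,
                  ast.ExceptHandler)

def rough_complexity(source):
    tree = ast.parse(source)
    decisions = sum(isinstance(node, DECISION_NODES)
                    for node in ast.walk(tree))
    return decisions + 1

example = """
def classify(x):
    if x < 0:
        return "negative"
    for d in range(2, x):
        if x % d == 0:
            return "composite"
    return "no small factors"
"""
print(rough_complexity(example))  # 4: one `for`, two `if`s, plus 1
```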

sojourner wrote:
Don't suppose you read the comments from this article? Particularly the one from Irv?
Answered in my last two posts.

Chemahkuu wrote:
Another part of the problem:

So, I was talking to Philip Rosedale at breakfast about a key question that I’ve been wondering about, which is why some people readily grasp very quickly the notion of accelerating change and its implications and some people are very resistant to the idea. And it’s not a question of technical level or intelligence. There are some brilliant people in computer science who just don’t get it, or kind of get it, but then they really resist appreciating and understanding the implications. So, one could hypothesize that the idea was attacking some of their coping mechanisms or their basic fundamental philosophies.
Much agreed here!! I never question the intelligence of the critics, only their ability to cope, or their lack of imaginative extrapolation.
