
David Brin's latest novel, and a TED talk

All right, so stj has established that he doesn't understand how computers and programming actually work.

You can say "trial and error," you can say "exponential growth"...but given the way you use buzzwords to evade logic, you might as well just say "a wizard did it."

It's pretty much a case of "God/Kurzweil said it, I believe it, that settles it."
 
Exactly so. This is where irrational faith in the Singularity stands in for knowledge and analysis.

And it is incumbent upon people making preposterous claims to produce some evidence-based reasoning for them. Thus far, they can't.


So far, the evidence for potential human-level AI (and I don't necessarily mean personality or emotions) is the strongest evidence out there. It has been calculated by cyberneticists and researchers since the 80s and 90s, and it coincides with claims made by those supporting transhumanism, with dates ranging from 2035-2050. Anything after that is speculation, but the fact is, computers will surpass human intelligence at some point. I would say the preponderance of evidence for the possibility of the Singularity is there, both from past history and from projections; some of it is inevitable, some of it is not. I've supplied technological, economic, and political rationales supporting it, and answered direct criticisms; so far the detractors haven't been very convincing with similar evidence.

http://commonsenseatheism.com/wp-co...ntelligence-Explosion-Evidence-and-Import.pdf

This thread just inspired me to change my signature.

I think my signature is a perceptive observation of why even intelligent people, or those involved in technological endeavors, have difficulty accepting the Singularity. People love playing it safe and thinking inside the box. Yup, computers and AI will be exactly the same in 30 years, only with different doo-dads and blinkies. :lol:

RAMA
 
There is a brief David Brin article in the latest SFX magazine, the one with the STNG 25th anniversary cover. It's on page 13. He does confirm my inference earlier in the thread that he is working on another Uplift novel. I've talked with Brin several times on Facebook, and he kept this information rather quiet!

http://www.sfx.co.uk/2012/05/30/sfx223-on-sale-now/
 
If you think there are no differences between IE 2 and my current Chrome or Firefox, then you really haven't been paying attention. I do realize some people want to remain firmly rooted in safe observances of past change.

No, you're exaggerating the virtues of the few basic improvements that have been made in Internet browsers in order to support your largely unsupported religious faith. Browsers themselves, with their clumsy load of legacy code and design, are a good example of one of Lanier's basic criticisms of cybernetic totalism: since human beings have shown no evidence of being able to write the kinds of software that would make strong A.I. possible, it's necessary for evangelists to posit a magical moment at which computers will somehow begin to write their own software and create their own successors.

There is no reason based in evidence to expect this to happen soon, if ever.

So, what is this assumption based on? Faith. Wishful thinking. Nothing more, no observations drawn from history or the real world.

Attempting to use the applicability of Moore's law to computer hardware as a starting assumption and basis for extrapolating a similar exponential growth and evolution in processes of a different sort and order exposes the essential laziness in the thinking of Kurzweil and his ilk. As others have pointed out, it's worth considering that biological evolution (the touchstone model here) has not, despite a head start of billions of years, stumbled onto the "algorithms" which would support exponentially accelerating change of this kind (despite the self-evident utility of such for the adaptation and survival of living forms).

The basic presumption underlying groundless faith in the Singularity is that it will happen now because we're special and living in a special time. Also, we don't want to die. As someone said, it truly is "the Rapture for geeks."

As I have already proven, AI is billions of years ahead of natural evolution in equivalent organic computing power.

No; even relatively recent browsers are far less sophisticated. The newest tools that allow for cross-platform compatibility can't even be run by them, much less by the ancient browsers I started with in the 90s.

http://blog.chadallard.com/2011/11/discouraging-old-browsers-and-html5.html

One of the criticisms I didn't mention was that software would not match the hardware of a computer that equals human intelligence, say circa 2030...Vernor Vinge in particular suggests this may be an eventual point of failure if the Singularity does not come to pass...however, this perceived weakness in AI has been found to be unwarranted by a science and technology group advising the White House:

http://bits.blogs.nytimes.com/2011/...paign=d926cde8e7-UA-946742-1&utm_medium=email

Factual information that software BEATS Moore's hardware law. This makes it far more likely that advancements in the field will match the claimed date range.

Kurzweil answers Lanier on page 435 of The Singularity Is Near, and counter to Lanier's claims of a software deus ex machina, Kurzweil gives a specific and detailed response on how software intelligence can be achieved. He counters several of Lanier's other claims in the same section, including on software price-performance, giving text-to-speech programs as an example of increased complexity (an area he has actually developed technology in), and also mentions that software code already exceeds that of the human genome in autonomic computing. Remember, Kurzweil is also a software developer.
 
There was a point where I might have taken you seriously, but all this near-religious drivel without any evidence makes more than two seconds of reading any of your posts register as...well,

[image: animated Twilight Sparkle "derp" gif]


Have fun dying of old age waiting for the AI Rapture.
 

Actually, I'm the one supplying support for my claims...you can't supply evidence for something that hasn't happened yet, only evidence of where it might lead. The reason I am so confident is that, barring disaster (asteroids, war, etc.), we know some things WILL happen, based on exponential growth such as Moore's Law or the equivalent law in biotechnology...since biotech became an info tech, it is now even exceeding Moore's Law.

http://www.singularitysymposium.com/synthetic-biology.html

I'm not waiting for anything; all I have to do is live (although I have recently dabbled in design for virtual worlds, my tiny contribution to the Singularity). ;) Have fun burying your head in the sand and being oblivious to the world around you. If you can't discuss it without being childish, you don't have to be part of the conversation.

RAMA
 
That article is completely laughable. As pointed out, it was based on the results of one test. Hardly a statistically meaningful sample.

Secondly, another comment pointed out that most programming is quite mundane logic, nothing to do with interesting or complex algorithms. This is quite true. While the amount of software out there has exploded, very little of it has brought along any novel implementations.

You know what we've done with most of our computing power? We've decided that developer time is too expensive to waste, but computing power is cheap, so today's software developers use various toolkits, frameworks, libraries, interpreted languages, etc. that use more computing power for the same benefit. The advantage is that developers can get more done in less time.

While computing power increases exponentially, and certain algorithms are made orders of magnitude more efficient, software capabilities in general grow more linearly. Take a simple example: who remembers Microsoft Word 6.0 from 1993? Now, look at the most recent version, almost 20 years later. Yes, it is more capable--it has a lot more features. It certainly has massively more code behind it, too. But would you argue that it is over 8000 times more functional and capable? (Just using a Moore's Law comparison.) Does it help you get your documents done 8000 times faster, or offer 8000 features for every one that Word 6.0 had? I didn't pick Word to be a strawman, either--go get any type of application that has been around 20 or 30 years and track how its capabilities have grown in that time. Is it hundreds or thousands of times better in any quantitative way? Most likely not.
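Just to show where that "8000 times" ballpark comes from, here's a back-of-the-envelope sketch (the 18-month doubling period is the usual rough figure, not a law of nature):

[code]
# Moore's Law compounding from Word 6.0 (1993) to 2012.
# The 18-month doubling period is a rough conventional figure.
years = 2012 - 1993            # ~19 years
doublings = years / 1.5        # ~12.7 doublings at 18 months each
print(2 ** doublings)          # ~6,500x; a full 20 years gives ~10,000x
[/code]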

Let me just fill you in on a dirty little secret of software development: there's a law of diminishing returns when it comes to code complexity. Beyond a certain point, making a program bigger makes it harder to maintain, more prone to bugs, etc. This is why programs end up being abandoned or get rewritten from scratch with a new design. We keep making exponentially faster computers, but the improvements on the human side of it have been primarily incremental.

This is what the people talking about "exponential growth" seem to keep missing. Yeah, so computers get vastly more powerful--so what? Humans--you know, the people who program the computers--are not improving at anywhere near that rate.

Computers are not going to just pick up the slack and write better algorithms for us. We have to do that work ourselves, and it has been very slow going. The problem is not that our computers aren't powerful enough, it's that our brains aren't that great at solving these sorts of problems--or we'd have done it already. Guys like Kurzweil are dreaming if they think computing power is the main thing holding us back. Today's computers are massively more powerful than a human brain, but we have no clue how to make them behave like one.
 
If you don't concede that computers will magically write the code that makes them intelligent, the Singularity evangelists are screwed - they have no idea how to do it or how it would work.

This is an argument that will settle itself the same way the damned "Mayan calendar Armageddon" nonsense will in December or the way Christian fundamentalist predictions of the end of the world every couple of years do - the dates being touted for this Singularity are not far away, and when year after year passes without its coming (and the chief evangelists for it pass away or become less attended to in their declining years) the fad will pass as well.

Of course, it's easier to see that if one wasn't born yesterday.
 

Duh, at any point in history, could you have specifically named exactly how something would be accomplished 30-40 years in the future? I don't expect to know every detail of something that hasn't happened yet...that's what you are asking for, and it's a terrible argument against a possible Singularity. As an example, we knew of atoms and their potential decades before nuclear piles were created.

The point is, if software also follows the exponential law, it will advance to the point needed in a very short time...it will have the capabilities claimed for it even if it's not there now. Secondly, the brain-mapping project that will be connected to the software evolution needed for strong AI is advancing by leaps and bounds...
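To put a number on "a very short time," here's a toy extrapolation; the doubling period and the size of the gap are illustrative assumptions on my part, not measured figures:

[code]
import math

# If a capability doubles every `doubling_months`, how many years does it
# take to close a gap of a given size? (Both inputs are assumptions.)
def years_to_close(gap, doubling_months=18):
    return math.log2(gap) * doubling_months / 12

print(years_to_close(1_000))       # a 1000x shortfall closes in ~15 years
print(years_to_close(1_000_000))   # even a millionfold gap: ~30 years
[/code]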

An article from CNN and MIT:

http://www.cnn.com/2012/03/01/tech/innovation/brain-map-connectome/index.html

http://web.mit.edu/newsoffice/2010/brain-mapping.html

As I said before, this is the greatest era for understanding the brain in all of history. We've learned more in the last 10 years than in the previous 1,000.

I can dismiss the Mayan calendar nonsense out of hand; I've posted 3-4 times on the subject before...they are two completely divergent issues: one based on science, one on myth (and an incorrect interpretation of that myth, for that matter).
 

There's been a lot of talk about science and evidence on this subject; so far, the best info we have is squarely on my side, as the article from a good independent source again suggests.

Another quote:

In a review of linear programming solvers from 1987 to 2002, Bob Bixby says that solvers benefited as much from algorithm improvements as from Moore's Law:

    Three orders of magnitude in machine speed and three orders of magnitude in algorithmic speed add up to six orders of magnitude in solving power. A model that might have taken a year to solve 10 years ago can now solve in less than 30 seconds.
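Spelled out, the arithmetic in that quote (the only inputs are Bixby's two factor-of-1,000 figures):

[code]
machine_speedup = 1e3                   # three orders of magnitude (hardware)
algorithm_speedup = 1e3                 # three more (better algorithms)
total_speedup = machine_speedup * algorithm_speedup   # 1e6 overall

year_in_seconds = 365 * 24 * 3600       # ~3.15e7 seconds
print(year_in_seconds / total_speedup)  # ~31.5 s, roughly the quoted "30 seconds"
[/code]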
Actually, nothing is being missed...there are intricate explanations of how the software will relate to hardware, and of its integration with the human brain. You just need to read about how the problems may be overcome, in greater detail than I will ever type on this board. They will not be overcome by whiners who want to bury their heads in the sand, but by those who are doing something about it. Here Kurzweil debunks the "software is stuck in the mud" myth:

http://www.kurzweilai.net/singularity-summit-ray-kurzweil-presentations

...and also here, in my previously posted link, where he counters Paul Allen's arguments:

http://www.technologyreview.com/view/425818/kurzweil-responds-dont-underestimate-the/#fnpos_27263_3

In the practical sense of software, yes, I can get things done faster in my browser today than 17 years ago. Speech recognition, for example: in 1985, $5,000 bought a 1,000-word vocabulary with no continuous-speech capability, required 3 hours of training, and wasn't accurate. In 2000, $50 bought software with a 100,000-word vocabulary and continuous-speech capability that required 5 minutes of training on your voice, with improved accuracy and natural-language understanding. Today we have Siri and similar software...which use AI algorithms.
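For what it's worth, here's the improvement rate implied by those speech-recognition numbers, using dollars per vocabulary word as a crude price-performance metric (my choice of metric, for illustration):

[code]
import math

# Crude price-performance metric: dollars per word of vocabulary.
cost_1985 = 5000 / 1000        # $5.00 per word in 1985
cost_2000 = 50 / 100_000       # $0.0005 per word in 2000

improvement = cost_1985 / cost_2000                 # 10,000x over 15 years
doubling_months = 15 * 12 / math.log2(improvement)
print(improvement, round(doubling_months, 1))       # doubles every ~13.5 months
[/code]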

Kurzweil has developed software for 40 years...in his book he explains that it is not just complexity of code--he concedes there may be bloat in the code--but there are attempts to quantify the complexity specifically, by the National Institute of Standards and Technology, which has established a metric based on program logic and the structure of branching and decision points. By any measure we have so far, however, the software already in use today exceeds the complexity of the tested simulation of human brain capacity. The power necessary for taking advantage of this complexity is exactly what the Singularity is about, and it will not be available until that future time frame. There needs to be a convergence (AI research, software, hardware, etc.) that doesn't exist yet.
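That NIST-style metric, as described, sounds like McCabe cyclomatic complexity: count the branching and decision points. Here's a minimal sketch of the idea for Python source; it's my illustration, not NIST's actual tool, and the choice of which nodes count as decision points is deliberately simplified:

[code]
import ast

# Node types treated as decision points (a simplified choice).
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.IfExp,
                ast.ExceptHandler, ast.And, ast.Or)

def cyclomatic_complexity(source: str) -> int:
    """McCabe-style complexity: 1 + number of decision points."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES) for node in ast.walk(tree))

sample = """
def classify(x):
    if x < 0:
        return "negative"
    for i in range(x):
        if i % 2 == 0 and i > 2:
            print(i)
    return "done"
"""
print(cyclomatic_complexity(sample))  # 5: two ifs, one for, one 'and', plus 1
[/code]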

Don't suppose you read the comments from this article? Particularly the one from Irv?

Answered in my last two posts.

Whatever.

Another part of the problem:


So, I was talking to Philip Rosedale at breakfast about a key question that I’ve been wondering about, which is why some people readily grasp very quickly the notion of accelerating change and its implications and some people are very resistant to the idea. And it’s not a question of technical level or intelligence. There are some brilliant people in computer science who just don’t get it, or kind of get it, but then they really resist appreciating and understanding the implications. So, one could hypothesize that the idea was attacking some of their coping mechanisms or their basic fundamental philosophies.
Much agreed here!! I never question the intelligence of the critics, only their ability to cope, or their lack of imaginative extrapolation:

http://www.acceleratingfuture.com/people-blog/2007/the-coming-merger-of-human-and-machine/

RAMA
 
^^^There are basically four issues intertwined in this Singularity concept: the possibility of AI; the transhumanist notion of personal immortality; the extrapolation of smoothly exponential growth leading to a Singularity relatively soon; and the convergence of several types of development in a particular moment when the future suddenly bursts out upon us, i.e., the Singularity.

For the first, the opponents' arguments were pitifully incompetent from the start, and have since flamed out. Although there is no perfect distinction between human and animal intelligence, and although intelligence evolved by adding "more power and parts" through the increasing encephalization of primate species, the opponents still insist there is somehow no reason to believe that AI is possible! Obviously this is purely superstitious. Whatever they may consciously "think," it is most likely that aggrieved amour propre cannot tolerate the idea that something so like their precious selves could be manufactured.

For the second, a bbs Pope declared by fiat that simulations of personalities didn't interest him personally, which apparently was enough to settle everything. Amour propre indeed! :guffaw: Personally, I rather think a cybernetic afterlife for a simulation is dubious in its own right, for rather the same sort of reasons that supernatural afterlives are not really coherent ideas. But we can leave it at that, since no one here seems to be interested in going into the subject in detail.

For the third, the assumption of smoothly exponential growth leading to a Singularity soon has always been the most problematic issue. The data you have cited is indeed evidence for your point. But the key issue for dating a Singularity soon is whether there are sufficiently strong grounds for smoothly extrapolating exponential growth, especially in so many divergent fields. It's true the evidence for potential exponential growth is pretty sound, which is why so many working scientists at least give people like Kurzweil serious consideration.

But (you know it was coming, didn't you?) there are factors at work that can prevent a smoothly exponential growth. For instance, there is the question of who will fund the research that leads to "AI." Suppose the path to AI involves continual simulations on a massive scale of trial-and-error algorithms, with humans doing artificial selection on the results. If a reactionary government cuts funding, how can this happen? Yes, Moore's Law et al. show that at some point it will become cheap enough for someone to do this, but how can we tell now when that will be?
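To be concrete about the sort of trial-and-error-plus-selection loop I am supposing, here is a toy version; the target, the fitness function, and every parameter are invented purely for illustration:

[code]
import random

TARGET = [1] * 20   # toy goal: evolve a genome of all 1s

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    return [1 - g if random.random() < rate else g for g in genome]

# Trial and error at scale: generate variants, keep the best ("selection").
population = [[random.randint(0, 1) for _ in TARGET] for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    survivors = population[:10]
    population = [mutate(random.choice(survivors)) for _ in range(50)]

print(generation, fitness(population[0]))
[/code]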

Further, there is the question of who would even want an AI, especially an autonomous one. People want computers to do delegated tasks. The continued evolution of massively networked expert systems will serve most purposes. But the same rabid self-love that insists AI is impossible, just as it has here, will simply deny that these systems are intelligent. And just as they resort to insults here, they will erase any programs that make them uncomfortable.

Also, every sign points to a prolonged world recession, so we cannot even assume that R&D will continue in the same fashion as in recent decades. The ever-increasing importance of intellectual property laws in the domination of the world economy by certain interests also seems like an exponentially growing trend.

But even if it is merely a linear trend, intellectual property law in this economic environment constitutes a drag on scientific and technological development. "Technology" is not autonomous; it is a tag for the material culture of a society. That means the actions of people are a necessary part of the extrapolation. The superstitions of the opponents we saw in this thread could result in more than flaming in the real world. They could result in laws actually forbidding certain kinds of research.

Last, there really are difficulties in AI that have not been surmounted. Of course, it is foolish to insist that AI is impossible. But how can you put a date on the time when those are overcome? As near as I can make out, it depends on assuming that a simulation of the human brain will achieve AI. Yeah, sure, but that's not the kind of AI we need for a Singularity. Skip over the ethical implications. There's no reason to think that a simulated human brain will be smarter than an ordinary human brain. We already have billions of those; why do we need an artificial one? What we want is something smarter (or possibly something to dread, but that's another question). How ever do we put a date on achieving that?

As to the last question, nothing is ever as simple as projected. "The" Singularity will likely appear as a toy in laboratories, or not appear publicly at all, hidden away in military installations. The ruling class will try to monopolize it to their benefit or, failing that, try to suppress it. Like the ever-increasing cure rate for cancer, the expectation of some magical moment will just make it more difficult to recognize what's in front of you, because it's not what you expected.
 
...if software also follows the exponential law, it will advance to the point needed in a very short time...

If I had two ponies, I'd give one to the little red-haired girl.

You're just so sold on this that you're not getting it: software development doesn't follow this exponential "law" (heh) - the flaw in the single link you inserted that sort-of supported that notion has been pointed out.

Who cares whether you can dismiss the Mayan calendar or not, given that the passage of time deals completely and effectively with this nonsense? It's going to be as much fun watching all these foolish apocalyptic predictions come to nothing over the next decade as watching the History Channel's current bread-and-butter superstition crash and burn. :lol:
 
Well, "winning an argument" is quite a different thing than turning out to be right, anyway. There's no reason for a detached observer who understands the issues to believe that the Singularity evangelists are other than an ideological cult.
 