David Brin's latest novel, and a TED talk

Discussion in 'Science Fiction & Fantasy' started by RAMA, May 13, 2012.

  1. Yminale

    Yminale Rear Admiral Rear Admiral

    Joined:
    Dec 30, 2002
    Location:
    Democratically Liberated America
    I have listed 2 items that I believe cannot be emulated, simply because they are non-quantifiable. Even if you modeled every neuron in the human brain, I still don't think you could create a human level of intelligence, simply because the interactions are too complex, and that's just neurons. We still don't know what astrocytes and oligodendrocytes contribute, and they outnumber the neurons 10 to 1. I also agree with you that an AI is not required for the singularity. Cloud sourcing may be the beginning of the singularity as we integrate human abilities with computer architecture.
     
  2. Yminale

    Yminale Rear Admiral Rear Admiral

    Joined:
    Dec 30, 2002
    Location:
    Democratically Liberated America
    The fact that it's a mundane process is irrelevant. Creativity cannot be quantified and analyzed; therefore it can't be copied. The only way non-organic systems could compete is through sheer brute force, using an untold number of iterations to produce something, but then it wouldn't be intelligence.

    And of course you are completely wrong. Sure you could emulate a complex system using a black box approach. People do that all the time to create systems that pass the Turing test but in order to do that you need to know all the possible inputs AND outputs and that's where you fail. Creativity has an INFINITE number of possibilities. Then there is also the fact that if you don't understand something, you certainly can't improve it and that's the whole point of singularity.

    Most problems cannot be broken down into boring, step-by-step thinking. Let's say you have a choice of two ice cream flavors, rocky road and butterscotch. You've never tasted either and you can only choose one. Which would you choose? And no, you can't use some random number generator like flipping a coin. You could do it by some non-linear reasoning. Could a machine do it? Now, your other point is that it really isn't important. I disagree; the ability to handle chaos is very important, since there are an infinite number of possibilities and you can't plan for all of them. Sure, you could emulate intuition through a random number generator, but then it wouldn't be intelligence.


    Adaptability. Without these two aspects, a machine intelligence would eventually face a problem it can't solve or a question it can't answer. That's the whole plotline to ST:TMP by the way :lol:


    I would argue that specialization is a good thing. Look at the difference between a PC (generalist) and a console (specialist). Specialization means fewer resources and therefore less cost. Which raises the question: why would we need an AI in the first place?
     
  3. stj

    stj Rear Admiral Rear Admiral

    Joined:
    Dec 27, 2006
    Location:
    the real world
    Human beings don't need to know all the possible sentences to create a single sentence. The insistence that this creativity can't be quantified and analyzed seems to me sheer mysticism.

    The particular example of a problem given, someone who's never had either choosing between rocky road and butterscotch, strikes me as the kind of problem where it is likely that human beings themselves act more or less randomly. Even if it were somehow a non-linear process, non-linear equations are soluble, they're just more difficult. The distinction between P and NP problems seems more relevant from what I understand. But then, quantum computing has all that attention lavished on it because it promises a way to tackle NP problems.

    As for adaptability being an essential attribute of human-type awareness, that too strikes me as mysticism. In any event, nature shows us that adaptability and creativity can be approximated by evolution, without any awareness involved at all. If programs were allowed to evolve, so to speak, I'm sure the results would be amazingly creative. I'm not at all sure we'd get human consciousness, but that seems to have been a rare result in nature. What I'm really sure of is that we would very likely find most of the results useless, and some positively harmful, and would likely deem the whole exercise an expensive waste.
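    The "programs allowed to evolve" idea isn't hypothetical; evolutionary algorithms already do exactly this. A minimal sketch of the mechanism (the target string, population size, and mutation rate are arbitrary illustrative choices, not taken from any real system): random mutation plus selection finds a solution with no model of the problem and no awareness at all.

    ```python
    import random

    TARGET = "creativity"
    ALPHABET = "abcdefghijklmnopqrstuvwxyz"

    def fitness(candidate):
        # Count positions matching the target; this scoring function is the
        # only "knowledge" the whole process has.
        return sum(a == b for a, b in zip(candidate, TARGET))

    def mutate(candidate, rate=0.1):
        # Randomly perturb characters; no understanding of language involved.
        return "".join(random.choice(ALPHABET) if random.random() < rate else c
                       for c in candidate)

    random.seed(0)
    population = ["".join(random.choice(ALPHABET) for _ in TARGET)
                  for _ in range(50)]
    for generation in range(500):
        population.sort(key=fitness, reverse=True)
        if population[0] == TARGET:
            break
        # Keep the fittest half, refill with mutated copies of the survivors.
        survivors = population[:25]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(25)]
    ```

    Blind variation plus a selection criterion is enough; nothing in the loop "knows" what a word is, which is the point about evolution producing creativity without awareness.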
     
  4. Yminale

    Yminale Rear Admiral Rear Admiral

    Joined:
    Dec 30, 2002
    Location:
    Democratically Liberated America
    But machines do, or they create gibberish.

    Anything with a great deal of uncertainty would seem to be mystical in nature.

    It's odd but some studies show they don't.

    Yes but then it wouldn't be intelligence.
     
  5. stj

    stj Rear Admiral Rear Admiral

    Joined:
    Dec 27, 2006
    Location:
    the real world
    I believe that machines have created sentences that aren't gibberish (I've read sentences purportedly from machines), without having all possible sentences in their memories.

    I don't believe that uncertainty is mystical. When you say that humans have the ability to create sentences without the use of processes that can be duplicated in some fashion by programs, without explaining why this is so, that I think is mysticism.

    And the insistence that intelligence=human-style self awareness (and only that) also strikes me as mysticism.
     
  6. Admiral Buzzkill

    Admiral Buzzkill Fleet Admiral Admiral

    Joined:
    Mar 8, 2001
    Yes, this is because the Singularity evangelists and their followers think along religious, apocalyptic lines rather than scientific ones.

    In fact, few if any of the basic criticisms of this silly fantasy have been addressed.
     
  7. Robert Maxwell

    Robert Maxwell memelord Premium Member

    Joined:
    Jun 12, 2001
    Location:
    space
    I would just like to clarify one thing. When I said that there are certain aspects of human intelligence that can't be duplicated, I meant presently, not eternally. I was also pointing out that we are nowhere near such duplication.

    As for the Singularity, transhumanism, and all that--while they might come to pass someday (or not), our current levels of knowledge and technology are much too far off from what would be required to convince me we'll achieve something like this in the next 50 years or so.

    Maybe in a few hundred, or a thousand, assuming we manage to solve our environmental, resource, and energy problems--and that's a big assumption.
     
  8. stj

    stj Rear Admiral Rear Admiral

    Joined:
    Dec 27, 2006
    Location:
    the real world
    Reading carefully the basic criticisms were:

    "The Singularity is kind of the Mayan Calendar for people who aren't credulous scientific illiterates - it's an apocalyptic scenario that gets them feeling all tingly."

    "Yeah, I know what it is. I just don't have "faith" it will come to fruition."

    Aside from being substance-free, they are somewhat abusive. The misuse of the word "faith" is also disingenuous in my opinion. RAMA was aware of the novelty of the concept and patiently explained. The follow-up criticisms were:

    "Of course the Singularity is a 'possibility.' " Scare quotes are an argument? Actually, conceding that a singularity is a possibility automatically renders all comparisons with the Rapture null and void. The guy who coined the phrase comparing them in practice rather tends to assume much of the Singularity argument.

    "So, they're placing faith on the fact that their extrapolation is correct?"

    Equivocating on the meanings of "faith" is still disingenuous.

    "The singularity is about as certain as the rapture. In other words, it's not. There is no guarantee that technology will advance at an accelerated rate especially in all areas. You need to temper your optimism with some realism."

    The Rapture is impossible. The Singularity is not. The comparison is a false one.

    "Ah, but what I asked was proof for the statement: 'The real fantasy here is linear thinking in technology which is demonstratably false' - that is, a disproof of alternatives to accelerationism, one that makes it clear that these alternatives are as credible as geocentrism or Flat Earthers. It is not enough for John Smart to have a credible narrative; his must be the only credible narrative."

    Moore's Law is exponential, which proves that linear thinking in technological futurism is demonstrably false. A general or global accelerationism in technology is also going to be exponential, because the interlinks between the various technological advancements will grow exponentially. Whether it is going to be a logistic curve, or whether the exponent will be 0.0001, is another issue of course.:)
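    The linear-versus-exponential point is simple arithmetic: a quantity that doubles on a fixed period overtakes any fixed annual increment, however generous. A quick sketch (the two-year doubling period echoes the common statement of Moore's Law; the linear increment of 10x per year is an arbitrary, deliberately generous comparison):

    ```python
    # Project the same baseline forward 30 years under the two models.
    baseline = 1.0
    exponential = [baseline * 2 ** (year / 2) for year in range(31)]  # doubles every 2 years
    linear = [baseline + year * 10.0 for year in range(31)]           # fixed yearly increment

    # After 30 years: exponential is 2**15 = 32768x the baseline,
    # linear is only 301x, despite its generous slope.
    ```

    The crossover is inevitable regardless of the constants chosen; the linear model only looks competitive in the early years.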

    "Science is not a religion."

    This is a criticism of the Singularity? Rather, it is a criticism of the critics who keep equating the Singularity with superstition.

    "Until then it's as likely as the Rapture, which you can also come talk to me about when it's history and fact."

    The Rapture is impossible, a stupid and crazy idea, while the Singularity, despite my personal disagreements, is not.

    "Humans are "unique" in the sense that we don't know of any species like ourselves, but not unique in the sense something like us couldn't exist elsewhere or be created artificially."

    The Singularity proponents quite agree. Why then the accusations of being superstitious nitwits?

    "It takes exponentially more digital computing power to simulate an analog brain than a genuine analog brain requires."

    And Moore's Law (like other such "laws" about the advancement of computing technology) is exponential. Yes, there is every reason to think the brain can be simulated within a relatively short period.

    "Come on, those kinds of arguments [against the Singularity] are so silly as to not even need rebutting."

    Since this is so, why the insistence on abusing the proponents as religious?

    "Where's the exponential growth in AI? I rest my case."

    Exponential growth is very slow at first. The relatively belated appearance of extraordinarily rapid growth in a brief period is precisely the disjunction that prompts the name "Singularity." The only real basic criticism of the AI aspect of the Singularity lies in demonstrating some real reason why exponential increases in computing power cannot by brute calculation produce AI. That's the problem for the critics here: they can't make such an argument. They argue only from personal incredulity.

    "Kurzweil in particular often seems driven by father issues rather than imagination, much less science."

    Genetic fallacy, ad hominem and just generally substituting bile for logic.

    "...since human beings have shown no evidence of being able to write the kinds of software that would make strong A.I. possible, it's necessary for evangelists to posit a magical moment at which computers will somehow begin to write their own software and create their own successors.

    There is no reason based in evidence to expect this to happen soon, if ever.

    So, what is this assumption based on? Faith. Wishful thinking. Nothing more, no observations drawn from history or the real world.

    Attempting to use the applicability of Moore's law to computer hardware as a starting assumption and basis for extrapolating a similar exponential growth and evolution in processes of a different sort and order exposes the essential laziness in the thinking..."

    As you can see, most of this is repetitive, a lot of it misusing the word "faith" yet again. I've italicized the heart of the criticism, which is, first, that we haven't achieved strong AI as yet. Anything besides linear extrapolation does not serve as a reason, etc. In other words, all the abuse is justified by assuming the argument. It also says we can't extrapolate from historically attested exponential advances because AI is "a different sort and order..." No one has disagreed that we don't have "strong" AI, whatever that may mean. This is from the same poster who pointed out the disagreement among AI researchers about what they're trying to achieve! The argument here contradicts that: if AI doesn't necessarily mean simulating human awareness, then being a "different sort and order" is entirely irrelevant. Not only is the argument pointless, it is unsupported here.

    "As others have pointed out, it's worth considering that biological evolution (the touchstone model here) has not, despite a head start of billions of years, stumbled onto the "algorithms" which would support exponentially accelerating change of this kind (despite the self-evident utility of such for the adaptation and survival of living forms)."

    The problem with the argument from personal incredulity is what you don't know. The immune system is a living example of the type of a self-changing algorithm. It is not at all clear that intelligence is useful for adaptation and survival, despite the claim it is "self-evident." After all, humanity has only managed to survive about 200,000 years, and the average for species is 1,000,000 years. Get back to me later on this one. As for exponential growth in general, it leads to boom-and-bust cycles or turns out to be logistic growth. The first is not self-evidently useful. The denial of exponential growth also denies the logistic curve, even though we find that in nature.
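    The exponential-versus-logistic distinction is easy to make concrete: a logistic curve is indistinguishable from an exponential early on, then saturates at a carrying capacity. A sketch with arbitrary illustrative parameters (capacity, rate, and midpoint are invented for the example):

    ```python
    import math

    def logistic(t, capacity=1000.0, rate=0.5, midpoint=20.0):
        # Standard logistic function: near-exponential for t well below the
        # midpoint, flattening toward `capacity` well above it.
        return capacity / (1.0 + math.exp(-rate * (t - midpoint)))

    # Early on, each step multiplies the value by ~e**rate, just like an
    # exponential; late in the curve, growth has effectively stopped.
    early_ratio = logistic(2) / logistic(1)    # close to e**0.5
    late_ratio = logistic(42) / logistic(41)   # close to 1.0
    ```

    This is why observing exponential growth so far cannot settle whether a trend will continue or saturate; the two curves agree exactly where we have data.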

    "You know what's fueled the past couple hundred years of human advancement? Fossil fuels."

    Sounds like the beginning of an argument against the Singularity as a guaranteed Heaven on Earth.

    "The assumption that simulating a human brain will result in generalized AI is also totally faulty, mainly because we so poorly understand how the interactions of neurons and chemicals results in the properties we see as intelligence and consciousness. How are you going to simulate something you don't even understand?"

    Actually it is possible to simulate something you don't understand. Since exponential growth in computing power is historically established, brute force simulation of the brain may produce some sort of AI, despite the looney assertions that there is no reason to think so. What you need is an actual argument: not just that no exponential growth in computing power could possibly simulate the brain, but that simulating the brain cannot produce AI.
    There hasn't been any offered except the mystical insistence that human-style awareness is essential to AI and that human-style awareness is uncopiable, period. This isn't an argument.

    "What they're really positing is that we don't have to understand how to do [programming for AI,] because "understanding" is just about to blossom forth from the machines themselves."

    They're positing that brute force simulation will produce some sort of AI. It is insisted that "emulation" isn't the same thing, but that is more mysticism than an argument. The exponential growth in computing power establishes the reasonableness of the argument. The notion that AI must be a human-style awareness is a tacit assumption of this criticism. It is still an unsupported assumption. Also, it does not address the Singularity advocates who are not convinced that human-style awareness is necessary. For those, the progress already made constitutes steps toward AI as machine, not human, intelligence. There is no magic moment when "intelligence" emerges, not in biology nor in computers.

    "The problem with "downloading minds" isn't one of "taking umbrage" but that it's the equivalent to "bottling unobtainium" - you've got a verb acting there on a noun that stands in for something with poorly or undefined characteristics and which there's little reason to think actually exists as an entity."

    This is so confused it's hard to believe it was offered as a criticism. If the "mind" is not a metaphysical entity (I entirely agree) then it is in principle simulable. In other words, this "criticism" inadvertently concedes the point. Also, this poster, somehow, knows this "poorly or undefined" something that might not even exist is still, somehow, identifiably a "different sort or order..." from machine computation. This nonsense has nothing to do with why someone should care about a simulation of his or her mind.

    "What emerges in human beings is mainly the ego construct, that internal model which as you say is self-aware: it imagines itself as an entity moving within the context of environment and holds to the delusion that it more or less orders and controls events.

    A lot of AI research used to concentrate on replicating the ego - it's what would really be examined by the Turing test. Is self-awareness actually intelligence?"

    Precisely why the Singularity can happen without duplicating human-style awareness, not a criticism.

    "We could have a long discussion about the 'ego' aspect--what some call a 'user illusion.' It is the aspect of the human mind we understand least, but I think the one most essential to duplicate in order to have a machine expand beyond its own programming, purely of its own volition."

    Why? Did our minds expand beyond our ancestors purely of our own volition? The mysticism is getting very deep.

    "Creativity may be one aspect of human intelligence we can't emulate simply because it can't be quantified. You could emulate intuition with a random number generator but then it wouldn't be intelligence."

    Why not? Does it make sense to say that an emulation isn't what it emulates, even though it emulates it?

    "Then there is also the fact that if you don't understand something, you certainly can't improve it and that's the whole point of singularity."

    Refuted by trial and error. And with exponential growth in computation, trial and error is huge.

    "Adaptability. [and human-style consciousness] Without these two aspects, a machine intelligence would eventually face a problem it can't solve or a question it can't answer."

    Human intelligence faces problems it can't solve and questions it can't answer. This seems to insist that the Singularity must produce a man-made God or it isn't the Singularity. In debating terms, I think this is called "moving the goalposts."

    Reading carefully, it's obvious that the basic criticisms are confused and mystical and at best no more than "Gaw!"

     
  9. Robert Maxwell

    Robert Maxwell memelord Premium Member

    Joined:
    Jun 12, 2001
    Location:
    space
    I'm just going to respond to this part:

    Brute calculation will not produce AI. It simply won't. How do I know this? Well, for one thing, I know how computers work, from the software level down to the microscopic electronic hardware. Why do I have to prove that you can't brute-force AI? Isn't the onus upon those who say you can?

    In any case, computers just don't work that way. An extremely powerful computer won't magically do things that a less powerful computer can't--it will just do them faster. That's all.

    We can write algorithms which are very good at handling certain complex situations, things like natural language processing and map routing. But today's computers are absolutely no good at devising novel solutions when presented with a problem, which is what one would expect a good AI to be capable of.

    Making computers faster and more powerful doesn't do anything but exactly that. This assumption that it will somehow result in AI through brute power is nonsensical.
     
  10. Admiral Buzzkill

    Admiral Buzzkill Fleet Admiral Admiral

    Joined:
    Mar 8, 2001
    Exactly so. This is where irrational faith in the Singularity stands in for knowledge and analysis.

    And it is incumbent upon people making preposterous claims to produce some evidence-based reasoning for them. Thus far, they can't.
     
  11. sojourner

    sojourner Admiral In Memoriam

    Joined:
    Sep 4, 2008
    Location:
    Just around the bend.
    This thread just inspired me to change my signature.
     
  12. Admiral Buzzkill

    Admiral Buzzkill Fleet Admiral Admiral

    Joined:
    Mar 8, 2001
    You're supposed to capitalize "Singularity." It's like the Resurrection.
     
  13. sojourner

    sojourner Admiral In Memoriam

    Joined:
    Sep 4, 2008
    Location:
    Just around the bend.
    Corrected.
     
  14. stj

    stj Rear Admiral Rear Admiral

    Joined:
    Dec 27, 2006
    Location:
    the real world
    You don't have to prove anything. You just have to give a reason that isn't the conclusion. On the other hand, there most certainly is one way a sufficiently powerful computing system can produce novelty: trial and error. Experience shows this! There is another reason, not as conclusive I think but still a point which must be refuted, which is that a sufficiently powerful system can use a database of background facts vastly in excess of anything remotely imaginable now. Given this, yes, the onus is on you.
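    The trial-and-error claim is mechanically demonstrable: blind random search, with no model of the problem, finds solutions to any automatically checkable problem given enough attempts, and more computing power buys more attempts. A toy sketch (the equation is an arbitrary stand-in for any testable criterion):

    ```python
    import random

    def is_solution(x):
        # Stand-in for any automatic check; the searcher knows nothing about
        # *why* a candidate works, only *whether* it does.
        return x * x - 10 * x + 21 == 0  # integer roots at 3 and 7

    random.seed(1)
    attempts = 0
    candidate = None
    while candidate is None or not is_solution(candidate):
        attempts += 1
        candidate = random.randint(-1000, 1000)
    # More computing power means more attempts per second, nothing smarter.
    ```

    Whether this counts as "intelligence" is exactly the definitional dispute in the thread, but the capacity to produce correct novelty by brute attempts alone is not in question.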

    The real difficulty in the Singularity concept is the extrapolation, not the assumption that AI is simulable. Your continued assumption that thinking cannot be copied is purely mystical.
     
  15. Kemaiku

    Kemaiku Admiral Admiral

    Joined:
    Dec 23, 2004
    Location:
    Northern Ireland
    And your insistence that this intelligence can be brought out of practically nowhere by shoving in more power and parts isn't?
     
  16. Admiral Buzzkill

    Admiral Buzzkill Fleet Admiral Admiral

    Joined:
    Mar 8, 2001
    It's both uninformed and a matter of blind belief, like most traditional religious dogma.

    Understanding that assumptions and terminology about "intelligence" are being tossed around by the cybernetic totalists without any close examination or real definition of those things - hell, one can't prove from their writings that they've thought much about what those things are - isn't "mysticism." It's accurate observation.

    You want to see how dependent upon unexamined cliche these folks are? Ask them what would motivate an A.I.

    It's a bit of a trick question.
     
  17. Robert Maxwell

    Robert Maxwell memelord Premium Member

    Joined:
    Jun 12, 2001
    Location:
    space
    stj, you keep missing the critical piece here: the algorithms to do what you're talking about don't exist yet, and we don't know how to create them. We are just now learning how to data mine effectively, and the problem gets more difficult the larger your dataset becomes.

    This "trial and error" you speak of still has to be directed by humans, otherwise it won't amount to anything.
     
  18. Admiral Buzzkill

    Admiral Buzzkill Fleet Admiral Admiral

    Joined:
    Mar 8, 2001
    It's also not clear that the evangelists understand that the principles and processes of biological evolution have no real bearing on what they're suggesting, nor do they function well as an analogy. Their oft-pointed-out lack of understanding of, or interest in, science outside the narrow spheres of their specialization is really obvious when they start talking about this kind of thing.
     
  19. stj

    stj Rear Admiral Rear Admiral

    Joined:
    Dec 27, 2006
    Location:
    the real world
     
  20. Kemaiku

    Kemaiku Admiral Admiral

    Joined:
    Dec 23, 2004
    Location:
    Northern Ireland
    Reposting all of that proves nothing; your basic points have been refuted, and you simply refuse to see that.

    And accusing everyone else of not understanding science, while only posting pseudo-mystical bullshit yourself and having each post in turn proven wrong point for point, just makes you look even more ridiculous.

    Just quit now before you make a bigger fool of yourself.