Some science fiction "firsts"

Discussion in 'Science Fiction & Fantasy' started by RAMA, Jan 17, 2011.

  1. Deckerd

    Deckerd Fleet Arse Premium Member

    Joined:
    Oct 27, 2005
    Location:
    the Frozen Wastes
    I don't want to be a wet blanket, but why would any company fund a machine that tries to outstrip the human brain? I mean, sure, you can have processors that can calculate almost anything faster than a human brain can, but human intelligence is not a set of calculations. Human creativity is not a set of calculations. Even if you did have an evil millionaire who wanted to create a program to find them all and in the darkness bind them, it would fail, because people without any programming skill whatsoever breed geniuses all the time.
     
  2. RAMA

    RAMA Admiral Admiral

    Joined:
    Dec 13, 1999
    Location:
    NJ, USA
    Here's why... we will always want to expand the capabilities of the human brain. If we want to be the ones who exist as AI or facsimiles of ourselves after the speculated singularity, as opposed to the "machine overlords", we'll have to improve the storage, memory, and speed of the human thought process. Contemporary PCs already expand our human RAM and hard drive space for information; in the future we will want that directly tied into us. Even if we hadn't thought of the singularity, the only way to pre-empt biological evolution and speed up memory and thought is to turn to artificial means.

    The "bad" (good?) news is that researchers are already working on AI all over the world. Many of them believe in the inevitability of what they are doing leading to the takeover. I like to give humanity enough credit that we may forestall this takeover with our own AI evolution.

    Is human intelligence more than the sum of its parts? Well, yes, the human brain is amazing, but there are elements of it machines can already do better. I, along with most--if not all--of the researchers, do not believe in any innate ability of the human brain that is not biologically derived and cannot be replicated or surpassed in some way with AI.
     
  3. Christopher

    Christopher Writer Admiral

    Joined:
    Mar 15, 2001
    ^Except a recent study suggests it may not be feasible to expand human intelligence beyond its current level:

    http://io9.com/5865987/why-our-minds-have-probably-evolved-as-far-as-they-can
     
  4. Deckerd

    Deckerd Fleet Arse Premium Member

    Joined:
    Oct 27, 2005
    Location:
    the Frozen Wastes
    When you say "many of them believe in the inevitability" of an AI takeover, I don't believe you. There's a huge AI department at the university I work in, and what they're doing is trying to get programs to learn. By learn I mean become aware of their environment, react to its parameters, remember those parameters and then work within them. That's a long way from composing the Liebestod from Tristan und Isolde. In fact it's never going to happen.
     
  5. RAMA

    RAMA Admiral Admiral

    Joined:
    Dec 13, 1999
    Location:
    NJ, USA

    Yup, lots of those in the AI/robotics field lament how long it's taken to get where we are, but two things mitigate that. 1) Human biological evolution takes place over millions of years; AI has been worked on for mere decades out of that timescale. 2) The growth is exponential, meaning in rapid succession, not on the normal linear timeline we usually perceive as humans in everyday life, so the "slow" progress (which is actually lightning fast on a biological or even geological timescale) will mean such predicted AI in a few decades.
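    The linear-vs-exponential contrast being drawn here can be made concrete with a toy calculation (the function names and numbers below are illustrative, not from the thread):

    ```python
    def linear_progress(start, step, n):
        """Capability after n steps of constant additive improvement."""
        return start + step * n

    def exponential_progress(start, n, doubling=1):
        """Capability after n steps, doubling every `doubling` steps."""
        return start * 2 ** (n // doubling)

    # After 40 steps, linear growth adds 40 units; doubling every step
    # multiplies the starting capability by 2**40 (over a trillion).
    print(linear_progress(1, 1, 40))    # 41
    print(exponential_progress(1, 40))  # 1099511627776
    ```

    The same number of "steps" gives wildly different end states, which is the intuition behind the claim that short-timescale progress can look slow while still being fast on an exponential trajectory.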

    Yours is not an unusual reaction, because humans generally can only think of machines or intelligence as products independent of other things, and that will not be the case in the future. If you bring theism or human-centrism into it, then there is going to be quite a knee-jerk reaction. Trust me, if the "takeover" is true, you'll want to be an AI, and it may not have to be war; supplanting the machines may mean simply out-adapting, out-competing, and out-evolving them.

    In terms of the actual material accomplishment of your "impossible" task, there is a lot of source material on the subject; Hans Moravec's work is available all over the internet for free. A key work explaining why the human brain is quantifiable, and why computer technologies keep improving (interestingly, a predicted 3D chip was just reported in Wired magazine the other day), is The Singularity Is Near.

    RAMA
     
  6. xortex

    xortex Commodore

    Joined:
    Apr 25, 2006
    Location:
    Staten Island, NY
    Well, it's the human component link that is the really scary part. It's not what machines can do for us, but what we can make machines capable of--things like telepathy and creativity--and making it limitless as far as we can see, unless there is a collective mind like the Borg and there are dimensions that we can't see: the higher dimensions of pure thought. Good ole trial and error again. Whoops, I opened up a whole new dimension of demons and angels waging war. Close it. I can't.
     
    Last edited: Dec 8, 2011
  7. Christopher

    Christopher Writer Admiral

    Joined:
    Mar 15, 2001
    I don't think you actually read the article I linked to, or if you did, you missed the point. What the research shows is that, yes, you could increase the brain's ability to do a certain thing, but there are negative consequences to that increase that might cancel out any benefits from it. Amplify a person's imagination too much and they become schizophrenic. Amplify their logic and systematic thought too much and they become autistic. Amplify their ability to discern patterns too much and they become paranoid. By analogy, you could engineer the body to have extra limbs or sense organs or muscles, but the added metabolic cost of having them might cancel out any gain from having them, or the amount of neurological connections that would have to be devoted to them might diminish one's mental or physical functionality. So there are limits to how much you can practically enhance a body's physical abilities, and the same may well be true for enhancing a brain.

    So even if you did use external computer hardware to enhance the brain's performance, it might end up undermining the brain's performance in key ways as well, throwing off the balance that enables it to work. Human intelligence may already be at the point of diminishing returns -- or, to put it more optimistically, in a sort of "Goldilocks zone" for sentience, an optimal balance where our minds have enough complexity and dynamism to be conscious and creative but not so much that they become unstable.
     
  8. RAMA

    RAMA Admiral Admiral

    Joined:
    Dec 13, 1999
    Location:
    NJ, USA

    Which is why it will be a facsimile AI or foglets/programmable matter... or in the shorter term, you'll see stuff like "jacking in" from cyberpunk or The Matrix... think of AI as "buffers" to the storage of the brain. There are theoretical limits to a computer's ability to process info beyond that, but they are immensely high. Computational Limits
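    The theoretical limits alluded to here are usually traced to the Margolus-Levitin theorem, which caps a system of energy E at 2E/(πħ) elementary operations per second; Seth Lloyd's "ultimate laptop" paper applies this to one kilogram of matter. A quick sketch of that arithmetic (my own illustration, not from the thread):

    ```python
    import math

    # Physical constants (CODATA values).
    HBAR = 1.054571817e-34  # reduced Planck constant, J*s
    C = 2.99792458e8        # speed of light, m/s

    def max_ops_per_second(mass_kg):
        """Margolus-Levitin upper bound on logical operations per second
        for mass_kg of matter, assuming its entire rest-mass energy
        (E = m c^2) is devoted to computation."""
        energy = mass_kg * C ** 2
        return 2 * energy / (math.pi * HBAR)

    # One kilogram of matter: roughly 5.4e50 operations per second,
    # Lloyd's "ultimate laptop" figure -- "immensely high" indeed.
    print(f"{max_ops_per_second(1.0):.2e}")
    ```

    For comparison, today's fastest supercomputers manage on the order of 10^18 operations per second, so the physical ceiling sits dozens of orders of magnitude above current hardware.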

    Well, I don't see any evidence of telepathy now, so I don't think we'll see AI doing it... unless it's a remote way to read future virtual human brains.

    Edited for screwing up the urls..oops
     
  9. RAMA

    RAMA Admiral Admiral

    Joined:
    Dec 13, 1999
    Location:
    NJ, USA
    I don't see why they can't raise the "optimal balance". You can raise brain performance, but who's to say they can't also control other elements of the AI-human brain--a sort of self-aware safety net within the brain itself (yes, I think I've seen this in SF before) that eliminates by-products of increased performance like schizophrenia. Or who is to say a virtual human/foglet brain simply isn't much hardier than a totally natural or augmented biological brain?
     
  10. xortex

    xortex Commodore

    Joined:
    Apr 25, 2006
    Location:
    Staten Island, NY
    That would be the unrelated third thing - the child.
     
  11. Christopher

    Christopher Writer Admiral

    Joined:
    Mar 15, 2001
    Are you arguing from science or from the desire to believe? Too many people cling to the Singularity as a matter of religious faith -- "the Rapture for geeks," as Ken MacLeod calls it. Science demands healthy skepticism. And in general, the future never turns out the way people expect it to. The fact that so many people today are utterly convinced that the Singularity is inevitable only convinces me further that it won't happen, certainly not the way people expect.
     
  12. RAMA

    RAMA Admiral Admiral

    Joined:
    Dec 13, 1999
    Location:
    NJ, USA

    Yes, I've seen that idea, of course. The difference with the Singularity is that there is a lot of data, there are models, and there is an accurate prediction track record, not simply faith; this is where it separates itself from end-of-the-world cults and past futurists, who were often much more speculative and relied on linear models. Some of the best minds in their fields agree with many of the end results, if not all the specifics of the currently predicted date of the singularity. I'm fully able to admit the date can vary, but it's not a pie-in-the-sky idea; there's a lot of groundwork. Others admit the singularity scenario may come to pass, but not in a positive light; this is also likely, which is why I argue that we need to accelerate as humans even more so.

    In terms of details... well, the singularity might happen, yet many of the details could be off; one technology might be substituted for another. If you are doubting the technology, there are lots of examples of foglet work and AI, and nanotech is now a $2 billion industry... after how many years? Roughly 20 since Engines of Creation.

    One thing people are missing... it occurs to me that at a time of accelerating change (which we are factually in), we are going to be able to make more and better predictions of the future than we ever have, at least until a singularity-type breakdown, if it indeed happens.

    One of the chief supporters of the positive singularity lists point-by-point counters to the skeptics in his book and on his website... Kurzweil.

    Finally, regardless of the outcome, the discussion of the singularity has changed my point of view on both the future of SF and the world. It's no longer enough most of the time for me to see mundane ideas of the future with no info technology woven into the fabric of the culture, where staid, conventional, brute-force technologies exist that don't take into account programmable matter and the like. "In Time" was a very good movie to me, but I don't see it as a realistic future in any way; its value lies in its parable. I recall a recent interview with a famous SF writer (I forget who at the moment) who said hard SF literature is in a holding pattern as it takes into account the implications of the singularity...
     
  13. Christopher

    Christopher Writer Admiral

    Joined:
    Mar 15, 2001
    ^If you want me to take your argument at all seriously, don't mention Kurzweil. His beliefs seem more rooted in spirituality and wishful thinking than science. At the very least, I consider him overoptimistic.
     
  14. Edit_XYZ

    Edit_XYZ Fleet Captain Fleet Captain

    Joined:
    Sep 30, 2011
    Location:
    At star's end.
    All 'models' predicting the technological singularity rest on one requirement: continual exponential growth -- of intelligence, of technology, etc.
    Well, if history has shown anything, it is that exponential growth in anything other than abstract mathematics is not sustainable, regardless of your attempts to 'cheat' this rule.
    Technology matures and can't be improved further; etc.

    IF you can keep up continual exponential growth in the AI field (and the signs are that you can't), you may or may not (perhaps 'intelligence' in humans is already a mature 'technology') be able to produce a functioning being more intelligent than humans. But in any case, you won't be able to keep improving that intelligence; sooner or later, you'll hit a wall.
    Singularity proponents gamble that this 'wall' is beyond the singularity, and they have no convincing arguments for that.

    It's almost certain there isn't a logic fundamentally 'better' than the one known to us -- meaning we have already hit the wall in this area; you may have a being that thinks faster than us (quantitatively), but not qualitatively 'better'.
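    The "wall" argument here is essentially the claim that real growth curves are logistic rather than exponential: an S-curve is indistinguishable from an exponential early on, then saturates at a ceiling. A toy sketch (all parameters are arbitrary, chosen only for illustration):

    ```python
    import math

    def logistic(t, capacity=1000.0, rate=0.5, midpoint=20.0):
        """Logistic S-curve: grows ~exponentially early, saturates at `capacity`."""
        return capacity / (1 + math.exp(-rate * (t - midpoint)))

    def exponential(t, rate=0.5):
        """Pure exponential matched to the logistic's early behaviour,
        so the two curves nearly coincide for small t."""
        return logistic(0) * math.exp(rate * t)

    # Early on the two are close; later the logistic hits its wall
    # while the exponential keeps climbing without bound.
    for t in (0, 10, 30, 40):
        print(t, round(exponential(t), 1), round(logistic(t), 1))
    ```

    Whether AI progress follows the unbounded curve or the saturating one is exactly what the two sides of this thread disagree about.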
     
  15. RAMA

    RAMA Admiral Admiral

    Joined:
    Dec 13, 1999
    Location:
    NJ, USA
    I love this: "Leaving the Opera"!

     
  16. RAMA

    RAMA Admiral Admiral

    Joined:
    Dec 13, 1999
    Location:
    NJ, USA
    This qualm is easily explained away... exponential growth has limits until it reaches the next paradigm shift, and there is already a next generation of processor technologies ready to supplant the current one. In fact, the aforementioned 3D chip technology is one of them, and it appeared just two days ago. The fact that there have been five paradigms already that fit the pattern makes it less like wishful thinking and more like a probability.:techman:
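    The paradigm-shift rebuttal amounts to stacking S-curves: each technology saturates, but a successor takes over, so the overall envelope keeps climbing. A toy model of that idea (the parameters and the way each paradigm is 10x the last are my illustrative assumptions, not data):

    ```python
    import math

    def s_curve(t, capacity, rate, midpoint):
        """One paradigm's logistic S-curve."""
        return capacity / (1 + math.exp(-rate * (t - midpoint)))

    def stacked(t, paradigms):
        """Sum of successive S-curves: each new paradigm ramps up as the
        previous one saturates, so total capability keeps rising."""
        return sum(s_curve(t, cap, rate, mid) for cap, rate, mid in paradigms)

    # Five hypothetical paradigms, each ~10x the capacity of the last,
    # loosely echoing the "five computing paradigms" claim in the thread.
    paradigms = [(10 ** k, 0.8, 10 * k) for k in range(1, 6)]
    for t in (5, 15, 25, 35, 45):
        print(t, round(stacked(t, paradigms), 1))
    ```

    The catch, of course, is that the model assumes a bigger successor paradigm always arrives on schedule -- which is precisely the premise the skeptics in this thread dispute.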
     
  17. RAMA

    RAMA Admiral Admiral

    Joined:
    Dec 13, 1999
    Location:
    NJ, USA

    If he weren't already a well-known inventor, computer scientist, and futurist with quite a résumé, you might be right. And although he does have a lot of "followers", it's for good reason: he has been very accurate in his predictions, based on data. His ideas have almost nothing to do with spirituality except on a metaphorical level; it has to do with the failings of our language in describing the accelerating level of technology and its ultimate consequences, which for all intents and purposes may seem spiritual to people. I have always looked for evidence of wishful thinking in his views and I don't see it.
     
  18. Christopher

    Christopher Writer Admiral

    Joined:
    Mar 15, 2001
    Yes. Real-life processes aren't simple mathematical curves; there are many factors that interact and affect one another, and eventually any short-term trend is going to slow or stop or even be reversed. Generally, the norm is equilibrium; rapid change occurs when the circumstances are right and there's a need or incentive for it, but eventually a new equilibrium will be reached and things will stabilize again.

    Sure, computers are transforming our lives in ways our forebears couldn't predict, and that might continue. Eventually we may have computers so advanced that they can precisely model and predict things like weather, natural disasters, economic patterns, social and psychological dysfunctions, etc. and give us reliable mechanisms for avoiding problems and disasters before they happen, bringing a new age of peace and security and prosperity to all. And they may bring new breakthroughs in physics and technology that will let us expand into space and improve our standard of living and restore the Earth's ecosystem. But the people who enjoy it will probably not be any more fundamentally intelligent than we are. Will they have more immediate access to any information they need? Sure, and they'll be able to draw on the problem-solving ability of the rest of humanity through crowdsourcing as well as that of the superfast computers. But they'll still probably think on much the same level that we do. And there's no guarantee that the computers will be any more intelligent, just faster and more powerful.
     
  19. xortex

    xortex Commodore

    Joined:
    Apr 25, 2006
    Location:
    Staten Island, NY
    Scientists experiment because they don't know the outcome until after they've done it. Except by the time they learn it was a mistake, it's already too late; they can't undo it and have let the cat out of the bag. 'Lawnmower Man' was an example of this. The mad, sadistic, greedy, vengeful scientist is one of sci-fi's oldest tropes. GR and Trek shied away from it; he didn't want it, or space pirates, as standard sci-fi clichés to fall back on. He didn't even want to use Klingons.
     
  20. Happy Xmas (War Is Over)

    Happy Xmas (War Is Over) Fleet Admiral Premium Member

    Joined:
    Nov 4, 2001
    Location:
    If you want it
    He didn't want to use Klingons?????? In TNG initially, but that soon went out the window. That was more TNG standing on its own than anything else. TOS had a few "mad" scientists; Dr. Korby and Dr. Adams come to mind. Then there are Daystrom and good old Janice Lester. As for pirates: did you notice what Mudd was wearing in "Mudd's Women"?