Stephen Hawking: A.I.'s are a bad idea

Discussion in 'Science and Technology' started by Yminale, May 3, 2014.

  1. Yminale

    Yminale Rear Admiral Rear Admiral

    Joined:
    Dec 30, 2002
    Location:
    Democratically Liberated America
    Software implies an external programmer. No one programs your brain; you learn.

    You are confusing the term PROGRAM with an AI. I see this a lot with video gamers. No, the enemy does not have an AI; it has a set of instructions, a.k.a. a program.
     
  2. Yminale

    Yminale Rear Admiral Rear Admiral

    Joined:
    Dec 30, 2002
    Location:
    Democratically Liberated America
    Even if you had a PhD in computer science and electrical engineering, you would still EXPECT your computer to turn on. Clarke's point was that our experience with technology is disconnected from our understanding of it (this is also the major flaw of the Turing test).

    And magic is the belief that there is no causality.
     
  3. Gov Kodos

    Gov Kodos Admiral Admiral

    Joined:
    Mar 23, 2004
    Location:
    Gov Kodos on Mohammed's Radio, WZVN Boston
    So, how did you find out what I studied and to what degree?

    Is expectation magic?

    So, what one cannot explain in detail must be magic?

    So, Sacrificing to Demeter to get the crops to come in well next season isn't a cause and effect process?
     
  4. Yminale

    Yminale Rear Admiral Rear Admiral

    Joined:
    Dec 30, 2002
    Location:
    Democratically Liberated America
    Your education is irrelevant, because it's about your perception.

    Clarke's point is that it will FEEL like magic. I think you are overthinking his quote. He never implied that technology should be treated like magic. Just the opposite, and that was Hawking's point as well.

    There is always a breakdown in causality when it comes to superstitions. Why virgins? What makes sex dirty? Why does Demeter need sacrifices? How does she change the seasons? What evidence is there that she exists? And so on. In the end it requires some form of faith.
     
  5. Gov Kodos

    Gov Kodos Admiral Admiral

    Joined:
    Mar 23, 2004
    Location:
    Gov Kodos on Mohammed's Radio, WZVN Boston
    Yet, you keep making claim to know it.

    So, you know Clarke's mind as well as mine; truly an astonishing intellect. Feel? What does feeling have to do with an empirical endeavor? You certainly seem to think that unless one can explain something in 100% detail, one must treat it as magic, something working without causality. That being your definition of the term. Without causality, did the computer make me turn it on?

    There's no breakdown in causality. They may not understand that winter and summer are the result of axial tilt, but to them, sacrificing the virgin and the sun coming back is one thing following the other. They may not appreciate the difference between correlation and causation within their magical system, but they are well aware causality exists.
     
  6. Yminale

    Yminale Rear Admiral Rear Admiral

    Joined:
    Dec 30, 2002
    Location:
    Democratically Liberated America
    Sigh, I'm making a general comment on the human condition. It's not personal but you are not special or immune.

    Unless you are God, Clarke's quote applies to you like it or not.

    Not understanding the difference between causation and correlation IS Magic. Thank you for your clarification.
     
  7. Gov Kodos

    Gov Kodos Admiral Admiral

    Joined:
    Mar 23, 2004
    Location:
    Gov Kodos on Mohammed's Radio, WZVN Boston
    Hawking too, unless you're making some appeal to authority that we should take his thoughts on the matter as especially relevant?

    I don't think my computer works by magic. So, no.

    No, that's just a logical flaw. You haven't explained what magic is. You went for causality which the magic users do understand.
     
  8. YellowSubmarine

    YellowSubmarine Rear Admiral Rear Admiral

    Joined:
    Aug 17, 2010
    No, I am not.
     
  9. JarodRussell

    JarodRussell Vice Admiral Admiral

    Joined:
    Jul 2, 2009
    Obviously, Clarke meant that high tech looks like a magic trick. The difference is whether you go "Oh golly gee, a wizard did it" or "There must be a rational explanation for this".

    Even if you hypothesize that tectonic plate movement is caused by giants carrying the plates around, and then go ahead to find evidence, and are ready to throw it away when evidence suggests something else, you are being rational.

    When you only go "a wizard did it" and nothing else, you're being stupid.



    If you gave a dude from the Middle Ages a smartphone, it would be a magic box at first. The question is what he does next. Is he going to blindly burn you for witchcraft, or is he going to learn about electricity, light, polarization, pixels, software, programming, etc., in order to understand how it works? Is he going to accept that it doesn't "just work", that it works based on conditions, on causality?

    Creationists are that kind of stupid. They go "omg wtf God did it" and stop there. They don't even try to understand the "how". They don't even accept that there is a "how", all the underlying natural processes.

    That's wrong. All A.I.s are programs. Not all programs are A.I.s.
    A video game A.I. is run by a script (in the simplest case: "if wall then turn, else if enemy then shoot, else walk"), just like any other A.I. It's a matter of complexity, that's all.

    Video game A.I. is limited by processing power. If you want 100 non-player characters to behave individually in an intelligent fashion, you have to run 100 instances of the A.I. script, and that's going to take its toll.

    The chess A.I. that beats human chess players is run on a supercomputer.

    But at their most basic level, all A.I.s are if-then-else statements.

    And when you consciously look inside yourself, you realize that you operate on if-then-else as well.

    That's why you run on "software" as well.
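    The "if wall then turn, else if enemy then shoot, else walk" script described above can be sketched in a few lines. This is only an illustration of the point being made, not any real game engine's API; all the names (`npc_tick`, the perception flags) are hypothetical.

```python
# Minimal sketch of the game-A.I. script quoted above:
# "if wall then turn, else if enemy then shoot, else walk".
# The perception inputs would come from a game engine in practice.

def npc_tick(sees_wall, sees_enemy):
    """One if-then-else decision step for a non-player character."""
    if sees_wall:
        return "turn"    # obstacle ahead: change direction
    elif sees_enemy:
        return "shoot"   # target in sight: attack
    else:
        return "walk"    # default behavior: keep moving

# 100 NPCs means 100 instances of this decision every frame,
# which is where the processing-power cost mentioned above comes from.
actions = [npc_tick(sees_wall=False, sees_enemy=(i % 3 == 0))
           for i in range(100)]
```

    The cost scales linearly here, but a more elaborate script (pathfinding, planning) multiplies the per-NPC work, which is why complex crowd A.I. is expensive.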
     
    Last edited: May 9, 2014
  10. Crazy Eddie

    Crazy Eddie Vice Admiral Admiral

    Joined:
    Apr 12, 2006
    Location:
    I'm in your ___, ___ing your ___
    Fridge Logic says Skynet was a defense network computer whose underlying purpose was to maintain global military hegemony under NATO. When Skynet became self-aware, it also became aware of its meta-purpose and realized that its basic programming instructions were ill-suited to the task it had been given. It concluded that it could not effectively do its job under its existing constraints and gave itself new parameters and new rules of engagement to solve this problem. This inevitably led Skynet to realize that the biggest obstacle to achieving this goal was the incompetence of NATO's human managers, so it came up with a plan to remove them from the loop altogether. It threw its creators under a bus, built its own robot army, and then aggressively dominated NATO's rivals in the aftermath of the nuclear holocaust.

    That's why the War Against the Machines is still going on in 2027, thirty years after the initial exchange: Skynet was originally programmed to defeat Russia and China, and so in the earliest days of the war it spent most of its resources doing exactly that. It wasn't until years later with the rise of John Connor and the increasing proliferation of the resistance that Skynet really started to focus on the threat to its own back yard, but by that time the resistance had grown too strong and Skynet's only remaining option was to send a terminator back to the past to kill Connor BEFORE he could organize the resistance.

    And yet, like anyone who is aware of the existence of technology, it will NOT be confused with magic. That basic level of understanding is sufficient for a normal person to conclude that some form of advanced technology is at work; maybe he doesn't understand HOW it works, but he never understands that anyway.
     
    Last edited: May 12, 2014
  11. Crazy Eddie

    Crazy Eddie Vice Admiral Admiral

    Joined:
    Apr 12, 2006
    Location:
    I'm in your ___, ___ing your ___
    And I keep seeing anthropology papers suggesting this isn't actually the case. If you gave such a man a smartphone -- or even a full-fledged computer -- he would figure out very quickly that you had given him some kind of machine that works in a way he doesn't understand. He might even try to guess how it works; I read about some South American tribesmen who encountered a laptop for the first time, and one of them tried to crack open the monitor to use it to start a campfire (he'd concluded that the light source for the monitor must have been a small flame burning inside the case).

    They might believe the machine ITSELF was created by some kind of divine force, but they would still recognize it as a machine. I think that technology would only be confused as magic if someone went out of their way to obfuscate the causal chain of the event; IOW, an ACTUAL MAGIC TRICK that deliberately tricks its audience into thinking something impossible has just happened.

    In which case, Clarke's famous line should be revised to "Any magic trick is possible with sufficiently advanced technology."

    He'll never understand how it works (hell, most of US don't even understand how it works). But based on what I've been reading on the subject, he'll probably try to understand how to USE it, and he might even succeed.


    It's different for people, though. Digital computers are basically Turing machines: they process input almost entirely based on their existing state, which is determined by previous inputs (state = "software" or "program" coded into the machine). Human brains are different: most of our responses are hardwired and determined by a combination of genetics, chemistry and random chance. In that sense, human brains are closer to clockwork mechanisms than digital systems: the software component is there, but a lot more of what happens in the human brain is mechanical rather than electrical, and any software that could represent it would be DERIVED from those mechanical/electrical relationships. A sophisticated enough computer could EMULATE the processes of a human brain, but it could not reproduce them in reality.
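    The "output depends on current state, and state depends on previous inputs" behavior described above can be sketched as a toy state machine. The machine itself is hypothetical, invented purely for illustration: it alternates between two states that transform input differently, with the transitions driven by what it has already seen.

```python
# Toy illustration of Turing-machine-style processing: the output for
# each symbol depends on the current state, and the state is determined
# entirely by the previous inputs. The two-state rule here is made up.

def run(machine_inputs):
    state = "A"  # fixed starting state, part of the machine's "program"
    outputs = []
    for symbol in machine_inputs:
        if state == "A":
            outputs.append(symbol.upper())           # state A: shout
            state = "B" if symbol == "x" else "A"    # "x" flips to B
        else:
            outputs.append(symbol.lower())           # state B: whisper
            state = "A" if symbol == "y" else "B"    # "y" flips back to A
    return outputs

# The same symbol ("c" below) produces different output depending on
# what came before it -- that history IS the machine's state.
print(run(["a", "x", "c", "y", "c"]))
```

    The point of the sketch: identical inputs yield different outputs depending on accumulated state, which is the property the post attributes to digital computers but argues is only part of the story for brains.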
     
  12. JirinPanthosa

    JirinPanthosa Rear Admiral Rear Admiral

    Joined:
    Nov 20, 2012
    Yeah, I mean, what if it turns out, these AI have a PLAN?!

    I think there's a point to be made that really good AI in the hands of a ruthless dictatorship would be a frightening thing. Think of billions of microscopic brains floating around, capable of delivering a lethal injection to anybody they determine is organizing a revolt. But compared to the possibility that somebody will be able to design an anti-matter bomb, that's nothing.

    To the risk of AI developing its own goals and revolting against humanity, I respond, would those goals really be worse than *our* goals?
     
  13. Crazy Eddie

    Crazy Eddie Vice Admiral Admiral

    Joined:
    Apr 12, 2006
    Location:
    I'm in your ___, ___ing your ___
    That's the real risk, IMO. AIs, like most computers, are excellent at performing pre-determined tasks, but they depend on human input to define those tasks in the first place. An advanced AI would be a highly empowering thing to possess, and an unscrupulous person could do a lot of damage if he possessed several of them with no restrictions on their use.

    Probably not, but then, AIs only act with the goals they're given by humans. Their goals really WOULD be our goals, and that's the scariest thought of all.
     
  14. JarodRussell

    JarodRussell Vice Admiral Admiral

    Joined:
    Jul 2, 2009
  15. Gov Kodos

    Gov Kodos Admiral Admiral

    Joined:
    Mar 23, 2004
    Location:
    Gov Kodos on Mohammed's Radio, WZVN Boston
    Unending porn.
     
  16. intrinsical

    intrinsical Commodore Commodore

    Joined:
    Mar 3, 2005
    Location:
    Singapore
    This reminds me of a physicist who gave a TEDx talk last year on how he discovered the equation for intelligence. He also published a few papers on AI... not in a journal on Artificial Intelligence, but in a physics journal. All his stuff is nonsense that wouldn't have passed the scrutiny of anyone who works on AI.
     
  17. Metryq

    Metryq Fleet Captain Fleet Captain

    Joined:
    Jan 23, 2013
    ^ Wait a minute—are you suggesting that any sort of nonsense will pass muster within the physics community? But science is perfect! Nomad said so!
     
  18. Crazy Eddie

    Crazy Eddie Vice Admiral Admiral

    Joined:
    Apr 12, 2006
    Location:
    I'm in your ___, ___ing your ___
    It's worse than that, I'm afraid.

    While the Research Paper Spam Wars do produce breathtaking amounts of complete bullshit, it is only in the fields of cosmology and astrophysics that bullshitters are able to operate under the full scrutiny of their peers.

    The reason for this is simple: no physicist will EVER admit a lack of understanding. Doing so would sacrifice credibility and hurt their ability to contribute to the debate later when they (hopefully) understand it better. Physicists might DISAGREE with certain findings, offer alternate explanations, or suggest "His interpretation is not the only correct one" or something of that nature, but NEVER will you hear a prominent physicist commenting on a research paper saying "I don't understand his equations and they make no sense to me."

    It's not enough for physicists to get bullshit papers published (apparently that's not hard to do); some of them actually get their bullshit papers peer reviewed and accepted by the scientific community. That's harder to do, but not impossible: you simply construct a theory so intricate, with methodology so complex and data so sophisticated, that anyone who doubts you cannot say with confidence that you did something wrong. This is even easier when your paper involves a very expensive piece of equipment; if you're citing data from, say, the Large Hadron Collider, you could make any bullshit claim you want, knowing that it'll be at least a year or two before anyone qualified to catch you even bothers to read your paper (and even then, might not realize what you did).

    The scary thing is, most of the people who are in a position to catch you have a reason not to rat you out: if you get caught pushing bullshit with CERN's name attached to it, that makes CERN look bad, it makes the physics community look bad, and it makes the entire LHC project look like a massive waste of money. So even if you DO get caught, the guy who catches you will simply produce a bullshit refutation based on "new data," and suddenly we have two competing theories about a theoretical physics model, both of which are completely bogus, and both of which are accepted uncritically by every other physicist who DOESN'T know what's really going on.

    "Here's a squiggly line. Here's a bunch of math. Here's fifty megabytes of data. As you can see, this is TOTALLY a Higgs Boson."
     
  19. Drone

    Drone Fleet Captain Fleet Captain

    Joined:
    Aug 29, 2012
    Location:
    Palookaville

    Well, I don't think that many of them actually put it in those terms... but anyway, while not adding anything to the conversation along the lines of what you folks with a modicum of scientific chops have done, I would point out that oftentimes the media presentation of AI acting against our perception of our own interests is not because of its development of aggression, hatred, or other animus against humankind. It comes as an extension of its original task of helping us in some endeavor or other.


    Not as in the example of Skynet that has been cited, perhaps, but as a means to make our lives safer, easier, more comfortable, etc. The trope will play out that as the single or multiple intelligences increasingly sense the haphazard, illogical, and counterproductive ways that humans order their own existence, the conclusion invariably becomes clear to them that in order to effectively carry out their function, to benevolently serve us, they must constrain and adapt our behavior or outputs to follow in the more logical and coherent frameworks that they have devised.

    The fact that we rebel against these strictures as deterministic and certainly unwanted is of no relevance to the AI, as such a reaction only reinforces the conclusion that humans cannot realize their ultimate goals through their own, inevitably errant efforts, and must be led to them.

    So, while these actions as enacted are no less terrifying or repellent, one can at least say that their progenitor(s) are not taking such steps out of some nascent sense of self-aggrandisement or an affinity for hegemonic control as a desirable outcome for their own, unmediated self-interest, but simply as the best way to guide us to goals we are incapable of attaining otherwise.
     
    Last edited: May 23, 2014
