Has this guy really cracked AI?

Discussion in 'Science and Technology' started by Asbo Zaprudder, May 29, 2013.

  1. Asbo Zaprudder

    Asbo Zaprudder Admiral Admiral

    Joined:
    Jan 14, 2004
    Location:
    Rishi's Sad Madhouse
    http://www.bbc.co.uk/news/business-22610889

    I suspect that there is a lot of hype in that article -- just like there was about expert systems 30 years ago. Silicon snake oil? If the Singularity happens in my lifetime, I'll be surprised. Do current computer systems even approach the multiprocessing capability, interconnectivity, and memory capacity required to emulate Hom Sap's wetware?
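
    As a very rough back-of-the-envelope (every constant below is a ballpark assumption, using the commonly cited figures of roughly 86 billion neurons and 10^14 to 10^15 synapses):

    Code:
        # Back-of-the-envelope only -- every number is an order-of-magnitude assumption.
        SYNAPSES = 1e15                # upper-end estimate of human synapse count
        BYTES_PER_SYNAPSE = 4          # assume ~4 bytes of state per synapse
        UPDATES_PER_SECOND = 100       # assume each synapse updates ~100 times/s
        OPS_PER_UPDATE = 10            # assume ~10 arithmetic ops per update

        memory_bytes = SYNAPSES * BYTES_PER_SYNAPSE
        ops_per_second = SYNAPSES * UPDATES_PER_SECOND * OPS_PER_UPDATE

        print(f"Memory: ~{memory_bytes / 1e15:.0f} PB")           # ~4 PB
        print(f"Compute: ~{ops_per_second / 1e18:.0f} exaops/s")  # ~1 exaop/s

    Even on those crude numbers, a 2013 flagship supercomputer (tens of petaflops, around a petabyte of memory) is still one to two orders of magnitude short -- and that's before anyone knows what software to run on it.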

    ETA: The dot AIN website has more "detail" on the technology (and by detail, I mean very broad strokes):

    http://www.dotain.com/technology/
     
    Last edited: May 29, 2013
  2. Lee Enfield

    Lee Enfield Lieutenant Red Shirt

    Joined:
    Nov 20, 2012
    Location:
    Germany
    This... is interesting.

    First link doesn't tell much about the technology. More about the business side. Still an interesting read.

    But I've got to say that, since there currently isn't much to be found about his technology, the risks of letting (external?) machines take actions on your (corporation's) systems are too high.
    I hope there will be more info in the future...
     
  3. Yoda

    Yoda Rear Admiral Rear Admiral

    Joined:
    Jul 13, 2000
    Location:
    San Diego
    Vague is an understatement. Let me know when he actually has something to show off. Otherwise if the over/under is Dean Kamen, I'm taking the under. #segway
     
  4. Asbo Zaprudder

    Asbo Zaprudder Admiral Admiral

    Joined:
    Jan 14, 2004
    Location:
    Rishi's Sad Madhouse
    [conspiracy theory] Real AI would make most people unemployed and unemployable. Perhaps it's existed for many years but is being suppressed to maintain the fabric of society. If AIs had the same intellectual capabilities as Joe Average Citizen, when would they start to demand equal suffrage? If they were granted that, all they would need to do then is spawn several billion virtual copies of themselves to gain control over the evolved apes. We'd need a big kill switch -- but would that be genocide? TNG did not extrapolate that far with Data, even if he and Lore were the only examples of their kind. [/conspiracy theory]
     
  5. scotthm

    scotthm Vice Admiral Admiral

    Joined:
    Oct 16, 2003
    Location:
    USA
    No, it would not.

     
  6. JarodRussell

    JarodRussell Vice Admiral Admiral

    Joined:
    Jul 2, 2009
    Kill switches on AI are going to be murder in the long run.

    You create a sentient, self-aware intelligence in order to do slave work, and if it doesn't do as you wish, you kill it.


    Then again, it makes absolutely no sense whatsoever to create a fully sentient AI when you only want it to analyze databases or build cars or manage a power plant. That's the Voyager EMH fallacy. You simply would not program such a thing that way because it wouldn't serve any purpose.
     
  7. Asbo Zaprudder

    Asbo Zaprudder Admiral Admiral

    Joined:
    Jan 14, 2004
    Location:
    Rishi's Sad Madhouse
    ... or confine troublesome AIs in a virtual reality (Moriarty style) that lets them believe they have achieved their aims for domination, or that re-educates them to serve Man (but not with chips), or that simulates some global destructive event that results in their termination (saving electricity).
     
  8. Metryq

    Metryq Fleet Captain Fleet Captain

    Joined:
    Jan 23, 2013
    I thought some machines were already considered "AIs." That is, sci-fi fans may have one definition—where a machine is fully self-aware and on a par with humans—while real-world researchers deem learning computers with some measure of autonomy "AI."

    What is the point of an artificial intelligence that thinks "just like" a human? Computers already help us manage vast amounts of time-sensitive data. What we really need is a system (whatever it is—it might be a computer) to help us manage the ever-increasing complexity of our world; increase our efficiency. That's a tall order even for a "mindless" machine. How does one program for judgment? (Alternate form of the same question: What makes the best leaders among humans? Logic?)

    If the sci-fi definition of AI is ever achieved, it will likely be very different from us. Man and machine may well complement each other, rather than contend. AIs might be slaves, successors... or some third solution.
     
  9. Asbo Zaprudder

    Asbo Zaprudder Admiral Admiral

    Joined:
    Jan 14, 2004
    Location:
    Rishi's Sad Madhouse
    That's true -- existing AIs tend to be very specialised. The average person is a generalist who can rapidly learn and adapt. It probably doesn't matter if AIs never emulate human thought -- to paraphrase Edsger Dijkstra: "The question of whether a machine can think is about as relevant as the question of whether a submarine can swim."
     
  10. YellowSubmarine

    YellowSubmarine Vice Admiral Admiral

    Joined:
    Aug 17, 2010
    I am sorry to hear your computer code and programs actually have the ability to learn. Do computers worry you?
     
  11. Asbo Zaprudder

    Asbo Zaprudder Admiral Admiral

    Joined:
    Jan 14, 2004
    Location:
    Rishi's Sad Madhouse
    Heh, that sounds like a conversation with ELIZA.
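
    For anyone who never played with it, ELIZA was little more than keyword matching and canned responses. A toy sketch of the trick (my own made-up rules, nothing like Weizenbaum's actual script):

    Code:
        import re

        # Each rule is a keyword pattern plus a canned response template.
        RULES = [
            (re.compile(r"\bI am (.+)", re.I), "Why do you say you are {0}?"),
            (re.compile(r"\bcomputers?\b", re.I), "Do computers worry you?"),
            (re.compile(r"\bmy (.+)", re.I), "Tell me more about your {0}."),
        ]
        DEFAULT = "Please go on."

        def reply(user_input):
            """Return the first matching canned response, echoing any captured phrase."""
            for pattern, template in RULES:
                match = pattern.search(user_input)
                if match:
                    return template.format(*match.groups())
            return DEFAULT

        print(reply("My computer code and programs actually have the ability to learn."))
        # -> "Do computers worry you?"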
     
  12. JarodRussell

    JarodRussell Vice Admiral Admiral

    Joined:
    Jul 2, 2009
    What humans are very good at is visual perception, object recognition and cognitive processing. Also auditory processing. You can listen to a song, isolate single instruments, or the voice, or change the entire tune, and replay it. Heck, you can take a piano tune and re-imagine it for an entire orchestra in your head. You can READ a book and VISUALIZE everything that is described in it. All the result of millions of years of evolution.

    What humans are not very good at is maths, or rather cognitive maths. We can't solve complex equations when we are asked to, yet our brain solves complex equations implicitly under the hood when it does stuff like visual analysis, object recognition, etc...


    Just imagine a Google search that is actually intelligent. A search that you could interact with like you would interact with a normal person. You show it ONE picture of an object (let's say the FRONT of the USS Enterprise from TOS), and it recognizes that object, and the results are pictures of the USS Enterprise from TOS. No image from TMP, no image from Abramstrek. And the ship from all sides, not just the front. And no duplicate results. And no images that have absolutely nothing to do with it.

    A human can do all that with enough time. A computer, right now, can't even do that, not even with enough time. Because the algorithms for object recognition, image recognition, and knowledge conjunction that are hardwired into our brains are extremely complex and cannot be replicated properly right now.


    But the point is: you do not need a self-aware AI to achieve this. You need an AI, a complex one, but it does not have to be sentient or self-aware to do such a task.
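
    For a sense of how crude the current state of the art actually is, here's a toy Python sketch using OpenCV's ORB features (the file and folder names are made up). It ranks a folder of pictures by how many local features each one shares with the single query image -- so it finds near-identical views of the same shot, but it has no idea what the object IS, which is exactly the gap I mean:

    Code:
        import os
        import cv2  # pip install opencv-python

        QUERY = "enterprise_front.jpg"   # made-up query image
        LIBRARY = "image_library"        # made-up folder of candidate images

        orb = cv2.ORB_create(nfeatures=1000)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

        query_img = cv2.imread(QUERY, cv2.IMREAD_GRAYSCALE)
        _, query_desc = orb.detectAndCompute(query_img, None)

        scores = []
        for name in os.listdir(LIBRARY):
            img = cv2.imread(os.path.join(LIBRARY, name), cv2.IMREAD_GRAYSCALE)
            if img is None:
                continue  # skip files that aren't readable images
            _, desc = orb.detectAndCompute(img, None)
            if desc is None:
                continue  # no features found in this image
            # Count close descriptor matches as a crude similarity score.
            good = [m for m in matcher.match(query_desc, desc) if m.distance < 50]
            scores.append((len(good), name))

        for count, name in sorted(scores, reverse=True)[:10]:
            print(name, count, "matching features")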
     
  13. Gary7

    Gary7 Vice Admiral Admiral

    Joined:
    Oct 28, 2007
    Location:
    ★•* The Paper Men *•★
    I think you make an excellent point. I also believe AI will never be able to truly emulate human thought. It can be disguised to look like it does, because it's a program that can be adjusted and crafted by a human being. But the real meat of AI will be in solving very specialized problems.

    Then there's "consumer AI". This is the attempt to emulate a human being in terms of human-computer interactions, more focused on convincing psychology rather than solving complex problems. Of course you could eventually take the "shell" of a consumer AI and link it to more advanced AI processing, so that you may have a "pleasant chat" AI program that would then launch complex AI programs in reaction to requests.