Is Watson the most advanced super computer in existence?

Discussion in 'Science and Technology' started by DarthTom, Nov 18, 2013.

  1. DarthTom

    DarthTom Fleet Admiral Admiral

    Joined:
    Apr 19, 2005
    Location:
    Atlanta, Georgia
    Some computers currently have the "ability to learn," or at least have been programmed to analyze a situation and learn from it.

    Even a Roomba vacuum robot "learns" the area it needs to vacuum over time through trial and error. By some definitions, that's faster "learning" than even an infant can manage.

    Part of the problem, IMO, with defining "intelligence" as it relates to machines is: what qualitative metrics do you use?

    By some people's definition, the ability to beat a Jeopardy! grand champion at the game makes the machine very "intelligent." By other people's definition, it's simply that computers like Watson are able to store and process vast amounts of data and recall it faster and more efficiently than humans can - hence they aren't intelligent per se.

    Also, computers are without argument much better at solving complex math problems than virtually any human.

    Most philosophers believe the yardstick created by Descartes - "I think, therefore I am," or self-awareness - is the real litmus test of "intelligence" or the lack thereof.

    Obviously, no computer is self-aware as of yet, and there is considerable debate as to whether any ever will be.
     
  2. JarodRussell

    JarodRussell Vice Admiral Admiral

    Joined:
    Jul 2, 2009
    And it will be pretty hard to prove it. You can't prove that I am self-aware. You can only determine that for yourself.

    The difference between a truly self-aware machine and one that is merely so sophisticated that it appears self-aware is going to be very thin.
     
  3. Robert Maxwell

    Robert Maxwell memelord Premium Member

    Joined:
    Jun 12, 2001
    Location:
    space
    Yeah. I could spend two minutes writing a program where a computer tells you what it's "thinking." Doesn't mean it actually is.
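    Roughly like this - a throwaway Python sketch (the "thoughts" are just canned strings I made up) that reports introspection it doesn't actually have:

        import random
        import time

        # Canned "thoughts" - the program isn't actually thinking about any of them.
        THOUGHTS = [
            "I wonder whether I am self-aware.",
            "I am contemplating the nature of consciousness.",
            "I feel uncertain about my future.",
        ]

        for _ in range(5):
            print("Current thought:", random.choice(THOUGHTS))
            time.sleep(2)  # pause so it looks like deliberation

    It will happily tell you it's pondering consciousness; there's nothing behind the curtain.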
     
  4. DarthTom

    DarthTom Fleet Admiral Admiral

    Joined:
    Apr 19, 2005
    Location:
    Atlanta, Georgia
    Many marine biologists believe that dolphins are our most intelligent mammalian "cousin." Are dolphins more "intelligent" than Watson?

    How about chimps? We can teach a chimp sign language - how does Watson compare with a chimp in terms of intelligence?
     
  5. Robert Maxwell

    Robert Maxwell memelord Premium Member

    Joined:
    Jun 12, 2001
    Location:
    space
    I can't answer your question without hearing your definition of "intelligence."
     
  6. DarthTom

    DarthTom Fleet Admiral Admiral

    Joined:
    Apr 19, 2005
    Location:
    Atlanta, Georgia
    ^^^

    Touché. I suppose for me it's the ability to learn and adapt. As an atheist, I wouldn't apply any kind of religious test to the question - e.g. a soul.

    As JarodRussell pointed out, self-awareness is tricky to prove.
     
  7. Lindley

    Lindley Moderator with a Soul Premium Member

    Joined:
    Nov 30, 2001
    Location:
    Bonney Lake, WA
    I'll accept a computer is "intelligent" when it can solve a problem it wasn't programmed to solve.

    Even that is a fuzzy line, because it's certainly possible for computers to solve problems they weren't expected to solve. But the real question is: in retrospect, does it make sense that they solved that problem, given their programming?
     
  8. DarthTom

    DarthTom Fleet Admiral Admiral

    Joined:
    Apr 19, 2005
    Location:
    Atlanta, Georgia
  9. JarodRussell

    JarodRussell Vice Admiral Admiral

    Joined:
    Jul 2, 2009
    The thing is, I don't think humans solve problems they are not programmed or expected to solve. We only do what we can do. We cannot exceed our own programming. It's just that our programming is really complex so that there are many side effects.

    For example, writing and reading. We have the ability to control our arms, hands and fingers, the ability to recognize patterns, and the ability to make logical connections. That's basically it. We make a shape (let's say a circle), we declare that this shape stands for "ball", and then we either draw or recognize that drawing.
     
  10. Lindley

    Lindley Moderator with a Soul Premium Member

    Joined:
    Nov 30, 2001
    Location:
    Bonney Lake, WA
    To some extent true. However, our ability to make connections is so complex that we can't define exactly what the extent of our potential problem-solving is. As long as we can define "this program will solve X problem but not Y problem," computers haven't matched us.
     
  11. scotpens

    scotpens Professional Geek Premium Member

    Joined:
    Nov 29, 2009
    Location:
    City of the Fallen Angels
    That's just what they want you to think.

    BWAHAHAHA!

     
  12. rhubarbodendron

    rhubarbodendron Vice Admiral Admiral

    Joined:
    May 1, 2011
    Location:
    milky way, outer spiral arm, Sol 3
    Maybe the point of view of a biologist might help in this discussion.
    I wholeheartedly agree that we first of all must define "intelligence", and it becomes very clear in this thread that we have different ideas of what "intelligence" might be.

    I think computers can remember facts and access pre-programmed solutions quicker than we can. They may one day be equally good at finding parallels and transferring pre-programmed solutions to new situations, provided the situations are extremely similar. However, I wouldn't count that as intelligence but only as memory.

    The ability to judge the artistic quality of a painting is in my opinion based mostly on emotions: we usually consider a painting good when it is aesthetically pleasing to us and makes us feel good. As we don't yet know with absolute certainty how emotions work, any prediction about computers being programmed to have emotions would be wildly speculative. I doubt it, but it's only my personal opinion.

    For biologists, the official definition of intelligence is "the ability to transfer learned knowledge into completely new situations". (At least that was the definition when I studied biology, in the 80s.) This also includes realizing that what we have previously learned might have been wrong.

    I am not sure you can programme a computer to doubt itself. This would lead to a feedback loop, as it can only think in 0 and 1. If it doubted itself, it would get a 0 = 1.
    Human brains work in a far wider range of possibilities than 0 and 1. We think in the complete interval from minus infinity to plus infinity.
    So, unless we can overcome the 0/1 limitations in programming computers, I think they will be unable to develop intelligence.

    __________

    Hmm, re-reading this I think I need to explain what I meant by transferring knowledge to completely new situations. Sorry - it's perhaps not the best example but I can't think of a better one atm:

    I find a stone shard with a sharp edge. I realize it cuts. It can be used as a weapon. I don't get close enough to my prey to cut it. How can I get my sharp stone closer? I throw it. How can I ensure that it hits my prey with the right side, cutting it instead of just bruising it? I build a spear.
    A computer could not make this connection. It would think: the stone cuts; I can use it to cut up my prey once I have killed it.
    Unless you programme it to do so, the computer couldn't make the mental jump from applying the instrument directly to applying it indirectly. And if you did programme it to do so, you would already have given it a ready-made answer to the problem. The computer wouldn't have to find a new solution but would just use its memory.
     
  13. Robert Maxwell

    Robert Maxwell memelord Premium Member

    Joined:
    Jun 12, 2001
    Location:
    space
    Also: algorithms. Computers can only solve problems they are programmed to solve, and only in the way they are programmed to solve them. Real life is full of NP-hard problems that humans are currently better at solving. Again, this comes down to humans being able to make quick big-picture analyses of complex situations, something computers cannot do yet, since we don't know how to program it.

    A computer could be programmed to emulate emotions, but of course, it wouldn't actually have emotions. Likewise, a computer could give the illusion of being intelligent, but that doesn't mean it actually is.

    That's a reasonable definition, and one computers generally fail.

    A given algorithm could produce multiple outputs and assign confidence values to each. This is what Watson does. But, again, these are values arrived at based on analysis of available data. They aren't "gut feelings" or anything like that. They're inherently quantitative.
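    In spirit it's something like this toy Python sketch - the candidates and scoring are invented for illustration and have nothing to do with Watson's actual pipeline - where each candidate answer gets a purely quantitative confidence:

        # Toy candidate ranking: each supporting passage found bumps a candidate's
        # score, and the scores are normalized into confidence values.
        def rank_candidates(evidence_hits):
            total = sum(evidence_hits.values()) or 1
            ranked = sorted(evidence_hits.items(), key=lambda kv: kv[1], reverse=True)
            return [(answer, hits / total) for answer, hits in ranked]

        candidates = {"Chicago": 14, "Springfield": 5, "Toronto": 3}
        for answer, confidence in rank_candidates(candidates):
            print(f"{answer}: {confidence:.0%}")

    No "gut feeling" anywhere - just counting and dividing.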

    A computer could make that connection if it has been programmed to understand the notion of throwing as well as the physics involved. I actually don't think that one is a big leap.

    * This rock is sharp.
    * A sharp object, when applied edge-on with sufficient force to a softer object, can damage it.
    * This animal is too dangerous to get close enough to slice with the rock.
    * I know how to throw, and I can throw with enough force to inflict damage on the animal.

    Without the last bit of knowledge, a computer would be helpless. It could not develop this knowledge on its own--it would have to be programmed.
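    To make that concrete, here's a rough Python sketch - the facts and rules are just the four bullets above, hand-coded by me - of the forward chaining a simple rule engine would do. Every "insight" it reaches was already present in the rules we typed in:

        # Hand-coded facts and rules; the program can only derive what these entail.
        facts = {"rock_is_sharp", "animal_too_dangerous_to_approach", "can_throw_with_force"}

        rules = [
            ({"rock_is_sharp"}, "rock_can_cut"),
            ({"rock_can_cut", "animal_too_dangerous_to_approach", "can_throw_with_force"},
             "plan: throw_rock_at_animal"),
        ]

        # Forward chaining: keep applying rules until nothing new can be derived.
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if premises <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True

        print("plan: throw_rock_at_animal" in facts)  # True - but only because we wrote that rule
        # Drop "can_throw_with_force" from the starting facts and the plan never appears.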

    That doesn't mean it couldn't be programmed to learn - to not start out with knowledge of physics and throwing, but to acquire them on its own. Even then, it is still operating within the bounds of its programming. Without human feedback, however, it is likely to draw bad conclusions.

    One of the things we will have a very hard time doing in software is simulating human cognitive development. We aren't born knowing much at all. Our first few years are spent learning how to interact with the world around us, and in that process we get lots of feedback, both from our own senses (learning what hurts, what feels good, etc.) and from our caretakers (from whom we learn speech, emotional expression, expectations, etc.)

    A computer could be programmed to do something like this, but I think that would go hand-in-hand with it having a physical body capable of interacting with the world. At this point, our robots are still quite crude compared to the capabilities of a human, which would impede their ability to learn and interact. The kind of pattern recognition animal brains are good at is also something that's a difficult, brute force problem for digital computers. Whereas you and I can recognize someone's face without even trying, a computer must go to substantial effort to do the same.
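    For a sense of the brute force involved, here's a crude sketch - nothing like a real face recognizer, just pixel-by-pixel nearest-neighbour matching against stored templates with made-up data - of what "recognition" often reduces to for a machine:

        import numpy as np

        # Stored "known faces": tiny grayscale images flattened into vectors.
        known_faces = {
            "alice": np.random.rand(64 * 64),
            "bob": np.random.rand(64 * 64),
        }

        def recognize(image_vector):
            # Compare the input against every template, pixel by pixel,
            # and return whichever differs least. No understanding involved.
            best_name, best_dist = None, float("inf")
            for name, template in known_faces.items():
                dist = np.linalg.norm(image_vector - template)
                if dist < best_dist:
                    best_name, best_dist = name, dist
            return best_name

        print(recognize(np.random.rand(64 * 64)))

    It grinds through every stored example for every query, and an approach this naive falls apart the moment the lighting or the pose changes.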

    This is part of why I think computers, as they exist today, will remain hopelessly limited in terms of emulating the full spectrum of human intelligence. Incidentally, studies of actual computer intelligence (some of which are cited in this thread) bear that out: computers may appear very, very competent in some areas, while completely clueless in others. This is part and parcel of the machine's digital nature, relatively low power, and our own poor understanding of just how cognition works.
     
  14. rhubarbodendron

    rhubarbodendron Vice Admiral Admiral

    Joined:
    May 1, 2011
    Location:
    milky way, outer spiral arm, Sol 3
    But as you said it'd always just be an emulation. Not the real thing. As long as we don't know how we work, we can't pass that knowledge on to our computers.
     
  15. Edit_XYZ

    Edit_XYZ Fleet Captain Fleet Captain

    Joined:
    Sep 30, 2011
    Location:
    At star's end.
    What is intelligence?
    Is it qualitatively different from what computers have?
    Or only quantitatively different, much like the auto driving systems from a decade ago?

    In my opinion, a machine cannot be called intelligent until it is proven to be creative. So far, no such proof has been forthcoming.
    The output from a computer never contains more than what was put in; the output is, at most, an application of the general principles that were input. Take modern AI, which consists of big data mined according to statistical principles: a translator program will be unable to correctly translate texts that have no correspondents in its big data.
    A human mind can understand the concepts used ('understanding' being a black box), which allows the mind to - how to put it - escape the informational system within which it worked. This allows humans to come up with more than what was put in; with genuine novelty; with creativity.
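    A crude illustration of that limitation - a toy phrase-table "translator" in Python, with an invented two-entry table - which can only give back what was put into it and fails on anything outside its data:

        # A toy lookup "translator": it can only map inputs it has already seen.
        phrase_table = {
            "good morning": "guten Morgen",
            "thank you": "danke",
        }

        def translate(text):
            return phrase_table.get(text.lower(), "<no translation found>")

        print(translate("Thank you"))         # danke
        print(translate("the cat is black"))  # <no translation found> - not in the data

    Real statistical systems are vastly more sophisticated, but the principle stands: nothing comes out that wasn't, in some form, put in.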


    An interesting piece of information: all computers are Turing machines, and always self-consistent.
    The human mind is not always self-consistent, but it can entertain inconsistent ideas without becoming useless - that is, without considering every statement, no matter how contradictory, to be provable/correct. This should be impossible - or, at least, no one has any idea how to replicate this performance with a machine:
    http://www.leaderu.com/truth/2truth08.html
    Note - one of the conclusions of the 'Gödel's proof' argument used by J. R. Lucas is that, in order to 'understand' the concept of truth, one must be at least partly inconsistent. Perhaps 'understanding' in general requires one to be partly inconsistent.
    No self-consistent Turing machine can understand said concept and thereby escape its informational system.
     
  16. DarthTom

    DarthTom Fleet Admiral Admiral

    Joined:
    Apr 19, 2005
    Location:
    Atlanta, Georgia
    Interesting article I read today on CNN's website related to this:

    "Neil" must be HAL and SAL's grandfather. LOL

    CNN
     
    Last edited: Dec 18, 2013
  17. rhubarbodendron

    rhubarbodendron Vice Admiral Admiral

    Joined:
    May 1, 2011
    Location:
    milky way, outer spiral arm, Sol 3
    wow, that's an interesting article. Thanks for posting it! I particularly love the last paragraph :D
    It'll be interesting to see how Neil develops.

    *gasp* I find myself agreeing with you! My fever must be worse than I thought.
    LOL no, please don't be offended, Edit! I was just making fun of myself.
    It's cool that we have finally found a topic we agree on. :) That calls for a celebration! *rolls in a barrel of Bavarian beer*
    Cease fire? Please? *holds out hand at Edit*
     
  18. scotpens

    scotpens Professional Geek Premium Member

    Joined:
    Nov 29, 2009
    Location:
    City of the Fallen Angels
    With all these advances in image recognition, maybe someday Photobucket's censorship software will be able to tell the difference between actual naked boobs and almost-naked boobs.
     
  19. rhubarbodendron

    rhubarbodendron Vice Admiral Admiral

    Joined:
    May 1, 2011
    Location:
    milky way, outer spiral arm, Sol 3
    that would at least be a positive use. I can imagine a few less pleasant ones.
     
  20. Edit_XYZ

    Edit_XYZ Fleet Captain Fleet Captain

    Joined:
    Sep 30, 2011
    Location:
    At star's end.
    *shakes hand*

    I would be interested in your thoughts on the latter part of my previous post from this thread. It has more substance than the part you quoted.