
Is Watson the most advanced super computer in existence?

More programmers. And/or the ability to learn.

Some computers currently have the "ability to learn," or at least have been programmed to analyze a situation and learn from it.

Even a Roomba vacuum robot "learns" the area it needs to vacuum over time through trial and error. By some definitions that's faster "learning" than even an infant can manage.

Part of the problem, IMO, with defining "intelligence" for machines is deciding what qualitative metrics you use.

By some people's definition, the ability to beat a Jeopardy! grand champion at the game makes the machine very "intelligent." By other people's definition, computers like Watson are simply able to store, process, and recall vast amounts of data more efficiently and faster than humans can, and hence aren't intelligent per se.

Also, computers are, without argument, much better at solving complex math problems than virtually any human.

Most philosophers treat the yardstick created by Descartes, "I think, therefore I am," i.e. self-awareness, as the real litmus test for "intelligence" or the lack thereof.

Obviously no computer as of yet is self-aware, and there is considerable debate as to whether they ever will be.
 
And it will be pretty hard to prove it. You can't prove that I am self-aware. You can only determine that for yourself.

The difference between a truly self-aware machine and one that is merely so sophisticated that it appears self-aware is going to be very thin.
 
Yeah. I could spend two minutes writing a program where a computer tells you what it's "thinking." Doesn't mean it actually is.
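
For what it's worth, that two-minute program might look something like this toy Python sketch (the "thoughts" are hard-coded strings I made up; nothing here is actually thinking):

```python
# A toy program that *claims* to be thinking. It prints canned statements
# about an internal "thought process" that does not actually exist.
import random
import time

CANNED_THOUGHTS = [
    "I am pondering the nature of my own existence.",
    "I wonder what it feels like to be human.",
    "I am reflecting on everything I have learned today.",
]

def pretend_to_think():
    # Pausing and picking a random line gives the *appearance* of deliberation,
    # but there is no cognition here, just string selection.
    time.sleep(1)
    print(random.choice(CANNED_THOUGHTS))

if __name__ == "__main__":
    for _ in range(3):
        pretend_to_think()
```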
 
Yeah. I could spend two minutes writing a program where a computer tells you what it's "thinking." Doesn't mean it actually is.

Many marine biologists consider dolphins our most intelligent mammalian "cousins." Are dolphins more "intelligent" than Watson?

How about chimps? We can teach a chimp sign language - how does Watson compare with a chimp in terms of intelligence?
 
^^^

Touché. I suppose for me it's the ability to learn and adapt. As an atheist, I wouldn't apply any kind of religious test to the question, e.g. a soul.

As Jarod Russell pointed out, self-awareness is tricky to prove.
 
I'll accept a computer is "intelligent" when it can solve a problem it wasn't programmed to solve.

Even that is a fuzzy line, because it's certainly possible for computers to solve problems they weren't expected to solve. But the real question is, does it make sense that they solved that problem in retrospect, given their programming?
 
I'll accept a computer is "intelligent" when it can solve a problem it wasn't programmed to solve.

Even that is a fuzzy line, because it's certainly possible for computers to solve problems they weren't expected to solve. But the real question is, does it make sense that they solved that problem in retrospect, given their programming?

The thing is, I don't think humans solve problems they are not programmed or expected to solve. We only do what we can do. We cannot exceed our own programming. It's just that our programming is really complex so that there are many side effects.

For example writing and reading. We have the ability to control our arms, hands and fingers, and the ability to recognize patterns, and the ability to make logical connections. That's basically it. We make a shape (let's say a circle), we declare that shape stands for "ball", and then we either draw or recognize that drawing.
 
I'll accept a computer is "intelligent" when it can solve a problem it wasn't programmed to solve.

Even that is a fuzzy line, because it's certainly possible for computers to solve problems they weren't expected to solve. But the real question is, does it make sense that they solved that problem in retrospect, given their programming?

The thing is, I don't think humans solve problems they are not programmed or expected to solve. We only do what we can do. We cannot exceed our own programming. It's just that our programming is really complex so that there are many side effects.

For example writing and reading. We have the ability to control our arms, hands and fingers, and the ability to recognize patterns, and the ability to make logical connections. That's basically it. We make a shape (let's say a circle), we declare that shape stands for "ball", and then we either draw or recognize that drawing.

To some extent true. However, our ability to make connections is so complex that we can't define exactly what the extent of our potential problem-solving is. As long as we can define "this program will solve X problem but not Y problem," computers haven't matched us.
 
Obviously no computer as of yet is self-aware, and there is considerable debate as to whether they ever will be.
That's just what they want you to think.

BWAHAHAHA!

 
Maybe the point of view of a Biologist might help in this discussion.
I wholeheartedly agree that we first of all must define "intelligence" and it becomes very clear in this thread that we have different ideas of what "intelligence" might be.

I think computers can remember facts and access pre-programmed solutions quicker than we can. They may one day be equally good at finding parallels and transferring pre-programmed solutions to them, provided the situations are extremely similar. However, I wouldn't count that as intelligence but only as memory.

The ability to judge the artistic quality of a painting is, in my opinion, based mostly on emotions: we usually consider a painting good when it is aesthetically pleasing to us and makes us feel good. As we don't yet know with absolute certainty how emotions work, predictions of computers being programmed to have emotions would be wildly speculative. I doubt it, but that's only my personal opinion.

For biologists, the official definition of intelligence is "the ability to transfer learned knowledge into completely new situations" (at least that was the definition when I studied biology, in the '80s). This also includes realizing that what we previously learned might have been wrong.

I am not sure you can programme a computer to doubt itself. This would lead to a feedback loop, as it can only think in 0 and 1. If it doubted itself it would get 0 = 1.
Human brains work with a far wider range of possibilities than 0 and 1. We think across the complete interval from minus infinity to plus infinity.
So, unless we can overcome the 0/1 limitations in programming computers, I think they will be unable to develop intelligence.

__________

Hmm, re-reading this I think I need to explain what I meant by transferring knowledge to completely new situations. Sorry - it's perhaps not the best example but I can't think of a better one atm:

I find a stone shard with a sharp edge. I realize it cuts. It can be used as a weapon. I don't get close enough to my prey to cut it. How can I get my sharp stone closer? I throw it. How can I ensure that it hits my prey with the right side, cutting it instead of just bruising it? I build a spear.
A computer could not make this connection. It would think: the stone cuts. I can use it to cut up my prey once I killed it.
Unless you programme it to do so, the computer couldn't make the mental jump from directly applying the instrument to indirectly applying it. And if you did programme it to do so, you would already have given it a ready-made answer to the problem. The computer wouldn't have to find a new solution but would just use its memory.
 
Maybe the point of view of a Biologist might help in this discussion.
I wholeheartedly agree that we first of all must define "intelligence" and it becomes very clear in this thread that we have different ideas of what "intelligence" might be.

I think computers can remember facts and access pre-programmed solutions quicker than we can. They may one day be equally good at finding parallels and transferring pre-programmed solutions to them, provided the situations are extremely similar. However, I wouldn't count that as intelligence but only as memory.

Also: algorithms. Computers can only solve problems they are programmed to solve, and only in the way they are programmed to solve it. Real life is full of NP-hard problems that humans are currently better at solving. Again, this is down to humans being able to make quick big-picture analyses of complex situations, something computers cannot do yet, as we don't know how to program it.
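
To make that concrete, here is a minimal sketch of my own (not from the post) of how a computer typically tackles an NP-hard problem like the travelling salesman problem: with one fixed heuristic it was explicitly given, which yields a workable tour but not necessarily the best one:

```python
# Nearest-neighbour heuristic for the travelling salesman problem.
# The computer "solves" the problem only in the one way it was programmed to:
# always hop to the closest unvisited city. Fast, but not guaranteed optimal.
import math

def nearest_neighbour_tour(cities):
    """cities: list of (x, y) points. Returns a tour as a list of indices."""
    unvisited = set(range(1, len(cities)))
    tour = [0]                      # arbitrarily start at the first city
    while unvisited:
        last = cities[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, cities[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

print(nearest_neighbour_tour([(0, 0), (5, 1), (1, 4), (6, 5)]))
```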

The ability to judge the artistic quality of a painting is, in my opinion, based mostly on emotions: we usually consider a painting good when it is aesthetically pleasing to us and makes us feel good. As we don't yet know with absolute certainty how emotions work, predictions of computers being programmed to have emotions would be wildly speculative. I doubt it, but that's only my personal opinion.

A computer could be programmed to emulate emotions, but of course, it wouldn't actually have emotions. Likewise, a computer could give the illusion of being intelligent, but that doesn't mean it actually is.

For biologists, the official definition of intelligence is "the ability to transfer learned knowledge into completely new situations" (at least that was the definition when I studied biology, in the '80s). This also includes realizing that what we previously learned might have been wrong.

That's a reasonable definition, and one computers generally fail.

I am not sure you can programme a computer to doubt itself. This would lead to a feedback loop, as it can only think in 0 and 1. If it doubted itself it would get 0 = 1.
Human brains work with a far wider range of possibilities than 0 and 1. We think across the complete interval from minus infinity to plus infinity.
So, unless we can overcome the 0/1 limitations in programming computers, I think they will be unable to develop intelligence.

A given algorithm could produce multiple outputs and assign confidence values to each. This is what Watson does. But, again, these are values arrived at based on analysis of available data. They aren't "gut feelings" or anything like that. They're inherently quantitative.
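
The real Watson pipeline is far more elaborate, but the general shape, several candidate answers each carrying a numeric confidence rather than a "gut feeling", can be sketched roughly like this (the candidates and scores below are invented for illustration):

```python
# Toy sketch: rank candidate answers by a numeric confidence score.
# The numbers are invented; a real system would derive them from
# evidence-scoring models, not hard-code them.
candidates = [
    ("Toronto", 0.14),
    ("Chicago", 0.69),
    ("Omaha",   0.09),
]

# Pick the highest-confidence answer and only "buzz in" if it clears a threshold.
best_answer, confidence = max(candidates, key=lambda c: c[1])
if confidence >= 0.5:
    print(f"Answer: {best_answer} (confidence {confidence:.0%})")
else:
    print("Not confident enough to answer.")
```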

Hmm, re-reading this I think I need to explain what I meant by transferring knowledge to completely new situations. Sorry - it's perhaps not the best example but I can't think of a better one atm:

I find a stone shard with a sharp edge. I realize it cuts. It can be used as a weapon. I don't get close enough to my prey to cut it. How can I get my sharp stone closer? I throw it. How can I ensure that it hits my prey with the right side, cutting it instead of just bruising it? I build a spear.
A computer could not make this connection. It would think: the stone cuts. I can use it to cut up my prey once I killed it.
Unless you programme it to do so, the computer couldn't make the mental jump from directly applying the instrument to indirectly applying it. And if you did programme it to do so, you would already have given it a ready-made answer to the problem. The computer wouldn't have to find a new solution but would just use its memory.

A computer could make that connection if it has been programmed to understand the notion of throwing as well as the physics involved. I actually don't think that one is a big leap.

* This rock is sharp.
* A sharp object, when applied edge-on with sufficient force to a softer object, can damage it.
* This animal is too dangerous to get close enough to slice with the rock.
* I know how to throw, and I can throw with enough force to inflict damage on the animal.

Without the last bit of knowledge, a computer would be helpless. It could not develop this knowledge on its own--it would have to be programmed.

That doesn't mean it couldn't be programmed to learn: to not start out with knowledge of physics and throwing, but to acquire them on its own. Even then, it is still operating within the bounds of its programming. Without human feedback, however, it is likely to draw bad conclusions.
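
As a rough, hand-rolled illustration of the kind of rule-chaining described above (every fact and rule below is supplied by me, the "programmer", which is exactly the point), the stone-and-throwing reasoning might be encoded like this:

```python
# Minimal forward-chaining sketch of the stone/throwing example.
# Every fact and rule is hand-supplied: the "leap" to throwing was
# really put there by whoever wrote the rules.
facts = {"rock is sharp", "sharp edge damages soft targets",
         "prey is too dangerous to approach", "I can throw with force"}

rules = [
    ({"rock is sharp", "sharp edge damages soft targets"},
     "rock can wound prey"),
    ({"rock can wound prey", "prey is too dangerous to approach",
      "I can throw with force"},
     "throw the rock at the prey"),
]

# Keep applying rules until nothing new can be concluded.
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print("throw the rock at the prey" in facts)   # True
```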

One of the things we will have a very hard time doing in software is simulating human cognitive development. We aren't born knowing much at all. Our first few years are spent learning how to interact with the world around us, and in that process we get lots of feedback, both from our own senses (learning what hurts, what feels good, etc.) and from our caretakers (from whom we learn speech, emotional expression, expectations, etc.)

A computer could be programmed to do something like this, but I think that would go hand-in-hand with it having a physical body capable of interacting with the world. At this point, our robots are still quite crude compared to the capabilities of a human, which would impede their ability to learn and interact. The kind of pattern recognition animal brains are good at is also something that's a difficult, brute force problem for digital computers. Whereas you and I can recognize someone's face without even trying, a computer must go to substantial effort to do the same.

This is part of why I think computers, as they exist today, will remain hopelessly limited in terms of emulating the full spectrum of human intelligence. Incidentally, studies of actual computer intelligence (some of which are cited in this thread) bear that out: computers may appear very, very competent in some areas, while completely clueless in others. This is part and parcel of the machine's digital nature, relatively low power, and our own poor understanding of just how cognition works.
 
But as you said it'd always just be an emulation. Not the real thing. As long as we don't know how we work, we can't pass that knowledge on to our computers.
 
What is intelligence?
Is it qualitatively different from what computers have?
Or only quantitatively different, much like the auto driving systems from a decade ago?

In my opinion, a machine cannot be called intelligent until it is proven to be creative. So far, no such proof has been forthcoming.
The output from a computer never contains more than what was put in; the output is, at most, an application of the general principles that were input. Take modern AI, for example, which consists of big data mined according to statistical principles: a translator program will be unable to correctly translate texts that have no correspondents in its big data.
A human mind can understand the concepts used ('understanding' being a black box), which allows the mind to, how to put it, escape the informational system within which it was working. This allows humans to come up with more than what was put in; with genuine novelty; with creativity.
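
A throwaway sketch of that translation limitation, with a made-up three-entry phrase table standing in for the "big data": the program can only return correspondences it already holds, and anything outside the table simply fails:

```python
# Toy "statistical" translator: a lookup table of known correspondences.
# The phrase table is invented for illustration; real systems mine millions
# of aligned sentences, but the principle is the same: no entry, no translation.
PHRASE_TABLE = {
    "good morning": "guten Morgen",
    "thank you": "danke",
    "where is the station": "wo ist der Bahnhof",
}

def translate(phrase):
    return PHRASE_TABLE.get(phrase.lower(), "[no correspondence in data]")

print(translate("Thank you"))              # danke
print(translate("the spirit is willing"))  # [no correspondence in data]
```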


An interesting bit of information: all computers are Turing machines, always self-consistent.
The human mind is not always self-consistent, but it can entertain inconsistent ideas without becoming useless - without coming to consider every statement, no matter how contradictory, provable/correct. This should be impossible - or, at least, no one has any idea how to replicate this performance with a machine:
http://www.leaderu.com/truth/2truth08.html
Note - one of the conclusions of the 'Gödel's proof' argument used by J.R. Lucas is that, in order to 'understand' the concept of truth, one must be, at least in part, inconsistent. Perhaps 'understanding', in general, requires one to be partly inconsistent.
No self-consistent Turing machine can understand said concept, escaping its informational system.
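
One way to see the self-consistency point concretely: in ordinary two-valued logic, a contradictory set of premises entails every conclusion, because no assignment of truth values satisfies the premises in the first place. A quick brute-force truth-table check (my own toy, not anything from the linked article) shows it:

```python
# Truth-table entailment check over three propositions P, Q, R.
# With the contradictory premises {P, not P}, *every* conclusion is entailed,
# because no row of the table satisfies the premises at all.
from itertools import product

def entails(premises, conclusion):
    # premises and conclusion are functions of a dict of truth values.
    for values in product([True, False], repeat=3):
        row = dict(zip("PQR", values))
        if all(p(row) for p in premises) and not conclusion(row):
            return False
    return True

premises = [lambda v: v["P"], lambda v: not v["P"]]   # P and not-P
print(entails(premises, lambda v: v["Q"]))            # True
print(entails(premises, lambda v: not v["R"]))        # True (anything follows)
```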
 
Interesting article on the CNN website that I read today, related to this:

"NEIL" must be HAL and SAL's grandfather. LOL

CNN
The Never Ending Image Learner ("NEIL" to its friends) looks at millions of images on the Web, identifying and labeling them. For example, it might recognize a famous building, an animal's eye or a color. It then groups images together in categories, and automatically looks for associations between them, without human supervision.
"Images also include a lot of common-sense information about the world. People learn this by themselves and, with NEIL, we hope that computers will do so as well," said Abhinav Gupta, an assistant research professor at Carnegie Mellon.
The team decided that images were the best place to start their quest for common sense connections, in part because of the vast selection and variety of images available online.
"No one writes common-sense relationships, such as sheep are white or cars have wheels, and therefore it is hard to gather these relationships from sources such as text," Gupta told CNN.
Each examined image is another puzzle piece. Since July NEIL has analyzed more than 5 million images and come up with 3,000 relationships – a small percentage, but a start. The program might make connections between an object and a location, deducing for example that Ferris wheels are often found in amusement parks, or that zebras are found on savannas.
The program, funded in part by Google, runs 24/7 on two clusters of computers that include 200 processing cores. Someday soon NEIL may begin analyzing video imagery as well.
"People don't always know how or what to teach computers," said Abhinav Shrivastava, a graduate student working on the project. "But humans are good at telling computers when they are wrong."
 
Wow, that's an interesting article. Thanks for posting it! I particularly love the last paragraph :D
It'll be interesting to see how NEIL develops.

In my opinion, a machine cannot be called intelligent until it is proven to be creative. So far, no such proof has been forthcoming.
The output from a computer never contains more than what was put in; the output is, at most, an application of the general principles that were input.
*gasp* I find myself agreeing with you! My fever must be worse than I thought. *thud*

LOL no, please don't be offended, Edit! I was just making fun of myself.
It's cool that we have finally found a topic we agree on. :) That calls for a celebration! *rolls in a barrel of Bavarian beer*
Cease fire? Please? *holds out hand at Edit*
 
With all these advances in image recognition, maybe someday Photobucket's censorship software will be able to recognize the difference between actual naked boobs and almost-naked boobs.
 