Maybe the point of view of a Biologist might help in this discussion.
I wholeheartedly agree that we first of all must define "intelligence" and it becomes very clear in this thread that we have different ideas of what "intelligence" might be.
I think computers can remember facts and access pre-programmed solutions quicker than we can. They may one day be equally good at finding parallels and transferring pre-programmed solutions to them, provided the situations are extremely similar. However, I wouldn't count that as intelligence but only as memory.
Also: algorithms. Computers can only solve problems they are programmed to solve, and only in the way they are programmed to solve them. Real life is full of NP-hard problems that humans are still better at finding workable solutions to. Again, this is down to humans being able to make quick big-picture analyses of complex situations, something computers cannot do yet, as we don't know how to program it.
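To make the scale of the problem concrete, here is a minimal brute-force sketch (the city coordinates are invented for illustration; this only shows the combinatorial blow-up of exhaustive search, not how any particular system works):

```python
# Minimal illustration: brute-force travelling salesman search.
# The number of routes grows factorially with the number of cities,
# which is why exact exhaustive search quickly becomes hopeless.
from itertools import permutations
from math import dist, factorial

# Hypothetical city coordinates, purely for illustration.
cities = [(0, 0), (2, 3), (5, 1), (6, 4), (1, 5), (4, 6)]

def route_length(route):
    """Total length of a closed tour visiting the cities in order."""
    return sum(dist(route[i], route[(i + 1) % len(route)])
               for i in range(len(route)))

# Exhaustive search: try every ordering of the remaining cities.
best = min(permutations(cities[1:]),
           key=lambda rest: route_length((cities[0],) + rest))
print("best tour length:", round(route_length((cities[0],) + best), 2))
print("routes examined:", factorial(len(cities) - 1))  # 120 here; ~10^18 for 21 cities
```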
The ability to judge the artistic quality of a painting is, in my opinion, based mostly on emotions: we usually consider a painting good when it is aesthetically pleasing to us and makes us feel well. As we don't yet know with absolute certainty how emotions work, predictions of computers being programmed to have emotions would be wildly speculative. I doubt it, but it's only my personal opinion.
A computer could be programmed to emulate emotions, but of course, it wouldn't actually *have* emotions. Likewise, a computer could give the illusion of being intelligent, but that doesn't mean it actually *is*.
For biologists the official definition of intelligence is "the ability to transfer learned knowledge into completely new situations" (at least that was the definition when I studied Biology, in the 80s). This also includes realizing that what we previously learned might have been wrong.
That's a reasonable definition, and one that computers generally fail to meet.
I am not sure you can programme a computer to doubt itself. This would lead to a feedback loop, as it can only think in 0 and 1. If it doubted itself, it would get a 0 = 1.
Human brains work in a far wider range of possibilities than 0 and 1. We think in the complete interval from minus infinity to plus infinity.
So, unless we can overcome the 0/1 limitations in programming computers, I think they will be unable to develop intelligence.
A given algorithm could produce multiple outputs and assign confidence values to each. This is what Watson does. But, again, these are values arrived at based on analysis of available data. They aren't "gut feelings" or anything like that. They're inherently quantitative.
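A minimal sketch of what that could look like, assuming a made-up pair of evidence features and a made-up scoring rule (this is not Watson's actual pipeline, just an illustration of "multiple outputs with confidence values"):

```python
# Minimal sketch: a scorer ranks several candidate answers and attaches
# a confidence value to each, derived purely from quantitative evidence.
from dataclasses import dataclass

@dataclass
class Candidate:
    answer: str
    evidence_hits: int     # how many supporting passages were found (invented feature)
    source_quality: float  # 0..1 average reliability of those passages (invented feature)

def confidence(c: Candidate) -> float:
    """Combine the evidence features into a single 0..1 confidence score."""
    raw = c.evidence_hits * c.source_quality
    return raw / (raw + 1.0)  # squash into (0, 1)

candidates = [
    Candidate("Toronto", evidence_hits=3, source_quality=0.4),
    Candidate("Chicago", evidence_hits=7, source_quality=0.8),
]

for c in sorted(candidates, key=confidence, reverse=True):
    print(f"{c.answer}: confidence {confidence(c):.2f}")
```

The scores are nothing like a "gut feeling": they fall straight out of arithmetic on whatever features the programmer chose to measure.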
Hmm, re-reading this I think I need to explain what I meant by transferring knowledge to completely new situations. Sorry - it's perhaps not the best example but I can't think of a better one atm:
I find a stone shard with a sharp edge. I realize it cuts. It can be used as a weapon. I don't get close enough to my prey to cut it. How can I get my sharp stone closer? I throw it. How can I ensure that it hits my prey with the right side, cutting it instead of just bruising it? I build a spear.
A computer could not make this connection. It would think: the stone cuts; I can use it to cut up my prey once I have killed it.
Unless you programme it to do so, the computer couldn't make the mental jump from directly applying the instrument to indirectly applying it. And if you did programme it to do so, you would already have given it a ready-made answer to the problem. The computer wouldn't have to find a new solution but just use its memory.
A computer could make that connection *if* it has been programmed to understand the notion of throwing as well as the physics involved. I actually don't think that one is a big leap.
* This rock is sharp.
* A sharp object, when applied edge-on with sufficient force to a softer object, can damage it.
* This animal is too dangerous to get close enough to slice with the rock.
* I know how to throw, and I can throw with enough force to inflict damage on the animal.
Without the last bit of knowledge, a computer would be helpless. It could not develop this knowledge on its own--it would have to be programmed.
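Here is a toy sketch of that argument, assuming a hand-written set of facts and rules (simple forward chaining over a knowledge base; every rule, including the crucial "throw it" one, has to be supplied by the programmer):

```python
# Toy forward-chaining inference: derive new facts by applying rules
# until nothing new can be concluded.
facts = {"rock is sharp", "animal is dangerous", "I can throw hard"}

# Each rule: if all premises are known facts, its conclusion becomes a fact.
rules = [
    ({"rock is sharp"}, "rock can cut"),
    ({"rock can cut", "animal is dangerous"}, "need to cut from a distance"),
    # Remove this rule and the system never reaches the spear idea:
    ({"need to cut from a distance", "I can throw hard"}, "throw the rock"),
    ({"throw the rock", "rock can cut"}, "attach rock to shaft to control the cutting edge"),
]

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print("spear idea reached:",
      "attach rock to shaft to control the cutting edge" in facts)
```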
That doesn't mean it couldn't be programmed to *learn*: it could start out without any knowledge of physics or throwing and acquire them on its own. Even then, it is still operating within the bounds of its programming. Without human feedback, however, it is likely to draw bad conclusions.
One of the things we will have a very hard time doing in software is simulating human cognitive development. We aren't born knowing much at all. Our first few years are spent learning how to interact with the world around us, and in that process we get lots of feedback, both from our own senses (learning what hurts, what feels good, etc.) and from our caretakers (from whom we learn speech, emotional expression, expectations, etc.).
A computer *could* be programmed to do something like this, but I think that would go hand-in-hand with it having a physical body capable of interacting with the world. At this point, our robots are still quite crude compared to the capabilities of a human, which would impede their ability to learn and interact. The kind of pattern recognition animal brains are good at is also a difficult, brute-force problem for digital computers. Whereas you and I can recognize someone's face without even trying, a computer must go to substantial effort to do the same.
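For a sense of what "substantial effort" means here, a minimal brute-force template-matching sketch (random arrays stand in for real images; actual face recognition is far more sophisticated than this):

```python
# Minimal sketch of brute-force template matching: slide a small patch
# over an image and score every position. Even this toy version does
# thousands of comparisons for a tiny image.
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((100, 100))          # stand-in for a grayscale photo
template = image[40:50, 60:70].copy()   # the patch we want to find again

best_score, best_pos = -np.inf, None
h, w = template.shape
for y in range(image.shape[0] - h + 1):
    for x in range(image.shape[1] - w + 1):
        patch = image[y:y+h, x:x+w]
        # negative sum of squared differences: higher is a better match
        score = -np.sum((patch - template) ** 2)
        if score > best_score:
            best_score, best_pos = score, (y, x)

print("best match at", best_pos)  # (40, 60) for this toy example
```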
This is part of why I think computers, as they exist today, will remain hopelessly limited in terms of emulating the full spectrum of human intelligence. Incidentally, studies of actual computer intelligence (some of which are cited in this thread) bear that out: computers may appear very, very competent in some areas, while completely clueless in others. This is part and parcel of the machine's digital nature, relatively low power, and our own poor understanding of just how cognition works.