• Welcome! The TrekBBS is the number one place to chat about Star Trek with like-minded fans.
    If you are not already a member then please register an account and join in the discussion!

We might be making inroads into Artificial Thinking

Let me try an experiment. It's a spur-of-the-moment thing, so it might not work... but anyway...

What is intelligence? What are the characteristics that would make you say "That thing is intelligent"? To avoid talking about hopelessly outdated notions like the Turing Test, let's assume that the thing we're observing cannot communicate in any language (e.g., a self-driving car). How would you figure out whether the thing has intelligence?

What about animals? Are animals intelligent? Are there animals that are more intelligent than other animals? Do you think an animal species will one day, after countless generations of evolution, gain the "Superintelligence" that proponents of technology singularity claim machines will ultimately gain?
 

I know you know -- I was being pedantic. I'll stop now.

The brain can emulate feats performed by a computer (analogue, digital, or quantum) and vice versa. But which is potentially greater in scope? I don't know. Penrose argues the brain can perform tasks that are beyond the capability of digital computers, and that this means AI will never reach the capability of HI, even if quantum computing is used. Personally, I think mathematics is a human construct -- there is no Platonic realm. Every now and then, someone makes an intuitive guess that turns out well.
 
Whether the brain is a computer depends on how you define "computer" and perhaps also "brain".
No, it really doesn't.

The human brain is a reflex engine. It reacts to stimuli in various ways. One of those ways is to produce external behaviors in the body it's attached to, the other is to slightly alter its own structure in response to those stimuli. The brain's response to stimuli depends entirely on its physical structure at any given moment. Because each new stimulus produces a slight change in that structure, it never responds to the same stimulus exactly the same way. This is the main reason why training and education are required for human beings: the goal is to bombard the trainee with repeated stimuli of the same type which the trainee then attempts to respond to with a specific action. Over time, the structure of the brain forms in such a way that a specific set of stimuli is more likely to produce a specific set of desirable behaviors. This takes a long time to do, and does not always work very well.

Computers, on the other hand, operate by consistent logical rules. The most basic of these rules is the mathematical distinction between 1 and 0, which map to "true" and "false." This is the limit of what a computer can actually process: a bewilderingly huge number of "true/false" statements across a physical array of several billion transistors. Computers process information by combining these many true/false statements in their many logic gates, and you can, in essence, describe literally ANYTHING if you have enough true/false statements in a table (this is essentially what a Turing machine formalizes).
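
The "build anything from true/false statements" point can be made concrete. Here's a minimal Python sketch (mine, not from the thread) of a one-bit full adder built from nothing but boolean operations -- the same kind of combination a computer's logic gates perform:

```python
def xor(a, b):
    # Exclusive-or from plain and/or/not, as a logic gate would compute it.
    return (a or b) and not (a and b)

def full_adder(a, b, carry_in):
    """Add three bits; return (sum_bit, carry_out)."""
    s1 = xor(a, b)
    sum_bit = xor(s1, carry_in)
    carry_out = (a and b) or (s1 and carry_in)
    return sum_bit, carry_out

# 1 + 1 with no carry in: sum bit 0, carry bit 1 (binary 10 = decimal 2)
print(full_adder(True, True, False))  # (False, True)
```

Chain enough of these together and you have arbitrary-width binary arithmetic, built entirely out of true/false combinations.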

What a computer does when you feed it an input depends on the state of its true/false values in its memory. It is analogous to the human brain ONLY insofar as how and when inputs alter the true/false value of those transistors depends on how its memory has been arranged in the first place (e.g. "11001001" is a read command and "00110110" is a write command.) The similarities end there: computer memory is a set of data introduced to the processor through logical processes, where human memory is a set of reactions introduced to the brain through stimuli. Strictly speaking, the brain is more similar to a ball of clay than a computer: if you press your hand against the brain, it'll leave an impression that degrades over time and never actually becomes permanent; if you press your hand against the computer, it'll record a "1" at all the places where your skin touches and a "zero" everywhere else.

What is intelligence?
The ability to take in, retain, and comprehend information. Comprehension involves the ability to recognize patterns and meaning in that information and extrapolate those patterns to make accurate predictions about the future or about unknowns. For example:
- Look at a sequence of numbers, figure out the pattern, and find the next five numbers in the sequence
- Look at an object that is partially concealed by another object, and find a way to manipulate the second object so that the first is no longer obscured
- Given a description of person A and her behavior, and a set of circumstances that happen to person A, how does this person feel?
- Write a sentence in the passive voice
- Write a sentence in the active voice
- Estimate the volume of an oddly-shaped object

These are all different "types" of intelligence but they all depend on the same thing: the ability to recognize patterns, and extrapolate those patterns to predict the future.
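
The first item on that list -- extending a number sequence -- is simple enough to sketch in code. This toy Python example (an illustration, not a serious algorithm) recognizes a constant-difference or constant-ratio pattern and extrapolates it:

```python
def extend_sequence(seq, n=5):
    # Arithmetic pattern: consecutive differences are all equal.
    diffs = [b - a for a, b in zip(seq, seq[1:])]
    if len(set(diffs)) == 1:
        step = diffs[0]
        return [seq[-1] + step * i for i in range(1, n + 1)]
    # Geometric pattern: consecutive ratios are all equal.
    ratios = [b / a for a, b in zip(seq, seq[1:]) if a != 0]
    if len(ratios) == len(seq) - 1 and len(set(ratios)) == 1:
        r = ratios[0]
        return [seq[-1] * r ** i for i in range(1, n + 1)]
    return None  # pattern not recognized

print(extend_sequence([2, 5, 8, 11]))   # [14, 17, 20, 23, 26]
print(extend_sequence([3, 6, 12, 24]))  # [48.0, 96.0, 192.0, 384.0, 768.0]
```

Note how brittle it is: anything outside the two patterns it was built to recognize simply returns nothing, which foreshadows the point below about machine intelligence being limited by its pre-programming.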

Human beings gain the ability to recognize patterns by being exposed to those patterns repeatedly in their lives. You see the same thing over and over again, you learn to recognize it the next time you see it. When you see the same thing with slight variations over and over again, you get better at seeing through the slight differences and recognize what's important about it. Sometimes, a new variation throws you off and you have to stop and think "does this still fit the pattern?" and maybe it does, maybe it doesn't, and you have to decide if it's part of the same pattern, or something new, or something in common with another pattern. This is one of the reasons intelligent people are better at dealing with ambiguity than others.

Machine intelligence is similar to human intelligence in that it collects data on certain patterns and finds ways to extrapolate those patterns to predict the future. Computers cannot natively learn to recognize patterns, however, and are totally dependent on software and pre-programming to be able to do this. This limits computer intelligence to the extent of its existing knowledge base and its ability to extract meaningful patterns from that knowledge base.
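
A tiny illustration of that dependence on the existing knowledge base: a nearest-neighbour classifier, one of the simplest pattern recognizers, can only ever answer in terms of the examples it was given. The data and labels below are invented for illustration:

```python
def nearest_label(training, point):
    # Squared Euclidean distance; no sqrt needed for comparison.
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    # The "prediction" is just the label of the closest known example.
    best = min(training, key=lambda item: dist(item[0], point))
    return best[1]

training = [((1, 1), "cat"), ((1, 2), "cat"), ((8, 9), "dog")]
print(nearest_label(training, (2, 1)))   # cat
print(nearest_label(training, (7, 8)))   # dog
# A point far outside the training data is still forced into one of the
# known labels -- the machine cannot recognize "something new".
print(nearest_label(training, (100, 0)))
```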

What about animals? Are animals intelligent?
At certain things, yes. Some of them, VERY. The ability to predict whether or not a predator is going to notice you and/or eat you or fail to notice you and walk on past is an important survival skill for most animals. For predators, the ability to judge distance, determine where a tasty animal is and where it's going, figure out how to catch it and how to kill it, are equally essential.

Are there animals that are more intelligent than other animals? Do you think an animal species will one day, after countless generations of evolution, gain the "Superintelligence" that proponents of technology singularity claim machines will ultimately gain?
No, because "superintelligence" or even humanlike intelligence are not necessarily advantageous from an evolutionary standpoint. Humans are very good at learning how to do things we usually suck at, which makes us accomplished generalists at a time when being a generalist is an advantageous trait. Other species haven't had the time to evolve generalist traits -- YET -- and we simply got here first.
 
I know you know -- I was being pedantic. I'll stop now.

The brain can emulate feats performed by a computer (analogue, digital, or quantum) and vice versa. But which is potentially greater in scope?
"Vice versa"? Human brains cannot emulate computers. Not even close.

We can IMITATE computers by reproducing a very sloppy approximation of their behavior, but emulation would mean a human being actually running computer software on his or her brain. That's not a thing that can EVER happen. A Windows operating system will hold in memory something like a million bytes of digital data every second; that's one million 8-digit combinations of ones and zeroes. A human being emulating Windows 10 would have to memorize one million 8-digit numbers and then add those numbers in his head in real time based on a storm of OTHER 8-digit numbers constantly being shown to him on a screen. Up to a certain point, this is ALMOST doable, but on a timescale so slow that it would take a mathematical savant several hours to emulate one full minute of operation by that same computer. It is important to understand, then, that human beings do not process numbers the way computers do; we operate with numbers as discrete concepts representing quantities, objects, and values. We don't see them AS numbers, we see them as patterns that behave in a predictable way and that we can manipulate to get what we want. So to human beings, there's functionally no difference between a number and a vagina: if you do This thing, then That thing happens.

Computers could theoretically simulate the brain, however, if they had an accurate model of how the brain works. The more accurate the model, the better the simulation. But to a computer, there's functionally no difference between a brain and a vagina: they're both just clouds of numbers with rules that determine when the numbers are supposed to change.

tl;dr: computers deal with numbers, people deal with objects. To computers, all objects are just numbers; to humans, all numbers are just objects.

Penrose argues the brain can perform tasks that are beyond the capability of digital computers
Penrose is dead wrong.
 
^^You're twisting my words somewhat. I wrote that we can emulate some feats that computers can perform. Ducks versus submarines.

I agree that Penrose is wrong. Theoretically, given the correct configuration, a powerful enough quantum computer could emulate any quantum object, including a human brain.
 
^^You're twisting my words somewhat. I wrote that we can emulate some feats that computers can perform. Ducks versus submarines.
Semantics, then. "Emulation" and "imitation" are not the same thing. I can sit down across from you and say everything you say exactly as you say it, but I can't actually copy your emotions or your perceptions and I have no idea what you're thinking. A computer can do the same thing, but can't gather data on my mental states, for the same reason. A computer can emulate another computer, however, and reproduce its internal data states too, even if its external behavior is noticeably different for whatever reason.

I agree that Penrose is wrong. Theoretically, given the correct configuration, a powerful enough quantum computer could emulate any quantum object, including a human brain.
While this is true, the human brain isn't really a "quantum object." It's a complex and chaotic biological object with a lot of very delicate electrochemical processes all going on at once. You wouldn't really even need a "powerful enough quantum computer" for this, just the right software and the right model with enough accuracy to track the interactions of everything that's happening in the brain.

Significantly, even a simulation of a human brain will never actually achieve consciousness. This is because the simulation is still just a numerical construct and is no different than any other numerical construct the computer might otherwise operate on. Even if the responses are equivalent to humanlike intelligence, the data process behind that intelligence is still just an aggregate of 1s and 0s and that is exactly what the computer "thinks" they are.

Consider this: a magician can make it appear as if he's sawing a woman in half, and can make the illusion very, very convincing. Likewise, an actor can appear to demonstrate pain, surprise, horror or shock and make her performance feel real. But the performance is just a trick, and the illusion created by the trick has no phenomenal reality in and of itself: the woman doesn't feel herself being sawed in half, no blades actually slice through her body, and her shock and horror are not caused by pain, trauma or blood loss -- she experiences none of these things. So even in an illusion produced by HUMANS, the genuine experience the illusion represents is absent. In an illusion produced by a computer, the experience of being conscious is equally absent.
 
Personally, I take a materialist and logical positivist world view. Each of us is nothing but a large aggregation of fundamentally quantum entities. So I believe that consciousness is just as possible to implement in a machine as in a human. To me, consciousness doesn't live in some third mental realm that someone such as Penrose envisages lying between the realm of Platonic forms and the realm of the physical world.

http://www.oocities.org/siliconvalley/pines/1684/thought.html

I can't prove that human consciousness and free will are illusions but I suspect that they are. This guy seems to share my (possibly illusory) views:

http://lemire.me/blog/2016/01/18/consciousness-and-free-will-are-illusions-you-are-just-a-robot/

I'm not going to comment further because I'm never going to accept a non-materialist viewpoint that humans are somehow special. Perhaps it's the way I'm built.
 
Personally, I take a materialist and logical positivist world view. Each of us is nothing but a large aggregation of fundamentally quantum entities. So I believe that consciousness is just as possible to implement in a machines as in a human. To me, consciousness doesn't live in some third mental realm that someone such as Penrose envisages lying between the realm of Platonic forms and the realm of the physical world.
First of all, there's the invocation of quantum mechanics again as if it's relevant to a discussion of consciousness. We don't even understand consciousness on the macroscopic level, nor do we have any hard science that makes any sort of predictions of biological functions on the quantum level, so invoking quantum mechanics in neurological or cognitive science is BEYOND premature.

Second of all, consciousness is an EXPERIENCE, not some invisible quality that "lives" anywhere. You lose/gain consciousness the same way you might lose/gain feeling when your leg falls asleep, and for very much the same reason at that. Consciousness can be altered, distorted or removed altogether by chemicals/medications, lack of blood flow or nerve damage, same as any other sensation. The brain perceives consciousness the same way it perceives any other sensation; it only SEEMS different to us because we can't localize the sensation to any part of our body like we can with most other sensations.

I can't prove that human consciousness and free will are illusions but I suspect that they are.
They pretty much are.

I'm not going to comment further because I'm never going to accept a non-materialist viewpoint that humans are somehow special. Perhaps it's the way I'm built.
Humans are not special. LOTS of things in the world are capable of consciousness. Anything with a sufficiently well developed central nervous system can and does achieve consciousness.

Machines cannot and will not achieve consciousness because they are incapable of perceiving sensations. They are digital entities, they operate on numbers, and nothing else. To machines, all sensations and all experiences are processed as numbers. To humans, all numbers are processed as sensations and experiences. So a computer could no more obtain consciousness than a human being could measure his own body temperature by thinking about it.
 
What is Intelligence?

The ability to take in, retain, and comprehend information. Comprehension involves the ability to recognize patterns and meaning in that information and extrapolate those patterns to make accurate predictions about the future or about unknowns.

I'm embarrassed to say I forgot why I asked the question, but I feel obliged to at least give some sort of response.

Anyway, that's a good definition of what intelligence is, although I wouldn't insist the predictions be "accurate". The accuracy is going to vary depending on circumstance and the difficulty of the problem. For example, I'm pretty sure no intelligence would be accurate at predicting the roll of a fair die.

Given this definition, I would assert that all animals have at least some level of intelligence. After all, even bacteria learn to figure out where food sources are and know to move towards them. To achieve this, the lowly bacterium has to draw information from its environment, figure out where the food is, and move in the right direction.

I have a partially blind dog who is extremely long-sighted; she cannot see small objects close to her. I like to play with her by placing a bit of food at random locations. It's a wonder to watch her systematic process for finding the food. First, she'll go to areas where I have been known to leave treats. When that fails, she does a sweeping search: she starts next to me and widens her search in an expanding circle.

It's clear she has developed her own strategy for finding food treats -- such an intelligent dog.
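
Her two-stage strategy can even be written down as a toy search algorithm. This Python sketch (my invention, with made-up grid coordinates) checks remembered spots first, then sweeps outward in expanding rings:

```python
import math

def find_treat(treat, known_spots, start=(0, 0), max_radius=10):
    # Stage 1: check places where treats have turned up before.
    for spot in known_spots:
        if spot == treat:
            return spot
    # Stage 2: expanding circular sweep around the starting point.
    for radius in range(1, max_radius + 1):
        for step in range(8 * radius):  # sample more points on bigger rings
            angle = 2 * math.pi * step / (8 * radius)
            x = start[0] + round(radius * math.cos(angle))
            y = start[1] + round(radius * math.sin(angle))
            if (x, y) == treat:
                return (x, y)
    return None  # gave up -- treat outside the search range

print(find_treat(treat=(3, 0), known_spots=[(5, 5)]))  # (3, 0)
```

The point of the structure: cheap high-probability checks first, exhaustive sweep second, which is a perfectly sensible strategy for a dog or a robot vacuum alike.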
 
I'm embarrassed to say I forgot why I asked the question, but I feel obliged to at least give some sort of response.

Anyway, that's a good definition of what intelligence is, although I wouldn't insist the predictions be "accurate". The accuracy is going to vary depending on circumstance and the difficulty of the problem. For example, I'm pretty sure no intelligence would be accurate at predicting the roll of a fair die.
Intelligence isn't a binary concept; the measure of your intelligence is the degree to which your predictions are accurate. So a reasonably intelligent person who knows about statistics will say "the dice will land on any particular number one in six times." A person who knows more about the dice -- and can measure the weight of them -- can be a lot more accurate than that, which is actually the whole point of that sequence in "The Royale" when Data realizes the dice are unevenly balanced and are literally cheating him just because he isn't playing stylishly enough. So he bends over, says "Daddy needs a new pair of shoes," feels the dice shift their weight in his hand, and NOW he can predict that the dice are going to roll his way.
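
The fair-versus-loaded-dice point is easy to demonstrate numerically. In this toy Python simulation the bias toward 6 is invented for illustration: a predictor who assumes fairness still says 1/6 for every face, while anyone who knows the weights makes much better predictions:

```python
import random

random.seed(42)  # fixed seed so the run is repeatable

fair = {face: 1 / 6 for face in range(1, 7)}            # naive prediction
loaded = {1: 0.1, 2: 0.1, 3: 0.1, 4: 0.1, 5: 0.1, 6: 0.5}  # hypothetical bias

# Simulate 10,000 rolls of the loaded die.
rolls = random.choices(list(loaded), weights=loaded.values(), k=10_000)
freq_6 = rolls.count(6) / len(rolls)
print(f"observed P(6) is about {freq_6:.2f}")  # roughly 0.5, not 1/6
```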

Given this definition, I would assert that all animals have at least some level of intelligence.
Which they do.
 
However, the algorithm does not realize there's a huge difference between "George Foreman defeated" and "defeated George Foreman".
The difference is from English grammar: the first is subject-verb, while the second is verb-object. Since "defeat" is a transitive verb, it needs both a subject and an object in subject-verb-object order:
George Foreman defeated Joe Frazier
Joe Frazier defeated George Foreman
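
The word-order point can be shown with a toy parser: in a plain English SVO sentence, position alone determines who defeated whom. This sketch (an illustration only; real NLP systems use proper dependency parsers) just splits on the verb:

```python
def parse_svo(sentence, verb="defeated"):
    # Everything before the verb is the subject, everything after the object.
    words = sentence.split()
    i = words.index(verb)
    subject = " ".join(words[:i])
    obj = " ".join(words[i + 1:])
    return subject, verb, obj

print(parse_svo("George Foreman defeated Joe Frazier"))
# ('George Foreman', 'defeated', 'Joe Frazier')
print(parse_svo("Joe Frazier defeated George Foreman"))
# ('Joe Frazier', 'defeated', 'George Foreman')
```

Swapping the names around the verb swaps the roles, which is exactly the distinction the algorithm in question was missing.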

English has numerous tense-aspect-mood combinations in its verbs, as grammarians would say, though most of them use auxiliary verbs. Here are some:

George Foreman defeats Joe Frazier -- general statement
George Foreman is defeating Joe Frazier -- currently happening
George Foreman will defeat Joe Frazier -- future time
George Foreman defeated Joe Frazier -- emphasis on being in the past
George Foreman has defeated Joe Frazier -- emphasis on being completed
George Foreman would defeat Joe Frazier -- conditional (dependence on another event)
George Foreman might defeat Joe Frazier -- hypothetical
George Foreman ought to defeat Joe Frazier -- obligation or something made necessary

How successful has it been for natural-language processing to capture grammatical information and use it?
 