
Thoughts about AI and Consciousness

Jadzia

I was thinking today about the human dream of creating a machine that is aware in the living sense of that word. I came up with an idea that I wanted to discuss, so let's hear what you think.

We learned from Darwin about evolution: everything that emerges within a species tends to be necessary. We don't tend to see redundant body parts emerging, as there would be no motive for natural selection to develop them. It is reasonable to think that everything that emerges within a species is beneficial to that species.

So how does that apply to consciousness/sentience? If we accept that some lifeforms don't have it (e.g. grass), and some do (e.g. humans), then the ones that do must benefit from it, else it wouldn't have evolved.

In which case, sentience must offer something beneficial to a lifeform that cannot be achieved by data processing alone.

That sounds quite profound to me, because we seem to expect that almost everything pertinent to the outward behaviour of autonomous beings can be simulated without the need for a consciousness hovering above it. What does consciousness add to behaviour beyond what data processing can do?

Even if we consider something like emotion: in terms of survival behaviour, the emotions themselves are not important; it is the behavioural changes motivated by them that are. Surely those behavioural changes can be simulated without consciousness?
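
To make that concrete, here's a rough sketch (hypothetical Python; the names, numbers and "emotion" model are all made up for illustration) of an emotion as nothing but a state variable that modulates behaviour:

[CODE]
# Hypothetical sketch: "fear" as a plain state variable that modulates
# behaviour. The survival-relevant output (flee vs. forage) falls out of
# ordinary data processing; no felt experience is involved anywhere.

class Creature:
    def __init__(self):
        self.fear = 0.0  # just a number, not a feeling

    def sense(self, threat_level: float) -> None:
        # Fear rises with threat and decays over time.
        self.fear = 0.8 * self.fear + threat_level

    def act(self) -> str:
        # The behavioural change "motivated by" the emotion.
        return "flee" if self.fear > 0.5 else "forage"

c = Creature()
for threat in [0.0, 0.1, 0.9, 0.0]:
    c.sense(threat)
    print(c.act())  # forage, forage, flee, flee (the fear decays slowly)
[/CODE]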

So I don't know what that extra something is. :confused:

One conclusion that we might draw is that consciousness isn't useful without the ability for the mind to modify behaviour. We wouldn't expect natural selection to choose consciousness without a corresponding cognitive control mechanism. In evolutionary terms, the two would have to occur simultaneously: one without the other doesn't modify behaviour.

So perhaps if we can try to understand what that extra something is, we would have some insight into why some life forms evolve consciousness and some don't.

The obvious dividing line to compare our thoughts across is animal vs vegetable. The former is considered conscious while the latter is not.

Vegetables have no ability to control their environment. They cannot change what soil they are growing in, or move themselves to find food. They can only adapt to their fixed environment. Their growth (I believe) is purely mechanical and void of any underlying mind. What could a mind possibly add to the behaviour of a blade of grass? Surely it would be a tremendous redundancy to evolve the machinery to support consciousness without it being beneficial to its survival?

Animals, on the other hand, do move about. They look for food. They control, shape and adapt their environment to suit themselves. So how does consciousness assist that? Why could this behaviour not be achieved in simulated/data-processing ways?

Of course, what I'm getting at here is that once we understand what that extra something is, and how it could benefit the behaviour (and survival) of a machine, a machine could be evolved to have consciousness too.
 
Of course, the ability to modify its behaviour can be considered a characteristic of intelligence, as is the ability to challenge its own behaviour.

This has implications: for example, even if one programs such a robot not to want to harm humans, it may decide, based on its own experience, that it does want to, or should. This would be disastrous, obviously.
 
So perhaps if we can try to understand what that extra something is, we would have some insight into why some life forms evolve consciousness and some don't.

I think this is one of the main things with human consciousness. A creature which processed data linearly wouldn't have the ability to plan, imagine things, etc., all very useful in the natural environment. In order to do these things you need to think in a non-sequential way, which necessitates sentience. I think, anyway, I dunno. Insight is an interesting thing with humans. You should read The Emperor's New Mind by Roger Penrose; granted, it's a horrible book to read, with an abundance of mathematical formulas, but it deals with the whole question of consciousness, what it is, etc., in relation to AI and physics.
 
The idea of challenging one's own behaviour can be linear. All you have there is a set of different behavioural algorithms which trigger when certain conditions are met, such as repeated failure. We could think of it as a simple mechanic that allows programming to adapt by trying different behaviours.
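
As a rough sketch of what I mean (hypothetical Python; the behaviours and thresholds are invented for illustration), the whole "self-challenging" mechanic can be a few lines of ordinary sequential code:

[CODE]
import random

# Hypothetical sketch: each behaviour is just a routine, and repeated
# failure triggers a switch to an untried one. Purely linear; no mind needed.

BEHAVIOURS = ["dig", "climb", "go_around"]

def attempt(behaviour: str) -> bool:
    """Stand-in for acting in the world; succeeds at random here."""
    return random.random() < 0.3

def adapt(max_failures: int = 3) -> str:
    untried = list(BEHAVIOURS)
    behaviour = untried.pop(0)
    failures = 0
    while True:  # keeps retrying the last behaviour once all are tried
        if attempt(behaviour):
            return behaviour            # keep the behaviour that works
        failures += 1
        if failures >= max_failures and untried:
            behaviour = untried.pop(0)  # condition met: try something else
            failures = 0

print("settled on:", adapt())
[/CODE]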

Planning can be linear in the same way as a chess computer can plan and anticipate.
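
Again a sketch (hypothetical Python; the toy "game" is invented, but the mechanism is the classic depth-limited minimax behind chess programs), showing that look-ahead planning is sequential rule-following all the way down:

[CODE]
# Hypothetical sketch of linear "planning": depth-limited minimax search.

def minimax(state, depth, maximizing, moves, apply_move, score):
    """Look ahead `depth` plies and return the best achievable score."""
    options = moves(state)
    if depth == 0 or not options:
        return score(state)
    results = (minimax(apply_move(state, m), depth - 1, not maximizing,
                       moves, apply_move, score)
               for m in options)
    return max(results) if maximizing else min(results)

# Toy game: the state is a number, a move adds -1, 0 or +1, and the
# maximizer wants the largest number after 3 alternating turns.
best = minimax(
    0, 3, True,
    moves=lambda s: [-1, 0, 1],
    apply_move=lambda s, m: s + m,
    score=lambda s: s,
)
print(best)  # 1: two maximizing turns outweigh the one minimizing turn
[/CODE]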

But what you're suggesting, john, is that a purely mechanical instrument is no more than a linear processor, and mind somehow breaks that? By working more like a quantum computer, perhaps?
 
Yes, you could have linear algorithms working in that way. But, and this is addressed in the Penrose book in detail, as humans we have the ability to use insight, a non-algorithmic process, to develop new theories/actions etc. He uses Gödel's theorem as a case in point. That theorem is essentially a repudiation of mathematical formalism, the view that maths is purely algorithmic, a meaningless game of sorts. Say you have a mathematical system. My limited understanding of it is that the theorem constructs a statement which asserts, in effect, that it cannot be proven within the system. If the system is consistent, that statement is true, yet there is no proof of it within the system. The only way we know it is true is by stepping outside the system and seeing, non-algorithmically, that it must be true. Some of the most powerful mathematical theories have been arrived at through insight.
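
For anyone curious, the formal shape of it (as far as my limited understanding goes, and glossing over the exact consistency conditions) looks roughly like this in LaTeX notation:

[CODE]
% Rough statement of Gödel's first incompleteness theorem (requires the
% amssymb package; the precise consistency conditions are glossed over).
% For any consistent, effectively axiomatized theory $F$ strong enough
% to do arithmetic, there is a sentence $G_F$ with
\[
  F \nvdash G_F \qquad\text{and}\qquad F \nvdash \lnot G_F ,
\]
% where $G_F$ is constructed so that it asserts its own unprovability:
\[
  G_F \;\longleftrightarrow\; \lnot\,\mathrm{Prov}_F\!\bigl(\ulcorner G_F \urcorner\bigr).
\]
% Stepping outside $F$, we can see that $G_F$ must be true -- which is
% the "non-algorithmic insight" Penrose leans on.
[/CODE]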

I would say that yes, we could easily be some type of quantum computer, and probably most animals are too.
 
We learned from Darwin about evolution: everything that emerges within a species tends to be necessary. We don't tend to see redundant body parts emerging, as there would be no motive for natural selection to develop them. It is reasonable to think that everything that emerges within a species is beneficial to that species.
It's more a case of "will this difference cause a disadvantage to the species?" If not, then it has just as much chance of being "selected" as not.

One example is mankind's (and other primate species') inability to create its own Vitamin C, something many other mammals, especially carnivorous mammals like cats and dogs, can do. This trait arose but didn't cause an evolutionary disadvantage, because we were getting enough Vitamin C from eating fruits and so on. Therefore there was no reason for our evolutionary history to select against people with this trait, and that's why people today NEED to get Vitamin C from their food and can't create it inside their bodies from other ingredients.

Likewise, there's no reason for sentience to have been selected against by evolution, so it stuck.
 