• Welcome! The TrekBBS is the number one place to chat about Star Trek with like-minded fans.
    If you are not already a member then please register an account and join in the discussion!

Simulated Consciousness vs Real Consciousness

polyharmonic

Lieutenant Commander
Red Shirt
In Star Trek and other sci-fi universes, the notion of artificial sentient beings is routine. In Trek we see, for instance, Data and the Doctor, among others. But the question is whether these beings represent simulated consciousness or real consciousness. And does it even matter if you can't tell?

Now let us run a thought experiment. Using today's real-life technology, I am going to create a robot with a primitive ability to interact with you. Clearly this primitive robot has no consciousness as it is just a machine that reacts based on programming.

But let's assume that I improve the programming of this robot as technology advances. So now, instead of the robot operating in a way where it clearly is just a machine, it starts to resemble a human in its interaction. You know it's still not a sentient being, though. It just has more advanced programming.

But if we continue to do this, might we not get to the point of creating a "Data"-like robot or android? And if we assume that Data is actually sentient and has real consciousness, then at what point did this occur? Or did it never occur, and is Data just a simulated consciousness so sophisticated that you are fooled?

If in our real life world, there were a lot of "Data" like androids running around, would it not bother you if there was doubt whether your "Data" companion, spouse, friend, or whatever, was really just a machine with a simulated consciousness and no more sentient than your toaster oven in reality?

I think I would always have doubts about whether beings like Data were really truly sentient. And with these doubts it would be hard to have such a being as, say, your spouse or companion. I mean, you'd be living your life with "someone" who was not really truly alive, sentient, conscious and self-aware but merely a machine...

Or if it feels real enough, then it doesn't matter?
 
Now take the above and change "robot" to "human". First we have an infant who's clearly nonsentient, then a child who slowly exhibits more and more sentience, as it learns to better imitate the adult behavior that defines sentience for us. Finally, the mimicry is perfect, and the imitation has become sentience.

No doubt specific points along that development path can be defined as marking the onset of "linguistic ability" or "self-awareness" or "true sentience" or whatnot. And no doubt such definitions can be challenged and debated forever, as their theoretical basis is not one of an easily observable quantum leap, but of a slowly shifting balance of multiple parameters.

So to answer the question... "If it feels real enough, it doesn't matter" has been it for the past million years or so, for our discriminating little species. That's how we grow up. Feeling is the only thing there is; the mechanics of the matter don't really matter. Some of us have nicely self-growing and self-healing bodies, others are like poorly constructed machines that fall apart without extensive care and never manage to develop to their full potential. Some of our minds flourish on their own, others require careful nurturing for growth or even sustenance. In all that variety, it really matters rather little whether our parents assembled us from nuts and bolts or sperm and eggs...

Timo Saloniemi
 
I would argue sentience is a gradual thing, rather than something that either exists totally or not at all. Also it's a human concept used to define ourselves. I don't think it's a tangible thing as such.

In your primitive robot example, I would say it does have some small degree of sentience, despite being a primitive mechanical thing. But then I would also say that a table has some small degree of sentience. Really, really small. More complexity leads to higher sentience. Data's brain is an incredibly complex thing, so I would argue Data is near enough as sentient as any human being.
 
It's an interesting point, but human beings have a base program, just as Data would: DNA. We cannot really surmount our genome, as Data can his program.

And human beings in a sense are taught or given sentience. Most of our knowledge is acquired either from experience or a third party, similar to Data. Data can learn, grow and comprehend new ideas/experiences, just as we can.

In the end, IMO there is no real difference.
 
There are many philosophical books discussing consciousness and sentience, none of which I've read.

However, isn't consciousness defined as an awareness of one's surroundings? If so, then current computers or robots with sensory inputs are certainly conscious, are they not?

Isn't sentience defined as self-awareness? If so, then this quality is much harder to define, whether in humans, other animals, or in computers or robots. However, I think we're a long way off from sentient machines, and even further from a sentient machine which is indistinguishable from a "real" human.

Having stated that with apparent certainty, I now realize that the bigger question is: how will we know whether a machine is "truly" sentient or not? We assume that a human is sentient, because he/she looks more or less like ourselves, but a machine might not look like ourselves and yet could one day be sentient. So to me the quandary might not be that an intimate "person" turns out to not even be a sentient human, but rather that something we assume is a dumb machine is actually a sentient being.

Doug
 
Interesting point, Doug, and one I hadn't considered. It is possible that we could create something that simulates and appears to be conscious, self-aware and sentient but that is ultimately no more sentient than your toaster. But I hadn't considered that we could create something that we assumed was not sentient or could not become sentient and yet in reality was. Kind of like Skynet in the Terminator series, I guess.
 
Alan Turing said it first, I believe: if you cannot tell whether the thing you are conversing with is a human or a computer simulating being a human, then the question of whether the software is actually aware and alive has become moot: it is impossible for you to distinguish it from a living thing, so it might as well be one.

Now for the mind-blower question: we have seen in Star Trek androids that were unaware of their own android nature. So the question becomes: how sure are you that you are not an android? Are you a real person, or are you just programmed to think you are? Are you conscious, or are you just programmed to think you are?
Does it matter?
 
Interesting point, Doug, and one I hadn't considered. It is possible that we could create something that simulates and appears to be conscious, self-aware and sentient but that is ultimately no more sentient than your toaster. But I hadn't considered that we could create something that we assumed was not sentient or could not become sentient and yet in reality was. Kind of like Skynet in the Terminator series, I guess.

Actually, I got the impression that Skynet was supposed to be sentient; it was just that we didn't think it might decide we were superfluous. Much like WOPR/Joshua in WarGames.
But it is a fairly common trope in science fiction, from The Moon Is a Harsh Mistress, where the computer running Luna City has accidentally become sentient and befriended a computer repairman, to "Jerry Was a Man", which originally was about genetically engineered apes, though when adapted for TV they made them manufactured laborers (the story centers on the legal battle to save one from euthanasia), to "The Quality of Life" (TNG), where the Exocomps were supposed to be just useful semi-automated tools, but became much more.
 
polyharmonic, your question cannot be answered at this time in the real world because we don't know what consciousness is. We don't know if it is even a real phenomenon. We think we are conscious but, from what I understand, it could easily be illusionary.

In-universe, the Humans of the 24th century don't seem to be a whole lot closer to understanding it than we are. That doesn't make much sense in-universe but it's understandable since the writers are a product of our present day!

The episode that may offer the most clues is TNG's "Where No One Has Gone Before". In this episode, we learn that thought is a tangible force that is inextricably intertwined with space-time and matter-energy. The Traveler was stunned that Wesley had a rudimentary understanding of this, and he told Picard that our perception of space-time was very narrow. From this we can deduce that consciousness is still a metaphysical concept in the 24th century and poorly understood by Humans. The Travelers had a greater understanding, but the Q seem to be the masters, having complete control of space-time and matter-energy through pure thought.
 
polyharmonic, your question cannot be answered at this time in the real world because we don't know what consciousness is. We don't know if it is even a real phenomenon. We think we are conscious but, from what I understand, it could easily be illusionary.

I don't agree with those comments. I think we all agree that consciousness is an awareness of one's surroundings. Then, it's quite easy to perform an experiment to see if a being reacts to, or can elucidate awareness of, its surroundings. Which part of this would be illusory (not "illusionary")?

Maybe you're mixing up consciousness and sentience. I agree that sentience would be harder to prove in a physical experiment.

Doug
 
polyharmonic, your question cannot be answered at this time in the real world because we don't know what consciousness is. We don't know if it is even a real phenomenon. We think we are conscious but, from what I understand, it could easily be illusionary.

I don't agree with those comments. I think we all agree that consciousness is an awareness of one's surroundings. Then, it's quite easy to perform an experiment to see if a being reacts to, or can elucidate awareness of, its surroundings. Which part of this would be illusory (not "illusionary")?

Maybe you're mixing up consciousness and sentience. I agree that sentience would be harder to prove in a physical experiment.

Doug

Sigh. I can never keep these terms straight. Yes, replace "consciousness" with "sentience."
 
Since, in my opinion, humans are no more than robots made of meat, I don't really see the difference with a robot made of metal, plastic, or whatnot.
 
Data was sentient because he was essentially an approximation of a humanoid. His brain and neural pathways were intended to be that way, albeit more efficient, and there is no reason why he could not have been built to react in an emotional fashion as a result.

I take issue with the Doctor, though. I've never thought he was sentient, merely a complex program that appeared to be sentient because of a large number of programmed responses. I realise that the Moriarty episodes and the way he was treated in Voyager suggest otherwise, but I always thought it was rather questionable. As a program capable of learning, it is possible for the program to become more complex, but a truly sentient being is so complex that I would question whether the ship's computer could store the necessary information on top of everything else.
 
Data was sentient because he was essentially an approximation of a humanoid.

But he appeared to be equally sentient when he was doing a good approximation of being a sea anemone - a detached head sitting on a table!

I don't see how the design of "pathways" that create what you'd accept as sentience would be dependent on Data's body shape. Sure, a human might develop into a somewhat different type of sentient if he were, say, born without arms and legs and had a flapping tail instead, or some other birth defect of that nature. But not drastically so.

Consequently, I don't see why "sentience pathways" couldn't be built into, say, a starship computer that would then run a sentient holographic character. Or a sentient turbolift, for that matter. The pathways wouldn't need to be hardwired: they could no doubt be software-emulated by any computer complex and capable enough. Indeed, a large but "essentially dumb" computer might plausibly and simultaneously run a number of different types of sentient program, in addition to all sorts of nonsentient ones.

It would be pretty absurd for hardware limitations to block a starship computer's path to sentience. Data's sentience was neatly contained within his tiny little head (although his other body parts apparently also prominently featured positronics, as per ST:NEM). A starship computer would be much bigger, even if not positronic; it would be difficult to see any inherent, generic limitations stemming from lack of space or computing power.

...merely a complex program that appeared to be sentient because of a large number of programmed responses.

What else would sentience be if not "a large number of programmed responses"? That's how we humans cope with the world, too: by accumulating a nice library of responses. It doesn't appear that there'd be anything more to it, or even a clear threshold where the number of responses is large enough for sentience. Assorted other species demonstrate aspects of our sentience (the ability to plan, to abstract, to communicate, to tell lies), often going beyond ours in specific fields. And these tend to be species with demonstrably inferior raw processing power to ours; the sky shouldn't be any limit with machines that have demonstrably superior processing power.

Timo Saloniemi
 
Yeah sorry, I meant Data's positronic brain was designed to function like a humanoid brain.

A hologram is created by the ship's computer, although the programming can be hived off. I question how a sentient hologram could be created (e.g. Moriarty) unless the ship was also sentient. A large number of programmed responses is not the same as sentience. Where the dividing line lies is less clear. Let's not forget that Data was incapable of creating another sentient robot with any longevity.

Also, if genetic engineering is outlawed, I think that creating artificial intelligence willy-nilly would also be outlawed, especially given the Enterprise crew's past experiences! I can't fathom where the writers think sentient holograms should fit into the Federation ethos, given the fact that they can't create sentient androids. I'm still not convinced that the EMH should ever have been portrayed as sentient.
 
Once again, what is sentience? What would make a computer program "wake up" and have awareness of its existence? We don't have a clue about that as far as I know.
 
Once again, what is sentience? What would make a computer program "wake up" and have awareness of its existence? We don't have a clue about that as far as I know.

I'm glad you asked (again). Based upon context when the word was used in TNG, I assumed it meant "self-aware." That is, you're aware of yourself as an individual distinct from your surroundings and other individuals, and you're aware of yourself thinking, etc.

However, I just looked it up on dictionary.com, and there it's defined as equivalent to general consciousness, e.g. aware of your surroundings.

Did TNG use sentience in a unique way, or did I just imagine that it's different to consciousness?

Doug
 
http://memory-alpha.org/wiki/Sentience

Tracking down the individual uses of the words in dialogue is left as an exercise to the reader. But it does sound as if "sentient" would be a compliment extended to those species that are very humanlike (i.e. they aren't just clever self-aware machines, but are actually capable of the sort of complex thinking we apply to morals, emotions and social interaction), whereas "sapient" would just tell smart animals or machines apart from dumb animals or machines. Which is pretty much in line with the current usage of the two words, too.

Self-awareness as such probably wouldn't warrant any terminology, as it's more or less impossible to establish. Any sapient entity can fake it, and proving one's self-awareness as a condition separate from one's sapience would be as futile as proving that one possesses a soul. It'd be a subjective feeling, nothing more.

Timo Saloniemi
 
Precisely. The hologram may be sapient but I personally don't think it should be considered sentient (unless augmented somehow). Data I would consider to be sentient because of the huge amount of effort that went into making him that way and the amount of time he was designed to function for.
 
It's kind of interesting how people feel that the Doctor is only a simulated consciousness (or sentience) but Data is truly conscious and sentient. Based purely on their responses and the way they interact with others, I just don't see how one could make this argument. Now I agree that the Doctor could just be a very sophisticated program that appears to be sentient but is in fact no more sentient than the computer I'm typing this on. But then so could Data!
 