
Is M-5 sentient? (The Ultimate Computer)

I've always wondered about the "Norman" androids... if Uhura could have an indestructible body with her brain included, would she still be considered human? Is anyone programmed to respond in this area?
 
I don't really see how a stationary computer bank could be prosecuted for war crimes, no matter how sentient it is. Am I being too shallow? :/
 
I think that in order to answer this question, we (humanity) will need to learn more about what sentience/sapience/self-awareness really means in the first place. Despite my apparent free will, there are times when I'm not even sure that I'm not just a collection of stuff moving on a totally predetermined course that simply happens to resemble a free-willed being well enough to fool me.

I think the best we can do, barring further metaphysical revelations, is to take a being's word for it. By which I mean: is a being capable of asking for its own rights without direct prompting, and not as an obvious consequence of programming? I am. Data was. Koko the gorilla is. M-5? Well, it tried to assert its right to "live", I guess, but we know that *was* an obvious consequence of its programming. So based on what we were shown in the episode (which I wouldn't consider sufficient evidence for such a weighty determination in the real world, but it's all we've got for M-5), I would say no, it isn't sentient.
 
Very well thought-out and -written response. These are things philosophers etc. have been struggling with for centuries.

The statement of yours that I highlighted prompts me to ask another question. What about severely autistic or otherwise uncommunicative humans? Are they non-sentient according to this criterion? We'd all agree they're sentient. So, suppose we extend it to other creatures. Is a dog non-sentient because it can't or won't communicate with us in demanding its rights? Suppose it's trying to assert its rights, but we don't know how to interpret that communication.
 
You're right - I should have specified that I mean to suggest that as a test for entities we don't already have another reason to believe are self-aware/sentient. Even if that reason is just a transitive allowance: they're human, and we're familiar with enough humans whom we know ARE sapient (insofar as we know that for anything) that we feel we can safely assume they all are, *unless* we have hard data showing that they *aren't* (dead humans, for instance).

Dogs DO communicate to assert some of their rights. Cats definitely do. Species chauvinism and a certain amount of neuroscience allow us to believe we are higher-order beings in that regard - but not that they aren't self-aware at all. At least not to anyone paying attention.
 
No. M-5 was a calculator programmed to act a little too much like Daystrom.
He was more like a kid who absorbed some insecurities and anger from his law-abiding father. The father looks on in horror as he sees those insecurities and anger turned to murder. I don't know if the computer was sentient, but it was a great allegory. For the purposes of the show, it had the power of choosing right and wrong, and its "father" was powerless to make it choose right.
 
"Engrams" is a real word, but I don't remember exactly what it means. I don't think it was supposed to mean the sum total of a human consciousness. I don't think Daystrom copied himself. M5 isn't very much like a human mind at all. The "mind patch" of Daystrom's just seems to have made it more independent, more creative about following its orders, but it still just seems to "want" to perform its functions, albeit crazily. It doesn't want to communicate. It doesn't seem to have much of a "self".
 
Sadly, I've known many people who were not "self-aware" ... as far as understanding their own actions.
They are, however, aware that they are people, that they have lives and thoughts and feelings. They have points of view. We are only talking about a very minimal definition of "self-aware". Not just an autonomic, mindless computer program. "Self-aware" here does not refer to some sort of sophisticated philosophical awareness. There just has to be "somebody home".
 
M-5's problems seem a lot like what went wrong with HAL back in 2001. Contemporary audiences were supposed to think that HAL was sentient. But do we? Given what we now know about artificial intelligence, it seems likely M-5 and HAL were just elaborate simulations of awareness.

HAL was definitely sentient - he acts that way, and there's no way to prove otherwise. Indeed, 2010's subsequent explanation for HAL's problem would seem to bear this out: HAL suffered from a very human sense of paranoia. He knew Dave and Frank were going to deactivate him and he thought he was going to die as a result.

If a computer can become so paranoid that it will kill anyone it believes is a threat, then I'd say that makes that computer sentient.

(Plus, another thing that fed HAL's paranoia was that he was ordered to hide the truth of the Jupiter mission from Dave and Frank, which went against everything in his programming.)
 