• Welcome! The TrekBBS is the number one place to chat about Star Trek with like-minded fans.
    If you are not already a member then please register an account and join in the discussion!

"Nothing Human": The avatar of Crell Moset, the ship's computer and the line of demarcation...

Ragitsu

Commodore
Good afternoon.

During the final conversation between the (incomplete) "Crell Moset" and The Doctor, to what extent were the arguments on the morally dubious side an attempt by the computer to accurately represent Moset based on available records, and to what extent was the computer itself drawing from philosophy and medical ethics (or a lack thereof) and, in the process, taking an exceptional creative liberty with its representation? On the one hand ->

EMH: We've gathered some corroborating evidence. It appears that he's telling the truth. You committed a series of atrocities during the Cardassian war. Thousands of Bajorans died on your surgical tables.
MOSET: That's absurd. And even if it were true, I'm only a hologram, and I have no memory of those events. They're not part of my programming.

However, later on, it apparently doubles back to some extent with ->

MOSET: You're a physician. You know there's always a price to pay for the advancement of medical science.
EMH: Sometimes that price is too high. Torture?
MOSET: Your word, not mine. I cured the Fostossa virus, didn't I?

Was the consistency of "Moset"'s sense of self slightly "off" during the writer's exploration of a moral conundrum or does this hold up under scrutiny? Does a dividing line even exist? Would the computer itself argue for the sake of arguing?

Hopefully I've not puzzled any of you too much.
 
It's already a hazy issue how a ship's computer (incapable of sentience) can harbor beings like Moriarty, Vic Fontaine, and the EMH (capable of sentience). I'm already plenty puzzled. :shrug:
 
Curiously, "Moset" did not present a significant amount of (naked?) resistance upon learning that "he" was going to "die"; right up until the end, there was ample debate yet a distinct dearth of pleading.
 
I always assumed the EMH was easily recognizable as a sentient life form because it was easy to see when he grew beyond his default programming. That is to say, there is no conceivable reason for a medical device to like opera. That is him feeling something, a preference with no connection to a patient or to any clinical procedure, and that is a mark of sentience.

Now, a character from a holoprogram can be based on a real person with likes and dreams and desires built in, or on a fictional character of which the same can be said, or on an amalgamation of people, with AI shuffling through all the generated possibilities to fill in likes and dreams and desires so that they appear sentient to those utilizing the program. In any of those cases it is much more difficult to parse where the illusion of sentience ends and true sentience begins.

Take Moriarty, for instance: can anyone truly say that the character of Moriarty as conceived throughout the centuries wouldn't figure out the hidden stats of the reality he finds himself in so as to manipulate them and gain the upper hand? So is that him gaining sentience, or the sentient character he is imitating being just that type of character?
 

I love this explanation. Nice.

Voyager's exploration of holographic existentialism, which was pretty darn deep, resonated with me more than anything TNG did with the concept, and more than Damone on DS9. One question I've pondered (probably in tiresome fashion ;)) is whether any "photonic," left running, would become sentient. I think part of the message of "Nothing Human" (perhaps given away by the intriguing episode title) is no. Like @Ragitsu writes above, the Moset avatar didn't seem to be all that concerned with his possible impending demise.
 
It might depend on the nature of the program. Some photonics, there would be no reason for them to be capable of "growth", or even self-aware.
 
There was "Latent Image" (a biggie: his gradually developed personal feelings badly conflicted with his default medical ethical programming) and that episode where he simulated a flu within his programming, which ended up inadvertently teaching him empathy. What other personal milestones exist?
 