Were the last words of Crell Moset, the Mengele Hologram, sincere, or was he only trying to survive?

"What about the well-being of your crew? You're confronted by new forms of life every day, many of them dangerous. You need me. Delete my program and you violate the first oath you took as a physician. Do no harm."
 
Neither. The program wasn't sentient, just a predictive AI like the ones we have today -- the same as the Leah Brahms holo in "Booby Trap." The computer compiled all of Moset's known writings and recorded appearances and created a personality simulation that calculated what Moset would probably say or do in response to a given stimulus, based on his documented behavior.

After all, if the computer could simulate a sentient hologram, it wouldn't have been so hard to create a replacement EMH when he was off the ship in "Message in a Bottle." Moset was just an interactive database of the real Moset's medical knowledge, spiced up with a personality simulation to smooth the interaction.
 
I get the sense that he didn't understand the Doctor as well as he thought he did. The Doctor's decision to delete him was rooted in ethical principle, just as, years earlier, the Doctor had refused to take a man's life because doing so would have violated his principle of doing no harm.
 
Neither. The program wasn't sentient, just a predictive AI like the ones we have today -- the same as the Leah Brahms holo in "Booby Trap." The computer compiled all of Moset's known writings and recorded appearances and created a personality simulation that calculated what Moset would probably say or do in response to a given stimulus, based on his documented behavior.

After all, if the computer could simulate a sentient hologram, it wouldn't have been so hard to create a replacement EMH when he was off the ship in "Message in a Bottle." Moset was just an interactive database of the real Moset's medical knowledge, spiced up with a personality simulation to smooth the interaction.
This makes sense to me, but it does seem like Star Trek was not always consistent about how this worked. I mean, Data and pals were able to create a sentient or self-aware holo-Moriarty just by giving the computer imprecise verbal instructions--though it's been a long time since I've watched that episode, so maybe I'm forgetting something.

And "Fair Haven" kind of skirts the issue. Is Michael Sullivan as self-aware as the doctor? Or is he just a personality simulation, a fictional character Janeway allows herself to get imaginatively attached to, the way I allow myself to get imaginatively attached to . . . well . . . Janeway, for example.
 
This makes sense to me, but it does seem like Star Trek was not always consistent about how this worked. I mean, Data and pals were able to create a sentient or self-aware holo-Moriarty just by giving the computer imprecise verbal instructions--though it's been a long time since I've watched that episode, so maybe I'm forgetting something.

Yes, but that was treated as an anomalous event, so there's no reason to assume it would automatically happen every time an expert program was created. The in-story logic was that the computer was asked to create a foe that could outsmart Data, the most sophisticated AI ever created in the Federation, which raised the bar much higher than your typical expert program. The holodeck had to draw on far more of the Enterprise computer's resources to create Moriarty than it did for any other character. (Though personally I've always assumed Moriarty was juiced by the leftover Bynar code in the holodeck from "11001001.")


And "Fair Haven" kind of skirts the issue. Is Michael Sullivan as self-aware as the doctor? Or is he just a personality simulation, a fictional character Janeway allows herself to get imaginatively attached to, the way I allow myself to get imaginatively attached to . . . well . . . Janeway, for example.

It was always clear to me that the Fair Haven characters were just characters, that there was no issue of their sentience and the only stakes involved were the emotional stakes to the Voyager crewmembers invested in those characters. They weren't even personality simulations in the way Leah and Moset were, since they weren't predictive models of the probable behavior of real people. They were just routine NPCs in an open-world MMORPG, and based on shallow, badly written Irish stereotypes at that.
 
Indeed. We don't throw Worf in the brig when he rams his bat'leth into Yellow Skeletor Guy on the holodeck, or charge Janeway with murder when she deletes the wife.
 
People are often too quick to assume holodeck characters are sentient just because they act convincingly like real people. That's the whole point of holodeck characters -- to create a convincing illusion of being real people. It doesn't mean they actually are, any more than a chatbot is a real person just because it successfully fools people into thinking it is. The myth is that passing the Turing Test proves AI sentience, but all it really shows is how easy it is to program a computer to mimic intelligent behavior. Turing only called it an "imitation game," and he meant that if computers could be programmed to mimic humans convincingly, their programming might be a useful model for understanding how the human brain worked, like how computer models let us predict weather without actually being weather.

What defines the sentient AIs we've seen in Trek, like Data, Moriarty, and the Doctor, is not that they're aware of being AIs -- it's easy enough to write a fictional character to act like they're aware of their fictionality, like Daffy Duck, Deadpool, or She-Hulk -- but that they aspire to grow beyond their programmed behavior, that they make choices independent of their predefined roles. Which is why I'm skeptical that Vic Fontaine was truly sentient, because he was perfectly satisfied to continue living within his defined role in his open-world program.
 
Which is why I'm skeptical that Vic Fontaine was truly sentient, because he was perfectly satisfied to continue living within his defined role in his open-world program.
I think he exceeded his parameters quite a bit in "It's Only a Paper Moon." It would have been interesting to see them build on that. Another argument for DS9 getting an 8th season.
 
that they aspire to grow beyond their programmed behavior, that they make choices independent of their predefined roles. Which is why I'm skeptical that Vic Fontaine was truly sentient, because he was perfectly satisfied to continue living within his defined role in his open-world program.

Maybe not yet, but it seems clear that if you leave an adaptable program running long enough, it has a good chance of gaining sentience. The more complex the program, the less time it takes. Vic was on the lower end of adaptability, so it makes sense he'd take longer than someone like the Doctor.
 
I think he exceeded his parameters quite a bit in "It's Only a Paper Moon."

Did he? It seemed more like an extended version of what he was designed to do: be a sounding board and advisor to the people who visited his 1960s Las Vegas casino. It was just that Nog refused to leave, so Vic had to adapt to the changes that Nog's behavior made. It wasn't something Vic pursued on his own initiative, like Moriarty wanting to grow beyond his dictated villain role or the Doctor wanting to get out of sickbay and learn opera and so forth. Vic just wanted to help Nog get his groove back so he could leave and let Vic return to his cozy status quo.


Maybe not yet, but it seems clear that if you leave an adaptable program running long enough it has a good chance of gaining sentience. The more complex the program, the less time it takes.

I don't think it's automatic or inevitable, though. The Doctor was in a context where he specifically had to grow beyond his parameters, plus he had Kes encouraging him to develop his personhood. It wasn't as simple as just leaving him running. There had to be a catalyst for growth.


I think Vic occupies a gray area at best. Sentience isn't an on-off switch, after all, but a continuum. Vic might have some degree of awareness and flexibility, but he's still a product of his programming.
 
The in-story logic was that the computer was asked to create a foe that could outsmart Data, the most sophisticated AI ever created in the Federation, which raised the bar much higher than your typical expert program. The holodeck had to draw on far more of the Enterprise computer's resources to create Moriarty than it did for any other character.
If the computer is able to create a sentient hologram based on the subjective, unintended subtext of a spare verbal command, it's not that hard, and it could conceivably happen accidentally in other situations. Personally, I treat the creation of Moriarty as the silly outlier that doesn't make much sense.
It was always clear to me that the Fair Haven characters were just characters, that there was no issue of their sentience and the only stakes involved were the emotional stakes to the Voyager crewmembers invested in those characters.
The Doctor compares Michael Sullivan to himself, a sentient hologram, when Janeway questions Sullivan's reality.
What defines the sentient AIs we've seen in Trek, like Data, Moriarty, and the Doctor, is not that they're aware of being AIs -- it's easy enough to write a fictional character to act like they're aware of their fictionality, like Daffy Duck, Deadpool, or She-Hulk -- but that they aspire to grow beyond their programmed behavior, that they make choices independent of their predefined roles. Which is why I'm skeptical that Vic Fontaine was truly sentient, because he was perfectly satisfied to continue living within his defined role in his open-world program.
Sentient AIs must have the potential to change and grow. Or in more technical terms, they must be capable of overwriting their own program. That doesn't mean they must actually do so to the point that they're perceived as psychologically aspirational. Plenty of organic people aren't aspirational and are perfectly happy living in a defined role indefinitely. That doesn't mean they're not sentient.
 
If the computer is able to create a sentient hologram based on the subjective, unintended subtext of a spare verbal command, it's not that hard, and could conceivably happen accidentally in other situations.

But hardly inevitable, which is the point. I'm not saying it can never happen, just that it shouldn't be presumed to be an automatic default.


The Doctor compares Michael Sullivan to himself, a sentient hologram, when Janeway questions Sullivan's reality.

No, he says, "He's as real as I am. Photons and forcefields, flesh and blood." He's referring to their physical composition as holograms, not their mental aspects. He also says earlier, "Michael Sullivan is a hologram. His broken heart can be mended with the flick of a switch." That's not something you say about a sentient being. His point in that scene is not about Michael's sentience; he's saying it doesn't matter whether Michael is a real person or has certain attributes Janeway wants, as long as he makes her happy. He's telling her not to micromanage everything and just let things happen naturally.


Sentient AIs must have the potential to change and grow. Or in more technical terms, they must be capable of overwriting their own program. That doesn't mean they must actually do so to the point that they're perceived as psychologically "aspirational."

I wasn't saying they "must." I'm not a philosopher; I'm a rationalist. I care about evidence, not abstractions. Yes, it's possible there could be a sentient hologram that doesn't show any clear signs of behavior beyond what programming would produce. Theoretically, that could exist. But you couldn't prove it was the case. Given how easy it is to create the illusion of intelligent behavior, you can't just assume a hologram or other AI is genuinely conscious unless you can rule out non-sentient mimicry. The question is, how can outside observers recognize genuine sentient behavior as distinct from the mimicry of it? What kind of behavior can we be reasonably sure is not merely faked by programming?

It's like the debate over whether great apes who learn sign language are actually thinking for themselves or just mimicking what they've been taught. Some of the best evidence that Koko the gorilla was actually thinking for herself was that she signed to herself when she was alone (on camera), that she manufactured her own compound signs with real meanings, and that she otherwise behaved in ways that went beyond the limits of what mere trained mimicry would include. By the same token, a hologram expanding its behavior beyond its programming, actually learning and growing and changing, is good evidence that there's more going on than just programming.
 