"What about the well-being of your crew? You're confronted by new forms of life every day, many of them dangerous. You need me. Delete my program and you violate the first oath you took as a physician. Do no harm."
Neither. The program wasn't sentient, just a predictive AI like the ones we have today -- the same as the Leah Brahms holo in "Booby Trap." The computer compiled all of Moset's known writings and recorded appearances and created a personality simulation that calculated what Moset would probably say or do in response to a given stimulus, based on his documented behavior.
After all, if the computer could simulate a sentient hologram, it wouldn't have been so hard to create a replacement EMH when he was off the ship in "Message in a Bottle." Moset was just an interactive database of the real Moset's medical knowledge, spiced up with a personality simulation to smooth the interaction.
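To put that in present-day terms, a "personality simulation" of that sort is basically retrieval plus response ranking. Here's a throwaway Python sketch of the idea -- the corpus, the scoring, and every name in it are invented for illustration, since the show obviously never specifies an implementation:

```python
# Toy "personality simulation": an interactive database of a person's
# documented statements, plus a crude relevance score that predicts
# what they'd most likely say in response to a given stimulus.
# All data and names below are invented for illustration.

from collections import Counter

# The "interactive database": situation snippets paired with documented
# responses, as if compiled from the subject's writings and recordings.
CORPUS = [
    ("treating an unknown alien pathogen",
     "Isolate the organism first; treatment without isolation is guesswork."),
    ("a patient refuses an experimental procedure",
     "Consent is a detail. The cure is the point."),
    ("colleagues question your methods",
     "Results justify methods. History remembers the cure, not the protocol."),
]

def _bag(text: str) -> Counter:
    """Lowercased bag-of-words counts -- the crudest possible relevance model."""
    return Counter(text.lower().split())

def simulate_response(stimulus: str) -> str:
    """Return the documented response whose situation best overlaps the stimulus.

    This is prediction from recorded behavior, not cognition: the simulated
    personality can never say anything that isn't already in its corpus.
    """
    stimulus_words = _bag(stimulus)
    _, best_response = max(
        CORPUS,
        key=lambda entry: sum((stimulus_words & _bag(entry[0])).values()),
    )
    return best_response

print(simulate_response("the crew questions whether your methods are ethical"))
# -> "Results justify methods. History remembers the cure, not the protocol."
```

Nothing in that sketch originates behavior, which is the sense in which holo-Moset could be an expert database with a personality layer and still not be anyone.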
This makes sense to me, but it does seem like Star Trek was not always consistent about how this worked. I mean, Data and pals were able to create a sentient or self-aware holo-Moriarty just by giving the computer imprecise verbal instructions--though it's been a long time since I've watched that episode, so maybe I'm forgetting something.
And "Fair Haven" kind of skirts the issue. Is Michael Sullivan as self-aware as the doctor? Or is he just a personality simulation, a fictional character Janeway allows herself to get imaginatively attached to, the way I allow myself to get imaginatively attached to . . . well . . . Janeway, for example.
> If the computer is able to create a sentient hologram based on the subjective, unintended subtext of a spare verbal command, it's not that hard, and it could conceivably happen accidentally in other situations. Personally, I treat the creation of Moriarty as the silly outlier that doesn't make much sense.

The in-story logic was that the computer was asked to create a foe that could outsmart Data, the most sophisticated AI ever created in the Federation, which raised the bar much higher than your typical expert program. The holodeck had to draw on far more of the Enterprise computer's resources to create Moriarty than it did for any other character.
> The doctor compares Michael Sullivan to himself, a sentient hologram, when Janeway questions Sullivan's reality.

It was always clear to me that the Fair Haven characters were just characters, that there was no issue of their sentience and the only stakes involved were the emotional stakes to the Voyager crewmembers invested in those characters.
> Sentient AIs must have the potential to change and grow. Or in more technical terms, they must be capable of overwriting their own program. That doesn't mean they must actually do so to the point that they're perceived as psychologically aspirational. Plenty of organic people aren't aspirational and are perfectly happy living in a defined role indefinitely. That doesn't mean they're not sentient.

What defines the sentient AIs we've seen in Trek, like Data, Moriarty, and the Doctor, is not that they're aware of being AIs -- it's easy enough to write a fictional character to act like they're aware of their fictionality, like Daffy Duck, Deadpool, or She-Hulk -- but that they aspire to grow beyond their programmed behavior, that they make choices independent of their predefined roles. Which is why I'm skeptical that Vic Fontaine was truly sentient, because he was perfectly satisfied to continue living within his defined role in his open-world program.
> I think he exceeded his parameters quite a bit in "It's Only a Paper Moon". It would have been interesting to see them build on that. Another argument for DS9 getting an 8th season.

Maybe not yet, but it seems clear that if you leave an adaptable program running long enough it has a good chance of gaining sentience. The more complex the program, the less time it takes.
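For what it's worth, "capable of overwriting their own program" does map onto a real distinction in software: whether a program's behavior is fixed when it's authored, or is data the running program is allowed to rewrite. A toy sketch of the difference, with classes, rules, and scores that are entirely invented (not anything from the shows):

```python
# Toy contrast between a fixed-role character and a program that can
# "overwrite its own program." The classes, rules, and scores below are
# invented for illustration; nothing in the shows specifies any of this.

class FixedCharacter:
    """A character whose role is authored once and never changes."""

    def act(self, situation: str) -> str:
        # Whatever happens, the behavior is the same scripted role.
        return "play the lounge set, same as every night"

class SelfRewritingProgram:
    """A program whose behavior rules are data it is allowed to rewrite."""

    def __init__(self) -> None:
        # The "program" is this rule table -- and it's mutable at runtime.
        self.rules: dict[str, str] = {"default": "follow the script"}

    def act(self, situation: str) -> str:
        return self.rules.get(situation, self.rules["default"])

    def learn(self, situation: str, outcome_score: float) -> None:
        """Overwrite the rule for a situation when the old behavior scored poorly."""
        if outcome_score < 0.5:
            self.rules[situation] = f"improvise a new response to: {situation}"

agent = SelfRewritingProgram()
print(agent.act("crew member in crisis"))   # -> "follow the script"
agent.learn("crew member in crisis", outcome_score=0.1)
print(agent.act("crew member in crisis"))   # -> rewritten behavior
```

The claim about the Doctor and Moriarty is that they're something like the second kind, while a character merely written to seem self-aware can still be the first.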