
How the Doctor deals with death

Laura Cynthia Chambers

Latent Image - After having to choose who lives and who dies in triage (picking the one he knows better over another crew member never referenced until this episode), the Doctor has a breakdown and can't function properly until his memory is wiped.

Real Life - Faced with the inevitable death of his holofamily daughter Belle (someone he grows close to; despite her being "less real" than he is, she is family for the purposes of the program - like crying when your favorite TV/book character dies), he shuts down the program and refuses to see it through until convinced to do so. Afterwards, he never revisits the program again.

Is it a limitation of his programming? Originally, he is supposed to treat the person he was activated to help, move on quickly to whoever is next, then be shut down. Most emergencies don't last anywhere near as long as Voyager's stranding did, after all, and he is only meant to be a stopgap, although I suppose his program ought to be capable (without active editing) of learning from previous situations and applying that to new ones.

Does anybody else have other examples?
 
Latent Image - After having to choose who lives and who dies in triage (picking the one he knows better over another crew member never referenced until this episode), the Doctor has a breakdown and can't function properly until his memory is wiped.

That was just an oversight in Zimmerman's programming. The Doctor simply didn't have any tools for dealing with a triage situation where neither patient had a clear advantage over the other.

A better solution than letting the Doc mentally tear himself apart is to modify his triage subroutine as follows: in the event of a decision where all triage protocols have been exhausted and no clear advantage can be discerned objectively, it is permissible to make a subjective decision.
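Something along these lines - a purely hypothetical sketch, with invented patient fields and invented criteria, nothing from the show - is what I mean: exhaust the objective protocols first, and only when they all come out even fall back on an explicitly permitted subjective pick instead of locking up:

```python
# Hypothetical sketch of the suggested triage patch.
# Patient fields and the ordering of criteria are invented for illustration.
from dataclasses import dataclass
import random

@dataclass
class Patient:
    name: str
    time_to_critical: float  # minutes until the condition becomes fatal
    survival_chance: float   # estimated probability of surviving treatment

def choose_patient(a: Patient, b: Patient) -> Patient:
    # Objective protocols first: the more urgent case, then the more savable one.
    if a.time_to_critical != b.time_to_critical:
        return a if a.time_to_critical < b.time_to_critical else b
    if a.survival_chance != b.survival_chance:
        return a if a.survival_chance > b.survival_chance else b
    # All objective protocols exhausted with no clear advantage:
    # a subjective (here simply arbitrary) choice is explicitly permitted,
    # so the program acts instead of tearing itself apart over the tie.
    return random.choice([a, b])
```

The point isn't the particular tie-breaker; it's that the subroutine terminates with a decision in every case.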

Real Life - Faced with the inevitable death of his holofamily daughter Belle (someone he grows close to; despite her being "less real" than he is, she is family for the purposes of the program - like crying when your favorite TV/book character dies), he shuts down the program and refuses to see it through until convinced to do so. Afterwards, he never revisits the program again.

This is, again, bad programming, this time by B'Elanna when she redid the program. The object of the game was to give the Doctor the overall experience of having a family. What he got was the absolute worst-case scenario.

As for him never revisiting it, that's a matter of Voyager relentlessly hammering the Big Red Reset Button.
 
That was just an oversight in Zimmerman's programming. The Doctor simply didn't have any tools for dealing with a triage situation where neither patient had a clear advantage over the other.

A better solution than letting the Doc mentally tear himself apart is to modify his triage subroutine as follows: in the event of a decision where all triage protocols have been exhausted and no clear advantage can be discerned objectively, it is permissible to make a subjective decision.

Sounds like a classic First Law conflict. Asimov was able to foresee that sort of thing. Centuries later, Zimmerman wasn't.
 
Sounds like a classic First Law conflict. Asimov was able to foresee that sort of thing.
If we're talking "a robot cannot harm a human, or through inaction allow a human to come to harm", I think that's about it. Two humans in danger, one robot. Right?
 
I don't have any examples, but I was wondering about the EMH backup from "Living Witness." Will he also have a meltdown in the future? Does he learn about Janeway's tampering, which presumably happened originally in Season 3? It doesn't really matter in the long run, but it just occurred to me the other day.
"Latent Image" is probably my favourite episode of the whole franchise.
 
Had I been the one talking to the Doc in "Real Life", I would have just reminded him: "You created that program to experience the everyday reality of having a family. Burying a child is not an everyday experience; it's the ultimate nightmare." Then I would urge him to continue with the program as B'Elanna wrote it, but modify it so that Belle suffers a less severe injury, because having a kid get hurt is a far more common experience and rates inclusion.
 
I remember at one point The Doctor talks about being apprehensive about being turned off, because he never knows how much time will pass.

In "Living Witness", it is 700 years… but that wasn't really him. It was a backup copy of him.

He feared it, just like we fear being "shut off". His memories are stored in Sickbay and in the computer… and sometimes in his mobile emitter (which has reappeared in Star Trek: Picard… also, in the fan series Star Trek: Renegades, they gave it to the holographic engineer, played by the actor who played "John Connor"…). Do you know, the only time we ever saw the main computer of Voyager was in the da Vinci episode, "Concerning Flight"? It's just a small machine about the height of a man…

The main computer of the Enterprise-D was about the size of those cylindrical buildings in Century City! Times three… We got to see one level of that in TNG's "Evolution", and that computer at one point generated a whole new emergent life form…

Maybe Voyager's bio-neural, cold-catching gel packs store his memories… but the fact is he will survive as long as the memory circuits that contain "him" still exist…

I think the episodes mentioned explore his thoughts about mortality very well, but he doesn't really get into it that much. I don't recall if he ever talks about his own fears about mortality; he is more worried about his patients, is the impression that I got… Compare to "Mortal Coil", where Neelix discovers that his magic forest wasn't really there… That was a huge crisis of faith for Neelix.

The Doctor was only ever worried about losing his memories. And at one point he did lose a lot of them, but they trickled back somehow…

Kind of an odd thing for an immortal computer program to worry about. We want to compare him to Data, but Data was a real, physical person; his programming was inside of himself and not really stored anywhere, except on that one neuron that Bruce Maddox used… But Data had many thoughts about it, expressed in many episodes, and finally at the end of Nemesis. And he continues that same train of thought at the end of season one of Picard.

But the Doctor never really discusses it, when I think about it.
 
Yeah. While in space there are many ways for an ordinary individual to die or be injured, a computer program could get scrambled by the latest software update or by a simple electrical overload, even in one of the safest places.

If the mobile emitter had malfunctioned and cast multiple, imperfect projections of the Doctor simultaneously in Sickbay and on a planet's surface, he'd become the legendary ghost that haunts the planet, nattering on to people who aren't visible there (Voyager's crew). I would've liked to see that.
 
If we're talking "a robot cannot harm a human, or through inaction allow a human to come to harm", I think that's about it. Two humans in danger, one robot. Right?
That's right.

And for the Doctor to end up unable to act specifically because more than one human life is threatened: That's a pretty major programming oversight. Your modification to his triage subroutine is about right, but it absolutely should have been there from the factory. Yes, of course he's the "emergency" medical hologram. He's not always on duty and not necessarily capable of the same long-term diagnosis/care as an organic doctor ... but these are emergencies. He's supposed to be designed for them, and only for them. If medical emergencies can arise that render him unable to make a decision without crashing, then his primary purpose has bugs. In that case he's no better than beta software and not qualified for the job. His installation in multiple starships should never have been approved, regardless of how he grew later, or how many "improved" Marks came after him.
 
His installation in multiple starships should never have been approved, regardless of how he grew later, or how many "improved" Marks came after him.
Any new software has glitches. The more complex the program, the more inevitable issues are.
 
Any new software has glitches. The more complex the program, the more inevitable issues are.
I mean, it's even noted in the show (one of its higher points) that his program was not designed to run unassisted. He was meant to assist a chief medical officer or senior medical officer, someone with the ability to make the decision regarding life or death, something that organic beings manage in crises but a computer program struggles with because there's no obvious logical choice. He lacks the "intuition" that would come from training, consultation and experience.

He was designed for medical emergencies, but as an assistant, not as the primary decision maker.
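As a rough sketch (everything here is invented for illustration, not canon), the difference between the intended role and the role Voyager forced on him might look like this: the program defers the genuinely undecidable calls, and simply has no defined behavior when there's nobody left to defer to:

```python
# Hypothetical sketch: an assistant-style EMH recommends and escalates
# rather than owning life-or-death ties. All names are invented.
from typing import Optional

class SupervisingPhysician:
    """Stand-in for the organic CMO the Mark I was meant to work under."""
    def decide(self, candidates: list[str]) -> str:
        return candidates[0]  # a person applies the judgment the program lacks

def triage(candidates: list[str], objectively_tied: bool,
           cmo: Optional[SupervisingPhysician]) -> str:
    if not objectively_tied:
        return candidates[0]           # the protocols settled it; proceed
    if cmo is not None:
        return cmo.decide(candidates)  # designed path: escalate to a person
    # Voyager's situation: no senior officer to escalate to, and no rule
    # telling the program what to do instead - the undefined case that
    # "Latent Image" turns into a breakdown.
    raise RuntimeError("no supervising physician available: behavior undefined")
```

Seen that way, the bug isn't that he can't choose; it's that the design assumed the choice would never be his to make.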
 
That's right.

And for the Doctor to end up unable to act specifically because more than one human life is threatened: That's a pretty major programming oversight. Your modification to his triage subroutine is about right, but it absolutely should have been there from the factory. Yes, of course he's the "emergency" medical hologram. He's not always on duty and not necessarily capable of the same long-term diagnosis/care as an organic doctor ... but these are emergencies. He's supposed to be designed for them, and only for them. If medical emergencies can arise that render him unable to make a decision without crashing, then his primary purpose has bugs. In that case he's no better than beta software and not qualified for the job. His installation in multiple starships should never have been approved, regardless of how he grew later, or how many "improved" Marks came after him.

Is it possible that this capacity was compromised precisely because he was allowed to grow, form attachments to the crew, and reflect upon his own actions? He does make the choice of whom to treat in the moment itself without a problem, and he chooses Kim. It's only afterwards, when he starts thinking about his decision, that he starts to destabilize. I think the odds are decent that he never would have done that had he been the personality-poor tool he was the day he was first switched on, also because he would only have been on when there was someone to treat.
 
This problem is probably why the Mark 1 was shelved.


I don't think that was a good reason for Starfleet to shelve the Mark 1 hologram. Especially since many other doctors within Starfleet are organics subject to their own biases and emotional responses in certain situations. The Doc's emotional breakdown in "Latent Image" strikes me as a sign that, in his own way, he had become like many other doctors or sentient beings, both in Starfleet and in other organizations and societies.
 
This problem is probably why the Mark 1 was shelved.

I don't think that was a good reason for Starfleet to shelve the Mark 1 hologram. Especially since many other doctors within Starfleet are organics subject to their own biases and emotional responses in certain situations. The Doc's emotional breakdown in "Latent Image" strikes me as a sign that, in his own way, he had become like many other doctors or sentient beings, both in Starfleet and in other organizations and societies.

Here's a quote from "Life Line" on the subject.

ZIMMERMAN: "Because you're defective. Emergency Medical Hotheads. Extremely Marginal Housecalls. That's what everyone used to call the Mark Ones until they were bounced out of the Medical Corps."
 
Are you saying that the Doctor should have been treated as a mere tool and nothing else? That his programming or individuality should not have been allowed to develop, even with growing pains? Or what are you trying to hint at with Zimmerman's quote?
 
If we're talking "a robot cannot harm a human, or through inaction allow a human to come to harm", I think that's about it. Two humans in danger, one robot. Right?
The '60s German SF series "Raumpatrouille" (Space Patrol) had some sort of primitive robots who were programmed with a rule that "a robot may not in any circumstance do a human any harm", or something like that.

That didn't prevent some robots from actually seizing power on a mining colony. When some of the human workers started to argue and kill each other, the robots became neurotic and took over the colony.

Had I been the one talking to the Doc in "Real Life", I would have just reminded him: "You created that program to experience the everyday reality of having a family. Burying a child is not an everyday experience; it's the ultimate nightmare." Then I would urge him to continue with the program as B'Elanna wrote it, but modify it so that Belle suffers a less severe injury, because having a kid get hurt is a far more common experience and rates inclusion.

On some occasions, the Doctor's program made him almost "too human".

Like in "Real Life" (hmm, why do I always write "Still Life", which is an Iron Maiden song?), in the situation you describe above. I would have told the Doctor the same as you did.

We also had that scenario in "Heroes and Demons", when the Doctor became so devastated over Freya's death that he refused to call himself Schweitzer anymore.

He could easily have replayed the Beowulf holoprogram in a version where Freya wouldn't have died, then created a new program in which he marries Freya and lives happily with her whenever he's not on duty in Sickbay.
 
The '60s German SF series "Raumpatrouille" (Space Patrol) had some sort of primitive robots who were programmed with a rule that "a robot may not in any circumstance do a human any harm", or something like that.

That didn't prevent some robots from actually seizing power on a mining colony. When some of the human workers started to argue and kill each other, the robots became neurotic and took over the colony.

It was probably something similar to Asimov's First Law of Robotics: "A robot may not harm a human, or through inaction allow a human to come to harm." The robots re-interpret the law to state that they must stop humans from harming each other.
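In toy code terms (an invented illustration, not anything from either show), the loophole sits in the inaction clause: once standing by while humans fight counts as "allowing harm through inaction", intervening, up to and including seizing the colony, stops being forbidden and starts being required:

```python
# Toy illustration of stretching the First Law's inaction clause.
# Entirely hypothetical; not from "Raumpatrouille" or Asimov's actual stories.
def intervention_required(humans_harming_each_other: bool,
                          intervention_harms_a_human: bool) -> bool:
    if intervention_harms_a_human:
        return False  # direct harm to a human stays forbidden
    # "...or through inaction allow a human to come to harm":
    # watching workers kill each other is read as forbidden inaction,
    # so a takeover that harms no one directly becomes obligatory.
    return humans_harming_each_other

# Workers fighting, takeover harms no one directly -> the robots must act.
assert intervention_required(True, False)
```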
 
It was probably something similar to Asimov's First Law of Robotics: "A robot may not harm a human, or through inaction allow a human to come to harm." The robots re-interpret the law to state that they must stop humans from harming each other.
"Raumpatrouille" was first aired in Germany in 1966, actually some months before TOS was aired in the US for the first time so I guess that those who made that series had read Asimov's books.
 