• Welcome! The TrekBBS is the number one place to chat about Star Trek with like-minded fans.
    If you are not already a member then please register an account and join in the discussion!

Moriarty in 'Ship in a Bottle'

JesterFace

Fleet Captain
I really like this episode. Watching it recently got me thinking a few things, though: would it be possible to fool Moriarty like that?

Picard, Data and Barclay were on the holodeck planning to use the same trick on Moriarty that he had used on them: capturing Moriarty on a fake holodeck. I was wondering, it might have been a good idea for Moriarty to keep an eye on everybody and what they were doing. Somehow our crew members were able to pull off this plan without Moriarty knowing about it. I'd imagine Moriarty had ears everywhere, listening to what the crew members were up to on the holodeck... or maybe he was vulnerable like the original Moriarty in Sherlock Holmes; he just missed something and was "beaten". Is his character "programmed" to lose, one way or the other? I have no knowledge of the Sherlock Holmes character; I'm assuming Moriarty was supposed to lose, as he was the evil character?
 
Yes, this was a great episode. I liken it to Inception / The Matrix in a way. I think Moriarty just underestimated them, due to the subtle mistakes he made (e.g. the left-/right-handed Geordi) and the big disadvantage of having someone like Data there. One would think he would be the master of the holodeck, since he is part of the system; he most probably had an extra perception/sense about everything in its reality. But since it was an hour-long episode, I'm quite happy it ended in a clever (almost diabolical) way. I do kind of feel bummed that Moriarty is still where he is, and he did kind of grow out of that evil role. Despite using evil means, he did have the best moral cause from his perspective...
 
In the literature, Moriarty is a man who is actually Holmes' equal, a man who Holmes could only beat by sacrificing himself. This Moriarty doesn't have to contend with Holmes however. That dynamic is moot. He really is only Moriarty by name, & by some similar personality traits.

Apart from that, he's a computer construct, made with the sole design of being able to defeat Data. While that means a type of consciousness is given to him, it also means, in both appearances, he is a construct of the Enterprise computer, that is attempting to match the capabilities of a positronic artificial intelligence, which we know the Enterprise computer is not. That's what I like about Ship in A Bottle. It's the final game between Data & Moriarty. Elementary Dear Data never let that match play out.

Geordi's initial premise was flawed, because trying to get the Enterprise computer to match wits with Data is a losing battle for the former, & no matter how self-realized Moriarty is, he doesn't understand that the designer of his mind can never be a match for Data's. This is why he'd have such a crucial flaw in his plan, one that he was either ignorant of, overlooked, or had been hoping no one would notice: the left-handed glitch in his own programs. It's a glitch that I believe Moriarty himself must've somehow created, in order to get the crew to think the programming was broken & send someone to set him fully free... which is what happened. Barclay traced the glitch to the protected memory that Moriarty was stored in. That's very unlikely to be an accident, imho. It was, however, possibly a Persian flaw: once his programming had that glitch, he had to keep it in there to run his program.

Moriarty doesn't know that Data will always be impossible to beat, because he's maybe never figured out that Data is a better computer than the one that made his own mind. At least that's my take on it.
 
LaForge made a very direct request to the computer: to create an opponent capable of defeating Data. He did not request an opponent that would defeat Data, or explicitly override the default assumption that opponents in the holo-context exist in order to be beaten. So yes, I'm certain Moriarty was always doomed to fail, by design, and it is only a matter of the computer's imagination which exact form his defeat will take at each specific instance. (Sometimes it takes a lot of imagination, what with his opponents being so stupid and all...)

On the other hand, I see no reason whatsoever to think that Data is the better computer. Nobody has ever claimed that he would be, not even his creator Soong whose one superpower is vanity.

Timo Saloniemi
 
I always hated the idea that the holodeck could accidentally create a sentient being. Even if it was just that holodeck (say, because of the probe from The Big Goodbye, which could explain why Starfleet couldn't figure it out in the years between episodes), then why in the name of Shakari was that holodeck still in operation? Is there like a warning when you open the menu not to create life?
 
How could the computer tell? Holodecks already regularly run programs where the whole point is to convincingly fake life. There is always demand for more variety and detail, and never an issue with capacity. Starfleet is fine with fake lives filling its starships (and the civilian world with those inhabiting the regular dwellings and raising the kids).

Creating life never put Noonien Soong in jail, any more than it did Jim Kirk and Carol Marcus. And the attitude of our heroes towards Moriarty in the introductory episode is in line with that: they are not surprised in the slightest that the computer has the ability to create an adversary of the specified capabilities, and they immediately realize the consequences, not treating this as a bizarre supernatural phenomenon but as a grave domestic accident instead. Creating life is fine, especially when there are safeguards all over the place that have been built for the specific purpose of creating life. But letting the Chief Engineer of a starship abuse his privileges is a rare human error that probably doesn't plague the average civilian holodeck, so the domestic accident may well be without precedent.

Timo Saloniemi
 
Regarding the original question, I think that Moriarty did not have that god-like control over the holodeck, but only because he was limited by his original programming.

Yes, in theory he could have seen everything and controlled everything, but in practice he still thought of himself as a human being and acted as such.
 
In the literature, Moriarty is a man who is actually Holmes' equal, a man who Holmes could only beat by sacrificing himself.

Thanks for the info, I had no idea. :)

If Picard, Data and Barclay know this and his character very well, maybe the answer can be found there. Picard and others know Moriarty and the computer he lives in, Moriarty didn't have that advantage.
 
Data actually quotes that very fact about Moriarty in "Elementary, Dear Data", when introducing the character to the audience. And never mind that later Holmes stories turn the tables, establishing that Holmes' sacrifice was but a ruse, or was turned into one, and he in fact survived and demonstrated his superiority over Moriarty.

But of course, in the very scene where Picard and Data give the lowdown to the character himself, he says: "Yes, I know, but I'm more now." Not a reassuring assertion at all. But not necessarily a factually correct one, either: Moriarty would naturally be full of himself, but he would be the worst possible judge of his limitations, being new to the whole business of them existing.

Timo Saloniemi
 
Thanks for the info, I had no idea. :)

If Picard, Data and Barclay know this and his character very well, maybe the answer can be found there. Picard and others know Moriarty and the computer he lives in, Moriarty didn't have that advantage.
Well, he might not have known initially, but in his self-awareness he would eventually realize all about what he is. He knows he was created from an author's ancient writings of a fictional man, designed to be a plaything for Data. He ends up inexplicably being able to concoct ways to affect the ship, which means his access to the computer databanks was something he took full advantage of, studying them, the ship, the computer, the crew, etc...

In all senses, he couldn't be anything like the fictional Moriarty, because that Moriarty would have no chance in hell of defeating Data, even with access to consciousness & 24th century information. In order to fulfill the task Geordi laid out, the computer would have to make a character, who looks & acts like a Moriarty, but would have the capabilities of Data.

Geordi told the computer to make a villain capable of besting a damn technological phenomenon. It may be the dumbest thing he's ever done, lol. He told it to make Lore. That's what Moriarty is: a hologram that looks like a Doyle character, which has been endowed with Data's attributes by the ship's computer. It's possible that, in asking it to make a villain capable of defeating Data, it made one that only Data could defeat.
 
I do wonder if Moriarty's lack of strong malevolence is the result of becoming self aware or is part of the internal logic of the computer's creation of him; that a more antagonistic foe would not be capable of defeating Data in the same circumstance.
 
In The Final Problem, I really didn't find Moriarty to be that worthy of an opponent. It wasn't really a very good story at all.
 
I always hated the idea that the holodeck could accidentally create a sentient being. Even if it was just that holodeck (say, because of the probe from The Big Goodbye, which could explain why Starfleet couldn't figure it out in the years between episodes), then why in the name of Shakari was that holodeck still in operation? Is there like a warning when you open the menu not to create life?
Their computers are definitely too smart for their own good. The holodeck comes to life, the Enterprise has a baby...

Personally I love it. It's ridiculous and short sighted of them to have this technology in operation, which is very human indeed.
 
It happens again and again, too... The Doctor on Voyager, Vic Fontaine on DS9. Anytime they leave the holodeck running for a while, the characters may become sentient.

It means that every time someone shuts off the holodeck they may be snuffing out a bunch of lifeforms that have the ability to attain sentience. Which really has rather disturbing implications if you think about it.
 
It means that every time someone shuts off the holodeck they may be snuffing out a bunch of lifeforms that have the ability to attain sentience. Which really has rather disturbing implications if you think about it.

However, in almost every case those holodeck characters only have the ability to become sentient beings; they aren't sentient. Could it almost be compared to abortion: the characters on the holodeck, like the beginnings of life in the womb, have the ability to grow, it just hasn't happened yet?
 
It means that every time someone shuts off the holodeck they may be snuffing out a bunch of lifeforms that have the ability to attain sentience. Which really has rather disturbing implications if you think about it.

However, in almost every case those holodeck characters might only have the ability to become sentient beings; they aren't sentient. Could it almost be compared to abortion: the characters on the holodeck, like the beginnings of life in the womb, have the ability to grow, it just hasn't happened?
 
Even so, I find the moral implications really upsetting. Have felt that way since Voyager.

Ironically, I always felt this would result in a situation horrifying to the Doctor and others. Specifically, the mass shutting down and/or "dummying down" of holograms.

It would seem like genocide to one, while morally imperative to the other.

We don't WANT to create a new race. So a new race is snuffed before it can be created.
 
That's why the only holodeck episode I like is Hollow Pursuits; that's the only one where the holograms actually behave like video game NPCs.
 
I wonder if holodecks now come with warnings: CAUTION--Excessive use may cause self awareness and crooning.
 
Ironically, I always felt this would result in a situation horrifying to the Doctor and others. Specifically, the mass shutting down and/or "dummying down" of holograms.

It would seem like genocide to one, while morally imperative to the other.

We don't WANT to create a new race. So a new race is snuffed before it can be created.
Indeed. It could lead to a situation similar to Star Wars, where the droids are periodically purged of their memories to prevent them from becoming too self-aware.
 
It happens again and again, too... The Doctor on Voyager, Vic Fontaine on DS9. Anytime they leave the holodeck running for a while, the characters may become sentient.

I thought it was clear that Vic was created sentient to begin with? Felix might not have forewarned Julian or Quark or any authorities about it, but that'd have been part of the gift; Vic demonstrates self-awareness to Dax, Odo, O'Brien and the audience from the moment he opens his mouth, and Bashir isn't surprised at all at this point.

It means that every time someone shuts off the holodeck they may be snuffing out a bunch of lifeforms that have the ability to attain sentience. Which really has rather disturbing implications if you think about it.

Not really. It's just akin to people being born: every life realized is an infinite number of lives prevented from being realized in its stead. Not everything can happen, especially not at the same time. And mostly (that is, roughly 100.0000000000000% of the time) stuff is prevented from happening by random factors: even if there is a choice or three involved there somewhere, it's not really all that relevant to the morals of the issue.

Timo Saloniemi
 