Welcome! The TrekBBS is the number one place to chat about Star Trek with like-minded fans. If you are not already a member then please register an account and join in the discussion!

Wouldn't Moriarty have known?

Yes, since Picard is a captain it is logical he would be able to turn the safeties off. But he needs a code in order to do this, so Moriarty would need a code too. I highly doubt the computer gives these away willy-nilly, though. And why would he want to kill the people who enter the holodeck? He wants to kill Sherlock, errrr, I mean Data, and since he has unlimited access to the computers, he knows he would be killing a computer?
 
If Moriarty needs to be capable of defeating (nobody says anything about killing) Data, and the computer believes that Moriarty will need to be able to override the safeties in order to do so, then the computer will give Moriarty the means to do so. Since we don't see what he does the whole time, if he was provided with a code (and on a number of occasions he seems to spontaneously "remember" things, so there's no reason such a code couldn't fall into that category) there's no reason he couldn't have plugged it in off-screen.

Why would Moriarty want to be able to turn off the safeties? It's a lot easier to take hostages with the safeties off, for starters.
 
Okay, so I'll take your point on defeating Data. But when you take the safeties off you can kill people; Picard was almost mortally injured. However, Moriarty took Pulaski prisoner and did nothing to her. If she was his hostage and he could injure her, which is all based on what you're saying, why didn't Moriarty at least threaten Data with the prospect of harming Pulaski? We all know that Data is programmed so that he couldn't let her be harmed. And once more, since Moriarty had access to the memory banks, he would know this and be able to use it to his advantage.
 
You'll have to wait for someone else to chime in on that one; I haven't seen the pertinent episodes in quite some time.
 
Nothing, our brains are "glorified memory banks" and so is the ship's computer.

Well, if creating a sentient android was as simple as downloading a starship computer into an automaton, I don't think the show would have made as big a deal out of it as it did. Clearly Data was meant to be something special, and something that required a lot more than just a super-advanced hard drive to make work.

His positronic brain allowed him to grow and develop and become more human in ways the Enterprise computer obviously wasn't designed to.
 
I thought the very point of the character Data was that nobody thought much about him?

I mean, Starfleet Academy just sort of let him enroll and graduate. Starfleet then employed him in singularly indistinct jobs, after which he ended up doing menial tasks on Picard's bridge. Only one guy expressed scientific interest in how he was put together, and nobody in turn paid attention to the guy. Also, everybody seemed to think that Noonian Soong was a charlatan, and the androids made by him did not appear to alter that one iota.

...Really, the only one who ever cared was an eccentric collector of rarities, who only found Data valuable because he wasn't built in high numbers.

Worf was more of a celebrity and curiosity than Data, his exotic physique more a plot point and a source of amazement for the heroes and villains. Apparently, androids and sentient machines are commonplace enough in the Trek universe, and just because Soong decided to combine the two didn't really suffice for making Data a target of scientific interest.

For some reason, Star Trek has never considered it remarkable that a machine could have the intellect of a human (or more, or less), and has only occasionally found it interesting that a machine might dabble in human emotions and idiosyncrasies. These things simply happen in Star Trek, where technology is advanced enough to make everything possible; if something isn't commonly done, it's merely because it's uninteresting or not worth the hassle. Or taboo for some reason, such as making superstrong or superintelligent variants of the human species.

Timo Saloniemi
 
That's certainly not the impression I got. Even if there had been other androids before, it still seemed like Data was regarded as something different and unique.

Just because the crew had gotten used to him being around, doesn't mean they didn't think he was a remarkable creation. And in fact Picard often seemed in awe of the progress Data was making, or the special insights he had into the human condition.
 
When Data attracts attention, it's "Ooh, look how strong he is!" or "Wow, see how fast he does math!". It's never "Wow, that machine can think!"... At least not with people from the 24th century.

Picard mostly appeared to express admiration on how Data bravely overcame his debilitating "autism". He didn't find other artificial intelligences such as Moriarty awesome as such, not on the basis of them being "thinking machines"; he treated such things on a case-by-case basis, as he would treat a biological entity with the same motivations and histrionics.

Kirk already took thinking machines in stride, reasoning with them like he would with a child. That the machines thought was not a source of amazement; that they thought malicious things and weren't very understanding of humans was a source of annoyance.

Thinking machines were already such old news when TOS was made that it would have appeared awkward to have our heroes express amazement in the face of such things. Thinking machines today may still be far from reality, but as a scifi concept they continue to be mundane things.

Timo Saloniemi
 
As Moriarty was created with the directive of being capable of defeating Data, presumably the computer gave Moriarty a level of holodeck access that Data would not have.

Of course, one wonders why the computer would be able to do this...but if we assume that, for instance, Captain Picard could unilaterally turn off the safeties...which he in fact does in First Contact, then it's simply a matter of giving Moriarty the same level of holodeck access that Picard has, as opposed to a level of unprecedented access.

Moriarty wouldn't need that access to be specifically granted, though - he was a program being executed by the very same computer. It would simply be a matter of two programs communicating with each other - something that the programmers probably never anticipated could cause a dangerous situation.

It's no different than if I were to write code for a website that read a file on the same server, or another server on the same network. If there's no specific reason to change the security settings to prevent that code from having access to that file, I'm not going to think to do it.
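To make that analogy concrete, here's a minimal sketch (the filename and its contents are invented for illustration): one program writes a file, and a second, unrelated program on the same machine can read it simply because nothing ever denied it access.

```python
import os

# Hypothetical file on the same server; the name is made up for illustration.
CONFIG = "shared_settings.conf"

# One program writes the file with default (permissive) permissions...
with open(CONFIG, "w") as f:
    f.write("safeties=on\n")

# ...and a second, unrelated program can read it without ever being
# "granted" anything: access exists unless someone explicitly denies it.
content = open(CONFIG).read()
print(content.strip())  # prints "safeties=on"

os.remove(CONFIG)  # clean up
```

Unless someone thinks ahead and locks the file down (e.g. with `os.chmod`), nothing stops the second program, which is exactly the point: the restriction only exists if the programmers anticipated the danger.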

Now, one could argue that the holodeck safeties should have that security in place, and they do, as illustrated by the fact that they have to be explicitly turned off. But I would make the counterargument that that's in place to prevent people from being able to take unauthorized actions.
 
To be fair, I think the show did address the fact that people take Data for granted when Bashir crossed over and pointed out the fact that Data breathes and has a pulse...
 
Now, one could argue that the holodeck safeties should have that security in place, and they do, as illustrated by the fact that they have to be explicitly turned off. But I would make the counterargument that that's in place to prevent people from being able to take unauthorized actions.

Another thing to consider is that the holodeck is a very advanced entertainment machine. It seems to be able not only to accommodate, but also to anticipate the user's wishes and needs. On many an occasion, it really bends over backward to create the simulated reality the user yearns for, without asking for permission at each step and thus ruining the illusion.

In essence, the holodeck from the very outset is "an opponent capable of defeating Data". It just unleashes itself to the necessary degree when told to do so, independently deciding whether security and safety really is necessary.

Timo Saloniemi
 
Nothing, our brains are "glorified memory banks" and so is the ship's computer. I have one question. I recently watched an episode where Data was fighting a Borg and became angry (it's the one where the effects of sending Hugh back are revealed), and he tries to recreate the situation on the holodeck. But to take the safety factor out so he could get hurt, he needs the authorization codes of two officers, him and Geordi. But somehow Moriarty took the safety guard off. How the hell did he do that?

He was able to do it because in
Season 2 Ep 3, Geordi creates Moriarty without even turning the safeties off.
Season 6 Ep 12, Moriarty is able to turn the safeties off alone.
Season 6 Ep 26, Only by this point has the security increased so it takes 2 officers to disable the safeties.

chicken and egg...

Even if security measures are in place throughout all episodes, as we'd all assume, you should equally assume those measures could be compromised and circumvented by means other than authorized access. Don't we see plenty of other episodes doing just that?
 
Still, mis-spoken orders or not, simply mis-speaking a phrase like "create an adversary capable of beating Holmes" and saying "Data" instead shouldn't cause the computer to give holodeck characters the means and the knowledge to take over the ship.

(It's probably also worth noting Geordi didn't even need to call for the arch and make the second request; Moriarty was already seemingly aware of Geordi, Data and the arch before Geordi made the second request.)

Moriarty should not have been able to take over the ship so easily.
 
Still, mis-spoken orders or not, simply mis-speaking a phrase like "create an adversary capable of beating Holmes" and saying "Data" instead shouldn't cause the computer to give holodeck characters the means and the knowledge to take over the ship.

(It's probably also worth noting Geordi didn't even need to call for the arch and make the second request; Moriarty was already seemingly aware of Geordi, Data and the arch before Geordi made the second request.)

Moriarty should not have been able to take over the ship so easily.

But if the writers were going to make sense like that, they might as well not have made the episode. A lot of it never held water, but it was a fun episode and character.
 
Why should the computer have the power to judge whether a software-engineering order from the Chief Engineer is "reasonable" or not? Creating an opponent capable of defeating Data might be a matter of trillions of lives and deaths, for all it knows. Indeed, it's probably among the less exotic orders from LaForge when it comes to engineering matters.

Sure, the computer no doubt is smart enough to think through the consequences. But it should also be smart enough not to ask "are you sure?" of an officer of supreme authority.

Timo Saloniemi
 
Now I'm imagining the Enterprise computer as the Microsoft Word Paperclip.

"It looks like you're trying to create an adversary for Holmes..."
 
And we can argue that there are other types of holodeck decisions that LaForge (or some other character with engineer clearance, such as Barclay) might make that would have effects outside the holodeck.

Indeed, in "Hollow Pursuits", Barclay ran simulations that ended up affecting the lives of the people he asked to be simulated; in "A Matter of Perspective", a simulation was run that had direct physical effects on the ship outside the holodeck; and in "Booby Trap", LaForge insisted on running a simulation that was benign in itself but still a known drain on crucial resources during a crisis.

So the computer should have ample precedent on not being expected to interfere even when a holodeck order has obvious far-reaching consequences...

Timo Saloniemi
 