
OK, how does the Universal Translator work?

He did point out Homeward and Emissary earlier on this page as examples, to be fair.

Edit: oh wait no. You're saying that a door has never opened without someone moving towards it, not that a door has always opened when someone moved towards it. That is different, yeah.
 
He did point out Homeward and Emissary earlier on this page as examples, to be fair.

She.

Edit: oh wait no. You're saying that a door has never opened without someone moving towards it, not that a door has always opened when someone moved towards it. That is different, yeah.

The only examples of this happening are when doors were operating wildly to denote ship malfunction, as in Contagion, and as with Chakotay in the teaser for One Small Step.

In fact, this scene in Step is extremely important for another reason, because it totally supports my theory about intent-sensors. Watch how the door behaves... It ignores him at first, then it opens after he moves away from it, then closes as he attempts to approach. Motion sensors do not explain this; in fact, the evidence speaks against them. The door behaves as if it is totally oblivious to his actual position and motion, but as if it is having delay and difficulty receiving or computing his intent.

The door and the comm systems are both malfunctioning because Seven of Nine has co-opted computer power for the astrometrics lab. The notion that this procedure would interfere with the door's behaviour makes absolutely no sense whatsoever if the door is supposedly connected to a local motion sensor.

It makes much more sense that the doors would malfunction in this situation, in this way, if deciding when to open is a complex process requiring computer access... as, I would argue, it is explicitly demonstrated to be in this scene.
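
To make the difference concrete, here is a toy sketch of the two models. Every class and name in it is invented purely for illustration; the point is just that only the second model has a code path that degrades when the main computer is starved of resources, which is exactly the failure we see in that teaser.

```python
# Purely hypothetical illustration; none of these classes exist in any real API.
# Two toy door controllers: one wired to a local motion sensor, one that defers
# to the ship's computer for an intent estimate.
from dataclasses import dataclass
from typing import Optional

@dataclass
class IntentEstimate:
    wants_to_pass: bool
    confidence: float

class MotionSensorDoor:
    """Opens purely on local proximity; the main computer is never consulted."""
    def __init__(self, detects_motion: bool):
        self.detects_motion = detects_motion

    def should_open(self) -> bool:
        # A load spike on the main computer cannot affect this path at all,
        # which is why the One Small Step malfunction is hard to explain here.
        return self.detects_motion

class IntentSensorDoor:
    """Opens only when a central computer predicts intent to pass through."""
    def __init__(self, threshold: float = 0.9):
        self.threshold = threshold

    def should_open(self, estimate: Optional[IntentEstimate]) -> bool:
        # If astrometrics has co-opted computer power, the prediction arrives
        # late, arrives stale, or never arrives at all: a door that opens after
        # the person walks away, or closes in their face.
        if estimate is None:              # computation timed out under load
            return False
        return estimate.wants_to_pass and estimate.confidence >= self.threshold

# Normal operation vs. a starved computer:
print(IntentSensorDoor().should_open(IntentEstimate(True, 0.97)))  # True: opens as you approach
print(IntentSensorDoor().should_open(None))                        # False: the door ignores Chakotay
```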
 
There has never ever been an unlocked automatic door that opened in any series without someone having first made a movement that could reasonably be interpreted as intent to use it. If someone has a specific example to the contrary, would they please present it?

Sisko. Odo's office. Emissary. After blackmailing Quark ("community leader").

The only situation which is perhaps a little wonky is that there have been occasions when characters changed their minds after moving towards a door, but it didn't open in anticipation of the initial movement when perhaps it should have.

Yeah, that happens a lot. Almost like the door has some idea of their intent.
 
Having the doors read minds just seems overcomplicated and unnecessary to me. Sure, there might be a few situations where it seems like more than just motion sensors were in play, but those could always just be mistakes on the creators' part, or malfunctions. I think hundreds or thousands of scenes that indicate one thing can easily overrule one or two scenes that could possibly be interpreted another way if you squint really hard. Not to mention that, as often as Trek tech malfunctions, brain scanning would probably be pretty dangerous: one malfunction and all of the people trying to walk through a door could suddenly find themselves getting their brains fried.
 
There has never ever been an unlocked automatic door that opened in any series without someone having first made a movement that could reasonably be interpreted as intent to use it. If someone has a specific example to the contrary, would they please present it?

Sisko. Odo's office. Emissary. After blackmailing Quark ("community leader").

The only situation which is perhaps a little wonky is that there have been occasions when characters changed their minds after moving towards a door, but it didn't open in anticipation of the initial movement when perhaps it should have.

Yeah, that happens a lot. Almost like the door has some idea of their intent.

I'll check "Emissary" in the next few days.
 
And if the doors, via the main computer, can read the intent of the crew, why have a crew at all? Janeway just needs to command the ship in her mind. And robots can do maintenance and repair.
 
And if the doors, via the main computer, can read the intent of the crew, why have a crew at all? Janeway just needs to command the ship in her mind. And robots can do maintenance and repair.

That's reductio ad absurdum. You're blowing it out of logical proportion.

And to everyone saying it would malfunction and fry brains: there is a difference between active and passive sensors. Worrying about brain-frying from EEG-style intent sensors would be akin to worrying that today's "eye-tracking" devices and software, a real technology designed to measure what part of a screen the user is looking at, are dangerous just because human eyes are involved, in the sense that lasers might suddenly and inexplicably shoot out of the cameras and blind you. Eye-tracking is just not built that way, and neither would the EEG-style intent sensors be risky in that regard.

It would also be akin to worrying that Christopher's kinematic sensors could somehow cause paralysis by virtue of the fact they measure the user's movement.
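
If it helps, the active/passive distinction can be put in very crude terms: a purely passive sensor has no emitter to misfire. Here is a toy sketch, with everything invented for the sake of the argument (this is not any real device's API):

```python
# Toy illustration of the passive/active distinction; hypothetical classes only.
import random

class PassiveIntentSensor:
    """Read-only: measures signals the body already gives off (EEG-style)."""
    def read(self) -> float:
        # The worst failure mode is a wrong or missing number. There is no
        # emitter in the hardware, so no code path exists that could send
        # energy back at the user.
        return random.random()   # stand-in for an ambient measurement

class ActiveNeuroDevice:
    """Read/write: can also drive signals into the nervous system."""
    def read(self) -> float:
        return random.random()

    def stimulate(self, amplitude: float) -> None:
        # Only a device in this class has a code path that could do harm if
        # it malfunctions; this is the mind-sifter end of the spectrum.
        print(f"emitting signal at amplitude {amplitude}")

print(PassiveIntentSensor().read())   # reading only; nothing here can misfire at the user
ActiveNeuroDevice().stimulate(0.2)    # the kind of call a passive door sensor simply doesn't have
```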
 
And if the doors, via the main computer, can read the intent of the crew, why have a crew at all? Janeway just needs to command the ship in her mind. And robots can do maintenance and repair.

That's reductio ad absurdum. You're blowing it out of logical proportion.

That's not a fallacy, that's a valid method of argument. This technology certainly could do that if taken to its logical conclusion over the course of enough time; rahullak simply feels that a century (or two if you start it in the ENT era instead of the TOS era) is enough time.

Edit: that is to say, reductio ad absurdum is just saying "A->B, !B, therefore !A". It's only fallacious if you can show that "A->B" isn't actually true, and in this case the implication is almost wholly subjective and can't easily be dismissed. rahullak believes that this technology would eventually lead to that level of mental control, and you do not; it's just a difference in priors, unless you can somehow prove that a century would not actually be sufficient time.
 
And if the doors, via the main computer, can read the intent of the crew, why have a crew at all? Janeway just needs to command the ship in her mind. And robots can do maintenance and repair.

That's reductio ad absurdum. You're blowing it out of logical proportion.

And to everyone saying it would malfunction and fry brains: there is a difference between active and passive sensors. Worrying about brain-frying from EEG-style intent sensors would be akin to worrying that today's "eye-tracking" devices and software, a real technology designed to measure what part of a screen the user is looking at, are dangerous just because human eyes are involved, in the sense that lasers might suddenly and inexplicably shoot out of the cameras and blind you. Eye-tracking is just not built that way, and neither would the EEG-style intent sensors be risky in that regard.

It would also be akin to worrying that Christopher's kinematic sensors could somehow cause paralysis by virtue of the fact they measure the user's movement.
Sure, that might be true in the real world, but this is Trek, and if a piece of tech can malfunction in a spectacularly deadly way, it eventually will.
 
Actually this is a case where I agree with chrinFinity's specific objection. The problem with this idea has never been that it couldn't theoretically work; it's just that there are simpler and more likely ways that it could be done.
 
rahullak believes that this technology would eventually lead to that level of mental control, and you do not; it's just a difference in priors, unless you can somehow prove that a century would not actually be sufficient time.

I've already stipulated to a regulatory restriction, rather than a scientific one. Just like how genetic engineering is essentially mastered by competent Federation medical doctors, but legally banned in the Federation except for fetus spine-fixing.

Mind-reading tech? It's not that they can't, it's that they don't, because of dangers and abuses.

Except when they do, in small measured limited applications that help immensely with demonstrably minimal risk of harm:
-UT brainwave detection for translating basic concepts
-Crusher's autonomic remote-control doodad
-Altonian brainteaser from A Man Alone
-Denara Pel holographic avatar
-Lie detector from Menagerie and I think also Wolf In The Fold

Note the above examples were all "read-only" (with the exception of the receiver on John Doe).

Potentially more harmful examples that involve sending signals into the brain, or nervous system:
-Brain-replacing remote control device from Spock's Brain
-Similar device used to remote control the dead Vorta in The Magnificent Ferengi
-Ferengi Mind-Control Sphere
-Mind-control implant from The Fall
-Romulan Mindprobes
-Klingon Mind-sifter
-The Psionic Resonator from Gambit

There are likely to be many additional examples I've missed.

Mind-tech exists in Star Trek. And it is explicitly stated in many cases to be illegal in the Federation (or at least in Starfleet), except when it's shown to be allowed for certain applications. How do these certain applications pass muster?

Well. I'm sure whenever anyone pitches a new technological application of an existing concept which is highly regulated (such as mind-tech or genetic engineering in Star Trek, or stem cells in real life), there's some panel somewhere empowered to decide whether the newly developed application will be banned or not, based on whatever criteria the judges deem fit.

I'm telling you, the simplest explanation (Occam's razor) for the perfect door behaviour, given what we know and without discarding canon examples as "production errors" just because they're inconvenient, is this: Some time in the 23rd century, some brilliant but zany developer who loved mind-tech as more than just a friend pitched this tribunal on the idea of safe, non-invasive sensors which would, with 85% to 95% accuracy, predict a user's intention with respect to automatic doors.

You want a continuity wank, let's say she's Ira Graves's grandma, or one of Trip and T'Pol's secret kids or whatever.

This scientist would have presented carefully prepared research based on studying ship logs, which I'm sure would contain dozens of examples where a life, or even an entire ship, would have been saved if the doors had been only slightly more efficient in an emergency situation...

...And to the more conservative and cynical members of the tribunal, she would have presented carefully researched safety studies, showing that the extremely limited, nerfed application of the mind-sensing tech she intended to be manufactured into the door sensors couldn't possibly be strong enough to fry brains, and couldn't be extrapolated into a psy-weapon.

In the end (maybe even after two or three rounds of trying to convince the more reticent of the bunch), they decide that although her approach is unorthodox, given the evidence presented it's a reasonable idea. So they agree to give it a try in some limited test-bed applications. After months of rave reviews for the intelligent doors, and not a single incident of brain-frying or violated confidence, Starfleet Science congratulates her for solving a problem they hadn't even previously realized they had, and she receives that year's Daystrom Prize for making dumb motion sensors obsolete on Starfleet ships and bases.
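
As a very rough sketch of how "limited and nerfed" the shipped version could be (all the thresholds and names below are my own invention, purely for illustration): the mind-sensing part is reduced to a single confidence number, the door acts on it only when it's decisive, and anything ambiguous falls back to plain proximity behaviour.

```python
# Hypothetical decision logic for the "nerfed" intent-door. The mind-reading is
# reduced to one confidence value; uncertain cases fall back to dumb proximity.
OPEN_THRESHOLD = 0.90     # open early only when intent is very likely
IGNORE_THRESHOLD = 0.10   # stay shut only when intent is very unlikely

def door_should_open(intent_confidence: float, is_near_door: bool) -> bool:
    """Decide whether to open, given a 0..1 intent estimate and simple proximity."""
    if intent_confidence >= OPEN_THRESHOLD:
        return True            # high-confidence intent: open in anticipation
    if intent_confidence <= IGNORE_THRESHOLD:
        return False           # confidently "just walking past": stay shut
    return is_near_door        # ambiguous: behave like a plain motion sensor

# Someone striding straight at the door, someone ambling nearby, someone leaning on the wall:
print(door_should_open(0.95, is_near_door=False))  # True:  opens in anticipation
print(door_should_open(0.50, is_near_door=True))   # True:  proximity fallback
print(door_should_open(0.05, is_near_door=True))   # False: stays shut for the wall-leaner
```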
 
Sorry, there is just no way that picking up and deciphering the faint signals within the brain is simpler than just visually observing a person's body language and parsing their words. Even today, we have creepily invasive software that can monitor people's conversations through their phone mikes and target advertising toward them based on their word usage, or so I've heard. So that's far closer to being a real technology than remote neurotelemetry. Heck, simply detecting brain activity like that is orders of magnitude more difficult than just watching people's motion, never mind the comparative difficulty of interpreting what's detected. So I cannot possibly accept your assessment of which of the two technologies is simpler.
 
Sorry, there is just no way that picking up and deciphering the faint signals within the brain is simpler than just visually observing a person's body language and parsing their words. (...) So I cannot possibly accept your assessment of which of the two technologies is simpler.

I concede that your suggestion is simpler. But my suggestion renders a better user experience and is compatible with more on-screen evidence.

Consider traditional surgical technique in contrast with laparoscopy. If you tried to explain why laparoscopy was better to a surgeon at the turn of the 20th century, about 120 years ago, they'd have a hard time believing you...

With photography basically in its infancy, and new medicines and chemicals seeming, at the time, to represent the pinnacle of medical achievement, the idea that lives might be saved more effectively by a tiny knife and camera on a flexible remote-controlled stick would seem completely batshit ridiculous, and from their standpoint it would require the combination of two incompatible technologies with a third hypothetical one that seemed impossible to believe.

To those doctors, cutting a patient open and using improved tools and better drugs, still administered by trusted human hands, would be seen as the "much simpler" and more "likely" explanation were they to conjecture how surgery might be done "in the future."

And yet, laparoscopy.

Your own example of open-mics on smartphones being used to remotely monitor users, uploading samples to the internet to be processed, recognized, and interpreted automatically by computers... for the express purpose of optimizing digital marketing to the consumer?

Try explaining that to a person from 1970, and have them believe it's not a ridiculous over-application of hypothetical technology for a trivial purpose that could be served more "simply" by traditional methods.
 
Oh, forget it. We're arguing in circles now, and over something that's just too trivial to be worth the effort.
 
OK. I just rewatched Sisko leave Odo's office in the "Community Leader" chapter of "Emissary."

Sisko stands with his back to the door, and then turns to his right. It is reasonable for a system scanning his bodily movements, for the purpose of deciding whether to open the door, to (as it were) conclude that it is far more likely that Sisko intends to turn and exit than it is for him to turn around and stand facing the door. After he begins turning, the door opens. Not a magic door.
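
If you want to see how little that would take, here is a toy version of that purely kinematic reading (the function and numbers are invented for illustration, nothing more): compare the person's facing direction with the bearing of the door, and open once the turn brings the two roughly into line.

```python
# Toy kinematic check for the Odo's-office scene: no mind-reading, just
# "is this person now turning to face the door?" All values are invented.
def facing_door(heading_deg: float, bearing_to_door_deg: float,
                tolerance_deg: float = 45.0) -> bool:
    """True if the person's heading is within tolerance_deg of the door's bearing."""
    diff = (heading_deg - bearing_to_door_deg + 180.0) % 360.0 - 180.0
    return abs(diff) <= tolerance_deg

# Sisko starts with his back to the door (heading roughly opposite its bearing)...
print(facing_door(heading_deg=180.0, bearing_to_door_deg=0.0))  # False: door stays shut
# ...then turns to his right, swinging toward the door's bearing.
print(facing_door(heading_deg=30.0, bearing_to_door_deg=0.0))   # True: door opens mid-turn
```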
 
Some time in the 23rd century, some brilliant but zany developer who loved mind-tech as more than just a friend pitched this tribunal on the idea of safe, non-invasive sensors which would, with 85% to 95% accuracy, predict a user's intention with respect to automatic doors.
When you put it that way, it starts to sound like an almost irresistible story idea!... :lol:
 
When you put it that way, it starts to sound like an almost irresistible story idea!... :lol:

A story has a beginning, a middle, and an end. "Woman invents thing and men don't take her seriously" doesn't really cut it.

Getting a whole story out of what I wrote so far would be like extrapolating Journey to Babel out of "Kirk meets Spock's Dad, and they go do a thing."
 
I get chrinFinity's basic point that something that looks complicated and ultratech to us today may not be so that far into the Trek future, even for a seemingly trivial application. But by that same token, it isn't a stretch for Starfleet or some other species in Trek to have mind-controlled ships. If a computer can read intent for a simple door application, it can read intent for everything else too. That's not too much of a stretch in the Trek universe, just as laparoscopy isn't to us today.
 