
OK, how does the Universal Translator work?

Who said anything about baselines? Most of the people using those doors every day are members of the crew, whose individual biometric data would be programmed in. All the people you cite are Starfleet officers, so presumably their data would be on file.

And it's implied this would be required of their civilian families too, right? This solution requires additional manual administrative work (an inefficiency) and introduces an ethical issue that might otherwise be avoided.

As for visitors, keep in mind that 95 percent of the time, it's perfectly obvious whether someone plans to go through a door or not just by moving toward it. It's only in rare cases that a door would need any additional information to know the difference between an approach with intent to pass through and an approach without it (...)

I challenge you to find a specific instance where a non-Starfleet visitor interacts with a door in a way that would require more than simple motion sensors to explain.

Check out the patient behaviour of the doors in relation to the Boraalan chronicler when he leaves the Holodeck in Homeward.

Also, the Cardassian doors in Odo's security office open on cue for Sisko in Emissary after he's done blackmailing Quark and ready to leave, but before he starts moving... And this is explicitly before O'Brien has integrated any Fed or Starfleet tech into the station's computer system, and before Bashir has even arrived on the station, let alone had time to pick up the Infirmary and integrate any medical / biometric files.

Anyway, the ultimate explanation is that it's just a show and the behavior of the set is subordinate to the needs of the drama. If a door sliding open would distract from the performances, we don't see it slide open, even if it normally would. Roddenberry's preferred view was that the show was a sometimes-inaccurate after-the-fact dramatization of the "real" adventures of Kirk & co., and many mysteries and inconsistencies vanish if you view it on those terms. It certainly makes more sense than pretending it's some kind of found-footage documentary, complete with wobbly sets and backlot planets and recycled props and actors and innovative but limited visual effects.

I don't follow this line of debate. (C'mon, you're breaking my heart here)
 
Who said anything about baselines? Most of the people using those doors every day are members of the crew, whose individual biometric data would be programmed in. All the people you cite are Starfleet officers, so presumably their data would be on file.

And it's implied this would be required of their civilian families too, right? This solution requires additional manual administrative work (an inefficiency) and introduces an ethical issue that might otherwise be avoided.

What are you talking about? I implied no such thing.



Check out the patient behaviour of the doors in relation to the Boraalan chronicler when he leaves the Holodeck in Homeward.
Okay, and is there anything about his body language that's particularly unusual? You're overcomplicating this. If you, a human viewer, can tell just by watching whether a character intends to go through the door or not, then a well-programmed computer could do so as well, if not better. It's far more likely than hand-wavey conjectures about mind-reading. (Indeed, one could argue that our ability to read others' body language and expressions is how we read their minds. It's a very powerful ability, one we tend to take for granted.)


Also, the Cardassian doors in Odo's security office open on cue for Sisko in Emissary after he's done blackmailing Quark and ready to leave, but before he starts moving... And this is explicitly before O'Brien has integrated any Fed or Starfleet tech into the station's computer system, and before Bashir has even arrived on the station, let alone had time to pick up the Infirmary and integrate any medical / biometric files.
Then I'd call that a production error and leave it at that. A TV production doesn't have time to redo a take a thousand times until every detail is perfect, and they'll prioritize one that has a good performance over one that gets the door timing perfect.


I don't follow this line of debate. (C'mon, you're breaking my heart here)
All right, I'll rephrase: "Then repeat to yourself, 'It's just a show -- I should really just relax.'"
 
First and foremost, I am enjoying this discussion and I appreciate your points of view even where I differ.

(re: civilians) What are you talking about? I implied no such thing.

But it would need to be set up with civilians' kinematics as well (according to your methodology), in order for the doors to give the families on the ship the same personal touch in terms of QoS.

You're overcomplicating this.

Not from my perspective. I legitimately believe your explanation is more complicated. Consider that we have, today, non-intrusive devices that derive digital input from electroencephalography. This exists and has already been used in consumer applications (and developers must program for it). To imagine that in 300 years this works more accurately and at a distance is not a stretch for me.

On the other hand, to program a computer to intuit intention from body language (which is unique to each person and differs across lines of species, cultures, and physical ability) is an extremely nontrivial task. Developers are still struggling to effectively implement explicit gesture-based tech, forget about subtle emotional cues implied by stance. I suppose it would be conceptually possible for Trek technology, but that explanation seems more contrived and unnecessarily convoluted when all the on-screen evidence points to the convenient explanation that they just work on a similar principle to the UT.

If you, a human viewer, can tell just by watching whether a character intends to go through the door or not, then a well-programmed computer could do so as well, if not better.

And if I as a human viewer can use contractions, why can't Data program himself with a simple search+replace? I still say that the doors reading simple intentions via UT technology makes more sense than decompiling body language just so the doors can operate for more dramatic effect. And the "intentions" explanation neatly corrects for the reality that all door behaviour witnessed is actually a result of the production environment. Your "LCARS Kinect" explanation can't account for 100% of those cases.

It's far more likely than hand-wavey conjectures about mind-reading.

Again, I'm going from Spock's and Kirk's on-screen dialogue in Metamorphosis. We're already accepting the UT... Conservation of Suspension of Disbelief dictates that door sensors reading "intention" via UT technology is an easier pill to swallow than the idea that the Enterprise computer is hip to your persuasion.

A TV production doesn't have time to redo a take a thousand times until every detail is perfect, and they'll prioritize one that has a good performance over one that gets the door timing perfect.

I get what you're saying, but I take a different view. A production error is Worf and Garak fighting in the Jefferies tube junction, and the hatch cover falls off to reveal plywood underneath. Obviously the Defiant's construction materials manifest did not include an entry for plywood, so we shrug and blame Hollywood.

OTOH, something as fundamental as how a specific Trek technology is consistently depicted as working on-screen across dozens of episodes can simply not be ignored. I'm distracted by the plywood; I prefer not to be distracted by the doors, so it eases my viewing experience to have an explanation for it that makes sense.

All right, I'll rephrase: "Then repeat to yourself, 'It's just a show -- I should really just relax.'"

I'm coming from a perspective where I'm trying to keep the explanations in-universe. This is like if we were debating St. Elsewhere and you tried to kill any and all debate by insisting it doesn't matter since it's all a snowglobe.
 
But it would need to be set up with civilians' kinematics as well (according to your methodology), in order for the doors to give the families on the ship the same personal touch in terms of QoS.

That's just the point -- it wouldn't need to be. I mean, come on, we've had automatic sliding doors in real life for nearly as long as we've had Star Trek. Like I said, the overwhelming majority of the time, all you need is a simple motion sensor. The more detailed stuff is just for those few anomalous instances where that isn't enough of an explanation.



You're overcomplicating this.
Not from my perspective. I legitimately believe your explanation is more complicated. Consider that we have, today, non-intrusive devices that derive digital input from electroencephalography. This exists and has already been used in consumer applications (and developers must program for it). To imagine that in 300 years this works more accurately and at a distance is not a stretch for me.

But we also already have sophisticated computer gait analysis that's precise enough to identify people by the way they move. And we have video games that can be controlled by people's movements. What I'm talking about is even less of an extrapolation from existing technology than what you're talking about. It's something we could probably do today, or five years from now. What you're talking about is far more pie-in-the-sky. It should be self-evident that just observing people's body movement from the outside is a simpler technology than reading their minds.
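To make concrete how little machinery the body-movement approach needs, here's a hedged toy sketch of gait-based identification: a nearest-centroid classifier over stride features. Every name and number below is invented for illustration; real gait-recognition systems use far richer features, but the principle is the same.

```python
import math

# Hypothetical per-crewmember gait profiles: (stride length in metres,
# cadence in steps per minute). All values are invented for illustration.
PROFILES = {
    "picard": (0.78, 112.0),
    "riker":  (0.91, 104.0),
    "data":   (0.85, 120.0),
}

def identify(stride_m, cadence_spm):
    """Return the name of the profile nearest to the observed gait features."""
    def dist(profile):
        s, c = profile
        # Scale cadence down so both features contribute on a similar scale.
        return math.hypot(stride_m - s, (cadence_spm - c) / 100.0)
    return min(PROFILES, key=lambda name: dist(PROFILES[name]))

print(identify(0.90, 105.0))  # nearest profile wins: riker
```

The point of the sketch is that once the features are extracted, matching them is almost trivial; the hard (but already-existing) part is the sensing, not any mind-reading.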


but that explanation seems more contrived and unnecessarily convoluted when all the on-screen evidence points to the convenient explanation that they just work on a similar principle to the UT.

By "all the onscreen evidence," you mean a single line from "Metamorphosis." That's rather an overstatement.


And if I as a human viewer can use contractions, why can't Data program himself with a simple search+replace?

It's a myth -- one unfortunately embraced by the show itself in "The Offspring" and "Future Imperfect" -- that Data was incapable of using contractions. All "Datalore" stated was that he preferred to speak more formally. He used plenty of contractions before "Datalore" and occasionally used them afterward.


Your "LCARS Kinect" explanation can't account for 100% of those cases.

It doesn't need to. We're trying to rationalize the vagaries of a TV show. It's never going to be perfect. It just has to be a plausible handwave. And I go by Occam's Razor. It's far, far simpler to assume that a door has motion sensors than to assume it can freaking read minds.
 
It would also be logistically ridiculous as a construction; even if we were to assume that this technology did exist, what possible reasonable gain would there be from designing a door that way? Why would it have been designed in the first place? It literally sounds like something out of Hitchhiker's Guide or Red Dwarf. Most Trek technology, however unrealistic it might be, at least one can see the logistic benefit that it would give were it to really exist, but doors that work by reading the intention of a user instead of by motion or physical interaction? Computer systems, maybe, sure, but doors?

If the existence of a technology isn't blatantly obvious, then saying that it's a plausible construction in the setting isn't enough justification; it needs to be plausible that someone would decide to implement it in the first place rather than other alternatives.
 
^Exactly. The question isn't whether it could theoretically be done, but whether it's the best or simplest option available for achieving that task. Using mind-reading technology just to know whether someone wants to use a door is spectacular overkill.

Not to mention the privacy issues. What if someone's having an idle sexual fantasy about a crewmate while approaching a door, or thinking about their illegal stash of space-weed, and those thoughts are recorded and somehow exposed? What if the data telepathically collected about door users' private thoughts (in a civilian setting) were used to target advertising at them? Good lord, the very idea of mind-reading doors is a prospective dystopian nightmare. I don't consider that a worthy trade-off for explaining away a trivial detail in a few episodes.
 
In TAS "The Terratin Incident," the doors were explained by electric eyes. In my view, that's most likely too simplistic for gait analysis, even allowing for 23rd-century electric eyes. (Electric eyes aren't just sensing devices; they involve detection of something blocking an active beam.) But doors automatically open for me at the grocery store without reading my mind, so I'd imagine they'll do a good job in the 23rd century without reading minds.
 
It would also be logistically ridiculous as a construction; even if we were to assume that this technology did exist, what possible reasonable gain would there be from designing a door that way? Why would it have been designed in the first place?

Better mousetrap. Come on, isn't using matter replication to make a grilled cheese sandwich the same kind of overkill?


Most Trek technology, however unrealistic it might be, at least one can see the logistic benefit that it would give were it to really exist, but doors that work by reading the intention of a user instead of by motion or physical interaction? Computer systems, maybe, sure, but doors?

But the doors are networked to the computers, just like my watch and my smartphone -- they are all part of the same system. The "intent" thing is just a bells-and-whistles extra you get with doors that are on a top-of-the-line 24th century starship. "You won't believe what these new Galaxy-class starships can do, sir."

Why do it? Because they can! For crying out loud, dream bigger!

A few years ago we would have said 1 GHz processing and 32 GB of RAM would be 'ridiculous overkill' for a person's mobile phone. Times change. We're talking 300 years here, they can make food appear from thin air and translate previously unknown alien languages and go thousands of times the speed of light and remote control someone's nervous system using computer input from another person's brain non-invasively. "Sure, let's also use it for the doors, because why not" does not seem like a ridiculous stretch to me, and I'm struggling to understand why you might think it is.

If the existence of a technology isn't blatantly obvious, then saying that it's a plausible construction in the setting isn't enough justification; it needs to be plausible that someone would decide to implement it in the first place rather than other alternatives.

A month or two ago, Amazon Prime started marketing a small stick-on button that you can place anywhere in your house -- laundry, fridge, whatever, and its sole purpose is that when you press it, a small computer inside the button housing logs onto your Amazon Prime account and has ONE PRE-PROGRAMMED ITEM shipped automatically to you. It's a "replace this thing I just ran out of" button.

Why do it? Because a door that only opens when you want it to is a superior product with better UX than the doors which came before it. To say "let's never use new technology to make this experience better" is to say "you know what, let's close the patent office, inventions are probably done."
 
Why do it? Because a door that only opens when you want it to is a superior product with better UX than the doors which came before it. To say "let's never use new technology to make this experience better" is to say "you know what, let's close the patent office, inventions are probably done."

It's just a question of which technology. You're taking one line from "Metamorphosis," a line never corroborated anywhere else in 700-plus hours of the franchise, and building a whole theory from it. Your conclusion is predicated on a whole chain of ad hoc assumptions growing out of that single conjecture. That just seems to be stretching things too far. Sure, if they had ubiquitous mind-reading technology in everything else, they'd probably use it in the doors -- but I do not accept the premise that they do. There's just no real evidence for that beyond that single line in a single episode.

And there is certainly evidence that the computers and equipment on the ship do not automatically understand their users' intent -- like in "Caretaker" where Tom couldn't get the replicator to give him the variety of soup he wanted, or in "Elementary, Dear Data," where the creation of Moriarty happened because the computer acted on the letter of Geordi's command in contradiction to his intent. You've taken what started out as an acknowledgment of an inconsistency -- that if they had the kind of mind-reading tech "Metamorphosis" established, it should logically have other uses, but there's no sign that it does -- and somehow distorted it into the working assumption that it actually does have those other uses.

And, again, there are some truly hideous privacy implications that you're simply ignoring. It seems to me that if such a mentally invasive technology existed, it would be legally and ethically restricted to a limited set of uses -- e.g. lie detection with a consenting witness/suspect (as in "Wolf in the Fold") or translation to facilitate contact with a new species. As they said in Jurassic Park, you mustn't get so caught up in the question of whether you can do something that you forget to ask whether you should -- or at least when you should.
 
Like I said, the overwhelming majority of the time, all you need is a simple motion sensor. The more detailed stuff is just for those few anomalous instances where that isn't enough of an explanation.

So we're both about Occam's Razor, we just see the probable tech tree differently. Just for some background, I work as a developer, and a lot of that field involves being aware of new sensors and innovative input solutions, so I'm predisposed more toward accepting the technology as possible.

But we also already have sophisticated computer gait analysis that's precise enough to identify people by the way they move. And we have video games that can be controlled by people's movements.

True. However,

1) They don't work as effectively as you might think,
2) They require extensive calibration,
3) We're talking one species, one culture,
4) They wouldn't account for "magic door" situations such as the one I alluded to earlier from Emissary.

What I'm talking about is even less of an extrapolation from existing technology than what you're talking about. It's something we could probably do today, or five years from now. What you're talking about is far more pie-in-the-sky. It should be self-evident that just observing people's body movement from the outside is a simpler technology than reading their minds.

And projecting a 2D image on a white wall is a simpler technology than the Holodecks we see in Trek. That doesn't mean the Holodeck's not better.

How ridiculous would it seem to someone fifty years ago for me to explain that I wear a tracking device everywhere I go, which sends my location to a massive company, for the express purpose of letting my friends and even strangers know what things I like? That I let that company know almost everything I read, simply because they have demonstrated they can effectively suggest new things to me? After a few years of Big Brother Google and the CIA not hurting me with this power they have over me, the trust is there, bought by convenience. The same will be true of other technologies in the future.

What would they think 50 years ago if we told them we built a massive, globe-spanning interconnected web of computers, initially for noble and academic endeavours, but that eventually most people just use it to masturbate and look at kittens? "Hey, you know what would be really cool, and we already know it's proven-safe tech? We could use that brain scan instinct-intent pattern-signature thing to make the doors on the ship smarter. They'll go nuts for it. #StarfleetKickstarter"

Christopher, come on. Please admit that I am not being totally ridiculous.

By "all the onscreen evidence," you mean a single line from "Metamorphosis." That's rather an overstatement.

And there aren't any other explanations given, let alone more plausible ones, for the UT as seen on screen.

It's a myth (...) He used plenty of contractions (...)

All right, absolutely fair enough, and I'll withdraw my lazy analogy. But to your original point, "if (human do) why (can't computer do also)," as a writer perhaps you'll appreciate that "computer can't do what human do" has been a recurring theme in Star Trek since back in the day. Not entirely unreasonable.

It doesn't need to. We're trying to rationalize the vagaries of a TV show. It's never going to be perfect. It just has to be a plausible handwave.

Okay, but the name of the game here is "how close to rational can we make it, vagaries and all." I've read close to all your work, Chris! You're super into tying up these kinds of loose ends. I would think you'd be into this.

I'll go ahead and concede the point that in real life it's just a TV show, if we can just keep debating and discussing how it "would" or "could" work in the context that what we see on screen were a true representation of a world that was internally consistent.

And I go by Occam's Razor. It's far, far simpler to assume that a door has motion sensors than to assume it can freaking read minds.

Mind reading is different from intent reading. To analogize, supermarket door sensors can see motion, but don't know how close, how fast, how big, etc. Just binary "moving" or "not moving."

An EEG (or analogous) intent sensor can't tell "want egress because need to urinate" from "want egress because duty shift over" from "want egress because must kill ambassador." It's the "because" that represents the limitations of what you call the "mind-reading" sensor. It can tell you're focussed on leaving the room, and maybe queries your motor center to ensure you're not queuing up a "stop" or "slow" command in the next 1 or 2 meters worth of walking, but it's not going to rise to the level of "Turbolift's Log Stardate Now: Commander Smith is inappropriately contemplating Lieutenant Thompson's hindquarters."
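To sketch what I mean by a limited-resolution sensor: imagine the only thing the door ever receives is a single scalar confidence that the person wants egress. The signal name and threshold below are entirely invented; real EEG decoding is far messier, but the architecture illustrates the privacy point.

```python
# Toy model of a limited-resolution "intent" door sensor. The sensor's entire
# output is one number in [0, 1]; the threshold is invented for illustration.
OPEN_THRESHOLD = 0.8

def door_decision(egress_confidence):
    """Map a 0..1 'wants to leave' signal to a door action.

    Note what is absent: there is no channel for WHY the person wants to
    leave. The "because" never reaches the door.
    """
    return "open" if egress_confidence >= OPEN_THRESHOLD else "hold"

# Duty shift over, needs the head, plotting against an ambassador: all
# indistinguishable to the door, which is exactly the point.
print(door_decision(0.93))  # open
print(door_decision(0.41))  # hold
```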
 
It's just a question of which technology. You're taking one line from "Metamorphosis," a line never corroborated anywhere else in 700-plus hours of the franchise, and building a whole theory from it.

I'll work on this and get back to you. But I will point out, in addition to the unambiguously specific canonical explanation, it has also been used in a number of places in TrekLit, and was in general justified rather satisfactorily for me earlier in this thread, so I'm inclined toward this explanation.

And there is certainly evidence that the computers and equipment on the ship do not automatically understand their users' intent -- like in "Caretaker" where Tom couldn't get the replicator to give him the variety of soup he wanted,

Taste is probably harder to read than intent or language. And from a person's imagined taste, figure out chemically how to build food on the molecular level that has the right taste, texture, and nutritional value, and isn't poisonous? Not even close to possible, for a variety of reasons -- for example, the combination of humanoid tastebuds and flawed memory creates sufficient obfuscation to render the resolution of such neurotelemetry useless for the purpose of producing a replicator pattern for food.

or in "Elementary, Dear Data," where the creation of Moriarty happened because the computer acted on the letter of Geordi's command in contradiction to his intent.

Ooh, unanticipated side effect of that Bynar upgrade... What if the computer knew that Geordi was really excited by the idea of an opponent that could defeat Data? Under those old interrogative protocols, it might have said "Warning: Requested function would divert 19% of secondary processing capacity and create a power surge on decks 8, 9, and 10. Please confirm." But with its new intent detection circuits, it didn't have to, because it read how seriously Geordi and Kate wanted to stick it to their android friend.

That Data happens to be streets ahead of any possible Starfleet competition, and this wasn't considered in Holodeck or computer design, is a freak circumstance.

You've taken what started out as an acknowledgment of an inconsistency -- that if they had the kind of mind-reading tech "Metamorphosis" established, it should logically have other uses, but there's no sign that it does -- and somehow distorted it into the working assumption that it actually does have those other uses.

I and others have pointed out a number of comparable technological examples in this thread, and I would appreciate it if you would acknowledge them.

And, again, there are some truly hideous privacy implications that you're simply ignoring. It seems to me that if such a mentally invasive technology existed, it would be legally and ethically restricted to a limited set of uses -- e.g. lie detection with a consenting witness/suspect (as in "Wolf in the Fold") or translation to facilitate contact with a new species.

Or, for example, "at eighty-five kiloquads of resolution, it can tell if they want to leave the room, but not the why. We can give that a certified 'pass.'"

As they said in Jurassic Park, you mustn't get so caught up in the question of whether you can do something that you forget to ask whether you should -- or at least when you should.

When has that realistically ever stopped anyone from devving new tech? :/
 
An EEG (or analogous) intent sensor can't tell "want egress because need to urinate" from "want egress because duty shift over" from "want egress because must kill ambassador." It's the "because" that represents the limitations of what you call the "mind-reading" sensor. It can tell you're focussed on leaving the room, and maybe queries your motor center to ensure you're not queuing up a "stop" or "slow" command in the next 1 or 2 meters worth of walking, but it's not going to rise to the level of "Turbolift's Log Stardate Now: Commander Smith is inappropriately contemplating Lieutenant Thompson's hindquarters."

And I question the assumption that some kind of cyber-telepathy can do that more easily than just observing a person's body language, gaze direction, keywords in the conversation, and so forth. Heck, gaze could be a huge part of it -- whether someone's looking at the door as they move toward it is a major cue about whether or not they actually want to use it, and a very simple one at that.
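For what it's worth, the gaze-plus-trajectory idea is simple enough to sketch in a few lines. This is a hedged toy heuristic, not a claim about how any real (or fictional) door controller works; every threshold is invented.

```python
import math

# Toy "gaze plus trajectory" door cue: open only when someone is moving toward
# the door AND looking roughly at it. All thresholds are invented.
GAZE_TOLERANCE_DEG = 25.0   # how far off the door the gaze may wander
APPROACH_SPEED_MPS = 0.3    # minimum closing speed to count as approaching

def should_open(position, velocity, gaze_deg, door=(0.0, 0.0)):
    """position/velocity are 2-D (x, y) in metres; gaze_deg is facing angle."""
    to_door = (door[0] - position[0], door[1] - position[1])
    dist = math.hypot(*to_door) or 1e-9
    # Closing speed: component of velocity along the line toward the door.
    closing = (velocity[0] * to_door[0] + velocity[1] * to_door[1]) / dist
    # Angular difference between gaze direction and the bearing to the door.
    bearing = math.degrees(math.atan2(to_door[1], to_door[0]))
    gaze_off = abs((gaze_deg - bearing + 180.0) % 360.0 - 180.0)
    return closing > APPROACH_SPEED_MPS and gaze_off < GAZE_TOLERANCE_DEG

# Walking straight at the door while looking at it:
print(should_open((3.0, 0.0), (-1.0, 0.0), 180.0))  # True
# Same trajectory, but looking away (chatting with a crewmate):
print(should_open((3.0, 0.0), (-1.0, 0.0), 90.0))   # False
```

Two external observations (where you are headed, where you are looking) do most of the work, with no brain scanning anywhere in sight.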

And yes, discerning those things from analysis of visual or auditory cues is complicated, but it seems to me that discerning them from scans of brain activity would probably be even more complicated. Even if we accept the silly premise that there are some sort of "universal concepts" shared by all minds, that doesn't mean you could just see them sitting there on the surface waiting to be read. It would take a lot of analysis to extract that information from the EM signals of the brain.

And those signals are hard to read! They're incredibly faint, they're hard to pick up in a moving target -- it's incredibly picky, precise work to tease usable information out of that. Even if the difficulty of analyzing the signals is comparable, as you claim, the difficulty of picking up the signals in the first place is orders of magnitude greater. It's just so much simpler to rely on external cues.


Taste is probably harder to read than intent or language. And from a person's imagined taste, figure out chemically how to build food on the molecular level that has the right taste, texture, and nutritional value, and isn't poisonous? Not even close to possible, for a variety of reasons -- for example, the combination of humanoid tastebuds and flawed memory creates sufficient obfuscation to render the resolution of such neurotelemetry useless for the purpose of producing a replicator pattern for food.


Ooh, unanticipated side effect of that Bynar upgrade... What if the computer knew that Geordi was really excited by the idea of an opponent that could defeat Data? Under those old interrogative protocols, it might have said "Warning: Requested function would divert 19% of secondary processing capacity and create a power surge on decks 8, 9, and 10. Please confirm." But with its new intent detection circuits, it didn't have to, because it read how seriously Geordi and Kate wanted to stick it to their android friend.
That Data happens to be streets ahead of any possible Starfleet competition, and this wasn't considered in Holodeck or computer design, is a freak circumstance.

Both are instances of starting with your desired assumption and handwaving things to fit it. That's backward reasoning. I'm saying that, starting from first principles and following the evidence alone, you wouldn't be led to that conclusion above all other possibilities.
 
I've got it! :) !!!!!!

I've got the explanation you've been asking for, and it's squarely based on established Trek.

The Klingon Mind-Sifter, and Romulan Mindprobes. These things exist, they can be done with the 24th-century technological know-how which has percolated throughout the galactic-power-level governments of the Trek universe. But it's damaging. Brutal, invasive, destructive.

It's outlawed in the Federation... Unthinkable, untouchable, unusable. Like nuclear weapons: We know they exist and how to make them, but even we have been wise enough not to employ them as an offensive weapon since you-know-when.

The kind of mind-tech you're comparing my door-sensors to is the type of thing only the real bad guys, or Section 31, would ever dream of using.

Take another comparison from the present day... With microwaves, and strong EM and inductive charging, we could charge a cell phone from across the freaking room, except it would put cancer in anyone who stood near it. So despite the convenience it would offer, it is quite reasonably and rightly banned and regulated against, even to the degree of criminal penalties (e.g. if you violate FCC bans you can be prosecuted).

But we allow small-scale uses of technologies which, on a much larger scale of the identical technological principle, would be harmful.

Just like the Doctor can correct Miral's spine, but to turn Jules into Julian is an offense punishable by imprisonment.

You want a door that knows you're coming? You can have it. You want a door that emails your captain when it catches you sympathizing with the Maquis? Sorry, that's against the Federation Charter, and besides, when you dial that shit up to eleven, it swiss-cheeses people's memory.

Thoughts?
 
It would also be logistically ridiculous as a construction; even if we were to assume that this technology did exist, what possible reasonable gain would there be from designing a door that way? Why would it have been designed in the first place?

Better mousetrap. Come on, isn't using matter replication to make a grilled cheese sandwich the same kind of overkill?

If it was a replicator that only made grilled cheese sandwiches, then yes. But if it's a replicator that makes various kinds of food, one example of which is a grilled cheese sandwich, then no.

If the existence of a technology isn't blatantly obvious, then saying that it's a plausible construction in the setting isn't enough justification; it needs to be plausible that someone would decide to implement it in the first place rather than other alternatives.

A month or two ago, Amazon Prime started marketing a small stick-on button that you can place anywhere in your house -- laundry, fridge, whatever, and its sole purpose is that when you press it, a small computer inside the button housing logs onto your Amazon Prime account and has ONE PRE-PROGRAMMED ITEM shipped automatically to you. It's a "replace this thing I just ran out of" button.

Why do it? Because a door that only opens when you want it to is a superior product with better UX than the doors which came before it. To say "let's never use new technology to make this experience better" is to say "you know what, let's close the patent office, inventions are probably done."
Mind-reading doors aren't better UX; they're the same UX with a different interface. A button that lets you reorder something you commonly run out of makes sense. A door sensor that works based on you thinking about going through it, instead of one that works based on you starting to go through it, doesn't, because nothing is actually gained; there is no reasonable use case that would apply in the former scenario but not in the latter. I would even go so far as to say that literally no one has ever regretted a motion-sensing door opening when they approached it, to the point of wishing it had known they didn't really want it to open. :p

My point is that this isn't an advancement; it's forcing technology in for its own sake, to solve a problem that doesn't actually exist. It's the sort of thing ThinkGeek would put up on April Fools' next to the bacon dish detergent and the Bitcoin mining pickaxe. And I'm speaking as someone who's also a developer. Come on, as a developer yourself you should know that good software depends on not overdesigning as much as it does on providing good features. A feature that your users neither want nor need is just time wasted that could've been devoted to a feature that solves a problem they actually do have. An intent-reading door isn't progress; it's feature creep.
 
(...) from the EM signals of the brain.

And those signals are hard to read! They're incredibly faint, they're hard to pick up in a moving target -- it's incredibly picky, precise work to tease usable information out of that. Even if the difficulty of analyzing the signals is comparable, as you claim, the difficulty of picking up the signals in the first place is orders of magnitude greater.

Kind of like detecting a quantum phase signature? Or the average spin rate of local neutrinos? Or detecting the temporal focal-point through the lensing effect of a chroniton particle field? Or the ability to record, on a handheld device, thousands of years worth of imagery through an alien time portal in a matter of seconds? Or distinguishing in a brain the strange increase in synaptic potentials following an unfortunate exposure to electroplasma? Or the ability to non-invasively detect multiple distinct brainwave patterns simultaneously occupying the same brain? Or discern a neuro-electric field permeating the ship, no brain in sight? Or to read a barcode imprinted on a single base pair of a single strand of DNA, implanted by some evil alien scientists? Or to bounce a tachyon beam off a quantum singularity to generate a micro-wormhole in order to open up a subspace commlink across thirty-five thousand light-years?

Or the ability to reach out at 40,000 km and disassemble a living being at the quantum level, in order to reconstruct them qubit by qubit including their entire neural pattern in an entirely new location, without specialized machinery being necessarily present at either location?

Kind of like that?

(...)instances of starting with your desired assumption and handwaving things to fit it. That's backward reasoning. I'm saying that, starting from first principles and following the evidence alone, it wouldn't lead to that conclusion above all other possibilities.

I can't shake the deeply-held notion that you are so in love with "motion sensors" that you're forgetting it's Star Trek.
 
(...) from the EM signals of the brain.

And those signals are hard to read! They're incredibly faint, they're hard to pick up in a moving target -- it's incredibly picky, precise work to tease usable information out of that. Even if the difficulty of analyzing the signals is comparable, as you claim, the difficulty of picking up the signals in the first place is orders of magnitude greater.

Kind of like detecting a quantum phase signature? Or the average spin rate of local neutrinos? Or detecting the temporal focal-point through the lensing effect of a local chroniton particle field? Or the ability to record, on a handheld device, thousands of years worth of imagery through an alien time portal in a matter of seconds? Or distinguishing in a brain the strange increase in synaptic potentials following an unfortunate exposure to electroplasma? Or the ability to non-invasively detect multiple distinct brainwave patterns simultaneously occupying the same brain? Or a neuro-electric field permeating the ship, no brain in sight? Or to read a barcode imprinted on a single base pair of a single strand of DNA, implanted by some evil alien scientists?

Or the ability to reach out at 40,000 km and disassemble a living being at the quantum level, in order to reconstruct them qubit by qubit including their entire neural pattern in an entirely new location, without specialized machinery being necessarily present at either location?

Kind of like that?

You are literally mischaracterizing his argument with that selective quoting; he was arguing that it was hard in comparison to reading body language, not that it was beyond the capabilities of Trek technology. As in, it's the equivalent of using bubblesort instead of quicksort on a 1500-core processor just because you have so much processing power that being efficient doesn't matter.
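For what it's worth, the analogy holds up in code: both sorts produce identical results, and on small inputs you'd never notice the difference — the waste only shows at scale. A quick sketch (function names are mine, just for illustration):

```python
import random

def bubble_sort(items):
    """O(n^2) comparison sort: repeatedly swap adjacent out-of-order pairs."""
    items = list(items)
    n = len(items)
    for i in range(n):
        for j in range(n - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
    return items

def quick_sort(items):
    """O(n log n) on average: partition around a pivot, recurse on each side."""
    if len(items) <= 1:
        return list(items)
    pivot = items[len(items) // 2]
    return (quick_sort([x for x in items if x < pivot])
            + [x for x in items if x == pivot]
            + quick_sort([x for x in items if x > pivot]))

# Same answer either way; the difference is purely how much work was burned.
data = [random.randrange(1000) for _ in range(200)]
assert bubble_sort(data) == quick_sort(data) == sorted(data)
```

That's the point of the analogy: raw capability doesn't make an inefficient approach the sensible one.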
 
The kind of mind-tech you're comparing my door-sensors to is the type of thing only the real bad guys, or Section 31, would ever dream of using.

Exactly my point. Something like that would have its use constrained to instances where it's ethically acceptable and nothing else can do the job.


Take another comparison from the present day... With microwaves, strong EM fields, and inductive charging, we could charge a cell phone from across the freaking room, except it would put cancer in anyone who stood near it.

No, it wouldn't. Microwaves aren't ionizing radiation. It's physically impossible for them to cause the kind of molecular-level genetic damage that causes cancer. The waves are simply too big and too low in energy to penetrate. It's tantamount to the idea of shooting someone in the heart with a weather balloon rather than a bullet.


You want a door that knows you're coming? You can have it.

Yes, and we already have it, without electronic brain-scanning. I went through automatic doors at the supermarket yesterday. I didn't need my brain scanned to do it.


A door sensor that works based on you thinking about going through it, instead of one that works based on you starting to go through it, doesn't, because nothing is actually gained; there is no reasonable use case that would apply in the former scenario but not in the latter. I would even go so far as to say that literally no one has ever regretted a motion-sensing door opening when they approached it, to the point of wishing it had known they didn't really want it to open. :p

Also, how many times do people think about using a door without actually using it? Imagine it's near the end of a long shift and everyone on the bridge is eagerly looking forward to the shift change, thinking about how much they want to go through those doors. The doors would be constantly sliding open just from reading all those intentions.

It's obvious that the primary mechanism used by the doors is a simple motion sensor. There's no rational reason why it wouldn't be. We're talking about a tiny smattering of unusual situations where that isn't sufficient to explain what we see. At most, there'd be some kind of supplement in addition to that.
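In software terms, that's just a primary sensor with an optional fallback. A sketch of the idea, with every name, unit, and threshold invented for illustration:

```python
def should_open(approach_speed, distance, intent_signal=None):
    """Primary check is plain motion sensing: someone moving toward the
    door within range. The optional intent_signal stands in for the
    hypothetical 'supplement' used only in the rare cases that motion
    alone can't resolve. All thresholds are made up for illustration."""
    moving_toward_door = approach_speed > 0.3 and distance < 2.0
    if moving_toward_door:
        return True
    # Supplement: consulted only when ordinary motion sensing says no.
    return intent_signal is not None and intent_signal > 0.9

print(should_open(approach_speed=1.2, distance=1.0))   # walking up: True
print(should_open(approach_speed=0.0, distance=1.0))   # standing still: False
```

The point of the structure is that the cheap, obvious check handles nearly every case, and the exotic one never fires unless the simple one has already failed.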



Kind of like detecting a quantum phase signature? Or the average spin rate of local neutrinos? (etc.)

Stop shifting the goalposts. You know perfectly well that this is not a debate about whether the technology could exist, but about whether it's reasonable to use for something as trivial as this, something where a far simpler and more obvious solution already exists.


I can't shake the deeply-held notion that you are so in love with "motion sensors" that you're forgetting it's Star Trek.

Now, that's uncalled for. Accusing me of personal bias just because I don't share your opinion? That's obnoxious and petty. If you're going to start insulting me, then I have nothing more to say to you. This has gone long past the point of being worth talking about anyway.
 
The doors on Star Trek know when to open and when not to, based on the user's (the character's) intent.

Not most of the time, but always.

You want to say "it's a TV show," whereas I want to come up with an in-universe explanation which covers all instances.

That's the true nature of the impasse we've reached.
 
Honestly, I'd personally say that the majority of the time they don't, no — that it's a small enough minority of occurrences to be chalked up to "not the creators' intent, just them making mistakes." I'd go so far as to say that out of the hundreds if not thousands of times doors opened on Trek, there are maybe one or two dozen times at best when their behavior couldn't be explained by motion sensors.

Are there any examples of it happening in TAS? That would help show whether the doors are supposed to be taken that way or not, since it would have had to be an explicit animation choice instead of just happenstance.
 
The doors on Star Trek know when to open and when not to, based on the user's (the character's) intent.

Not most of the time, but always.

You want to say "it's a TV show," whereas I want to come up with an in-universe explanation which covers all instances.

That's the true nature of the impasse we've reached.

No, it's not merely intent. It's the intent, as telegraphed by actions. There has never ever been an unlocked automatic door that opened in any series without someone having first made a movement that could reasonably be interpreted as intending to use it. If someone has a specific example to the contrary, would they please present it?

The only situation which is perhaps a little wonky is that there have been occasions when characters changed their minds after moving towards a door, but it didn't open in anticipation of the initial movement when perhaps it should have. I'll simply chalk that up to implausible execution.
 