Like I said, the overwhelming majority of the time, all you need is a simple motion sensor. The more detailed stuff is just for those few anomalous instances where that isn't enough of an explanation.
So we're both about Occam's Razor; we just see the probable tech tree differently. For some background, I work as a developer, and a lot of that field involves keeping up with new sensors and innovative input solutions, so I'm more predisposed toward accepting the technology as possible.
But we also already have sophisticated computer gait analysis that's precise enough to identify people by the way they move. And we have video games that can be controlled by people's movements.
True. However,
1) They don't work as effectively as you might think,
2) They require extensive calibration,
3) We're talking one species, one culture,
4) They wouldn't account for "magic door" situations such as the one I alluded to earlier from "Emissary."
What I'm talking about is even less of an extrapolation from existing technology than what you're talking about. It's something we could probably do today, or five years from now. What you're talking about is far more pie-in-the-sky. It should be self-evident that just observing people's body movement from the outside is a simpler technology than reading their minds.
And projecting a 2D image on a white wall is a simpler technology than the Holodecks we see in Trek. That doesn't mean the Holodeck's not better.
How ridiculous would it seem to someone fifty years ago for me to explain that I wear a tracking device everywhere I go, which sends my location to a massive company, for the express purpose of letting my friends and even strangers know what things I like? That I let that company know almost everything I read, simply because they have demonstrated they can effectively suggest new things to me? After a few years of Big Brother Google and the CIA not hurting me with this power they have over me, the trust is there, bought by convenience. The same will be true of other technologies in the future.
What would they think 50 years ago if we told them we built a massive, globe-spanning, interconnected web of computers, initially for noble and academic endeavours, but that eventually most people just use it to masturbate and look at kittens? "Hey, you know what would be really cool, and we already know it's proven-safe tech? We could use that brain scan instinct-intent pattern-signature thing to make the doors on the ship smarter. They'll go nuts for it. #StarfleetKickstarter"
Christopher, come on. Please admit that I am not being totally ridiculous.
By "all the onscreen evidence," you mean a single line from "Metamorphosis." That's rather an overstatement.
And there aren't any other explanations given, let alone more plausible ones, for the UT as seen on screen.
It's a myth (...) He used plenty of contractions (...)
Alright, absolutely fair enough, and I'll withdraw my lazy analogy. But to your original point, "if (human do) why (can't computer do also)," as a writer perhaps you'll appreciate that "computer can't do what human do" has been a recurring theme in Star Trek since back in the day. Not entirely unreasonable.
It doesn't need to. We're trying to rationalize the vagaries of a TV show. It's never going to be perfect. It just has to be a plausible handwave.
Okay, but the name of the game here is "how close to rational can we make it, vagaries and all." I've read close to all your work, Chris! You're super into tying up these kinds of loose ends. I would think you'd be into this.
I'll go ahead and concede the point that in real life it's just a TV show, if we can just keep debating and discussing how it "would" or "could" work, treating what we see on screen as a true representation of an internally consistent world.
And I go by Occam's Razor. It's far, far simpler to assume that a door has motion sensors than to assume it can freaking read minds.
Mind reading is different than intent reading. To analogize, supermarket door sensors can see motion, but don't know how close, how fast, how big, etc. Just binary "moving" or "not moving."
An EEG (or analogous) intent sensor can't tell "want egress because need to urinate" from "want egress because duty shift over" from "want egress because must kill ambassador." It's the "because" that represents the limitations of what you call the "mind-reading" sensor. It can tell you're focused on leaving the room, and maybe query your motor center to make sure you're not queuing up a "stop" or "slow" command in the next meter or two of walking, but it's not going to rise to the level of "Turbolift's Log, Stardate Now: Commander Smith is inappropriately contemplating Lieutenant Thompson's hindquarters."
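Just to make the shape of what I'm imagining concrete, here's a minimal Python sketch. It's purely hypothetical: approach_intent, motor_veto, and distance_m are invented stand-ins for whatever an EEG-style sensor could realistically extract. The point is that the door's logic only ever sees "wants out or not," never the "because":

# Hypothetical sketch of an "intent-aware" door controller.
# The sensor fields below are invented stand-ins, not a claim about
# what a real EEG-style sensor actually outputs.

from dataclasses import dataclass

@dataclass
class IntentReading:
    approach_intent: float   # 0.0-1.0 confidence that the person wants egress
    motor_veto: bool         # True if a "stop"/"slow" motor command is queued up
    distance_m: float        # distance from the door, in meters

def should_open(reading: IntentReading, threshold: float = 0.8) -> bool:
    """Open only when egress intent is strong, no stop/slow command is
    queued, and the person is within a couple of meters of the door.
    Note what's missing: there is no 'reason' field anywhere."""
    return (
        reading.approach_intent >= threshold
        and not reading.motor_veto
        and reading.distance_m <= 2.0
    )

# A crewman leaving at end of shift and one storming out to kill an
# ambassador produce indistinguishable readings:
print(should_open(IntentReading(0.93, False, 1.5)))  # True
print(should_open(IntentReading(0.93, False, 1.5)))  # True

# Someone pacing near the door but intending to stay doesn't trip it:
print(should_open(IntentReading(0.35, True, 1.2)))   # False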