• Welcome! The TrekBBS is the number one place to chat about Star Trek with like-minded fans.
    If you are not already a member then please register an account and join in the discussion!

OK, how does the Universal Translator work?

Here is a reference to an actual constructed language used in SETI research called Lincos, which dates from 1960.

http://en.wikipedia.org/wiki/Lincos_(artificial_language)

So the idea of prefacing messages to extraterrestrials with mathematical dictionaries dates back at least that far.

Radio astronomy and messages broadcast into space as a part of the SETI program, or otherwise carried aboard space probes, were widely popularized in media in the 1970s, most famously the Arecibo message and the Pioneer plaque. It's hardly a leap to suppose that "linguacode" was intended to suggest something along those lines, even if it was never described on-screen in any more detail than the essentially throwaway line delivered decades later in "In a Mirror, Darkly, Part II":

ARCHER: (reading) Hoshi Sato. Comm. Officer on Starfleet's first warp five ship. In her late thirties, she created the linguacode translation matrix.
 
^The TMP novelization says of linguacode: "Its keys were universal constants like pi, simple molecular relationships, the speed of light". So yes, the basic idea of using scientific constants as a starting point for translation was there, and of course was based on what scientists and earlier SF writers had posited about the way to establish a communication baseline between species with no history or linguistics in common. I was probably drawing partly on the linguacode idea, though also on things like the Arecibo message, the Pioneer and Voyager plaques, Carl Sagan's novel Contact, and so forth. But the idea of translation being a lengthy exchange between ships' computers, starting from first principles and learning each other's languages at such speed that it seemed near-instantaneous to the crew, was mine. I used a version of the idea in my first published story, "Aggravated Vehicular Genocide," nearly five years before I started writing Trek fiction. (I'm sure other SF writers have used similar ideas before me, of course, but not in Trek as far as I know.)
 
^The TMP novelization says of linguacode: "Its keys were universal constants like pi, simple molecular relationships, the speed of light".
That sounds like another interesting piece of apocrypha from the novelization. I've never read the novelization, but hearing about this sort of thing makes me want to, despite some of the wonkier other things I've heard about it on the board.

But the idea of translation being a lengthy exchange between ships' computers, starting from first principles and learning each other's languages at such speed that it seemed near-instantaneous to the crew, was mine. I used a version of the idea in my first published story, "Aggravated Vehicular Genocide," nearly five years before I started writing Trek fiction. (I'm sure other SF writers have used similar ideas before me, of course, but not in Trek as far as I know.)

That idea reminds me of the extended negotiation between the Colossus and Guardian supercomputers in the film Colossus: The Forbin Project (1970), used for "establishing a common basis for communication," starting with the multiplication table. The novel Colossus dates from 1966, but I haven't read it, so I can't comment on what it does.

By the way, the modern computer science term for an initial automated phase of negotiation on a communications channel, in which the parties select the parameters for the normal communication that follows, is handshaking. The kind of process you describe would arguably be a highly elaborate form of it.
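For anyone curious what handshaking looks like in miniature: a toy sketch (all names and values hypothetical, purely illustrative) in which two endpoints advertise their capabilities and settle on mutually supported parameters before "real" communication begins, much as a modem or TLS connection does, and, in vastly elaborated form, as two ships' computers might bootstrap a shared language.

```python
# Toy handshake: each side advertises options in preference order;
# the initiator picks, for each parameter, its most-preferred option
# that the other side also supports.

def handshake(ours: dict, theirs: dict) -> dict:
    """Intersect each side's advertised capabilities, preferring ours."""
    agreed = {}
    for key, our_options in ours.items():
        their_options = theirs.get(key, [])
        common = [opt for opt in our_options if opt in their_options]
        if not common:
            raise ValueError(f"no common option for {key!r}")
        agreed[key] = common[0]  # first mutually supported choice wins
    return agreed

# Hypothetical capability sets for two ships' comm systems:
ship_a = {"encoding": ["linguacode-2", "linguacode-1"], "rate": [9600, 1200]}
ship_b = {"encoding": ["linguacode-1"], "rate": [1200, 9600]}

print(handshake(ship_a, ship_b))  # {'encoding': 'linguacode-1', 'rate': 9600}
```

Real protocols add versioning, authentication, and fallback rules on top, but the core idea is exactly this: agree on a common basis before exchanging payload.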
 
Couple things.

Everyone saying that implants couldn't account for situations with multiple languages simultaneously is forgetting there's a very plausible "middle man," the unsung hero of the universal translator: Federation Standard.

If everyone's output is standardized into one language, the problems are easier to solve. It also conveniently solves the problem of why Federation Standard "is" English. In fact Standard could be an entirely artificial language... Not a mishmash of other languages like Esperanto is, but specifically optimized for compression and syntax... making it easier and faster for other translators to work with.
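One toy illustration of why an artificial language could be "easier and faster for translators to work with" (everything here is hypothetical, just a sketch of the general idea): a language with a fixed, unambiguous prefix syntax, where every operator declares how many arguments follow, can be parsed in a single left-to-right pass with no backtracking, unlike the ambiguous syntax of natural languages.

```python
# Hypothetical mini-grammar: each operator has a fixed arity, so the
# parser never needs lookahead or parentheses to find clause boundaries.
ARITY = {"GREET": 1, "AND": 2}

def parse_prefix(tokens):
    """Consume one expression from the token stream; return (tree, rest)."""
    head, *rest = tokens
    args = []
    for _ in range(ARITY.get(head, 0)):  # unknown tokens are arity-0 leaves
        subtree, rest = parse_prefix(rest)
        args.append(subtree)
    return (head, args), rest

tree, leftover = parse_prefix(["AND", "GREET", "kirk", "GREET", "spock"])
print(tree)
```

A natural language needs statistical disambiguation at nearly every step; a designed Standard could simply rule the ambiguity out.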

Thus, the screen and prose versions of Trek would be a "real-time translation" of Fed Standard into contemporary English from the year of publication.

This still does not preclude Arturis from hearing and understanding Fed Standard in Hope and Fear... He even remarks how simple it was to learn. "A natural universal translator," they called him... But if Standard is not the actual English we hear, but rather an artificial language designed to be easy for UTs to decode (even the parallel-developed UTs of other cultures), then this makes perfect sense.

Compression and optimization would also partially, but not entirely, solve the word order problem Christopher pointed out. Round that out with the idea that brainwave reading might be involved, combined with the possibility of context-awareness in the UT tech, and maybe just a dash of the 24th century equivalent of my smartphone's predictive text input, and there's enough there to bridge the gap for me without straining suspension of disbelief.

I'm sold on implants, for sure. But I think there's more going on. People have gestured at their communicators when speaking of the UT, as if it's "in there." I prefer the idea that a variety of sensors and devices including implants and the ship's computer work cooperatively, doing the best they can with whatever compatible devices are to hand, and I believe this explains some of the apparent inconsistencies we've seen.

Rather than UT being a single standalone device, UT could be a dynamic system of devices working in concert with the specially designed language of Federation Standard. The variety of components operate cooperatively, but perhaps not always in the same way (if for example the ship is out of range, or the communicator is damaged).

The Japanese soldier from "The 37's" needs to be explained either by the abductor aliens providing him a UT, or by a UT built into the cryo chamber room with the Doctor also giving him one off screen.

The earlier suggestion of networked computing power is an excellent explanation for situations involving being stranded somewhere with aliens who can't be understood. For example, in Gravity, the fact that the Doctor is actually heard by Paris and Tuvok speaking Noss's language lends credence to this... their UT implants alone had limited computing power compared with the optimized UT algorithms compiled into the Doctor's holographic matrix. This also correlates with statements made about holographic matrices in episodes such as Ship in a Bottle, Life Line, and Inside Man. I have an easy time believing the Doc's matrix wouldn't be directly compatible with their implants... because the EMH wasn't designed for away missions, because holomatrix, because 29th century mobile emitter, take your pick.

Although it's Voyager, so screw logic anyway.
 
Thus, the screen and prose versions of Trek would be a "real-time translation" of Fed Standard into contemporary English from the year of publication.

Except it's been made clear multiple times that English speakers from the 20th/21st century recognize their language as English: Captain Christopher in "Tomorrow is Yesterday," Khan in "Space Seed," Cochrane in "Metamorphosis." Not to mention numerous references to the language they speak being English. The term "Federation Standard" is entirely a creation of tie-in literature, never actually used onscreen.
 
Except it's been made clear multiple times that English speakers from the 20th/21st century recognize their language as English: Captain Christopher in "Tomorrow is Yesterday," Khan in "Space Seed," Cochrane in "Metamorphosis." Not to mention numerous references to the language they speak being English. The term "Federation Standard" is entirely a creation of tie-in literature, never actually used onscreen.

Fuck. You're right.

Okay, okay, we can work with that... Maybe it defaults to native language mode if everyone present (or the vast majority of participants) speak the same language, then.

In particular, if some present aren't equipped with functional UT components, perhaps it switches the vocalization component to match the language of whoever is being spoken to. You'd sacrifice some of the advantages, in exchange for more natural speech and backwards compatibility in situations with time travel or jerks out of cryostasis.
 
The whole thing works on magic, really. They never seem to have any problems reading alien ship consoles, and the UT should have broken down constantly in both Voyager and DS9, since they're always meeting new cultures (especially Voyager). Really, half the episodes should involve the UT not working and nobody being able to communicate.
 
Hmm. Even if we acknowledge that the UT is basically tech that serves the needs of plot convenience and agree to suspend disbelief, there still needs to be some kind of in-story rationalization for it, right?

The closest TOS ever came to an explanation was in "Metamorphosis," actually quoted in the script search CLB linked earlier, when Kirk and Spock were able to use a handheld UT to communicate with Cochrane's "Companion":
KIRK: "There are certain universal ideas and concepts common to all intelligent life. This device instantaneously compares the frequency of brainwave patterns, selects those ideas and concepts it recognizes, and then provides the necessary grammar."
SPOCK: "Then it translates its findings into English." ...
KIRK: "With a voice or the approximation of whatever the creature is on the sending end. Not one hundred percent efficient, but nothing ever is."
The complicating factor here, of course, is that they were using a flashlight-size handheld UT, and other stories obviously (indeed routinely) relied on something much more discreet and sophisticated. Rationalization: the handheld version (or the equivalent inside a ship's computer) is only necessary to process new languages from completely unfamiliar life forms, like the Companion.

Still, the basic concept confirms what Timo was speculating about. Given what the UT is observed to do, the only way to explain it is with some kind of telepathic function. We know other Treknology does exist that involves varying kinds of telepathic machine interfaces, even if the in-story handling has been inconsistent (as debated upthread). And the explanation Kirk and Spock give Cochrane validates at least the basics of those speculations.

So let's suppose you don't need the handheld model if you're just communicating with some kind of known alien language (i.e., anything a UFP computer has encountered before), maybe even that there's some kind of subdermal implant that's useful for those purposes for landing parties. Let's also suppose that it works as Timo speculated by modifying both perception (hearing) and expression (speech) as necessary, thus making it possible to communicate even with natives not equipped with UTs. And let's further suppose, as chrinFinity postulated, that when it's within range it networks with the communicators, the ship's (or shuttle's) computer, and possibly other devices, thereby expanding its data access/computing ability/overall power.

Starfleet officers may need to be trained to think carefully about what they're going to say before they speak in order to get it to operate optimally. But that's a good idea anyway, right?

What problems remain? Well, basically that this seems way beyond the tech level of most other known Treknology, and also that it raises squicky issues of telepathic ethics that various episodes have touched on in other ways. But while Deranged Nasat may have been joking with his post about Q-level tech, there's actually something helpful in there. After all, not everything in the Trekverse has to have been invented in the next 300 years. There have been countless high-tech alien races in both the recent and the distant past. It stands to reason that at some point one of them invented a UT that uses a targeted telepathic link to the language and speech centers of the brain, with safeguards to avoid any deeper or more personal areas of thought, and that such an outstandingly useful technology would be passed along to (or pirated by) other civilizations over time as they were encountered (or emerged). IOW, once that genie's out of the bottle, it's not going back in. This kind of UT tech could have been around for millennia for all we know.

So presto! Working collectively, we've provided all the pieces of a solution to the OP's question! The only thing this hypothesis doesn't really explain is the rudimentary state of UT tech in Enterprise, as e.g. experienced by Hoshi. But hey, it's ENT. If I need to throw details from that series under the bus in order to maintain continuity for the rest of Trek, I have no problem with that.
 
It still doesn't explain reading novel languages or understanding recordings of novel languages in real-time with no living speakers around either. :p
 
True, it doesn't explain that. I'm hard-pressed to think of any examples of that happening in episodes, but I'm certainly open to having my memory refreshed.
 
I can't immediately think of an example of the former (though I imagine there's likely something in ENT, DS9, or VOY along those lines), but for the latter, there's the Promelian captain's log in "Booby Trap" and the holographic DNA message in "The Chase" offhand.

Also, I'm still skeptical that it could explain live real-time translation of an alien language with variant word order, converting an OSV language into an SVO language like English. Not without, as mentioned previously, forcing Yoda-like grammar on it, since the software wouldn't actually know, for example, what the verb of a sentence was until reaching the end of it.
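The word-order point can be made concrete with a toy sketch (the grammar and role tags are hypothetical, just for illustration): to render an OSV clause in SVO order, the translator must buffer the entire clause before emitting anything, because the verb, which SVO needs in second position, arrives last in the OSV input. Truly simultaneous word-by-word output is structurally impossible here.

```python
# Toy reordering: the input clause is a list of (phrase, role) pairs,
# where the role is O (object), S (subject), or V (verb). Notice the
# function cannot produce its first output word until it has seen the
# whole clause -- the subject may not arrive until the second phrase,
# and the verb not until the last.

def osv_to_svo(tagged_clause):
    """Rearrange one fully buffered OSV clause into SVO order."""
    by_role = {role: phrase for phrase, role in tagged_clause}
    return [by_role["S"], by_role["V"], by_role["O"]]

# "the ship | the captain | commands" (OSV)
clause = [("the ship", "O"), ("the captain", "S"), ("commands", "V")]
print(" ".join(osv_to_svo(clause)))  # the captain commands the ship
```

Real machine translation systems face exactly this latency: simultaneous interpretation of verb-final languages requires either waiting for the clause to end or gambling on a predicted verb.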
 
I disagree that the UT was meant to be "telepathic." Kirk didn't say it read thoughts, he said it read brain waves, like an EEG. I'd say the intent was that it's the equivalent of something like functional MRI today. We actually are getting close to being able to "read minds" to an extent by measuring their neuroelectrical activity on a fine level. Brain scans have been able to do things like reconstruct a rough image of what someone is looking at based on the activity of their visual cortex, for example. And I believe that certain consistent activity patterns have been associated with certain categories of thought activity, sensory perception, etc. So we're talking about a more powerful form of that kind of sensing technology that can read far more precisely and from a distance. Sure, something that potent would effectively be "mind-reading," but it's not literally telepathic, because in Trek-universe terms, telepathy is a distinct biological and physical phenomenon involving eldritch phenomena like "psionic energy." This is just a super-futuristic EEG, to put it in '60s terms. If it shares a category with anything else in Trek, that category includes things like the lie-detector chair seen in "Wolf in the Fold," the Klingon mind-sifter, and the technology that let Voyager's EMH scan Denara Pel's consciousness into a holographic body. It's neurotelemetry, not telepathy.

However, its existence still raises quite a few awkward questions. Why aren't there more neurotelemetric technologies in use? Why don't we have equipment that responds to the user's brain state? (Although maybe we do, given how versatile communicators and tricorders are despite having only a few physical controls.) In particular, if neurotelemetry is sophisticated enough to instantly convert an alien energy-cloud creature's impulses into English, why the hell is Christopher Pike limited to one beep for "yes" and two for "no"?
 
I've often wondered about the desktop monitors we see in a lot of Starfleet officers' quarters in the 24th century. They seem to have only three or four unmarked buttons, but we see people using them like a keyboard.
 
However, its existence still raises quite a few awkward questions. Why aren't there more neurotelemetric technologies in use?

We've seen some for sure. The Altonian brain teaser from A Man Alone, and the Ktarian "game" also.

I can't remember if it's come up in this thread yet, but in Transfigurations, Crusher used a clip-on device to allow Geordi's brain to do remote control on John Doe's autonomic functions.

There was also the "Interface Probe" from Interface, which Geordi was particularly well adapted to, but which was also stated to work properly with other users such as Riker.

Why don't we have equipment that responds to the user's brain state? (Although maybe we do, given how versatile communicators and tricorders are despite having only a few physical controls.)

Like you, I feel that there are, and it's just transparent to the viewer.

In addition to the examples you gave with communications systems and tricorders, the doors seem to know when to open and when not to based on user's intent rather than just responding to motion detection.

The other point above about the four unlabelled buttons on desktop viewers is a good one... although to be fair, most smartphones today have a similarly small array of unlabelled mechanical buttons, and we somehow get by.

In particular, if neurotelemetry is sophisticated enough to instantly convert an alien energy-cloud creature's impulses into English, why the hell is Christopher Pike limited to one beep for "yes" and two for "no"?

"Delta-particle radiation."
 
In addition to the examples you gave with communications systems and tricorders, the doors seem to know when to open and when not to based on user's intent rather than just responding to motion detection.

That could be done with intelligent gait and body language analysis, based purely on visual input. So there's no need to interpret it as reading brain states.
 
That could be done with intelligent gait and body language analysis, based purely on visual input. So there's no need to interpret it as reading brain states.

Of so many different species and cultures though? Including non-humanoid? It seems far-fetched to assume the computer can perfectly interpret every single person's body language in every conceivable situation that they're near a door. Also, it's ableist to forget that not everyone is physically capable of presenting baseline body language for their culture. Pike, Jameson, Pazlar.

Given the absolutely perfect accuracy depicted by the automatic doors in Star Trek (because in real life, the doors had read the script), then if you accept the posited explanation in Metamorphosis about universally-common brain concepts (or whatever), it might actually be more plausible to believe there's a universally recognizable "I want to exit this space" neuro-electric thought wave.

The only hint I can think of is from In Theory:

Data said:
The door sensor is programmed to recognize only humanoid forms for entry and egress. Spot could not have triggered the mechanism.

Thoughts:

1. This is racist against non-humanoids, but it's Data's quarters so I guess he can do what he wants per Starfleet regulations.

2. "Door sensor" could mean a lot of different things. The sentence was deliberately constructed to avoid the use of terms like "motion" or "movement," however.

3. "Entry" and "egress" are conspicuously specific concepts, distinct from "open" and "close."

My conclusion is that Data's statement can support either of our conjectural models for the functionality of the doors.
 
That could be done with intelligent gait and body language analysis, based purely on visual input. So there's no need to interpret it as reading brain states.

Of so many different species and cultures though? Including non-humanoid? It seems far-fetched to assume the computer can perfectly interpret every single person's body language in every conceivable situation that they're near a door. Also, it's ableist to forget that not everyone is physically capable of presenting baseline body language for their culture. Pike, Jameson, Pazlar.

Given the absolutely perfect accuracy depicted by the automatic doors in Star Trek (because in real life, the doors had read the script), then if you accept the posited explanation in Metamorphosis about universally-common brain concepts (or whatever), it might actually be more plausible to believe there's a universally recognizable "I want to exit this space" neuro-electric thought wave.

That's tautological, though. Of course it would be more plausible to believe there's a universally recognizable thought wave for a concept if you accept a posited explanation that there's universally recognizable thought waves. And it would be more plausible to believe there's universally recognizable body language if you accept a posited explanation that there's universally recognizable body language.

Besides, how is it not ableist to assume that everyone presents the same neurological responses to given desires? Anyone not neurotypical almost certainly would not.
 
That could be done with intelligent gait and body language analysis, based purely on visual input. So there's no need to interpret it as reading brain states.

Of so many different species and cultures though? Including non-humanoid? It seems far-fetched to assume the computer can perfectly interpret every single person's body language in every conceivable situation that they're near a door. Also, it's ableist to forget that not everyone is physically capable of presenting baseline body language for their culture. Pike, Jameson, Pazlar.

Who said anything about baselines? Most of the people using those doors every day are members of the crew, whose individual biometric data would be programmed in. All the people you cite are Starfleet officers, so presumably their data would be on file.

As for visitors, keep in mind that 95 percent of the time, it's perfectly obvious whether someone plans to go through a door or not just from whether they're moving toward it. It's only in rare cases that a door would need any additional information to know the difference between an approach with intent to pass through and an approach without it, e.g. in "The Naked Time" when Spock falls back against the just-closed briefing room door (though I always figured that was just because he was so close to the door that he was under the arc of its sensor -- something that can happen with supermarket doors, which is why it's not a good idea to dawdle while passing through them). I challenge you to find a specific instance where a non-Starfleet visitor interacts with a door in a way that would require more than simple motion sensors to explain.

Anyway, the ultimate explanation is that it's just a show and the behavior of the set is subordinate to the needs of the drama. If a door sliding open would distract from the performances, we don't see it slide open, even if it normally would. Roddenberry's preferred view was that the show was a sometimes-inaccurate after-the-fact dramatization of the "real" adventures of Kirk & co., and many mysteries and inconsistencies vanish if you view it on those terms. It certainly makes more sense than pretending it's some kind of found-footage documentary, complete with wobbly sets and backlot planets and recycled props and actors and innovative but limited visual effects.
 