
Communicators, How Do They Know?

Emperor Norton

Captain
Communicators are not complex. In TOS, they're a flip thing with 3 buttons, and in ships the intercom is there for internal communication. In TNG, it's just a badge you press and start talking into.

So how do the communicators (or the intercom) know to go to Ensign Ricky on Deck 7 and not Jim on Deck 11, or to Scotty in Engineering and not Mr. Sulu on the Bridge?
 
Possibly by use of a system analogous to predictive texting. For example, if Picard touches his badge and says "Picard to Commander LaForge", the comm system then opens a line to LaForge and streams the message from the beginning. Possibly it speeds the playback up a little to help synchronise things, but I doubt it. Anyway, that's my conjecture.
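A rough sketch of that conjecture in code might look like this (purely illustrative; the class names, roster format, and endpoints are all invented):

```python
# A toy sketch of the "buffer and replay" conjecture above. Everything
# here (class names, the roster format, endpoints) is made up.

class Line:
    def __init__(self, endpoint):
        self.endpoint = endpoint

    def stream(self, audio_chunks):
        # Stand-in for relaying audio to the callee's nearest intercom.
        print(f"[to {self.endpoint}] " + " ".join(audio_chunks))

class CommSystem:
    def __init__(self, roster):
        self.roster = roster   # spoken name -> comm endpoint
        self.buffer = []       # speech captured before routing
        self.line = None

    def badge_tap(self):
        self.buffer.clear()
        self.line = None

    def hear(self, words):
        if self.line:                      # already routed: relay live
            self.line.stream([words])
            return
        self.buffer.append(words)
        endpoint = self._parse_hail(" ".join(self.buffer))
        if endpoint:
            self.line = Line(endpoint)
            self.line.stream(self.buffer)  # replay from the beginning

    def _parse_hail(self, text):
        # "Picard to Commander LaForge" -> endpoint for "Commander LaForge"
        _, sep, name = text.partition(" to ")
        return self.roster.get(name.strip()) if sep else None

comms = CommSystem({"Commander LaForge": "LaForge/Engineering"})
comms.badge_tap()
comms.hear("Picard to Commander LaForge")  # routes, replays the hail
comms.hear("meet me in Ten Forward")       # relayed in real time
```

The replay step is where the "speeds it up a little" idea would go, if the system wanted to close the gap between sender and receiver.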
 
I think combadges are more than just a badge, they also contain an internal transceiver, a universal translator, and an ID system that the ship's computer keeps track of throughout the ship (one way to avoid people knowing where you are is simply to take the combadge off and put it someplace you're not).

Two people with the same last name would probably have to be called out by rank (to distinguish, for example, Ensign Smith from Crewman Smith). If they have both the same last name and rank, then they'd probably have to be called out by their full names (including their middle name or initial if necessary in the event there are two ensigns named John Smith).
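Those fallback rules are easy enough to picture as a lookup. A toy version (the roster data is invented, obviously):

```python
# A toy resolver for the fallback rules above: surname first, then
# rank + surname, then full name. The roster entries are made up.

crew = [
    {"rank": "Ensign",  "first": "John", "middle": "A.", "last": "Smith"},
    {"rank": "Crewman", "first": "Jane", "middle": "B.", "last": "Smith"},
]

def resolve(spoken):
    """Return the unique crew member matching the spoken designation,
    or None if the designation is still ambiguous."""
    tokens = spoken.split()
    matches = [c for c in crew if c["last"] == tokens[-1]]
    if len(matches) > 1 and len(tokens) > 1:
        # Fall back to rank, then to full name with middle initial.
        matches = [c for c in matches
                   if tokens[0] == c["rank"]
                   or tokens[:-1] == [c["first"], c["middle"]]]
    return matches[0] if len(matches) == 1 else None

print(resolve("Smith"))           # None -- two Smiths aboard
print(resolve("Ensign Smith"))    # the unique Ensign Smith
print(resolve("John A. Smith"))   # full name disambiguates completely
```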
 
I can hit a button on my phone and tell it to call my wife and it will. I can even tell it to text someone and then dictate the text and it will. And that's just on an Android phone without Siri. I do have to be careful with people who have similar-sounding names, but a little working around it and it's not a big deal.

I'm sure they synchronize with the ship's computer and assume for the most part that you're trying to contact someone on the ship/station.
 
I think combadges are more than just a badge, they also contain an internal transceiver, a universal translator, and an ID system that the ship's computer keeps track of throughout the ship ....


We know from "Fist Full of Datas, TNG" that the commbadges contain circuitry.

dJE
 
The communicator has a sensor on it that can scan your brain like an fMRI. It can't exactly read your thoughts, but it can read neural firing patterns in your frontal lobe well enough to tell who it is you're trying to call with your communicator. You would, of course, have to pre-set the communicator when you're first issued one (kinda like with the voice dial on a smartphone), so you'd spend some time thinking about some officer you know and then tapping your communicator saying "That's Geordi... that's Data... that's the Enterprise's main channel..." This way, every time you tap the communicator you're telling it to scan your brain and "dial" whoever it is you're thinking of. TOS communicators probably worked the same way, except they had some extra buttons for functions that would be too awkward to send by thought or automation (gain controls, frequency controls, etc.).
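For fun, here's how that training-and-dialing step might be doodled in code, with the brain scan reduced to a nearest-match lookup on some imagined neural feature vector (completely fanciful; no real device works this way):

```python
# A fanciful sketch of "train the badge, then dial by thought". The
# neural patterns are pretend feature vectors; nearest match wins.

import math

class ThoughtDial:
    def __init__(self):
        self.contacts = {}  # contact name -> trained neural pattern

    def train(self, name, pattern):
        # "That's Geordi... that's Data..." while tapping the badge.
        self.contacts[name] = pattern

    def dial(self, scanned_pattern):
        # Return the trained contact whose pattern best matches the scan.
        return min(self.contacts,
                   key=lambda n: math.dist(self.contacts[n], scanned_pattern))

badge = ThoughtDial()
badge.train("Geordi", [0.9, 0.1, 0.3])
badge.train("Data",   [0.2, 0.8, 0.5])
print(badge.dial([0.85, 0.15, 0.3]))  # -> Geordi
```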

We already know, canonically, that even in TOS they had computers capable of performing active brain scans to figure out whether you were lying or not ("Mudd's Women") and the Klingon Mind Sifter can evidently extract information directly from your brain. The "psychotricorder" from "Wolf in the Fold" is probably a less aggressive version of that technology, and it's been implied that the universal translator probably works on a similar principle, reading impulses from the speaker's brain and then outputting what it computes to be the best verbalization of what the speaker MEANT to say (and it can even figure out if the speaker is male or female). The Romulan Mind Probe in the 24th century is probably the most sophisticated version of all, not only able to scan your memories, but actually able to download those memories into the minds of others.

With all this in mind, I think it's reasonable to accept the obvious and concede that the communicator probably IS scanning your brain and calling exactly who you want it to call the instant you pick it up.
 
^ Wow, that was pretty darn well thought out. I guess that the sender vocalising the name of the intended contactee would just be verification and/or politeness, then.
 
Radio discipline, mostly. It doesn't matter whether you can recognize my voice, or whether we're the only people on this channel; you don't just pick up a walkie-talkie and say "Hey Jim, I've got an idea..." Saying the name and announcing the speaker is just basic etiquette, so the person you're talking to knows a conversation has begun; ending it with "Kirk out" or something means the conversation has ended.

And since the communicator has already patched you in to exactly the person you're trying to reach, they get to hear your "Kirk to Spock" at the same time you say it, no delay, no dialtone, and in fact the computers on the ship are smart enough to FIND the person you're trying to talk to and patch you in to the nearest convenient intercom circuit.
 
It should be remembered, though, that we never hear the "X to Y" words reach Y in an abnormally short time, or unrealistically lacking a delay. After all, X and Y always stand well apart (otherwise a communicator would not be necessary) and the camera action features a cut between the words and their reception. In all likelihood, then, the delay is there, and is not technomagically removed.

Another thing to remember is the computer's unwillingness to jeopardize a person's privacy. Removing one's commbadge is a classic way to disappear, yet the computer never notices this happening - a rather unlikely characteristic of a system based on brain-to-brain connections.

Also of note is the ability of random outsiders to operate the comm systems without preparation...

Timo Saloniemi
 
I don't think I can agree with the mind scan explanation. It seems a rather complex thing to have to do just to get Scotty on the line.

Being able to click on the button, speak to who or where you want ("Kirk to Bridge"), and have the computer pick up on that and route you through to whom you wanted to talk to would make sense, though. The problem with that would be delay, since it'd have to wait for you to finish saying where you wanted to speak to, then route that sentence through, and then have the person reply.
 
But we cope with such delays regularly today. Why should our 24th century heroes be less capable of coping? And what's the hurry?

The idea of routing a call based on the first words spoken does not even necessarily manifest as a delay. When Picard says "Picard to Riker", the call can be routed when the last "r" of "Riker" is sounded, at which point Riker can immediately say "Riker here!" without needing to hear any of what Picard just said. He just received a call from somebody - surely he's entitled to answer it immediately with "Riker here!"?
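In code terms, the hail would be consumed as dialing input rather than forwarded as audio, in contrast to the buffer-and-replay sketch earlier in the thread; the callee gets nothing but a chime (names invented):

```python
# Sketch of the idea above: "Picard to Riker" is dialing information,
# not a message. Riker's end only chimes. The roster is made up.

ROSTER = {"Riker": "Riker/Bridge"}

def route_hail(spoken):
    """Consume 'X to Y' as a dial string; chime the callee's endpoint
    without relaying any of the hail audio."""
    _, sep, callee = spoken.partition(" to ")
    endpoint = ROSTER.get(callee.strip()) if sep else None
    if endpoint:
        print(f"ding ding -> {endpoint}")  # all the callee ever hears
    return endpoint

route_hail("Picard to Riker")  # Riker, hearing the chime: "Riker here!"
```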

Timo Saloniemi
 
So basically, while Picard says "Picard to Riker", all Riker hears is the comm chime: "ding ding".
Simply voice dialing.
 
I don't think I can agree with the mind scan explanation. It seems a rather complex thing to have to do just to get Scotty on the line.
Doesn't seem complex to me at all, considering the universal translator and a dozen other devices already use that technology as a matter of course.

Being able to click on the button, speak to who or where you want ("Kirk to Bridge"), and have the computer pick up on that and route you through to whom you wanted to talk to would make sense, though.
But would have the disadvantage of needing a voice prompt to operate first. We have occasionally seen characters surreptitiously opening a communicator to use it as an eavesdropping device without first calling for the ship and then being overheard accordingly.

The problem with that would be delay, since it'd have to wait for you to finish saying where you wanted to speak to, then route that sentence through, and then have the person reply.
And there is a distinct LACK of such a delay, especially in the 24th century.


Another thing to remember is the computer's unwillingness to jeopardize a person's privacy. Removing one's commbadge is a classic way to disappear, yet the computer never notices this happening - a rather unlikely characteristic of a system based on brain-to-brain connections.
As I said, tapping the comm badge would only serve as a command to the device to call the person you're currently thinking of (sort of like Siri's mic button). On the other hand, the comm badge evidently functions as a universal translator as well, which would require a certain amount of direct-brain interface if you want Jean-Luc Picard to hear French instead of, say, Highland Klingon or Bajoran.

Also of note is the ability of random outsiders to operate the comm systems without preparation...
But not nearly as easily, which suggests the system has to "figure out" who you're calling no matter what method it uses.

But then, the universal translator is remarkably good at this anyway, so the same kind of software probably comes into play in the call. After all, there is NO DELAY AT ALL when using the universal translator, which is virtually impossible unless the translator is actively converting those words as they are being spoken (otherwise, due to differences in grammar and sentence structure there should be a delay of at least a few seconds while the speaker finishes his sentence).

The translator knows -- somehow -- what you mean to say even before you say it, so why couldn't it know who you mean to call before you call them?
 
The problem I have with the brain scan thing is if the communicator could scan what you were thinking, why not just scan the entire message and beam it to the other person, so they know exactly what you want in an instant, without having to speak? Or why even ask the computer for Tea, Earl Grey, Hot, when you can just think it? Granted, we have universal translators, but I'm pretty sure it's established that the communicator links to the onboard computer in order to accomplish that. Alien species that aren't new aren't going to need to be scanned either—we already have translations made out for Klingon and Romulan and Ferengi. We could imagine that when two species meet for the first time, they have systems in place to send the brain signals of the speaker to the other ship, but I doubt if communicators have the same technology.

The simplest explanation is the voice recognition one. So Ensign Ricky says, "Ricky to Steve." The communicator would have a short recording function that is actively running, holding only a couple of seconds at a time. When it hears the "X to Y" phrase, the computer can isolate that phrase in the recording, determine who the speaker wants to talk to, and transmit that message to them. There would be a short delay while the computer transmits and plays the message a second time, but for the rest of the conversation there would be no delay. All transmissions after that would be in real time.
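That "short recording function that is actively running" is essentially a ring buffer. A toy version (the word-level chunks and the crude "X to Y" parse are stand-ins for real audio processing):

```python
# Toy version of the always-running short recording: a ring buffer
# that keeps only the last few seconds, scanned for an "X to Y" hail.

from collections import deque

class RollingRecorder:
    def __init__(self, seconds=3, chunks_per_second=10):
        # Old audio falls off the far end automatically.
        self.buffer = deque(maxlen=seconds * chunks_per_second)

    def feed(self, chunk):
        self.buffer.append(chunk)

    def extract_hail(self):
        """Return (caller, callee) if an 'X to Y' phrase is buffered,
        else None."""
        text = " ".join(self.buffer)
        caller, sep, callee = text.partition(" to ")
        if sep and caller and callee:
            return caller.split()[-1], callee.split()[0]
        return None

rec = RollingRecorder()
for word in "Ricky to Steve".split():
    rec.feed(word)
print(rec.extract_hail())  # ('Ricky', 'Steve') -> replay, then go live
```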
 
The problem I have with the brain scan thing is if the communicator could scan what you were thinking, why not just scan the entire message and beam it to the other person, so they know exactly what you want in an instant, without having to speak?
I doubt it's accurate enough to translate thought transmissions into audio output (and even if it was, that would require a bit of mental focus and training that Starfleet officers may not necessarily have). It's kind of like in Doctor Who when Jack hands Rose a piece of psychic paper that's supposed to tell her that he's a Captain in some space force somewhere, but the message actually comes out "Single and available."

It's enough to know that machine-aided telepathy is something many races have dabbled with over the years, but it's not necessarily practical in day-to-day use. At least in the 24th century, it's just a really convenient way of routing communications channels, while a hundred years earlier it was a staple of interspecies communications (translators).

Granted, we have universal translators, but I'm pretty sure it's established that the communicator links to the onboard computer in order to accomplish that.
Which wouldn't explain why the translator always works even when totally cut off from the computer (especially in "Metamorphosis").

Alien species that aren't new aren't going to need to be scanned either—we already have translations made out for Klingon and Romulan and Ferengi.
Which still wouldn't be enough for the real-time translations we're seeing in the 24th century. The communications protocols of Romulan and Ferengi ships probably use similar brain-scan devices that translate the thought impulses into linguacode and then transmit THAT to the other ship along with the audio of their conversation. It's likely the computer needs both in order to make an accurate translation, considering it doesn't just translate the words, it also translates emphasis, inflection, and even tone of voice.
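If that's how it works, each transmission would carry two tracks: the audio itself and a semantic track derived from the speaker's brain. A purely invented sketch of such a packet:

```python
# Entirely invented sketch of the dual-track protocol speculated above:
# raw audio plus a "linguacode" semantic track from the brain scan.

from dataclasses import dataclass

@dataclass
class Transmission:
    audio: bytes      # the spoken words, as sound (timing, voice)
    linguacode: dict  # semantic track: meaning, emphasis, tone

def render(tx, target_language):
    # The receiving UT needs both tracks: meaning and inflection come
    # from the semantic track, timing and voice from the audio.
    return f"[{target_language}, {tx.linguacode['tone']}] {tx.linguacode['meaning']}"

tx = Transmission(audio=b"...",
                  linguacode={"meaning": "lower your shields",
                              "tone": "menacing"})
print(render(tx, "Federation Standard"))
```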

The simplest explanation is the voice recognition one. So Ensign Ricky says, "Ricky to Steve." The communicator would have a short recording function that is actively running, holding only a couple of seconds at a time. When it hears the "X to Y" phrase, the computer can isolate that phrase in the recording, determine who the speaker wants to talk to, and transmit that message to them. There would be a short delay while the computer transmits and plays the message a second time, but for the rest of the conversation there would be no delay. All transmissions after that would be in real time.
Again, that only works when a time delay is evident, as in TOS when the communicator actually beeps/chirps instead of relaying the voice prompt (the intercom isn't a problem either, since it's probably an intra-ship page from the bridge to everywhere). By TFF, though, we have real-time conversations starting apparently on both ends, with no time delay whatsoever. Either the communicators are doing some tricky causality violation stuff, or they've gotten better at reading your intentions without having to listen for them.

Anyway, I'm not sure why scanning the frontal lobes of the speaker presents a technical problem at all. Again, that technology is actually pretty mundane by 24th century standards; your communicator may not be able to tell what you want to have for lunch, but it can at least tell that one of the words that's about to come out of your mouth represents the name of the person you intend to talk to.
 
After all, there is NO DELAY AT ALL when using the universal translator, which is virtually impossible unless the translator is actively converting those words as they are being spoken (otherwise, due to differences in grammar and sentence structure there should be a delay of at least a few seconds while the speaker finishes his sentence).
Should there? I mean, when it's the brain doing the translating, rather than an outside interpreter (be it machine or human), the translation perfectly matches the input, or more exactly creates the perfect illusion of a match.

If I read the paragraph you wrote, or listen to it being spoken, its translation in my mind starts when you start and ends when you stop, regardless of English and Finnish being extremely different in terms of phrase length and the like. The same happens to somebody else who simultaneously listens to your phrase and translates it to Spanish in his mind, or to Russian, or to Mandarin. (*)

Remarkably, the same also apparently happens when people equipped with the UT listen to somebody. Those with their UTs set to "into Klingon" (say, Gowron) listen to the very same conversation as those with "into English" (O'Brien) and perhaps "into Farsi" (Bashir) or "into amusingly broken French" (Sisko), and the audience just happens to have its personal UT permanently set to "into English". It's actually not particularly remarkable, then, that the audience gets perfectly matching "translation lengths", as long as we assume that the UT works much like our mind does - or, more probably, that our mind is fluent in listening to the UT and doing the final step in translating. That is, no matter what the UT is feeding into the listener's ear or brain, the brain will only accept it as being lip-synched.
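The per-listener setting is easy to picture: one utterance fanned out through each listener's own UT. A toy illustration (the "translations" are canned stand-ins, of course):

```python
# Toy fan-out of one utterance through each listener's personal UT
# setting. The translate() step is a canned stand-in.

LISTENER_LANGUAGE = {
    "Gowron":  "Klingon",
    "O'Brien": "English",
    "Bashir":  "Farsi",
    "Sisko":   "French",
}

def translate(utterance, language):
    # Stand-in for the actual translation step.
    return f"<{utterance!r} rendered in {language}>"

def broadcast(utterance):
    # Same conversation, same timing, each listener in their own tongue.
    for listener, language in LISTENER_LANGUAGE.items():
        print(f"{listener} hears {translate(utterance, language)}")

broadcast("The cease-fire holds")
```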

It's all about self-deception. And one wouldn't need much of an implant to get the brain to deceive itself on these translation issues; even if Data spoke Oomorogian with his lips locked to immobility, Picard would surely see his lips move in synch with English or perhaps French, simply out of habit, as his brain wouldn't accept anything else...

Timo Saloniemi

(*) Or then doesn't - to my best knowledge, I don't translate English into Finnish any more in order to understand it. But my best knowledge isn't worth zip, and I'd be delighted to do some fMRI to find out whether translating is still taking place in there or whether some sort of a "second language" functionality has taken over. I'm sure it would be a breeze to isolate one of the multiple "tuners" within our noggins and slave it exclusively to UT interpretation, the way we apparently can slave parts of ourselves to the tasks of foreign language or singing, and retain those skills even when aphasia strikes our native tongue.
 
If I read the paragraph you wrote, or listen to it being spoken, its translation in my mind starts when you start and ends when you stop, regardless of English and Finnish being extremely different in terms of phrase length and the like. The same happens to somebody else who simultaneously listens to your phrase and translates it to Spanish in his mind, or to Russian, or to Mandarin. (*)
Actually that's not entirely true. People who learn a native language early on and then learn a second language later in life DO have a very brief delay when translating into another language. That delay reduces with proficiency, but still remains measurable in most tests. The delay is even longer when translating written communications; they can read their native language a lot faster than they can read their second one, mainly because they are translating in their head as they go along.

The delay is only absent in bilingual subjects who learned both languages at the same time; supposedly this is because Broca's region of the brain learns to process both languages as extensions of one another and not as separate categories.

Of course, this doesn't apply much to translators, which HAVE to translate from one language to another with very different vocabulary and grammatical structure. In some cases it wouldn't actually be possible to begin the English version of the sentence until the foreign version has finished (Spanish and French, for example, place subject and predicate differently).

One way or the other, the translator needs to know what you MEAN to say as well as what you're actually saying, and the only way to know that is to listen to your speech centers at the same time it listens to your words.

It's actually not particularly remarkable, then, that the audience gets perfectly matching "translation lengths", as long as we assume that the UT works much like our mind does
For that to be the case, the translator would have to be physically implanted into the brain like a firmware update. It doesn't seem to work that way in the 23rd century, though, despite the fact that it does about the same thing (and even the 22nd century version works the same way, albeit less accurately/reliably).

The two possibilities really are that the translator is either beaming the translation into your brain, or obtaining the translation from the speaker's brain. I think the latter is more likely.

I'd be delighted to do some fMRI to find out whether translating is still taking place in there or whether some sort of a "second language" functionality has taken over.
IIRC, it depends on proficiency. Subjects RAISED with both languages can understand both, and the fMRI images show only a single area of activity in Broca's region. Subjects who learn a second language later in life develop a secondary (usually smaller) activation region that seems to be partitioned off specifically to process the additional language. Curiously, for the latter subjects this partition seems to remain in place no matter how proficient they become. Even more curiously, bilingual subjects who learn a THIRD language later in life show the same partition structure in Broca's region, and again, the same measurable translation delay.
 
Regarding the universal translator, it's important to keep in mind that it was mostly invented for production purposes, so that we can have English spoken everywhere and the production team doesn't have to bother with alien languages. The technology isn't consistent, and is mostly just there to aid in storytelling.

In "Darmok", if the universal translator was reading the meaning the aliens intended to convey, shouldn't Picard have heard "monster" instead of "Darmok and Jalad"? They use metaphor, but the meaning being conveyed should be the same. In "Babel", we are told by Bashir that the virus is affecting the pathways between the brain thinking of something and the associated words. The universal translator would have been perfect for this. However, it wasn't used because the story didn't want it there.

That's why I like to take the universal translator with a grain of salt. It was invented for production-side problems, and thus can be inconsistent in-universe.
 
In "Darmok", if the universal translator was reading the meaning the aliens intended to convey, shouldn't Picard have heard "monster" instead of "Darmok and Jalad"? They use metaphor, but the meaning being conveyed should be the same.
That's a perfect example. The weird alien language was concocted purely as a plot device to have some sort of "struggle to understand each other" storyline in a universe that already had the universal translator. The explanation was given that it wasn't so much a problem with the alien language -- that was perfectly translatable -- but the fact that their entire psychology was structured so radically differently that you really couldn't understand what they were trying to say unless you knew exactly what they were referencing. Basically, it's a language made up entirely of metaphors, which depends on the understanding of the proper nouns used in those metaphors. As one really good example: the next time a Federation starship meets them in space, their Captain will recognize the uniform and say "Picard and Dathon at El-Adrel," which is basically to say "I know you, you're our friends." But to somebody who has no idea what happened to Jean-Luc Picard at El-Adrel, this seems like gibberish; the universal translator isn't programmed to dig out historical references at a millisecond's notice either.
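You could model the failure mode directly: the lexical translation succeeds, but comprehension requires a reference table the UT doesn't have. A toy version (the entries are invented apart from the episode's own examples):

```python
# Toy model of the "Darmok" problem: the words translate cleanly, but
# the meaning lives in a reference table the UT simply doesn't carry.

TAMARIAN_REFERENCES = {
    "Darmok and Jalad at Tanagra": "cooperation against a common threat",
    "Shaka, when the walls fell":  "failure",
}

def understand(phrase):
    # Lexical translation always "works"; comprehension succeeds only
    # if the allusion is already in the shared reference table.
    meaning = TAMARIAN_REFERENCES.get(phrase)
    return meaning or f"translates, but reads as gibberish: {phrase!r}"

print(understand("Shaka, when the walls fell"))
print(understand("Picard and Dathon at El-Adrel"))  # unknown until it happens
```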

Realistically, there are probably a few languages that don't translate well for similar reasons. There is, for example, at least one Klingon language that the UT can't really process, and it seems to have a great deal of difficulty with communiques from non-humanoids (it was virtually useless in communicating with the Horta, for example).

I like to take the universal translator with a grain of salt. It was invented for production-side problems, and thus can be inconsistent in-universe.
Sure it can. You just assume that it's a telepathic device that can "sort of" read your thoughts, but not in a lot of detail. Telepathic devices are pretty much verified in canon; hell, we've been dabbling with that kind of technology since the 1990s with fMRI scanners.
 