
Computer cores on Fed starships...why so huge?

Tribble puncher

Captain
If you look at some of the schematics, the computer cores are bigger than the warp core. Why would a computer so large be necessary, especially given that computers in the 23rd/24th century will be orders of magnitude more advanced than what we have today? It just seems a tad impractical (not that Trek ships have the most practical designs anyway...)
 
It is a bit unlikely that the Trek 24th century computers have anything to do with today's computer technology - the underlying principles of both computing and the physical phenomena enabling that computing could be just as different as, say, a computer of today is from an abacus. We don't have any particular reason to think that Trek computers would be using the currently fundamental binary expression of data, for example.

Also, distributed computing may have fallen into disfavor with the introduction of those fancy subspace fields that allow the computing to happen faster-than-light (TNG Tech Manual p.49). There simply wouldn't be any point in pursuing the currently fashionable ideas of parallel processing when you get so much more power from centralizing everything and then encasing it in this magic field.

One might as well ask why the masts of modern warships are such puny things - shouldn't they be huge, to support all the necessary sails?

Timo Saloniemi
 
A perfect example of conventional game play in our subforum.

The OP implies that more advanced = smaller. Tribble puncher attempts to appeal to the reality criterion by making reference to the computers we have today and (implicitly) the trends we have seen in computing for decades.

It is a bit unlikely that the Trek 24th century computers have anything to do with today's computer technology

Timo's opening move is a claim of disanalogy. If future computers won't have "anything" to do with today's computers, then we cannot reasonably guess that computers will continue to get smaller and smaller.

Timo argues that there is too much uncertainty about the future to positively invoke this criterion. He follows with examples of major shifts in past technologies. Basically he is arguing - there have been big changes in the past, so it is likely that there will be big changes in the future.

Big changes are unpredictable changes, paradigm shifts, game changers. What this implies is that we must accept that we are in a state of ignorance about future technologies. The upshot, however, is that this severely constricts discussion using the reality criterion. If Timo is right, we can't really interrogate Star Treknology in terms of what we know about the world (because the future might always be surprisingly different).

NOTE: Timo is also working under the reality criterion here. He makes reference to real historical examples to make the prediction that we will most likely NOT be able to predict the future.

- the underlying principles of both computing and the physical phenomena enabling that computing could be just as different as, say, a computer of today is from an abacus. We don't have any particular reason to think that Trek computers would be using the currently fundamental binary expression of data, for example.

The discontinuity between the abacus and the modern computer (a reference to reality as proof of discontinuity) problematizes even the assumption that future computers will be binary systems.

What comes next is the inevitable weak plausibility argument. It is weak (not pejoratively!), in part, because it is wholly negative in nature. By Timo's own reasoning he cannot reasonably project future technologies either. What he can do, however, is spin a few "just so" stories to establish the plausibility of paradigm-shifting technological changes.

Also, distributed computing may have fallen into disfavor with the introduction of those fancy subspace fields that allow the computing to happen faster-than-light (TNG Tech Manual p.49). There simply wouldn't be any point in pursuing the currently fashionable ideas of parallel processing when you get so much more power from centralizing everything and then encasing it in this magic field.

Here we get a reference to an off-screen source (a tech manual produced for fans) and speculation about subspace computing. That is, if processing can happen faster than light inside a subspace field, it could make sense to centralize everything in one big mainframe.
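The centralization-versus-parallelism trade-off has a rough real-world analogue in Amdahl's law, which caps the speedup available from adding processors by the fraction of the work that must stay serial. A minimal sketch, with made-up numbers that say nothing about Trek hardware:

```python
def amdahl_speedup(serial_fraction: float, processors: int) -> float:
    """Upper bound on speedup from spreading a job over `processors`,
    when `serial_fraction` of the work cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors)

# Illustrative only: with 10% unavoidably serial work, adding processors
# tops out just short of 10x...
for n in (2, 16, 1024):
    print(n, round(amdahl_speedup(0.10, n), 2))   # -> 1.82, 6.4, 9.91

# ...whereas making the single central processor itself k times faster
# speeds up the whole job by k, with no such ceiling.
```

Under these toy numbers, parallelism saturates near 10x while a faster central machine does not, which is the shape of the argument the Tech Manual's subspace-field conceit gestures at.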

Since Tribble puncher cannot appeal to the reality criterion to project future trends in miniaturization (e.g., no appeal to Moore's Law), and since, according to one speculative argument, the show's arrangement could be plausible, Star Trek is (again) vindicated.

Timo closes with another historical example, which shows the discontinuities between leaps in technology.

One might as well ask why the masts of modern warships are such puny things - shouldn't they be huge, to support all the necessary sails?

Timo Saloniemi

In essence, Timo turns the reality criterion on its head. He uses it to argue that you cannot use it to make reasonable guesses about the future.

If Timo is right, however, and we should be respectfully silent about the mysteries of the future, then we cannot adjudicate the plausibility of Star Treknology either way.

This results in a significant asymmetry in Treknology discussions. That is, under the reality criterion (as Timo argues it), nothing would count as a falsification or disconfirmation of Star Treknology. The future is, after all, terribly mysterious. And this is a conversation killer. No matter what objection is raised, one can always appeal to divine mystery (i.e., the future).

On the other hand, if we do allow that we can make reasonable projections under the reality criterion, then it is painfully obvious that some Star Treknologies are already out of date. As Star Trek ages, this will only get worse. In this scenario, our conversations can only end with a shrug and the admonition that it's only a show.

No one seriously argues on behalf of the technology of Buck Rogers serials, but Trekkies still rail against the passage of time. The rear-guard action is to turn the reality criterion against itself on the grounds that the future is truly unknowable (so anything goes!).

Between the Scylla of aging Treknology and the Charybdis of divine future mysteries, both of which are conversation killers because they either set the bar too high or too low (respectively), there are alternative criteria we might explore.
 
Why?

I mean, we do quite well with purely descriptive argumentation. What we see is what we get - there's no pressing need to examine the putatively underlying principles of the technologies in order to meaningfully discuss their implications for past and future plotlines, for the pseudo-history and pseudo-society of Trek, or for other pseudo-technologies witnessed in Trek.

Descriptive argumentation doesn't carry predictive weight. But why would we want to predict entertainment? We're much better off getting surprised by it.

Timo Saloniemi
 
Clearly, the mainframe computers of TNG are big because the audience of the 1990s expected the computers of the future to be big. Right, sensei?

What do we know about the computers of TNG? They use quads instead of bytes. They operate at FTL. Anything else? Those two facts alone suggest a different basis for computing. Are we allowed to speculate without more data?
 
the audience of the 1990s expected the computers of the future to be big
Did it? Why? In the media and in common experience, the computers of the 1990s weren't big - the average Joe had a tabletop system and a rudimentary idea of networking. It's the 1970s audience that would have been familiar with mainframes the size of an office floor; already in the 1980s, "WarGames" showed a much smaller mainframe and pitted it against distributed tabletops, and then we got "Whiz Kids"...

Timo Saloniemi
 
Oh, I'm just trying to see things from YARN's perspective. I'm not sure I agree with him/her, but it's a nice change of pace to argue from a different view.

I was actually going to ask a similar question about the size of the computer cores, but Tribble puncher beat me to it.

I recall one scene from TNG that was a dialogue between Worf and Troi that was supposedly set in the computer core (or at least one of the computer cores). It appeared to be mostly hollow. Certainly there was room for users to walk in, though I don't recall any terminals...well, maybe one in the episode "Evolution". So the cores are big, but how much of them is occupied by actual computer-related equipment?
 

That's really what I meant to ask: what was the show designers' reasoning for making the computer core so massive? http://www.cygnus-x1.net/links/lcars/blueprints/lcars24/lcars24-sheet-12.jpg

This Nebula-class MSD shows a computer core in the saucer that is 8 decks high. If you look in the engineering hull, there is another computer core (a backup?) that is 5 decks high. If the scale of these MSDs is to be believed, you could stack over a dozen shuttles in this space. Then there is this MSD:

http://www.cygnus-x1.net/links/lcars/blueprints/lcars24/lcars24-sheet-15.jpg

The Saber class has a computer core bigger than its shuttle bay! My last example is this:

http://www.cygnus-x1.net/links/lcars/blueprints/lcars24/lcars24-sheet-13.jpg

The Nova class, a dedicated science vessel, has only a 3-deck core. I'm just wondering what Mike Okuda offered as an in-universe explanation, or what the designers' reasoning was.
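The "over a dozen shuttles" figure is easy to sanity-check with rough numbers. None of the dimensions below come from the linked MSDs; the deck height, core diameter, and shuttle envelope are loose assumptions chosen only to show the order of magnitude.

```python
import math

# Rough, assumed figures -- not taken from the blueprints linked above.
DECK_HEIGHT_M = 3.5                               # assumed deck-to-deck height
CORE_DIAMETER_M = 12.0                            # assumed diameter of the cylindrical core
CORE_DECKS = 8                                    # the 8-deck saucer core cited above
SHUTTLE_L, SHUTTLE_W, SHUTTLE_H = 6.0, 4.4, 2.7   # assumed small-shuttle envelope, in metres

core_volume = math.pi * (CORE_DIAMETER_M / 2) ** 2 * DECK_HEIGHT_M * CORE_DECKS
shuttle_volume = SHUTTLE_L * SHUTTLE_W * SHUTTLE_H

print(f"core volume   ~{core_volume:,.0f} m^3")       # ~3,167 m^3
print(f"shuttle box   ~{shuttle_volume:,.0f} m^3")    # ~71 m^3
print(f"ratio         ~{core_volume / shuttle_volume:,.0f} shuttle volumes")  # ~44
```

Even allowing generously for sloppy packing, "over a dozen" looks conservative under these assumptions.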
 

Why use bio-neural computing? Would that not actually be slower?
 
If you look at some of the schematics, the computer cores are bigger than the warp core. Why would a computer so large be necessary, especially given that computers in the 23rd/24th century will be orders of magnitude more advanced than what we have today? It just seems a tad impractical (not that Trek ships have the most practical designs anyway...)
IMO, it's not so much the computers that are huge, but the memory banks themselves. We've seen officers reconfiguring (or dismantling) circuitry by moving around isolinear chips (rods on DS9). I don't see there being any point to that unless the chips provide the 24th century equivalent of Random Access Memory to a particular system; each chip probably has its own processor and a couple of terabytes (quads?) of random access memory.

Those chips are all over the ship, in the walls, in the consoles, even in the padds and tricorders, and probably represent the real computing muscle of the Enterprise. But the ship's memory banks are another matter; that data has to be accessible from anywhere on the ship (and is occasionally accessible by the alien-of-the-week), and includes the complete library of every written work, every musical composition, every scientific paper, textbook and article ever produced from hundreds of Federation worlds. It also seems to include a dizzying library of holodeck programs... and this for a ship fresh out of spacedock. The memory capacity has to allow for fifteen to twenty years of hardcore exploration, during which time the ship will collect ten times as much data from its sensors, logs and downloads from other cultures.

IOW, the "computer" core isn't a computer as much as it is the mother of all external hard drives. Stores the capacity of a billion library of congresses.
 
That's really what I meant to ask: what was the show designers' reasoning for making the computer core so massive? http://www.cygnus-x1.net/links/lcars/blueprints/lcars24/lcars24-sheet-12.jpg

This Nebula-class MSD shows a computer core in the saucer that is 8 decks high. If you look in the engineering hull, there is another computer core (a backup?) that is 5 decks high. If the scale of these MSDs is to be believed, you could stack over a dozen shuttles in this space. Then there is this MSD:

http://www.cygnus-x1.net/links/lcars/blueprints/lcars24/lcars24-sheet-15.jpg

The Saber class has a computer core bigger than its shuttle bay! My last example is this:

http://www.cygnus-x1.net/links/lcars/blueprints/lcars24/lcars24-sheet-13.jpg

The Nova class, a dedicated science vessel, has only a 3-deck core. I'm just wondering what Mike Okuda offered as an in-universe explanation, or what the designers' reasoning was.

The MSDs you linked are fan-made and not canon. Mike Okuda would probably not have designed them the same way.
 
I tend to agree with newtype alpha; this is why I've never been swayed by objections to FJ's computer design in his "Booklet of General Plans". He had a column-like central core (or CPU?) surrounded by circular (memory?) banks, all contained in one room on one deck (the center of deck seven). Nowadays, people seem to think his design is terribly outdated, but not if we look at it as newtype alpha suggests. This way, only the central column would be the actual computer; the rest would be memory banks storing the Encyclopedia Galactica?
 
^ Actually I think it would probably be the reverse: the central column is a giant storage array with an obscenely huge capacity, and the various modules around it are library terminals from which that data may be accessed by the various departments that happen to need it. I actually thought that was the whole point of putting the computer core directly under the bridge: Spock's library computer on the bridge is the only terminal on the entire ship from which ANY set of data can be accessed, whereas the auxiliary terminals can only access files relevant to a particular department (say, computer files marked for the astrophysics department or the anthropology database, etc.). In the TMP era, that central column runs all the way through the saucer section, which probably echoes that design intent.

In that sense, I think the TOS Enterprise and its TNG counterparts are hooked up in a client/server type of network: individual computers scattered around the ship all have their individual functions and processors, but the computer core (IIRC, TOS vessels never actually used that term) is a type of server for library files, records, archives and relevant data used in the crew's day-to-day operations. It's as huge as it is because it contains literally ANY piece of information you could ever possibly need, which includes all the scientific and historical information ever compiled by every major world in the Federation.
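A toy sketch of the client/server arrangement described above: one central store holding everything, department terminals that can only query their own slice, and a single unrestricted "bridge library" terminal. All the names here (the LibraryCore class, the departments, the records) are made up for illustration; nothing of the sort is established on screen.

```python
from dataclasses import dataclass, field

@dataclass
class LibraryCore:
    """Central store: all records live here; terminals differ only in scope."""
    records: dict[str, dict[str, str]] = field(default_factory=dict)

    def add(self, department: str, title: str, body: str) -> None:
        self.records.setdefault(department, {})[title] = body

    def query(self, department: str, title: str, *, scope: set[str] | None) -> str:
        # scope=None models the unrestricted bridge library terminal;
        # a department terminal passes its own allowed set instead.
        if scope is not None and department not in scope:
            raise PermissionError(f"terminal not cleared for {department!r}")
        return self.records[department][title]

core = LibraryCore()
core.add("astrophysics", "Mutara-class nebulae", "...survey notes...")
core.add("anthropology", "Mintakan field study", "...observation report...")

# Department terminal: sees only its own department's files.
print(core.query("astrophysics", "Mutara-class nebulae", scope={"astrophysics"}))

# Bridge library terminal: unrestricted, like the station described above.
print(core.query("anthropology", "Mintakan field study", scope=None))
```

The point of the sketch is only the split: the bulk (and the bulk of the size) sits in one place, while the terminals scattered around the ship are thin clients with different access scopes.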
 
...Fundamentally, I'd think the very large computers of the E-D were there mainly so that they would be easily visible in the graphics. Their size was dictated by their function, which wasn't to process data - but to make the viewers think "Oh, this ship is geared to process data!".

The artists would on one hand want to give visual form to as many starship functions as possible, and would have to create iconic visual representations for most of them because we had seen basically nothing yet in the preceding shows and movies. On the other hand, they would want to somehow guide the audience towards the idea that this ship is a vessel of exploration and science - not an easy thing to do at all, but visible computers would be a good idea there.

From those fundaments would then evolve all sorts of pseudo-science to justify the visually driven choice. Some would be done by the artists themselves (TNG TM), some would be added by the fans. But "what we see is what we get" would be the driving force, even if now in reverse, as any explanation would have to ensure that what we get is what we have already seen and what we will be seeing in the foreseeable future as well.

Timo Saloniemi
 
Sure - which is why I dared bring it up in the 14th reply only, and after three personal pitches. :devil:

I guess a starship is such a centralized resource to start with that it makes little sense to attempt distribution of any of its functions. A computing system based on a thousand PADDs, all of them aboard the ship, would be just as vulnerable, and probably more so, as its communication links could be disrupted. Distribution overall seems to come as a surprise to our heroes when they encounter their first Borg ship!

Timo Saloniemi
 
Maybe the computer core itself is quite small. What we see in the MSDs could be just the hardware needed to make the thing work at FTL speeds.
 
Or the big cylinders could be purely for shielding the invaluable data from the hostile energies of enemy combatants or space anomalies or whatnot. Things that couldn't harm human beings or impulse engines or phaser conduits might still be fatal to data without the extra degree of protection.

Timo Saloniemi
 
A computing system based on a thousand PADDs, all of them aboard the ship, would be just as vulnerable, and probably more so, as its communication links could be disrupted.
Which happens, like, every other week in Star Trek, doesn't it? The ship takes a huge amount of damage and suddenly they lose helm control or such-and-such a system stops responding for some reason.

With a distributed system, you have the advantage of not having to rely on a physical data bus for all ship controls. Each ship has its own (presumably top secret) prefix code, which would theoretically allow the bridge stations to maintain vessel control even over a wireless link to those systems. You could bypass the computer core altogether in that case.

Distribution overall seems to come as a surprise to our heroes when they encounter their first Borg ship!
Only because the Borg distribution system puts EVERY function in EVERY module of the ship to a very small degree. It wasn't that control was distributed throughout the ship; it was that no one part of the ship controlled any specific function.
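A minimal sketch of the fallback being described here: route a helm command over the physical data bus first, and if the bus is down, fall back to a wireless link gated by the ship's prefix code. The classes and the code value are hypothetical and only illustrate the control-path redundancy, not any mechanism shown on screen.

```python
class LinkDown(Exception):
    """Raised when the physical data bus cannot carry a command."""

class DataBus:
    """Physical bus through the computer core; may be severed by battle damage."""
    def __init__(self, intact: bool = True):
        self.intact = intact

    def send(self, command: str) -> str:
        if not self.intact:
            raise LinkDown("data bus severed")
        return f"bus ack: {command}"

class WirelessLink:
    """Backup wireless link, gated by the ship's (hypothetical) prefix code."""
    def __init__(self, prefix_code: int):
        self._prefix_code = prefix_code

    def send(self, command: str, prefix_code: int) -> str:
        if prefix_code != self._prefix_code:
            raise PermissionError("bad prefix code")
        return f"wireless ack: {command}"

def helm_command(cmd: str, bus: DataBus, backup: WirelessLink, code: int) -> str:
    try:
        return bus.send(cmd)            # normal path through the core/bus
    except LinkDown:
        return backup.send(cmd, code)   # bypass the core entirely

# Usage: the bus is knocked out, but the command still reaches the helm.
print(helm_command("evasive pattern delta", DataBus(intact=False),
                   WirelessLink(prefix_code=12345), code=12345))
```

The design point is simply redundancy of the control path; whether Starfleet ships actually work that way is exactly what the thread is arguing about.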
 