
How does today's tech match Star Trek's?

One thing I think has resulted in an enormous slowdown of software is the proliferation of XML. Text processing is slow, especially parsing hierarchical markup like XML. The more applications we base around things like XML, the slower the whole shebang gets, and the more processing power we need just to get back to the status quo.

I really dislike XML. I avoid it as much as I can. I agree with you that it probably is responsible for a lot of slowness and padding of data.

I do like flat files for configuration, and in my projects I usually use my own variation on the INI file. I won't bother describing how that works right now, but it is very versatile and easily editable with Notepad.

What software shouldn't be doing (but I fear it does do) is "think" in XML. It would be bad if data remained in memory as XML-formatted text and had to be parsed/re-encoded every time it was accessed or changed.

Software should convert data into binary form where it is used internally. It only needs to be parsed once, and data only needs to be text-encoded once in the final output, like when the user saves a file to disk... and only then if the user is expected to want to edit that data.

In my experience, data is not represented in memory as XML. Rather, the XML is a storage medium. When you read it in, it gets mapped to an object structure (or some other relatively efficient binary representation), and then when you're done with it, it can be serialized back to disk as XML.
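Roughly, the usual pattern looks like this in Python (a minimal sketch; the <user> schema and field names are made up for the example):

```python
# Parse XML once into plain objects, work on those in memory,
# and only serialize back to XML when it's time to save.
import xml.etree.ElementTree as ET
from dataclasses import dataclass

@dataclass
class User:
    name: str
    age: int

def load_users(xml_text: str) -> list:
    # Parsing happens exactly once, here; afterwards the data lives
    # in ordinary objects, not text.
    root = ET.fromstring(xml_text)
    return [User(u.get("name"), int(u.get("age"))) for u in root.findall("user")]

def save_users(users) -> str:
    # Text encoding happens once, on output.
    root = ET.Element("users")
    for u in users:
        ET.SubElement(root, "user", name=u.name, age=str(u.age))
    return ET.tostring(root, encoding="unicode")

users = load_users('<users><user name="Kirk" age="34"/></users>')
users[0].age += 1          # cheap in-memory change, no re-parsing
print(save_users(users))
```

Everything between load and save touches only the objects, so the XML cost is paid twice total, not once per access.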

There are exceptions, but usually that's how XML is used. I would say if you have a static bit of XML that needs to be used over and over, it would be best to keep the binary representation around instead of constantly reading the XML. But some applications are stupid.

There's a build tool called NAnt where you write your entire build script in an XML dialect. Yes, actual code is done as XML. It's hideous and a pain in the ass. I definitely don't support using XML as that kind of blunt tool. XML is good for storing complex data, but it isn't fast and it shouldn't be used as a programming language.

For my own projects, I like delimited flat files. I'll use XML if the data structures are complex, but another option is serializing the data's binary format. This is typically not cross-platform, though, so your data will be stuck on whatever architecture generated it. Part of XML's strength is that it is never tied to a specific platform.
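Here's a quick sketch of that portability trade-off using Python's struct module (the one-field record is hypothetical): a native-order dump depends on the machine that wrote it, while an explicit byte order round-trips anywhere.

```python
import struct

count = 1234

# Native byte order and alignment: compact and fast, but a machine with
# different endianness or alignment rules may read this back as garbage.
native = struct.pack("i", count)

# "<i" pins the format: little-endian, exactly 4 bytes, no padding.
# Any platform can decode it.
portable = struct.pack("<i", count)

print(native.hex(), portable.hex())
print(struct.unpack("<i", portable)[0])  # 1234 on any architecture
```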
 
I would say if you have a static bit of XML that needs to be used over and over, it would be best to keep the binary representation around instead of constantly reading the XML. But some applications are stupid.

We can easily imagine how the stupid stuff happens: say we create a function that combines two pieces of data and returns the result. That function calls other functions, which themselves may call other functions. Superficially, though, it could look like a very innocent function.

The problem is that we can easily lose track of what these innocent-looking routines are doing deep down in the nesting. Somewhere down the line it could be parsing XML to obtain a particular piece of data, when it would be far more efficient to have cached that data in binary form.
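One cheap defense, sketched in Python (the file name and schema are made up): cache the parsed tree, so no matter how deep the call chain goes, the XML only gets decoded once.

```python
import xml.etree.ElementTree as ET
from functools import lru_cache

@lru_cache(maxsize=None)
def load_config(path):
    # First call parses the file; every later call returns the
    # cached in-memory tree instead of re-parsing.
    return ET.parse(path).getroot()

def get_setting(name):
    # An innocent-looking helper; without the cache above, every call
    # buried down the stack would re-parse config.xml from scratch.
    node = load_config("config.xml").find(f"./setting[@name='{name}']")
    return node.get("value") if node is not None else None
```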

Another thing I suspect performs below par is SQL.

SQL databases can be too generic a system for storing structured data. They do a lot of bookkeeping, error checking, caching, and index building simply because the system doesn't know its purpose. To access data with a query, the computer must go through a lengthy procedure of looking things up, calculating hashes, searching indexes, and the like. The result is that data is much slower to retrieve than it would be if it were loaded into structs (for small amounts of data).
For larger amounts of data, we could create something specific (and efficient) for the task at hand.
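For the small-data case, the struct idea can be as simple as this Python sketch (the crew records are made up): read the rows once, index them in a dict, and every lookup after that is a single hash probe with no query parsing or planning.

```python
from collections import namedtuple

Crew = namedtuple("Crew", "id name rank")

rows = [(1, "Kirk", "Captain"), (2, "Spock", "Commander")]

# One-time O(n) build; after this, each lookup is O(1).
by_id = {r[0]: Crew(*r) for r in rows}

print(by_id[2].name)  # "Spock" -- no parser, planner, or index search involved
```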
 
One example of how we are not even close to Star Trek computers: the onboard computer can store the information for replicators, transporters, and the crew manifest. Hundreds of people use that one computer for hundreds of different functions, yet it never slows down. That computer tech isn't even here yet, and won't be soon.
The communicator beats cellphones. For one, it was military equipment, so you can't compare it to personal cellphone use. It hardly ever lost a connection, it could be hidden on your shirt, it could be tracked easily, and it picked up reception instantly.

Our tech doesn't hold a candle to Trek tech yet.
To quote Optimus Prime - "The humans are a young race with great potential."
 
One example of how we are not even close to Star Trek computers: the onboard computer can store the information for replicators, transporters, and the crew manifest. Hundreds of people use that one computer for hundreds of different functions, yet it never slows down. That computer tech isn't even here yet, and won't be soon.
You've never used a big corporate mainframe, have you?
The communicator beats cellphones. For one, it was military equipment, so you can't compare it to personal cellphone use.
Um, why not? What does its being military equipment have to do with anything? I'm sure the civilian communicators in Trek were just as good. Probably better.
It hardly ever lost a connection, it could be hidden on your shirt, it could be tracked easily, and it picked up reception instantly.

Our tech doesn't hold a candle to Trek tech yet.
To quote Optimus Prime - "The humans are a young race with great potential."
 
I am 18, almost 19. What do you think?

We are closer to Enterprise than we are to TOS or TNG.
Look at the size of the TNG computer: a mid-sized room holding all those functions, all that programming, and all that stored data. I doubt a corporation's mainframe could take up the same space and perform the same functions, or hold the same memory and programming.

Good point. I never saw a civilian communicator. I haven't seen all of TNG (most of it) or TOS (about half), and hardly any DS9, so cellphones and all those apps may beat the communicator. However, matched function for function, the communicator wins in my opinion. Also, the time periods and the needs of the users would be completely different.
 
I would say if you have a static bit of XML that needs to be used over and over, it would be best to keep the binary representation around instead of constantly reading the XML. But some applications are stupid.

We can easily imagine how the stupid stuff happens: say we create a function that combines two pieces of data and returns the result. That function calls other functions, which themselves may call other functions. Superficially, though, it could look like a very innocent function.

I touched on this indirectly when I talked about third-party libraries. If those libraries do things that are inefficient and/or stupid, you may have no way of knowing it. Most people who use libraries are not going to pore through them to see how well they're written; they just want to make sure the advertised feature set works. So you end up with some libraries that are absolute nightmares internally, but they work for their designed purpose, so no one cares.

Another thing I suspect performs below par is SQL.

SQL databases can be too generic a system for storing structured data. They do a lot of bookkeeping, error checking, caching, and index building simply because the system doesn't know its purpose. To access data with a query, the computer must go through a lengthy procedure of looking things up, calculating hashes, searching indexes, and the like. The result is that data is much slower to retrieve than it would be if it were loaded into structs (for small amounts of data).
For larger amounts of data, we could create something specific (and efficient) for the task at hand.

SQL isn't a database; it's a query language that database implementations support. SQL itself works just fine. What you're really talking about is poor database design and administration.

You're right that most databases are overkill for small datasets. That's what things like SQLite and BDB were built for. For enormous datasets (I'm talking Google or Facebook level here), it's preferable to use specialized column-driven databases, which aren't so much your typical row/column approach but rather collections of keys/values that each point to a key/value map. (Google BigTable and Cassandra to see what those are all about.)

In the middle, when you're dealing with tens of thousands to millions of records, is where most SQL-based databases shine. MySQL works pretty well in this area. I use a database called Cache which is designed for large-ish databases. Many of the tables I work with have millions of records in them, and the most important aspect of designing them is to build smart indexes. You have to know what kinds of queries you'll want to run first, but once you know that, you can determine what indexes you'll need. Indexes take up a fair amount of space, obviously, so you don't want to put indexes on everything.

A row lookup by a non-primary-key value on a table with several million records and an appropriate index takes a few milliseconds--without relying on a SQL cache. In fact, I have several SQL tables backing a live claim adjudication system, so they have to be fast, and I've tuned them for maximum performance. Each claim takes roughly 1/10th of a second to process, and that involves several SQL queries, numerous non-SQL database lookups, and several thousand lines of code.
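The index point is easy to see with SQLite from Python (table and column names are invented for the example): decide the query first, then build the index that serves it, and the query planner confirms it's being used.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE claims (id INTEGER PRIMARY KEY, member_id INTEGER, amount REAL)")
db.executemany("INSERT INTO claims (member_id, amount) VALUES (?, ?)",
               [(i % 1000, i * 0.5) for i in range(100_000)])

# We know lookups will be by member_id, so index exactly that column
# (and nothing else -- every extra index costs space and slows writes).
db.execute("CREATE INDEX idx_claims_member ON claims (member_id)")

# EXPLAIN QUERY PLAN shows the lookup searching the index
# instead of scanning all 100,000 rows.
for row in db.execute("EXPLAIN QUERY PLAN SELECT * FROM claims WHERE member_id = ?", (42,)):
    print(row)  # the detail column mentions idx_claims_member
```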

If your point is that a lot of people use SQL databases without having the slightest clue what they're doing, I'll give you that. But that's symptomatic of an overall commoditization of software development. Anyone can do some Googling and figure out how to write code, but there's no guarantee they'll really understand anything they're doing or know how to do it correctly.

One example of how we are not even close to Star Trek computers: the onboard computer can store the information for replicators, transporters, and the crew manifest. Hundreds of people use that one computer for hundreds of different functions, yet it never slows down. That computer tech isn't even here yet, and won't be soon.
The communicator beats cellphones. For one, it was military equipment, so you can't compare it to personal cellphone use. It hardly ever lost a connection, it could be hidden on your shirt, it could be tracked easily, and it picked up reception instantly.

Our tech doesn't hold a candle to Trek tech yet.
To quote Optimus Prime - "The humans are a young race with great potential."

Like sojourner said, we have computers today that can handle that kind of load. A typical server rack fits into a corner, and if it's got several reasonably powerful servers in it, it can easily serve thousands of concurrent users.
 
OK, great, but it still couldn't hold up against a corner of the TNG computer. We are not even near what is needed. If we were, then research into transporters and/or replicators would be far more advanced, because our computers would be able to handle all that information.
Take the particle accelerator in Sweden: the information gathered by a single collision is so great that not even their computers can handle it all at once.
 
^ Developing fictional magical tech requires the allocation of infinite real resources.

And what particle accelerator in Sweden?
 
I may have the place wrong, but I don't think so. The actual name starts with an H, but to avoid confusion I am not going to try to spell it.
 
Like sojourner said, we have computers today that can handle that kind of load. A typical server rack fits into a corner, and if it's got several reasonably powerful servers in it, it can easily serve thousands of concurrent users.

No server known to humanity can 'remember' where to put all the particles needed to make a human - something a Star Trek transporter computer can easily do; not to mention that scanning all those particles precisely enough is impossible, per Heisenberg's uncertainty principle.
Or where to put all the particles needed to replicate a coffee - again, feasible in Star Trek.
Or accurately process, in a few seconds, ALL the information coming from a full - and extremely detailed - planet scan; etc.

Star Trek fantasy tech has the ENORMOUS advantage of not being impeded by real physical laws:
'The laws of physics are like a bad wine.' - Bender, Futurama.
 
The iPad is a handheld device akin to the Trek PADD. What they are working on now is making desktop displays that can swivel down so they can also be used for multi-touch input.

That would bugger people's arms. Keyboards and mice are still around for a reason. Even if a suitably robust voice-activated system existed, it still wouldn't be used in places like my office, where everyone's in the same room.
 
Like sojourner said, we have computers today that can handle that kind of load. A typical server rack fits into a corner, and if it's got several reasonably powerful servers in it, it can easily serve thousands of concurrent users.

No server known to humanity can 'remember' where to put all the particles needed to make a human - something a Star Trek transporter computer can easily do; not to mention that scanning all those particles precisely enough is impossible, per Heisenberg's uncertainty principle.
Or where to put all the particles needed to replicate a coffee - again, feasible in Star Trek.
Or accurately process, in a few seconds, ALL the information coming from a full - and extremely detailed - planet scan; etc.

Star Trek fantasy tech has the ENORMOUS advantage of not being impeded by real physical laws:
'The laws of physics are like a bad wine.' - Bender, Futurama.

My argument has been that while our computers aren't nearly as powerful in raw terms as Trek computers, they don't need to be for most purposes. Yeah, we don't have transporters--so of course we don't have computers designed to hold data for them, which would no doubt require a new data storage paradigm we haven't yet discovered. We also don't have FTL computers on account of not having FTL. :lol:

But it would be unwise to underestimate just how powerful today's computers are, and are capable of being. For instance, there's no technological reason we couldn't take every single Internet-connected computer on the planet and use their idle cycles to power a global grid computer, which we could put to just about any purpose. There are about 4 billion Internet-connected computers in the world, and if we assume each one averages 3 GFLOPS (a reasonable and probably lowballed assumption), that's a grid computer capable of about 12 ExaFLOPS, or 12,000,000,000,000,000,000 floating-point operations per second--which is a lot. :lol:
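The arithmetic behind that figure, for anyone checking:

```python
computers = 4_000_000_000   # Internet-connected machines (rough estimate)
gflops_each = 3             # assumed average per machine

total_flops = computers * gflops_each * 10**9
print(f"{total_flops:.1e}")  # 1.2e+19 -- i.e. 12 exaFLOPS
```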

I think that's really where our computing power will end up in the future, too. It will be distributed rather than centralized, though each individual computer will get progressively more powerful as time goes on and technology improves. It's just not an apples-to-apples comparison because Trek has always taken a more centralized view of computing technology.
 
Robert Maxwell

We don't have Star Trek computers because we don't KNOW how to make computers so powerful, NOT because we don't need them.

By comparison to Star Trek computers, today's servers are more like abacuses, with ALL that they're capable of doing.
That's how powerful Star Trek computers must be, given some of the things we saw them accomplish.

For example - "12,000,000,000,000,000,000 floating-point operations per second" is not even close to what's needed in order to remember how to put one human together, one quantum particle at a time.
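For scale (using the commonly cited estimate of roughly 7×10^27 atoms in a human body, and assuming - very generously - just one operation per particle):

```python
atoms_in_body = 7e27        # commonly cited rough estimate
grid_flops = 1.2e19         # the 12-exaFLOPS grid from above

# One operation per particle, with no actual physics computed at all:
seconds = atoms_in_body / grid_flops
print(seconds / (3600 * 24 * 365))  # ~18.5 years for a single pass
```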
 