
TNG computers for real

Jadzia

Hello everyone.

What I'd like us to discuss here are computer systems, both those in TNG and in the real world. I hope that we can find inspiration from TNG in how we might design tomorrow's computer systems.

Where's WIMP?
-------------
The operating systems of modern computer systems are predominantly built around the WIMP interface. For those who don't know what this is, WIMP is an acronym for Windows, Icons, Menus and Pointers. It is through these objects that we navigate the operating system.

Home computers from the era of the Amiga and Atari introduced this computing environment to the home audience, while Microsoft later made it popular with Windows 3 and has carried it forward in all of its home operating systems.

However, the computers in The Next Generation seem to do away with the WIMP interface, and Chakotay made reference to how humanity phased out the use of pictographic icons.

Presumably this move was in favour of numerically labelled illuminated rectangles, a move that also saw the replacement of pointers with touch screens, and of windows with switchable full-screen displays whenever we run multiple programs.

So apparently, future computers won't require us to drag data from window to window, nor do we see toolbars or menu bars like those MS Windows gives us.

Altogether, this gives an interface less like the Windows PC, and more like the displays of yesteryear. A simplicity reminiscent of machines of the early-mid 1980s? If we were provided with this kind of operating system today, we can easily imagine how alien and awkward it would feel, but how much of that is due only to our familiarity with WIMP?

So are the kinds of systems we see in TNG real advancement opportunities, or would they be a step back in time? Because let's understand that even though it is a fictional context in which these computers are used, they do nevertheless give the appearance of being functionally efficient. So if we can succeed in emulating that appearance, then we also succeed in emulating the functional efficiency.

I'm hoping in this discussion that we can dismiss our initial reactions to what are perspex and cardboard models, and think about things this way.


What is wrong with WIMP?
------------------------
At the moment, we have to drudge through long sequences of keys and mouse clicks to perform any task, while each of these actions does a relatively insignificant thing.

Just think how much effort we must make to do even the simplest of tasks on the computer. What is one mouse click or one key press away? How many different tasks might you want to perform?

Are the kinds of menu driven interfaces we see on mobile phones and PDAs superior or inferior in terms of efficiency? Does the command prompt of unix hold merit? Or does "Start The Tape. Press Any Key" invite us to restore a lost yet once almighty simplicity? What is so great about WIMP?

Computers of today are really quite tedious in what they expect you to methodically sit and do. Just as computer programming is slow and tedious, even with high-level languages. It is so much easier to verbally describe what we want software to do than actually making it do it. Let us realise that WIMP is slow and tedious.

I expect that over the next 20 years, the drive for human efficiency will see us examine this communications bottleneck between computer and user. We can easily imagine that the computer systems in TNG are not so much faster than what we have today. They still take a visible time to display media on screen, a visible time to access files, and a visible time to process data. Yet they are so much quicker to work with, not least because they accept high-level instructions. In comparison, WIMP can be thought of as a low-level visual "language". This way of thinking is more revealing than calling it a GUI.


Better Than WIMP
----------------
Rather than having hundreds of icons and keys available, that each do very little, the computer in the 24th century presents the user with an interface that appears to be much more efficient in terms of what work the user must do to get the job done.

Presumably it utilizes artificial intelligence to anticipate what we may do next, presenting those few options through a menu system that is accessible and partially expanded through the console of coloured rectangular buttons laid out before us. Common tasks require only one key. More complex tasks will require more. Perhaps intelligent menus are the way forward in place of the tedium of WIMP? Let us explore this idea.
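To make the idea concrete, here's a rough Python sketch of one way an "intelligent menu" might rank its options, simply by remembering which action tends to follow which. All the action names and the scoring are my own invention for illustration, nothing canonical:

from collections import Counter, defaultdict

class IntelligentMenu:
    def __init__(self):
        # history[previous_action] counts which action tends to come next
        self.history = defaultdict(Counter)
        self.last_action = None

    def record(self, action):
        # learn: remember what followed what
        if self.last_action is not None:
            self.history[self.last_action][action] += 1
        self.last_action = action

    def suggest(self, options, top=4):
        # offer the few most likely next actions as one-key choices
        counts = self.history[self.last_action]
        return sorted(options, key=lambda o: -counts[o])[:top]

menu = IntelligentMenu()
for step in ["open library", "filter results", "display on screen"]:
    menu.record(step)

menu.last_action = "open library"
print(menu.suggest(["filter results", "close library", "export data"]))
# 'filter results' is ranked first, because it followed 'open library' before

The point being that the console need only light up a handful of rectangles at a time, rather than burying us in hundreds of icons.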

Obviously some systems appear ridiculously oversimplistic, like the central desk consoles in main engineering, where we can apparently retune the whole ship with just six buttons. As if!

Most Star Trek systems are voice controlled, and let's not forget that. It provides a new kind of communications interface. At the moment, voice control software requires us to say "Cut", "Paste", "File Menu", etc., and it is actually quicker to simply press the keys, because Windows is designed for mouse and keyboard. It is easiest to interface with it in the way it is supposed to be interfaced with. But where a computer has an interface designed for more forms of interaction, such as verbal, its GUI may not need as much investment. Always keep that in mind.

What Do They Do?
----------------
The kinds of things they use the computers for most in Star Trek are as follows:

(1) Displaying stored data/information/multimedia. Firstly, the relevant data library must be accessed, and secondly the data must be filtered and formatted.

(2) Displaying a live stream of audio or video.

(3) Servo-control commands, like doors, lights, or power relays. Firstly, the correct servo system must be accessed, and secondly it must be manipulated.

(4) Data analysis, e.g. voice recognition, sensor monitoring.

(5) Batch files/programs/macros for customising controls (cf. "Silicon Avatar").

(6) Communications (and management of it, including data access, routing, and security). It is easy to open a channel to another station, set up a live feed of audio or video, or transfer data, software elements, or screen displays to or from another console. Without loss of generality, a communicator badge may itself be considered a computer station, and judging from the "A Matter Of Time" episode, all Starfleet hardware is like this, with built-in wireless comms and unique hardware identities.


Realise that none of these components are beyond our means today.
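Just to show how unmagical item (6) is with today's tools, here's a toy Python model of devices with unique hardware identities and built-in comms. The device IDs and classes are made up by me:

from dataclasses import dataclass, field

@dataclass
class Device:
    hardware_id: str              # unique identity, fixed at manufacture
    kind: str                     # "console", "commbadge", "padd", ...
    inbox: list = field(default_factory=list)

class CommsNet:
    def __init__(self):
        self.devices = {}         # hardware_id -> Device

    def register(self, device):
        self.devices[device.hardware_id] = device

    def send(self, src_id, dst_id, payload):
        # a real network would add routing and security checks here
        self.devices[dst_id].inbox.append((src_id, payload))

net = CommsNet()
net.register(Device("HELM-01", "console"))
net.register(Device("BADGE-RIKER", "commbadge"))
net.send("BADGE-RIKER", "HELM-01", "live audio feed")
print(net.devices["HELM-01"].inbox)   # [('BADGE-RIKER', 'live audio feed')]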


Let's look at servo controls as the most interesting of these. We can imagine a linear menu system would suffice for selecting which one to access, but what about a non-linear approach? I.e., a database of all possible controls with field filters. It would aid the AI in suggesting likely choices, and any related systems would be closely associated through an intelligent menu that is easily navigable, whether by buttons or voice control. E.g., controls in the same room (lights/doors/music), controls in the same subsystem (ship-wide environmental adjustments / ship-wide security lockdown), controls on the same deck (all bridge functions), controls related to similar servos (e.g., all helm controls).

By selecting one field filter of a given control, we have immediate access to other controls which either share that field or unlock that field.
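A minimal sketch of that control database in Python, with invented fields and controls (a real ship would have thousands):

CONTROLS = [
    {"name": "lights",        "room": "ten-forward", "deck": 10, "subsystem": "environment"},
    {"name": "doors",         "room": "ten-forward", "deck": 10, "subsystem": "security"},
    {"name": "music",         "room": "ten-forward", "deck": 10, "subsystem": "environment"},
    {"name": "deck lockdown", "room": None,          "deck": 12, "subsystem": "security"},
    {"name": "helm heading",  "room": "bridge",      "deck": 1,  "subsystem": "helm"},
]

def related(control, field):
    # every other control sharing the given field with this one
    return [c["name"] for c in CONTROLS
            if c is not control and c[field] == control[field]]

doors = CONTROLS[1]
print(related(doors, "room"))        # ['lights', 'music']  - same room
print(related(doors, "subsystem"))   # ['deck lockdown']    - same subsystem

One flat database, many ways in: that's what makes the menu non-linear.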


It is then perhaps understandable why multitasking is as it is, with no windows as such. Arbitrarily scattered windows are awkward, as one window is disabled and occluded when the user brings another to the front. But for stations like the helm, we require the console to be fully mapped to helm functions, without any occlusion of the controls by some other set of controls.

However, part of the helm console does allow additional functions, like accessing navigational archives and the sensor grid, or indeed any ship system, such as turning out the lights on deck 12 if the helmsman wanted to do that.

So most if not all consoles appear to be designed to provide this semi-permanent "split screen" for multitasking wherever it is desired, limiting in a predictable way how the user's multitasking will take place so as to not interfere with existing controls.



What lies behind the Interface? Ideas for how a superior Operating System might work
------------------------------------------------------------------------------------
Data processing is also a popular computer function, and is closer to the heart of the operating system than the interface is. But even this seems to be automated in some way, where algorithms (and indeed everything) appear to be preprogrammed and modular in design. Mostly it is simple macros which are hand programmed, e.g. "Computer, run program Picard 1."

Even the program architecture of the EMH, which we occasionally saw in Voyager, is of modular construction, consisting of a network of strange symbols, presumably representing high-level processing elements: Inputs -> Processing -> Outputs. These may be built up structurally in ways reminiscent of how electronic circuits are today, but rather than current flow it is formatted data, or metadata, like the Windows clipboard.

For example:
Input: Wave audio - Output: String of words
Input: Wave audio - Output: Emotional tone
Input: Video - Output: Faces
Input: String of words - Output: Emotional content
Input: String of words - Output: Subject
Input: String of words - Output: Clause
Input: Face - Output: Identity of person
Input: Face - Output: Emotional expression

Or something along those lines. Perhaps not quite this inflexible, but you get the idea. With this kind of modular approach you can imagine just how much easier it would be to write software. Each of these modules may be represented symbolically in circuits, while each module may itself be a nested arrangement of sub modules.
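Here's a little Python sketch of what I mean: modules that declare their input and output formats and connect like circuit components, refusing to wire up on a mismatch. The module names and formats are just examples I've made up:

class Module:
    def __init__(self, name, in_type, out_type, fn):
        self.name, self.in_type, self.out_type, self.fn = name, in_type, out_type, fn

    def __call__(self, data):
        return self.fn(data)

def chain(*modules):
    # refuse to wire modules whose data formats don't line up
    for a, b in zip(modules, modules[1:]):
        assert a.out_type == b.in_type, f"{a.name} -> {b.name}: format mismatch"
    def run(data):
        for m in modules:
            data = m(data)
        return data
    return run

# two invented modules; real ones would do actual signal processing
to_words = Module("speech-to-text", "wave_audio", "word_string",
                  lambda audio: "hello computer")
subject  = Module("find-subject", "word_string", "subject",
                  lambda text: text.split()[-1])

pipeline = chain(to_words, subject)
print(pipeline(b"...raw audio..."))   # -> 'computer'

Each module here could itself be a nested chain of sub-modules, which is exactly the circuit-like structure I'm imagining.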



I believe that most computer functions, especially the analytic ones, must be complex semi-autonomous applications. User controls may simply stimulate such software to approach problems from different strategic angles. I suppose user efficiency is then a question of having a complete set of software modules at hand... whatever that means. :-)

Notice that I haven't explored the physical X-tronics principles which these computers operate from as it would be pure sci-fi speculation what technologies we will discover and invent over the next 300 years.


So... after this lengthy introduction, I would like us to discuss some of the issues I raise here. Looking at the efficiency of the computer systems in TNG, let us think about how the interface and general design of both hardware and software lend inspiration to our real-world interest in developing superior technology.



Thanks

Jadzia
 
Well, most of the computers we see in TNG are usually used for one thing (displaying data about the reaction in the warp core, for example), not general-purpose PCs.

I disagree about the window thing, though. We've seen several cases where they had more than one window open - usually on the larger screens on the bridge or in science labs.
 
Heyo Ariadne,

I see it as a case of each computer being potentially general purpose, but installed and configured for administering one particular system. Because we so often see information and control being accessed from a console normally associated with a different system.

Think of their laptops maybe? They're general purpose.

Anyway, I'm not so interested in us recreating specifically how things are in trek, but how trek lends inspiration to how things could be.

The perfect outcome here (in this forum) is that we collaborate, discuss and design the layout and structure of a superior user interface and operating system, one that would be just about possible with today's technology via a little stretch of the imagination. :-)

Thanks

Jadzia x
 
Does the command prompt of unix hold merit?

Heck yes. For those who know what they're doing, using the command prompt can be much faster than using a GUI. Try watching someone who knows their stuff working in vi... it's uncanny.

Besides which, most GUIs are simply a pretty interface over a command-line utility anyway. Windows takes more pains to hide this than Unix-based systems like OSX do, but it's still the case.

The only downside of direct command-line work is that there's a steep learning curve, and options must be explicitly selected. But if you've got AI sufficiently good to parse natural-language commands, as they do on TNG, then that ceases to be a problem, and suddenly one interface to the command line (a GUI of today) can be replaced by a speech-parsing interface.
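To illustrate: here's a trivial Python sketch of the "speech parser as just another front end" idea. The utterances and the commands they map to are invented; a real parser would do actual language analysis rather than a table lookup:

# each entry pairs a spoken request with the command it stands for
INTENTS = {
    "show me the crew roster":   "db-query --table crew --format roster",
    "dim the lights":            "servo-set --system lights --level 30",
    "open a channel to sickbay": "comms-open --target sickbay",
}

def speech_to_command(utterance):
    # stand-in for a real natural-language parser: a bare lookup
    return INTENTS.get(utterance.lower().strip())

print(speech_to_command("Dim the lights"))
# -> servo-set --system lights --level 30

The GUI, the shell, and the speech parser all bottom out in the same commands; only the front end changes.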
 
Well, I have built a real LCARS system for PCs, and it's had over 100,000 downloads. The limiting factor is that you need a late-1990s TrueColor Pentium laptop with standard graphics and sound cards and no built-in power management to install it on. So basically, you have to have one folded up in the attic or find one in a dumpster somewhere. But people who do have it love it. And development continues. It's not for Windows and doesn't use a mouse.

Its file manager is split-screen, with folder trees left and right and file listings of open folders below. Instead of drag and drop, one keypress sends a file from the file list at lower right to the one at lower left, for example, moving it from one folder to another, into its new place in alphabetical order, and the number of files contained in each of those folders is updated and displayed immediately, as are both file lists. Or you can just as easily copy and paste, delete, or view the contents of files, be they images, text, etc., or hear them if they are sound files, in rapid succession.

The LCARS Library has files displayed in four scrollable columns, organized by file type, like this:

http://i6.photobucket.com/albums/y247/LCARS24user/ACCESS.png

All the apps start up instantly, and all functions are clearly labeled.

The Options menu is like this, also with no need for a mouse (buttons change according to user input):

http://i6.photobucket.com/albums/y247/LCARS24user/opt2.png

Library pages rendered from a specialized LCARS markup language are user editable on the fly and capable of a wide variety of displays, including labeling of schematics, like this:

http://i6.photobucket.com/albums/y247/LCARS24user/NOVA-L24.png
 
Touch screen displays are the future, and as such the idea of mouse and keyboard input may go away eventually. But I don't see pictograms going away, you will have icons you touch on the screen still.
 
I worked in a pharmacy for a couple years (2001 - 2003), and we used a DOS-based command line system to dispense meds and look up patient information. While the learning curve was steep, once you memorized all the commands you could access information and execute commands amazingly quickly, far faster than the GUI-based system I use for similar functions now. When you have a terminal largely dedicated to one function (like the helm, an engineering station, or my pharmacy computer), it is very efficient to have a command-line system. And you would expect Starfleet personnel to have extensive training with these systems, because in combat you don't want to have to be hunting for menu options or the right command.

And since they have the voice-command capability for more complex or rarely used operations, or as backup for someone unfamiliar with a particular console layout, you don't need the in-depth file and menu "options" and "preference lists" that you find in programs today.

Though I disagree with Brent, I don't think touch screens will replace keyboards altogether. The tactile feedback is essential for fast input. Have you ever tried typing on a QWERTY touchscreen? It takes significantly longer than a standard keyboard because you can't place your fingers by feel. While I do think touchscreens are very useful for certain devices -- such as smartphones and iPods -- where you have a large variety of functions that you need to utilize and a limited space to input commands, the tactile nature of the key- and switch-based command systems allows for much quicker access if you have the space.
 
Though I disagree with Brent, I don't think touch screens will replace keyboards altogether. The tactile feedback is essential for fast input.
I work with touch screens at my job, and the lack of tactile feedback is a problem. It is easy to mistype something when I try to go too fast. And sometimes I will hit the wrong option on the screen without realizing it.
 
Touch screen displays are the future, and as such the idea of mouse and keyboard input may go away eventually. But I don't see pictograms going away, you will have icons you touch on the screen still.

Though I disagree with Brent, I don't think touch screens will replace keyboards altogether. The tactile feedback is essential for fast input.
I work with touch screens at my job, and the lack of tactile feedback is a problem. It is easy to mistype something when I try to go too fast. And sometimes I will hit the wrong option on the screen without realizing it.

Some touchscreen phones have haptic feedback, which basically sends a pulse through the handset's 'vibrator' when a 'button' is pressed. I'm not sure what the ultimate outcome with haptic feedback will be, but I think the goal is to emulate the feel of a keystroke where no keys exist. For instance, the iPhone onscreen keyboard feels like tapping a piece of plastic instead of pressing buttons.

I've read about some patents that integrate some sort of delineation for different buttons so a touchscreen can be used to touch-type.
 
Though I disagree with Brent, I don't think touch screens will replace keyboards altogether. The tactile feedback is essential for fast input.

*nods* Also fingers getting in the way of the screen means you don't see info at the bottom as much as at the top.

The idea in TNG seems to be touch screen with separate keypad console beneath.

But TNG does have audible noise to replace tactile feedback. Roddenberry did foresee the demise of mechanical switches. Visually attractive, but the jury's still out for me on that one.

Has anyone here tried these novelty things?

http://www.maplin.co.uk/Module.aspx?ModuleNo=222220&criteria=keuboard&doy=25m4
 
Hey LCARS24 :-)

This software you've made sounds great. Just a shame you don't have the source code to adapt it into some Windows DirectX thing for the modern audience to sample? Is it native DOS?

I'm also interested to know - is the environment capable of supporting new applications? What I mean is, have you developed a visual programming environment, that lies behind the GUI, or is it basically a collection of programs displaying fixed Okuda-esque pictures?

Could we see some more pictures too please :-)

Jadzia x
 
I'd hope any software updating would be done via a more cross-platform means than DirectX...
 
I worked in a pharmacy for a couple years (2001 - 2003), and we used a DOS-based command line system to dispense meds and look up patient information.

The only downside of direct command-line work is that there's a steep learning curve, and options must be explicitly selected.


Hi JGordon & Lindley,

Some interesting thoughts you bring.

Yeah, I guess that Unix gets raves from those who use it, and respect from those who see it being used well, while everybody else looks on in bewilderment. It is a very old system, and I think it's important to ask why we chose to move away from command prompts.

The steep learning curve is only partly responsible. I think more important is to realise that fluent control of a command prompt is only suited to a particular mind: one that is quite comfortable with abstract, logically precise languages. Few people have such a mind. Perhaps 5% of the population?

But in tense combat situations do we really want to be fretting over whether we have our capital letters and backslashes and colons in the right place?

Part of the utopia which Star Trek sells us is the pleasure derived from convenience. Being reliant upon a computer system so arcane would not give this feeling. Humanity would feel enslaved by the technology, rather than served by it.

We've all seen those joke pictures from the early 90s of shirt'n'tie guys having nervous breakdowns as their computer streams out syntax errors and other insults prefixed with code numbers. Seems kind of relevant here.

Anyway, what emerged from command prompts was Microsoft Windows. The success of Microsoft is owed to the user-friendliness of WIMP. WIMP is simple to pick up. Three-year-olds can learn it. The MS OSs have mass appeal simply because they are easy to navigate and can be learned through play. Yes they are bug-ridden, and yes they make ridiculous demands of the hardware, but overall, they have given home computers a fair degree of convenience.


Also remember that many users of the Enterprise-D computers were civilians and children, who wouldn't have the training and skill to use a command prompt, or to speak to it using abstract, logically precise terms. And these same trekkers generally don't use voice commands when they're standing right next to the console - they use fingers.

So yes, voice recognition is one of these modules we would need to build.

But to have it interpret imprecise language would be an equal priority.

I don't know what the current ability of machines is to derive meaning from a string of words, but I don't imagine it is too difficult, even if we rely on scanning through a massive database of sentences and phrases to assign prespecified meaning, or using that same database to approximate strings of imperfect grammar with strings of perfect grammar, and derive an analytic meaning.

We could in essence teach the voice recognition software what we mean, by clarification, rather than relying solely on formal linguistic analysis. With thousands of beta-testers holding daily conversation with the computer, exposing it to all of the quirks of the human tongue, it should quickly learn how it is supposed to interpret the words, if we take time initially to tell it how. And via the net we share these Tomes of Wisdom we build.
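For fun, here's a tiny Python sketch of that teach-by-clarification loop, leaning on the standard library's difflib for the fuzzy matching. The phrases and command names are examples I've invented:

import difflib

# phrases we've been taught so far, mapped to command names I made up
phrase_db = {
    "turn on the lights":  "LIGHTS_ON",
    "turn off the lights": "LIGHTS_OFF",
}

def interpret(utterance):
    # fuzzy-match against everything we've learned so far
    match = difflib.get_close_matches(utterance, list(phrase_db), n=1, cutoff=0.7)
    if match:
        return phrase_db[match[0]]
    # no close match: ask for clarification, then remember the answer
    meaning = input(f"I don't understand {utterance!r}. What did you mean? ")
    phrase_db[utterance] = meaning
    return meaning

print(interpret("turn on teh lights"))   # close enough -> LIGHTS_ON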

Once taught, it's taught forever. Until it's hit with the latest teenage slang. :-)

It would also be pretty cool to have a Majel Barrett voice synthesizer. But that's a whole other topic :-) Anything like that out there?

Jadzia x
 
Heyo Ariadne,

I see it as a case of each computer being potentially general purpose, but installed and configured for administering one particular system. [...] The perfect outcome here (in this forum) is that we collaborate, discuss and design the layout and structure of a superior user interface and operating system, one that would be just about possible with today's technology via a little stretch of the imagination. :-)
My (clearly poorly phrased) comment was supposed to be about the fact that the best GUI for a systems control interface is going to be very different from (and probably less complex than) the best GUI for doing the things people do on PCs.

And, also, that I'm not so sure they don't have WIMP - they certainly have windows, as I said in my first post, and pointers, since it's a touch screen.

Though I disagree with Brent, I don't think touch screens will replace keyboards altogether. The tactile feedback is essential for fast input.
In Trek, they tend to use the voice input for longer things - touch screens are just for one input at a time.

Also remember that many users of the Enterprise-D computers were civilians and children, who wouldn't have the training and skill to use a command prompt, or to speak to it using abstract, logically precise terms. And these same trekkers generally don't use voice commands when they're standing right next to the console - they use fingers.
I suspect that may be out of courtesy for the other people in the room - if everybody was talking to their computer at the same time, it would get loud and confusing.
 
Yeah, I guess that Unix gets raves from those who use it, and respect from those who see it being used well, while everybody else looks on in bewilderment. It is a very old system, and I think it's important to ask why we chose to move away from command prompts.

We didn't. They're still used all over the place by software developers, and the systems supporting them have advanced features, so they're not "old". I believe Linux is still one of the only platforms offering a fully preemptible kernel (Windows certainly doesn't yet).

Only "everyman" users have the illusion that command prompts aren't important anymore, because software engineers do their best to keep them with that impression.

The steep learning curve is only partly responsible. I think more important is to realise that fluent control of a command prompt is only suited to a particular mind: one that is quite comfortable with abstract, logically precise languages. Few people have such a mind. Perhaps 5% of the population?

But in tense combat situations do we really want to be fretting over whether we have our capital letters and backslashes and colons in the right place?
"Tab completion", allowing the shell to guess what you're typing and complete it for you, takes a lot of the simple errors out of the picture. And as to the "mindset", well, there may be some of that but the important thing to realize is that you're getting the same options either way. The only difference is whether you open a menu, select the proper pane, and click the proper button (10-5 seconds) or simply type -option (1-3 seconds).

The only real difficulty is in remembering what the options are in the first place and how to specify them. That's what the -help option is usually for, not to mention the man pages ("man emacs" brings up the help page for emacs, a common text editor.)
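If you've never seen it in action, here's about the smallest possible taste of tab completion, done in Python with the standard readline module (Unix-like systems; the command list is made up):

import readline

COMMANDS = ["open", "options", "overwrite", "quit"]

def complete(text, state):
    # called repeatedly by readline; return the state-th match
    matches = [c for c in COMMANDS if c.startswith(text)]
    return matches[state] if state < len(matches) else None

readline.set_completer(complete)
readline.parse_and_bind("tab: complete")

while True:
    line = input("> ")    # TAB now completes against COMMANDS
    if line == "quit":
        break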

We've all seen those joke pictures from the early 90s of shirt'n'tie guys having nervous breakdowns as their computer streams out syntax errors and other insults prefixed with code numbers. Seems kind of relevant here.
We're a ways past punch cards, and I'm not talking about coding. I'm talking about using existing programs, same as anyone does.

Anyway, what emerged from command prompts was Microsoft Windows.
Well, first Xerox came up with a GUI which someone thought was a nifty side project that wasn't really worth anything. Then someone at Apple computer saw it and saw the potential; thus emerged the Macintosh. Only then did Microsoft realize something was happening and jump on the bandwagon.

The success of Microsoft is owed to the user-friendliness of WIMP.
And some dirty business tricks like threatening to deny licensing to hardware companies that didn't ship their machines preloaded with Windows.

WIMP is simple to pick up. Three-year-olds can learn it. The MS OSs have mass appeal simply because they are easy to navigate and can be learned through play. Yes they are bug-ridden, and yes they make ridiculous demands of the hardware, but overall, they have given home computers a fair degree of convenience.
And yet Apple, which has fairly consistently been slightly ahead in the GUI department for most of the last quarter century, has in recent times moved back to a Unix-based system. OSX has a very nice GUI, but the command-line functionality is all right there and accessible.

To a degree that's true on Windows as well, but they're still trying to hide it as best they can. And let's face it, DOS was never as developed a product as the various flavors of Unix out there.

Also remember that many users of the enterprise-d computers were civilians and children, who wouldn't have the training and skill to use a command prompt, or to speak to it using abstract logically precise terms. And these same trekkers generally don't use voice commands when they're standing right next to the console - they use fingers.
On the other hand, they apparently were taught warp theory in second grade. Make of that what you will.

So yes, voice recognition is one of these modules we would need to build.

But to have it interpret imprecise language would be an equal priority.
Naturally. That's already being worked on, with mixed results.

I don't know what the current ability of machines is to derive meaning from a string of words, but I don't imagine it is too difficult, even if we rely on scanning through a massive database of sentences and phrases to assign prespecified meaning, or using that same database to approximate strings of imperfect grammar with strings of perfect grammar, and derive an analytic meaning.
It's rather harder than you think. The first challenge is recognizing what's being said in the first place; we haven't even got that down yet. Deriving semantic meaning would be easy after that in a logical language, but English is not logical. Observe:
"This sentence is false."

We could in essence teach the voice recognition software what we mean, by clarification, rather than relying solely on formal linguistic analysis. With thousands of beta-testers holding daily conversation with the computer, exposing it to all of the quirks of the human tongue, it should quickly learn how it is supposed to interpret the words, if we take time initially to tell it how. And via the net we share these Tomes of Wisdom we build.

Once taught, it's taught forever. Until it's hit with the latest teenage slang. :-)
You're right that this form of reinforcement learning is one of the currently active areas of AI research. However, I don't know how well it works at present; I haven't studied that specific subject.
 
Hey LCARS24 :-)

This software you've made sounds great. Just a shame you don't have the source code to adapt it into some Windows DirectX thing for the modern audience to sample? Is it native DOS? [...] Is the environment capable of supporting new applications? [...] Could we see some more pictures too please :-)

Windows is a lousy platform for it, but if you want a small taste of what it can do in native Windows mode, you can try the periodic table and puzzle standalone for Windows, a free download from SourceForge:

Screenshot of puzzle in progress:
http://i6.photobucket.com/albums/y247/LCARS24user/PT-puzz640.jpg

SourceForge project page (click on Download and select that Windows standalone):

Some minor apps are written in SFML (Starfleet markup language), which is similar to HTML but dedicated to LCARS. The next release has some LCARS panels that are completely user-editable and can run background programs or perform various functions of the LCARS core in response to keypresses labeled on the panel (or other type of LCARS screen).

The next version also has an LCARS answer to PowerPoint, called Briefing, which lets you run through a slide show of interactive (scrollable, zoomable, etc.) screens, not all of which necessarily have to be in LCARS style.

Native DOS? Not really. It's 32-bit DPMI with a DOS stub attached. DOS sees it as a small DOS program with a large block of data attached. At startup the stub switches out of DOS mode to DPMI mode and starts the real program. Windows and Linux are also DPMI programs. Stubifying LCARS 24 so that it can be launched from DOS makes installation a lot easier for the user.

What the screenshots don't tell you is how fast, and smooth, and user friendly this whole thing is. What I may do if I can ever find the time is make a YouTube demo.

Besides utility programs, there are lots of games included. Two more going into the next version are Sudoku and Kakuro. In the case of Kakuro, not only does the grid vary from puzzle to puzzle, but so do the indentations and punch holes of the puzzle frame:
http://i6.photobucket.com/albums/y247/LCARS24user/KK5.png
And each puzzle is a 100K text file.

And the whole thing doesn't display pictures of LCARS screens. It renders them, and they are interactive. And it can render an interactive screen from a text file describing the screen and its functions in detail.
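Just to give a flavor of the approach (this is NOT the actual SFML syntax, only a simplified Python mock-up of the render-from-text idea):

PANEL = """
button F1  VIEW LOG   show_log
button F2  SOUND OFF  mute
button F3  STANDBY    standby
"""

def parse_panel(text):
    # each line: keyword, key, label words..., function name
    actions = {}
    for line in text.strip().splitlines():
        _, key, *label, func = line.split()
        actions[key] = (" ".join(label), func)
    return actions

for key, (label, func) in parse_panel(PANEL).items():
    print(f"[{key}] {label:<10} -> {func}()")

The real thing handles layout, colors, scrolling, and much more, but the principle is the same: the screen is data, not a picture.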

I also put a download link to a native Windows version on my Web site. It has about half the functions of the regular version. And, by the way, the regular version is open source.

And here's a screenshot gallery at StarTrek4U:
http://www.startrek4u.com/index.php?page_id=12

Here's standby mode with the regular version properly installed on an old laptop:
http://i6.photobucket.com/albums/y247/LCARS24user/PICT0016.jpg
http://i6.photobucket.com/albums/y247/LCARS24user/LC24W.png
(Those are old. The current version looks better.)

By the way, the regular version is launchable from Windows XP, and it will run on some XP machines. It depends on the graphics card. And the sound won't work right on most XP machines, because most have nonstandard sound cards. So when it talks while running under XP, it may sound garbled. And DirectX downgrades the color depth if running the regular version.
 
Thanks LCARS24. You've obviously put a lot of work into that. :-)

I like the idea of GUIs in the future being scripted like this because it so easily allows for customisation. This SFML language you've invented must be similar to what I'd like to see in future PCs.

But what you've got so far is more of a Trek celebration than a practical everyday computer. That's OK. You've shown what can be done, and what is workable, as alternative user interfaces go. So I say well done :-)

Jadzia
 
The steep learning curve is only partly responsible. I think more important is to realise that fluent control of a command prompt is only suited to a particular mind: one that is quite comfortable with abstract, logically precise languages. Few people have such a mind. Perhaps 5% of the population?

That's the fundamental problem in user interfaces. Forget the practical aspects of voice recognition - that's almost solvable.

Right now we have specially trained people to translate ambiguous human language into a logically precise one that can be understood by computers. i.e. Programmers.

We also have GUIs that force people to express what they want in a logically unambiguous manner - at the cost of speed and flexibility.
 
Yes Arthur, this is the current difficulty we have with computers. :-)

The design challenge is to develop a user interface that is fast, flexible, and understood by the machine, from input that is not necessarily precise or logical.

Whooo. Did I just say that? LOL. Go girl!

Jadzia x
 