Hello everyone.
What I'd like us to discuss here are computer systems, both those in TNG and in the real world. I hope that we can find inspiration from TNG in how we might design tomorrow's computer systems.
Where's WIMP?
-------------
The operating systems of modern computer systems are predominantly built around the WIMP interface. For those who don't know what this is, WIMP is an acronym for Windows, Icons, Menus and Pointers. It is through these objects that we navigate the operating system.
Home computers from the era of Amiga and Atari introduced this computing environment to the home audience, while Microsoft later made it popular with Windows 3, and have carried it forward in all of their home operating systems.
However, the computers in The Next Generation seem to do away with the WIMP interface, and Chakotay made reference to how humanity phased out the use of pictographic icons.
Presumably this move was in favour of numerically labelled illuminated rectangles, with pointers replaced by touch screens, and windows replaced by switchable full-screen displays whenever multiple programs are running.
So apparently, future computers won't require us to drag data from window to window, and we see no toolbars or menu bars of the kind MS Windows gives us.
Altogether, this gives an interface less like the Windows PC, and more like the displays of yesteryear. A simplicity reminiscent of machines of the early-mid 1980s? If we were provided with this kind of operating system today, we can easily imagine how alien and awkward it would feel, but how much of that is due only to our familiarity with WIMP?
So are the kinds of systems we see in TNG real opportunities for advancement, or would they be a step back in time? Let's understand that even though these computers are used in a fictional context, they nevertheless give the appearance of being functionally efficient. So if we succeed in emulating that appearance, we also succeed in emulating the functional efficiency.
I'm hoping in this discussion that we can dismiss our initial reactions to what are perspex and cardboard models, and think about things this way.
What is wrong with WIMP?
------------------------
At the moment, we have to drudge through long sequences of keys and mouse clicks to perform any task, while each of these actions does a relatively insignificant thing.
Just think how much effort we must make to do even the simplest of tasks on the computer. What is one mouse click or one key press away? How many different tasks might you want to perform?
Are the kinds of menu driven interfaces we see on mobile phones and PDAs superior or inferior in terms of efficiency? Does the command prompt of Unix hold merit? Or does "Start The Tape. Press Any Key" invite us to restore a lost yet once almighty simplicity? What is so great about WIMP?
Computers of today are really quite tedious in what they expect you to sit and methodically do, just as computer programming is slow and tedious even with high level languages. It is so much easier to verbally describe what we want software to do than to actually make it do it. Let us realise that WIMP is slow and tedious.
I expect that over the next 20 years, the drive for human efficiency will see us examine this communications bottleneck between computer and user. We can easily imagine that the computer systems in TNG are not so much faster than what we have today. They still take a visible time to do things like display media on screen, a visible time to access files, and a visible time to process data. Yet they are so much quicker to work with not solely because they accept high level instructions. In comparison, WIMP can be thought of as a low level visual "language". This way of thinking is more revealing than calling it a GUI.
Better Than WIMP
----------------
Rather than presenting hundreds of icons and keys that each do very little, the computer in the 24th century presents the user with an interface that appears much more efficient in terms of the work the user must do to get the job done.
Presumably it utilises artificial intelligence to anticipate what we may do next, presenting those few options through a menu system that is accessible and partially expanded through the console of coloured rectangular buttons laid out before us. Common tasks require only one key; more complex tasks require more. Perhaps intelligent menus are the way forward in place of the tedium of WIMP? Let us explore this idea.
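As a minimal sketch of this "intelligent menu" idea: rank the available commands by how often each one has followed the user's previous action, so the most likely next steps land on the first few buttons. All the command names here are hypothetical, and real prediction would of course be far more sophisticated.

```python
from collections import defaultdict

class AdaptiveMenu:
    """Ranks commands by how often each has followed the previous command."""
    def __init__(self, commands):
        self.commands = list(commands)
        # follows[a][b] counts how many times b was chosen right after a
        self.follows = defaultdict(lambda: defaultdict(int))
        self.last = None

    def record(self, command):
        """Note that `command` was chosen after the previous one."""
        if self.last is not None:
            self.follows[self.last][command] += 1
        self.last = command

    def suggestions(self, n=3):
        """The n commands most likely to come next, best first."""
        counts = self.follows[self.last]
        return sorted(self.commands, key=lambda c: -counts[c])[:n]

menu = AdaptiveMenu(["open_doors", "lights_on", "play_music", "lock_down"])
for cmd in ["lights_on", "play_music", "lights_on", "play_music", "lights_on"]:
    menu.record(cmd)
print(menu.suggestions(1))  # play_music usually follows lights_on
```

The point is only that the console need not be static: the buttons laid out before the user can be chosen from the user's own habits.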
Obviously some systems appear ridiculously oversimplistic, like the central desk consoles in main engineering, where we can apparently retune the whole ship with just six buttons. As if!
Most Star Trek systems are voice controlled, and let's not forget that. It provides a new kind of communications interface. At the moment, voice control software requires us to say "Cut", "Paste", "File Menu", etc., and it is actually quicker to simply press the keys, because Windows is designed for mouse and keyboard. It is easiest to interface with it in the way it is supposed to be interfaced with. But where a computer has an interface designed for more forms of interaction, such as verbal, its GUI may not need as much investment. Always keep that in mind.
What Do They Do?
----------------
The kinds of things they use the computers for most in Star Trek are as follows:
(1) Displaying stored data/information/multimedia. Firstly, the relevant data library must be accessed, and secondly the data must be filtered and formatted.
(2) Displaying a live stream of audio or video.
(3) Servo-control commands, for things like doors, lights, or power relays. Firstly, the correct servo-system must be accessed, and secondly it must be manipulated.
(4) Data analysis, e.g. voice recognition, sensor monitoring.
(5) Batch files/programs/macros for customising controls (cf. "Silicon Avatar").
(6) Communications, and the management thereof, including data access, routing, and security. It is easy to open a channel to another station, set up a live feed of audio or video, or transfer data, software elements, or screen displays to or from another console. Without loss of generality, a communicator badge may itself be considered a computer station, and judging from the "A Matter Of Time" episode, all Starfleet hardware is like this, with built-in wireless comms and unique hardware identities.
Realise that none of these components are beyond our means today.
Servo controls are perhaps the most interesting of these. We can imagine a linear menu system would suffice for selecting which one to access, but what about a non-linear approach, i.e. a database of all possible controls with field filters? It would aid the AI in suggesting likely choices, and related systems would be closely associated through an intelligent menu that is easily navigable whether by buttons or voice control. E.g. controls in the same room (lights/doors/music), controls in the same subsystem (ship-wide environmental adjustments / ship-wide security lockdown), controls on the same deck (all bridge functions), controls related to similar servos (e.g. all helm controls).
By selecting one field filter of a given control, we have immediate access to other controls which either share that field or unlock that field.
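One way to picture this field-filter database: each control carries tagged fields (room, deck, subsystem), and a query narrows the full set by any combination of them. The controls and field names below are invented for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Control:
    name: str
    room: str
    deck: int
    subsystem: str

CONTROLS = [
    Control("lights",        "ten_forward", 10, "environment"),
    Control("doors",         "ten_forward", 10, "environment"),
    Control("music",         "ten_forward", 10, "entertainment"),
    Control("helm_heading",  "bridge",       1, "helm"),
    Control("helm_velocity", "bridge",       1, "helm"),
    Control("lockdown",      "bridge",       1, "security"),
]

def filter_controls(controls, **fields):
    """Return controls matching every given field, e.g. deck=1 or subsystem='helm'."""
    return [c for c in controls
            if all(getattr(c, k) == v for k, v in fields.items())]

# All helm-related controls, regardless of location:
print([c.name for c in filter_controls(CONTROLS, subsystem="helm")])
# Everything in the same room as the lights:
print([c.name for c in filter_controls(CONTROLS, room="ten_forward")])
```

Selecting any one field of a control then amounts to re-querying with that field fixed, which is exactly the "share or unlock" navigation described above.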
It is then perhaps understandable why multitasking is as it is, with no windows as such. Arbitrarily scattered windows are awkward: one window is disabled and occluded when the user brings another to the front. But for stations like the helm, we require the console to be fully mapped to helm functions, without any occlusion of those controls by some other set of controls.
However, part of the helm console does allow additional functions, like accessing navigational archives and the sensor grid, or indeed any ship system, such as turning out the lights on deck 12 if the helmsman wanted to do that.
So most if not all consoles appear to be designed to provide this semi-permanent "split screen" for multitasking wherever it is desired, limiting in a predictable way how the user's multitasking will take place so as not to interfere with existing controls.
What lies behind the Interface? Ideas for how a superior Operating System might work
------------------------------------------------------------------------------------
Data processing is also a popular computer function, and it is closer to the heart of the operating system than the interface is. But even this seems to be automated in some way: algorithms (and indeed everything) appear to be preprogrammed and modular in design. Mostly it is simple macros which are hand programmed, e.g. "Computer, run program Picard 1."
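The "run program Picard 1" style of hand-programmed macro could be as simple as a named sequence of high-level commands replayed in order. The macro name and command strings below are stand-ins, not anything canonical.

```python
class MacroStore:
    """Named macros: each is a sequence of high-level commands replayed in order."""
    def __init__(self):
        self.macros = {}

    def define(self, name, commands):
        self.macros[name] = list(commands)

    def run(self, name, execute):
        """Replay every command in macro `name` through the `execute` callback."""
        for command in self.macros[name]:
            execute(command)

store = MacroStore()
store.define("picard_1",
             ["dim_lights", "play:Berlioz", "replicate:tea_earl_grey_hot"])

log = []                       # a trivial executor that just records commands
store.run("picard_1", log.append)
print(log)
```

The executor is a callback precisely so the same macro could drive real servo-systems, a simulator, or a log, which keeps the macro itself purely declarative.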
Even the program architecture of the EMH, which we occasionally saw in Voyager, is of modular construction, consisting of a network of strange symbols, presumably representing high level processing elements: Inputs -> Processing -> Outputs. These may be built up structurally in ways reminiscent of how electronic circuits are today, but rather than current, what flows is formatted data, or metadata, like the Windows clipboard.
eg,
Input: Wave audio - Output: String of words
Input: Wave audio - Output: Emotional tone
Input: Video - Output: Faces
Input: String of words - Output: Emotional content
Input: String of words - Output: Subject
Input: String of words - Output: Clause
Input: Face - Output: Identity of person
Input: Face - Output: Emotional expression
Or something along those lines. Perhaps not quite this inflexible, but you get the idea. With this kind of modular approach you can imagine just how much easier it would be to write software. Each of these modules may be represented symbolically in circuits, while each module may itself be a nested arrangement of sub modules.
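The modular Input -> Processing -> Output picture can be sketched as typed modules wired together: each declares what kind of data it consumes and produces, and a pipeline refuses to wire mismatched types before anything runs. The modules and type names below are toy stand-ins for the real analytic elements.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Module:
    """A processing element with declared input and output data types."""
    name: str
    in_type: str
    out_type: str
    fn: Callable

def pipeline(modules):
    """Compose modules in order, refusing to wire mismatched types."""
    for a, b in zip(modules, modules[1:]):
        if a.out_type != b.in_type:
            raise TypeError(f"{a.name} emits {a.out_type}, "
                            f"but {b.name} expects {b.in_type}")
    def run(data):
        for m in modules:
            data = m.fn(data)
        return data
    return run

# Toy stand-ins for "wave audio -> string of words -> subject":
speech_to_text = Module("speech_to_text", "audio", "words",
                        lambda audio: audio.split("|"))
find_subject = Module("find_subject", "words", "subject",
                      lambda words: words[0])

analyse = pipeline([speech_to_text, find_subject])
print(analyse("Data|is|an|android"))  # -> "Data"
```

Nesting falls out naturally, since a whole pipeline is itself a function and can be wrapped as a Module, giving the sub-module arrangement described above.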
I believe that most computer functions, especially the analytic ones, must be complex semi-autonomous applications. User controls may simply stimulate such software to approach problems from different strategic angles. I suppose user efficiency is then a question of having a complete set of software modules at hand... whatever that means. :-)
Notice that I haven't explored the physical X-tronics principles on which these computers operate, as it would be pure sci-fi speculation as to what technologies we will discover and invent over the next 300 years.
So... After this lengthy introduction, I would like us to discuss some of the issues I raise here. Looking at the efficiency of the computer systems in TNG, let us think about how the interface and general design of both hardware and software lend inspiration to our real world interests in developing superior technology.
Thanks
Jadzia