
Human input in Trek spaceship battles

Given that phasers can do everything from cooking your dinner to drilling through a planet, and the deflector dish performs miracles on demand, I don't see why torpedoes wouldn't have the same versatility. I'd even say being multipurpose would be a requirement. It's also convenient in that it helps explain the inconsistent VFX and writing.

So I guess I wholeheartedly agree with you.
 
I don't think computers should ever control the battles. Ever seen Terminator? Human input should be used, because a computer doesn't have intuition, and intuition is key.

And Starfleet captains definitely use the three dimensions. Remember in DS9 when the fleet was engaging those Dominion ships (sans Klingons)? Remember that big Galaxy-class ship attacking a Galor-class? One went up, the other went down. Same thing in "Sacrifice of Angels": when the Defiant was rescuing the Cardassians from the Klingons, they definitely used the three dimensions of space, and so did the Klingons when they were attacking the station.

And FYI, the torpedoes are indeed as versatile as the other weapons: you can set the yield of the warhead to whatever you want, even with quantum torpedoes, since they use m/am in the detonation process. One weapon that should be used: tachyon emitters. They'd disrupt the shield frequencies and harmonics.
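
Just to put a number on "set the yield to whatever you want": if yield really is metered by how much matter/antimatter is allowed to annihilate, the ceiling is plain E = mc² applied to both masses. A rough Python sketch, purely illustrative (the ~1.5 kg antimatter charge is the figure fans usually quote from the TNG Tech Manual):

```python
# Illustrative only: upper-bound yield of a "dial-a-yield" m/am warhead.
C = 299_792_458.0            # speed of light, m/s
MEGATON_TNT_J = 4.184e15     # joules per megaton of TNT

def yield_megatons(antimatter_kg: float) -> float:
    """Energy released when antimatter_kg annihilates an equal mass of matter."""
    energy_j = 2.0 * antimatter_kg * C**2   # both masses convert entirely
    return energy_j / MEGATON_TNT_J

print(f"{yield_megatons(1.5):.0f} megatons")  # ~64 Mt theoretical maximum
```

Dial the annihilated mass down and you get anything from a firecracker on up, which fits the inconsistent on-screen yields nicely.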
 
"Intuition" is just a more charitable way of saying "didn't have time to think it through so we're just going to guess based on the most obvious patterns we can pick out."

Ever read the Culture novels? Smarts and speed are key.
 
I don't think computers should ever control the battles. Ever seen Terminator? Human input should be used, because a computer doesn't have intuition, and intuition is key.
IIRC, the robots from the Terminator movies were EXTREMELY efficient killers, in some ways rather intuitive themselves. In any case, the requirement for intuition is, in fact, a popular myth.

What fighter pilots (who are famous for making this claim in rejecting AI-piloted fighters) are referring to when they speak of "intuition" is, in fact, training. This used to be called "flying on stem power": training so ingrained that no conscious thought is required; instincts are programmed into the pilot that allow him to operate his aircraft by reflex alone, without needing to consciously think about the operation of his craft.

A machine so programmed can (and most of the time, DOES) outperform a human being in almost every regard. The only real drawback is programming and software, which can easily be circumvented if you have the right design specifications mapped out in advance. As a simple example: most people are unaware that the Tomahawk cruise missile uses a LOT of artificial intelligence to find and attack its target, which is the main reason it has the long range that it does. The Tomahawk has literally taken over a task that was once performed by human pilots: that of the intelligently piloted flying bomb (or, as the Japanese called them, "kamikazes"). Even heavy torpedoes used by submarines are basically suicidal robots with abandonment issues.
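
For the curious, one of the Tomahawk's publicly documented guidance tricks, TERCOM (terrain contour matching), boils down to correlating a radar-altimeter elevation profile against a stored terrain map to fix the missile's position. A toy one-dimensional sketch in Python, just to illustrate the idea (the real system is obviously far more sophisticated):

```python
# Toy TERCOM-style position fix: slide the measured elevation profile along
# the stored map and pick the offset with the smallest mismatch.
import numpy as np

def tercom_fix(stored_map: np.ndarray, measured: np.ndarray) -> int:
    n = len(measured)
    errors = [np.abs(stored_map[i:i + n] - measured).mean()
              for i in range(len(stored_map) - n + 1)]
    return int(np.argmin(errors))   # best-matching position along the track

terrain = np.array([10, 12, 15, 30, 45, 40, 22, 18, 11, 9, 14], dtype=float)
profile = terrain[3:7] + np.random.normal(0, 0.5, 4)  # noisy in-flight readings
print(tercom_fix(terrain, profile))  # almost always recovers offset 3
```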

And Starfleet captains definitely use the three dimensions...
So did M5 in "The Ultimate Computer." In point of fact, it seems evident that the problem with M5 wasn't that it failed to outperform its human counterparts (it actually mopped the floor with Wesley's task force in two separate engagements) but that M5's software was patterned after a neurotic weirdo with a seething inferiority complex.
 
AI will never replace humans because it's too predictable. Combat is not chess.

Technically, AI has already replaced humans in a vast range of applications, guidance of weapons being the most obvious.

Obviously, AI will never COMPLETELY replace humans, since human decision will inevitably remain part of the loop. The "push button war" remains an inevitability on some level or another since, sooner or later, any weapon known to man can and will be mated with artificial intelligence systems capable of wielding that firepower on automatic without human guidance. Manual operation will of course remain an option, and the people who give the orders will still be humans.
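
As a sketch of what keeping human decision in the loop might look like in code (every name here is hypothetical, just to make the division of labor concrete):

```python
# Hypothetical sketch: automation detects and tracks, a human keeps release
# authority. Fire only if machine AND operator both say yes.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Track:
    track_id: int
    hostile_confidence: float   # 0.0-1.0, from the automated classifier

def request_weapon_release(track: Track,
                           human_approves: Callable[[Track], bool]) -> bool:
    if track.hostile_confidence < 0.95:   # automation's own safety threshold
        return False
    return human_approves(track)          # the human retains the final veto

ok = request_weapon_release(
    Track(track_id=7, hostile_confidence=0.98),
    lambda t: input(f"Engage track {t.track_id}? [y/N] ").strip().lower() == "y",
)
print("weapons free" if ok else "hold fire")
```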

When you invent a machine that is smart enough to make even command decisions, you quickly discover that the machine itself is no longer a machine; it has become--in fact--an artificial person. When that happens, you've just replaced human decision with slightly faster human decision, and the easiest way to keep up with the artificial humans is either to upgrade yourself with cybernetics and genetic modification, or to just build a smarter artificial person to keep up with your enemy (just imagine if Geordi had ever gotten Data's brain-computer interface to work properly).
 
AI will never replace humans because it's too predictable. Combat is not chess.

Technically, AI has already replaced humans in a vast range of applications, guidance of weapons being the most obvious.

Obviously, AI will never COMPLETELY replace humans, since human decision will inevitably remain part of the loop. The "push button war" remains an inevitability on some level or another since, sooner or later, any weapon known to man can and will be mated with artificial intelligence systems capable of wielding that firepower on automatic without human guidance. Manual operation will of course remain an option, and the people who give the orders will still be humans.

When you invent a machine that is smart enough to make even command decisions, you quickly discover that the machine itself is no longer a machine; it has become--in fact--an artificial person. When that happens, you've just replaced human decision with slightly faster human decision, and the easiest way to keep up with the artificial humans is either to upgrade yourself with cybernetics and genetic modification, or to just build a smarter artificial person to keep up with your enemy (just imagine if Geordi had ever gotten Data's brain-computer interface to work properly).

I agree with the first part, but I don't think AI will ever be making command decisions, unless it's an emergency, like Data in "Redemption." I don't think they'll ever be good enough to make complex decisions.
 
AI will never replace humans because it's too predictable. Combat is not chess.
What known quality of artificial intelligence inherently prohibits it from outperforming natural intelligence?

We don't know enough about the phenomenon of consciousness to claim with certainty that non-biological computing systems cannot attain it, and the claim that machines cannot have "intuition" fares no better.

Even if human-level intelligence requires something more than the quantifiable factors classical computers excel at--and, Penrose aside, this is not a certain thing, since humans still outperform computers in most of those fields, just not in specific applications of them--humans remain machines. I know of no principle dictating that whatever as-yet-undiscovered mechanism grants consciousness and intuition to a human cannot be replicated in a non-biological entity.

The assertion that computers, in principle, cannot equal or exceed human speed and wisdom is, to me, roughly analogous to saying that a non-biological system could, in principle, never utilize energy more efficiently than a human cell.
 
AI will never replace humans because it's too predictable. Combat is not chess.

Technically, AI has already replaced humans in a vast range of applications, guidance of weapons being the most obvious.

Obviously, AI will never COMPLETELY replace humans, since human decision will inevitably remain part of the loop. The "push button war" remains an inevitability on some level or another since, sooner or later, any weapon known to man can and will be mated with artificial intelligence systems capable of wielding that firepower on automatic without human guidance. Manual operation will of course remain an option, and the people who give the orders will still be humans.

When you invent a machine that is smart enough to make even command decisions, you quickly discover that the machine itself is no longer a machine; it has become--in fact--an artificial person. When that happens, you've just replaced human decision with slightly faster human decision, and the easiest way to keep up with the artificial humans is either to upgrade yourself with cybernetics and genetic modification, or to just build a smarter artificial person to keep up with your enemy (just imagine if Geordi had ever gotten Data's brain-computer interface to work properly).

I agree with the first part, but I don't think AI will ever be making command decisions, unless it's an emergency, like Data in "Redemption." I don't think they'll ever be good enough to make complex decisions.
In other words, you agree with Commander Hobson: androids can't be starship Captains.

DATA would obviously disagree with you. Then again, Data is exactly such an artificial person, so his being a machine makes him a partner, not a successor or replacement as such.
 
There is no reason an artificially intelligent machine cannot think as well as or better than humans. I'm not sure this would be a good thing for mankind, however, as it would put everybody out of a job. They would even be able to program themselves and superior versions of themselves, which would then advance at a faster and faster rate.

There is a term for this kind of runaway advance: the Singularity. Let's just hope future A.I. systems don't end up turning on mankind and wiping us all out or something.
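
A toy Python model of that runaway loop, with made-up numbers, shows why "Singularity" is the right word for it:

```python
# Each AI generation designs its successor, and smarter designers finish
# sooner. Numbers are invented purely for illustration.
intelligence = 1.0    # arbitrary units; 1.0 = human-designer baseline
year = 0.0
GAIN = 0.10           # each redesign improves capability by 10%

for generation in range(1, 31):
    year += 1.0 / intelligence        # design time shrinks as smarts grow
    intelligence *= 1.0 + GAIN
    if generation % 10 == 0:
        print(f"gen {generation}: x{intelligence:.1f} capability at year {year:.1f}")

# With these numbers the total design time converges toward year 11 no matter
# how many generations you run: infinitely many upgrades packed into a finite
# span, which is a crude picture of why it gets called a singularity.
```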
 
There is no reason an artificially intelligent machine cannot think as well as or better than humans. I'm not sure this would be a good thing for mankind, however, as it would put everybody out of a job. They would even be able to program themselves and superior versions of themselves, which would then advance at a faster and faster rate.
And yet, like the Androsynth in the Star Control games, they would then become superior competitors to humans, no longer under our control, and therefore no longer of any use to us. They'd go into business for THEMSELVES and wash their hands of humanity entirely. If we're wise, we'll simply let them go where they please and make do with the less intelligent machines that are still stupid enough to take orders from us.

Let's just hope future A.I. systems don't end up turning on mankind and wiping us all out or something.

They don't need to wipe us all out. At the point of the singularity those machines would essentially BECOME smarter versions of us. If anything, we would all go extinct just by sheer obsolescence... on the other hand, the machines we created would make more than worthy successors.
 
Somehow, the setting of Trek feels like it ought to make the issue moot.

After all, the Trek galaxy is already populated by post-singularity entities. The method of ascension isn't technological self-abstraction, but rather a seemingly biological transformation. It happens to the Ocampa, it happens to the Zalkonians, it apparently has happened to the Organians. In the first two cases, the species persist with both divine and nondivine segments, even if the latter are quite uneasy about the former. We have no idea if the former wiped out/"outdated" the latter in the Organian case...

It seems clear that there can exist a hierarchy of sentiences in the Trek galaxy, including the sub-animalistic, the animalistic, the sapient, the post-sapient, and the post-post-sapient, all in a happy sort of continuum. The higher-ups have no pressing urge to wipe out or outdate the lower-downs, it seems, given the continuing survival of the lower-downs. There may be individual cases where the biological or physical "root species" suffers mightily and perhaps terminally from the singularity ascension - and then there may be cases where a fruitful cooperation follows, Bajor perhaps being one such example.

Timo Saloniemi
 
Newtype Alpha,

Reply to Post #104

As a simple example: most people are unaware that the Tomahawk cruise missile uses a LOT of artificial intelligence to find and attack its target, which is the main reason it has the long range that it does. The Tomahawk has literally taken over a task that was once performed by human pilots: that of the intelligently piloted flying bomb (or, as the Japanese called them, "kamikazes"). Even heavy torpedoes used by submarines are basically suicidal robots with abandonment issues.

I guess it would be inevitable that a guidance system would have to use some kind of artificial intelligence. I'm wondering how you would say the intelligence of a BGM-109 Tomahawk, or of a heavy torpedo used by a submarine, compares to that of:

- An insect
- A fish
- A frog
- A crocodile
- A hawk/eagle/seagull
- A dog or cat
- An orangutan
- A chimpanzee
- A human


Reply to Post #111

And yet, like the Androsynth in the Star Control games, they would then become superior competitors to humans, no longer under our control, and therefore no longer of any use to us.

I've never played the Star Control games and thus don't know of the Androsynth, but regardless, that's actually a very astute point. It's kind of ironic that a lot of people in the field of A.I., even with that knowledge, still wish to proceed with their efforts. I think their attitude is that it's inevitable and that we should welcome our successors.

It may be inevitable, but there are a lot of inevitable things that we don't do everything we can to further or accelerate. Death, for example, is truly inevitable; every creature that is born will die. Yet we don't all decide to commit suicide on the grounds that we're going to die anyway.

They'd go into business for THEMSELVES and wash their hands of humanity entirely.

Unfortunately, in the process they might decide to push us out of their way, confining us to small areas where we won't interfere. Eventually, we would seem as intelligent to them as a bug seems to us. Since we don't have much respect for insects, that's not necessarily a good thing.

They don't need to wipe us all out. At the point of the singularity those machines would essentially BECOME smarter versions of us. If anything, we would all go extinct just by sheer obsolescence... on the other hand, the machines we created would make more than worthy successors.

More than a worthy successor? And that's all that matters to you? Are you essentially saying humans should do everything they can to create an artificially intelligent successor to us and thus become the architects of our own demise?

It sounds like you have a really great regard for human life.


CuttingEdge100
 
They'd go into business for THEMSELVES and wash their hands of humanity entirely.

Unfortunately, in the process they might decide to push us out of their way, confining us to small areas where we won't interfere.
In Star Control the Androsynth did declare war against humans, and ultimately banned humans from their space. The thing is, just because they're superior to humans in some aspect doesn't mean they're omnipotent, nor does it mean they're superior to everyone else.

Eventually, we would seem as intelligent to them as a bug seems to us. Since we don't have much respect for insects, that's not necessarily a good thing.
And in the Trek universe this makes no difference, since there are already a vast number of life forms superior to humans in that way. OTOH, most of these super-advanced races aren't overtly hostile to humans anyway. Except for V'ger, but then only because it assumed that only intelligently-designed intelligence was truly alive.

They don't need to wipe us all out. At the point of the singularity those machines would essentially BECOME smarter versions of us. If anything, we would all go extinct just by sheer obsolescence... on the other hand, the machines we created would make more than worthy successors.

More than a worthy successor? And that's all that matters to you? Are you essentially saying humans should do everything they can to create an artificially intelligent successor to us and thus become the architects of our own demise?

"Demise" is hardly guaranteed, especially if our relationship with AI is that of a parent/child and less so of veteran/rookie. It is, for example, entirely possible that one of my three children will some day kill me and steal my job, but it is considerably more likely they will either wait patiently for me to retire or will find better jobs that pay a bit better. To that end, it is probably wiser to make sure my kids get the best social, moral, scientific and vocational educational instruction I can provide so that they will eventually grow up to be productive members of society and worthy successors and not patricidal cretins who stab their own parents in the back.

To put that another way: Laius would have been a lot better off if he had just raised Oedipus himself instead of trying to kill him.
 
I find it funny that CuttingEdge says, in the same breath, that death is inevitable and that building better minds than our own cheapens human life. You could say the same thing about a parent sending a child to college when the parent only has a GED--is he cheapening his own struggles by trying to increase the quality of life of his successor? I tend to think the opposite.

We aren't going to be around regardless, so what's the difference between building a kid out of a bunch of random hydrocarbons we've scrounged and building an AI out of whatever it turns out AIs need to be made out of? Other than the obvious objection that making the kid is more fun...

newtype_alpha said:
To put that another way: Laius would have been a lot better off if he had just raised Oedipus himself instead of trying to kill him.

Or if he'd at least finished the job himself. :p
 
Newtype Alpha,

In Star Control the Androsynth did declare war against humans, and ultimately banned humans from their space. The thing is, just because they're superior to humans in some aspect doesn't mean they're omnipotent, nor does it mean they're superior to everyone else.

Well, they would be superior to any natural life, wouldn't they? Especially if they reached the point of the Singularity.

And in the Trek universe this makes no difference, since there are already a vast number of life forms superior to humans in that way. OTOH, most of these super-advanced races aren't overtly hostile to humans anyway.

Yes, but the real world isn't like Star Trek. And all I can say is that, generally, the more intelligent a creature is, the more it treats those below it as if they count for less. If A.I. follows the same trend, we would be treated as if we didn't "count" as much as it does.


Myasischev,

I find it funny that CuttingEdge says, in the same breath, that death is inevitable and that building better minds than our own cheapens human life.

That wasn't my intention, if that's what you got out of it. My point was that there are inevitable things we don't necessarily do everything we can to further along.

You could say the same thing about a parent sending a child to college when the parent only has a GED--is he cheapening his own struggles by trying to increase the quality of life of his successor? I tend to think the opposite.

That really isn't a very good analogy. A better analogy would be a parent using genetic engineering to make his kid superior in every aspect to his natural parents, ensuring that he had the best genes, the best senses, the greatest intelligence, the greatest confidence.
 
Newtype Alpha,

In Star Control the Androsynth did declare war against humans, and ultimately banned humans from their space. The thing is, just because they're superior to humans in some aspect doesn't mean they're omnipotent, nor does it mean they're superior to everyone else.

Well, they would be superior to any natural life, wouldn't they?
Probably not. They'd be superior to us, sure enough, but only because they were designed to be that way, in the specific environment in which both of us happen to operate. They would probably not be superior to, say, the Horta, who are optimized by millions of years of evolution for their environment. The machines would have to reinvent themselves for that environment, and in doing so would accidentally spawn a whole new race of themselves, one no longer completely unified with the whole.

Yes, but the world isn't like Star Trek.
Only in that we haven't discovered warp drive or aliens yet. Otherwise, it's pretty close.

That really isn't a very good analogy. A better analogy would be a parent using genetic engineering to make his kid superior in every aspect to his natural parents, ensuring that he had the best genes, the best senses, the greatest intelligence, the greatest confidence.
Which, again, DOES NOT guarantee that the child will eventually murder his parents and steal their jobs. Superior intelligence does not breed superior psychopathy.
 
Newtype Alpha,

Probably not. They'd be superior to us, sure enough, but only because they were designed to be that way, in the specific environment in which both of us happen to operate.

You're not thinking this through. The artificial intelligence these entities would feature would be far in excess of what any natural brain could achieve, given that it was designed on the principles of the brain and improved many times over.

They would probably not be superior to, say, the Horta, who are optimized by millions of years of evolution for their environment.

Yes, but they could be adapted far faster than a human could be adapted for that role. Humans are the product of billions of years of evolution in total, and about 550 to 600 million years if you count land-based life, especially multicellular life.

These machines wouldn't need to go through all those evolutionary changes; they could be modified, rebuilt, reconfigured.
 
Newtype Alpha,

Probably not. They'd be superior to us, sure enough, but only because they were designed to be that way, in the specific environment in which both of us happen to operate.

You're not thinking this through. The artificial intelligence these entities would feature would be far in excess of what any natural brain could achieve, given that it was designed on the principles of the brain and improved many times over.
Actually, I've written a handful of published essays and a novel on this very subject; you bet your ass I've thought this through.

The common misconception is that A.I. would be superior to humans because they are "smarter," but few people give much thought to what "smarter" actually means. Intelligence is not a matter of crunching numbers and calculations, which modern digital computers already do better than humans. The issue of the singularity is a simple matter of clock speed: true sentient AI would have the singular advantage of being able to process stimuli and communicate complex messages between one another at a much faster rate than humans would. This doesn't make them omniscient geniuses that can somehow see the future by calculating every possible move you're going to make; special software would have to be written for that, and that type of software would have to evolve just like anything else.

In this case, the only advantage the machines have is their ability to think faster than humans. The sticking point here is the ability of sentient beings to communicate AT ALL, and the method being deployed. This type of AI would find incredibly stiff competition from, say, a race of telepaths who can transmit entire concepts without words, or even a race of creatures that just happens to have a higher metabolic rate. What's more, microscopic life forms would have every advantage over the machines, owing to the vastly more compact mental processors they would possess and their naturally faster clock speed.

OTOH, it's just as possible to enhance normal human performance with genetic modification and neurological prostheses. The end result is that the machines are not NECESSARILY superior to humans in a blanket respect, just significantly faster and a lot more communicative. Then again, the possibility exists that a race of machines could evolve that still requires linguistic communication just like humans do, in which case their advantage over humans would be slim to none.
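
A quick back-of-envelope version of that clock-speed point (the figures are assumptions for illustration, not measurements of any real system):

```python
# Raw speed compounds over an engagement, independent of how "smart"
# each individual decision is.
HUMAN_REACTION_S   = 0.25    # ~250 ms trained reflex response
MACHINE_REACTION_S = 0.001   # 1 ms sensor-to-actuator loop (assumed)
ENGAGEMENT_S       = 10.0    # a ten-second exchange of fire

human_cycles   = ENGAGEMENT_S / HUMAN_REACTION_S
machine_cycles = ENGAGEMENT_S / MACHINE_REACTION_S

print(f"human:   {human_cycles:8.0f} decide-act cycles")    # 40
print(f"machine: {machine_cycles:8.0f} decide-act cycles")  # 10000
# 250x more iterations in the same window: faster, not necessarily wiser.
```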

They would probably not be superior to, say, the Horta, who are optimized by millions of years of evolution for their environment.

Yes, but they could be adapted far faster than a human could be adapted for that role.
And as I said, adapting a class of machines FOR that role would instantly create a new race of machines no longer related to the old. It would be the same as if some humans evolved into a pack of minotaurs because they needed to be able to outrun the Terminators; the minotaur-humans are obviously a new species, are they not?
 
Newtype Alpha,

Actually, I've written a handful of published essays and a novel on this very subject; you bet your ass I've thought this through.

You may have, but that doesn't necessarily make you right.
 