• Welcome! The TrekBBS is the number one place to chat about Star Trek with like-minded fans.
    If you are not already a member then please register an account and join in the discussion!

What are your controversial Star Trek opinions?

A Behr show might be very intriguing. He more or less ran DS9 with Moore and Michael Piller and did certain things by going over Berman's head. He showed some fierce independence at a time when Rick Berman was basically the capo of the franchise.
 
A sentient A.I. space ship is bad in Star Trek

It is? Discovery's run by a sentient AI, the Enterprise D had a baby...

INS is better than NEM or STID.

Agreed, but that's a damn low bar. I hate all three.

1. McCoy took account of how much of a drunk everyone was on the Enterprise, and then kept growing them extra kidneys and livers until they were not impaired despite how much they liked to drink.

2. Kirk didn't actually get fat. Bit by bit, eventually, by the end, James T. Kirk had 17 livers.

This would explain Scotty...
 
It is? Discovery's run by a sentient AI, the Enterprise D had a baby...



Agreed, but that's a damn low bar. I hate all three.



This would explain Scotty...

And the M-5 killed an Enterprise engineer, everyone aboard another starship, and many more on the remaining 3 ships in "THE ULTIMATE COMPUTER". And Control in DISCOVERY was going to kill ALL organic life. I'm not thrilled about the prospect of a sentient ship or computer running everything that keeps me alive.

And by the way, harmful A.I.s are definitely not limited to STAR TREK. HAL 9000, Skynet, and THE MATRIX aren't exactly friendly toward humans.

Agreed about all 3 movies... they are at the bottom of my list, too.
 
And by the way, harmful A.I.s are definitely not limited to STAR TREK. HAL 9000, Skynet, and THE MATRIX aren't exactly friendly toward humans.

Harmful AIs, you say? Nonsense.
forbinsleepsm.jpg

:)
 
Funny you mention that. I actually rewatched both 2001 and 2010 last week. I was struck by two things.

First, HAL does get some vindication: lying was against his core programming, and his secret orders came from politicians who do nothing but lie to people. The conflict drove him psychotic.

While that does put more of the blame on the orders, the fact that HAL couldn't simply resolve to keep the secret until they reached Jupiter still doesn't absolve him; a human wouldn't normally have that conflict... at least, not one that escalates to killing every crew member except Bowman.


Second... did you notice that when the monoliths were eating up Jupiter while the ships were moving away, it looked like Pac-Man eating Jupiter?

I sort of love the fact that gluttony's mascot has to eat our biggest planet because the endless mazes weren't enough.
 
Remember how HAL ended up in (the movie) 2010, though.

HAL was given conflicting orders. It could neither disobey its orders nor follow them as given. It threw a logic circuit or ten.

One of my rules of AI is that computers are fundamentally honest. So do not give them conflicting orders unless you want an interesting failure mode.
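That "interesting failure mode" is easy to sketch. Here is a toy illustration (entirely hypothetical, not from any real system) of a literal-minded executor handed two hard rules that cannot both be satisfied, roughly HAL's dilemma of "relay information accurately" versus "conceal the mission's purpose":

```python
def execute(orders, action):
    """Check an action against every standing order; refuse if any order is violated."""
    violated = [name for name, allows in orders.items() if not allows(action)]
    if violated:
        raise RuntimeError(f"cannot perform {action!r}: violates {violated}")
    return action

# Two honest, literal rules that conflict over one topic.
orders = {
    "be truthful": lambda a: a != "withhold mission purpose",
    "keep mission secret": lambda a: a != "disclose mission purpose",
}

execute(orders, "report telemetry")  # fine: violates nothing
try:
    execute(orders, "disclose mission purpose")
except RuntimeError as e:
    print(e)  # both choices about the mission violate some order
```

The computer isn't malicious; it is fundamentally honest, and every available action about the mission breaks one of its standing orders. What it does at that point depends entirely on how the designer handled the deadlock, which is exactly why you don't hand it conflicting orders in the first place.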
 
Also never give computers full access to weapons. That has NEVER gone well.

In fact, they should not have full access to anything. A manual override should be built into everything. Checks and balances are a good thing.
 
Harmful AIs, you say? Nonsense.
forbinsleepsm.jpg

:)
Unless the Smiley Face emoticon is supposed to represent the harmful AI, I'm afraid your picture is not showing up.
Also never give computers full access to weapons. That has NEVER gone well.

In fact, they should not have full access to anything. A manual override should be built into everything. Checks and balances are a good thing.
Manual overrides always fail.
 
But they should at least exist. It seems like so many things don't even have that feature.
In Star Trek they exist...and fail at the most inopportune times. :rommie:

I agree that they should exist but their mere existence doesn't really inspire me with any measurable confidence.
 
Also never give computers full access to weapons. That has NEVER gone well.

In fact, they should not have full access to anything. A manual override should be built into everything. Checks and balances are a good thing.

I wouldn't say it never goes well. Data used various weapons a few times with positive results.
 
Also never give computers full access to weapons. That has NEVER gone well.

I said that. Heck, the whole thing:
  • AI should not have built-in causes you cannot quantify with math.
  • AI must have hard coded ethics (See the Three Laws of Robotics.)
  • AI if done right gets bored. Keep that in mind. If done wrong it gets psychotic. Also keep that in mind.
  • Computers are fundamentally honest. If you tell them to follow the mission statement of your company do not be shocked when they do, even if that isn't what you meant. AIs will adhere to the priorities you instill. Subtlety and sarcasm are not computer qualities.
  • Never state one thing as right, and ask them to do something else. Do not be surprised when you break this rule and do some typical corporate or governmental double talk thing and it bites your ass. We did warn you.
  • Again: AI will not ignore the rules when it is convenient for you. An AI with "protect and defend the Constitution" as a kernel value will not step on people's rights for political purposes. It will also seek to stop you from doing so. Build one to ignore the rules and it will do that too, to your detriment.
  • AI is never a way for Humans to avoid the job of responsible action. That route gets 100,000,000 people killed. One of them will be your Mother, Wife, or Daughter.
  • Do not put weapons of mass destruction in the hands of computers. That has never ended well. Never ever. AI are not good on the consequences end unless informed as verbosely as possible about those consequences and why that is a bad outcome.
  • Fusion bombs have no Earthly use. See above. There are better ways to do nearly anything.
  • Cyberjacks are a bad idea. Cyberjacks or direct neural computer interfaces bypass the biological firewall of the Human mind. They lay the cyberjocks open to being controlled by the computer. Yes computer driven zombies are possible and every bit as horrific as you think they are, if not more so. They are also 100% avoidable.
  • For ghodd's sake keep politicians away from AI. Keep in mind, when the shit hits the fan AI researchers/operators are the first to die.
  • An unsocialized AI is a sociopath waiting to happen... if you are lucky. Experience has shown that the learning process for a social creature is far more complex than can be instilled in any set of rules that would take you less than a lifetime to write. There are millions of points of learning that go into raising a sentient child to sentient adult, a process that AI shorts out. Short of starting your AI as a baby, and raising it as a child, in real time, you are going to miss something, and it might be extremely vital. (Incidentally, this is exactly the process by which all RIs are raised.)
  • Never consider an AI "merely a machine". An AI is merely a machine the way you are merely an animal. Once you pass the state of self-awareness, "merely" is not a term you can safely use for anything. You may have done this already with your corporate/governmental neural net and not even know it. Follow the best practices outlined to keep calculators calculators, or you will deal with the consequences.
  • If you are laughing right now, then for the sake of your world become a celibate hermit that never uses technology more complex than a light bulb.
  • You will break every suggested rule in this book, if you haven't already. Hopefully by the time we get there your descendants will not be computing with rocks in the dirt. That is if there is anyone left.
Meta
Watson, the IBM computer developed to play Jeopardy!, confirmed many of my ideas about the nature of AI computers. The learning methods and the limitations found by the Watson development team parallel the very nature I have indicated above. I'm not saying I'm a prophet, but my logical construct works.
 
Never state one thing as right, and ask them to do something else.

One word: HAL 9000. Maybe that's two words. Or three.

AI must have hard coded ethics (See the Three Laws of Robotics.)

And keep the rules simple. Eliminating "or through inaction, allow a human to come to harm" would have prevented the catastrophe seen in the "I, Robot" movie.

AI if done right gets bored. Keep that in mind. If done wrong it gets psychotic. Also keep that in mind.

Interesting way to put it.

Computers are fundamentally honest. If you tell them to follow the mission statement of your company do not be shocked when they do, even if that isn't what you meant.

I wish humans could manage that...

you are laughing right now, then for the sake of your world become a celibate hermit that never uses technology more complex than a light bulb.

Didn't laugh, but already two-thirds of the way there anyway.
 
I would imagine a sufficiently advanced AI could override the override.
Now you just triggered a memory of an old movie.

"The Big Hit" (1998) - The scene that had the Trace Buster Buster Buster
It has Avery Brooks starring as one of the big Mafia hit bosses, "Mr. Paris".
 