
Multiple versions of the same story

Yeah, I'm confused by that too. Most of the reaction I've seen to it has been fairly positive. I'm reading it right now, and I haven't come across anything "bigoted", but I'm only on Chapter 2.
 
I can understand being upset about how it took ST5, but what do you mean by "bigoted"?

It is my understanding that the approach taken to Star Trek V was driven by assumptions about religious belief that were present, in different ways, in both V and "Bread and Circuses" -- assumptions that hold one group to be superior to another.
 
The Mirror Universe has gotten a few different interpretations. Aside from the on-air version, there's:
- The DC Comics (first series) story
- Diane Duane's Dark Mirror
- IDW's mirror universe comics series
- and a few others escaping my memory at the moment. As I recall, David Mack's novels kept continuity with the on-air mirror universe pretty much.
 
The Mirror Universe has gotten a few different interpretations. Aside from the on-air version, there's:
- The DC Comics (first series) story
- Diane Duane's Dark Mirror
- IDW's mirror universe comics series
- and a few others escaping my memory at the moment.

Also the second Shatnerverse trilogy, the Dark Passions duology, a Tim Russ-cowritten backup story in Malibu's DS9, and a one-shot in the '90s Marvel/Paramount Comics line.


As I recall, David Mack's novels kept continuity with the on-air mirror universe pretty much.

All tie-ins have to maintain continuity with on-air content as it exists at the time. But older ones are often contradicted by new screen content, e.g. Dark Mirror's depiction of a 24th-century Terran Empire being later contradicted by DS9's depiction of a post-Empire 24th century.
 
Q & A and one of the DS9R books both featured Mirror Universes where the Empire never fell, so maybe now we could just say that Dark Mirror took place in one of those universes.
 
I just realized one that I'm surprised hasn't been mentioned. You'd think it would be a big one.

The End of the Universe!

It's come up and been averted (usually) in DS9's "Millennium" trilogy, TNG's "I, Q," "Q & A," and "The Body Electric," Voyager's "The Eternal Tide," and the Shatnerverse "Totality" trilogy. A lot of them overlap or would seem to be mutually incompatible at first glance (come to think of it, "Q & A" specifically retconned "I, Q" as a fable setting the stage for its own version of the End Times).

The universe is a shockingly fragile place, when you get right down to it. We're all lucky Starfleet is on the case. ;)
 
Curiously enough, Dark Mirror featured a mirror Geordi La Forge, but no novel in the modern novelverse has featured the mirror version of Geordi.
 
Curiously enough, Dark Mirror featured a mirror Geordi La Forge, but no novel in the modern novelverse has featured the mirror version of Geordi.

In the Alliance, humans were slaves whose lives were considered worthless except insofar as they could be useful to their masters. A human who was born blind would've been unlikely to be treated well, or allowed to live at all.
 
Through the sentiment he expressed about his memento of Tasha Yar from "Skin of Evil."

Okay.

Of course, one of the reasons the whole "no emotion" thing was stupid and misguided was that Data always obviously did have emotions, no matter how much he insisted he didn't. He had affinities and preferences and hopes and dislikes. Even saying "I wish I had emotions" is a contradiction in terms, because just having that wish means you do feel something. It was deeply wrong to say that Data's emotions didn't exist just because he didn't express them outwardly in easily recognizable forms like laughing or crying or yelling.

There are people on the autistic spectrum who can't show emotion in that way, but they definitely have the same emotions as anyone else. That's what Data was originally conceived to be like -- someone who had the capacity to feel, just not to express it very well. The mistake Piller and the later writers made was to confuse the expression of emotion with its existence.

(Also to regurgitate the hackneyed and ignorant sci-fi cliche that "machines can't feel," that emotions are some uniquely human thing too complex for AIs. That's nonsense. Emotions are simple. Most animals have them, and they're very straightforward -- experience an urge, act on it. It's silly to say they can't be programmed, because they are programming -- automatic responses hardwired into the brain. What makes them complicated in humans is their interaction with our intelligence and abstraction, our ability to have thoughts and goals that come into conflict with our feelings.)

As I understand it, there's yet to be an emotional AI created in real life. Is that just because of the limitations of AI in general, or because we still need to figure out how to write the software?


No, it doesn't. Again, where did we ever get this bizarre notion that humans have a monopoly on emotion? Anyone who has a dog should know better. Anyone who's seen animals mourning their dead should know better. (Elephants can mourn their dead for years.) And of course, in Trek, every alien species we've ever seen has emotion. Vulcans put on a facade of emotionlessness, but we know that's because their emotions are actually so overpowering that they have to control them tightly. So really, why should the exemplar of a quest for emotionalism be humans rather than, say, Romulans (since they're Vulcans who don't suppress their powerful emotions)? Or what about Betazoids? Surely a race of empaths is a far better exemplar of emotional awareness than humans. Equating "emotional" with "human" is a non sequitur both in real-world terms and in Trek terms.

Um, no. Read what I said again. If we assume that Data doesn't have emotions, that's a piece he's missing if he wants to be like a human. The fact that other species have emotions too is beside the point. If I wanted to be a painter and tried to learn the method of Picasso exclusively (a bad analogy, but work with me here), it wouldn't matter that Van Gogh also painted.

And again, there are autistic-spectrum humans who don't show emotion in conventional ways. The suggestion that people who don't show emotion are less than human is very disturbing and ugly in that context.

Okay, I don't even know where to begin, but here goes:

You're right about that. I never said that. I don't believe it. My suggestion doesn't even go there. So, we don't have a problem over this.

And it ran counter to Trek's ideal of inclusion to say that a neuro-atypical character like Data should aspire to conform to more ordinary behavior rather than being satisfied with who he was. I think that today there's more awareness of neurodiversity in the culture and the media, and I think a modern version of TNG would respect Data's individuality more rather than insisting there's something wrong with him just because his psychology and outward affect are atypical.

As I recall from the show, he was overall accepted (excepting a few higher-ups, and even one of them changed his mind). The quest for emotions and humanity was what Data wanted.
 
As I understand it, there's yet to be an emotional AI created in real life. Is that just because of the limitations of AI in general, or because we still need to figure out how to write the software?

There hasn't been a self-aware AI created in real life either. The absurdity is the belief that creating sapient thought is somehow easier than creating emotion. After all, emotion evolved in animals long before intelligence did. As I said, emotion per se isn't that complicated. It's just a pre-wired response to a stimulus. Conscious thought is immensely more complicated. We haven't successfully simulated either one in computers yet, but logically emotion would be the easier one to achieve.
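To make that concrete, here's a toy sketch in Python (every stimulus name and response in it is invented for illustration) of what "a pre-wired response to a stimulus" amounts to as a program. The point is that the mapping itself is the programming; nothing is learned or chosen:

# Toy model: "emotion" as a hard-wired stimulus-response mapping.
# All stimulus names and responses are made up for the example.
INNATE_RESPONSES = {
    "loud_noise": "fear",       # startle: avoid fast-moving danger
    "rotten_smell": "disgust",  # avoid contamination
    "offspring_cry": "distress",
}

def react(stimulus: str) -> str:
    """Automatic affective response: no learning, no deliberation."""
    return INNATE_RESPONSES.get(stimulus, "neutral")

for s in ("loud_noise", "rotten_smell", "sunset"):
    print(s, "->", react(s))

A lookup table like this is trivial to write; conscious thought is the part nobody knows how to write.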


Um, no. Read what I said again. If we assume that Data doesn't have emotions, that's a piece he's missing if he wants to be like a human. The fact that other species have emotions too is beside the point. If I wanted to be a painter and tried to learn the method of Picasso exclusively (a bad analogy, but work with me here), it wouldn't matter that Van Gogh also painted.

But that's exactly the problem -- the idea that Data "should" aspire to be something he isn't and feel there's something wrong with him because of his difference, rather than accepting himself for what he is. That kind of intolerance of neurodiversity is unworthy of Star Trek.
 
Yeah, emotions aren't some magical outside-intellect thing or anything; that's a conception that's pushed by a lot of sci-fi (including Star Trek), the whole "Rational vs. Emotional" dichotomy, but it's contrary to basically all neurological and psychological research that's been done into the topic for the last couple decades. We know now that emotion is a foundational part of thought used to respond to specific stimuli or to communicate present state to those around you. Fear evolved to keep us away from quick-moving danger; if Data's got a self-preservational urge, that's an emotion, even if it was programmed in. Disgust likely evolved to keep us from stationary danger like detectable pathogens and contamination; same deal. Sadness and happiness (and the associated expressions thereof) likely came about as a way of quickly identifying the inner state of others in your group, so that you could look at someone and see if they needed help or if they were good and thus increase the likelihood of group survival.

Data has autonomous internal responses to stimuli, and so he has emotions by definition, since that's what emotions are. Even late-season Data had autonomous internal responses to stimuli. Data's definition of friendship?

...as I experience certain sensory input patterns, my mental pathways become accustomed to them. The inputs eventually are anticipated and even missed when absent.

That's literally, by definition, an emotion. It's an android emotion, but it's an emotion, in that it's his brain's way of autonomously responding to the continued presence of a positive figure.
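You could even sketch Data's own description as a loop. Here's a toy version in Python (the class, weights, and threshold are all invented for illustration): repeated input patterns build an anticipation weight, and an expected pattern that fails to arrive registers as "missed":

# Toy model of Data's description: repeated inputs become anticipated,
# and an expected input that is absent registers as "missed".
class Pathway:
    def __init__(self, gain=0.3, decay=0.9, threshold=0.5):
        self.expectation = {}  # input pattern -> anticipation weight
        self.gain, self.decay, self.threshold = gain, decay, threshold

    def step(self, inputs):
        # Expected-but-absent patterns are "missed when absent".
        missed = [p for p, w in self.expectation.items()
                  if p not in inputs and w > self.threshold]
        # Present patterns strengthen anticipation ("become accustomed")...
        for p in inputs:
            w = self.expectation.get(p, 0.0)
            self.expectation[p] = w + self.gain * (1.0 - w)
        # ...and absent ones slowly fade.
        for p in self.expectation:
            if p not in inputs:
                self.expectation[p] *= self.decay
        return missed

p = Pathway()
for _ in range(10):
    p.step({"geordi"})   # a familiar presence, day after day
print(p.step(set()))     # now absent -> ['geordi'] is "missed"

No claim that this is how a positronic brain works; it just shows that "anticipated and even missed when absent" is a perfectly implementable, mechanical notion.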

Every species would have different emotions, since evolution is really good at finding a thousand different solutions to the same problems. Data's are different, but so are Tuvok's, and Worf's, and Deanna's, and Jadzia's.

Edit: Oh I meant to respond to this too:

It is my understanding that the approach taken to Star Trek V was driven by assumptions about religious belief that were present, in different ways, in both V and "Bread and Circuses" -- assumptions that hold one group to be superior to another.

As far as I know, Goodman only took that approach because Final Frontier sucked. Not because of its approach to religious beliefs. Just that it sucked. :p
 
There hasn't been a self-aware AI created in real life either. The absurdity is the belief that creating sapient thought is somehow easier than creating emotion. After all, emotion evolved in animals long before intelligence did. As I said, emotion per se isn't that complicated. It's just a pre-wired response to a stimulus. Conscious thought is immensely more complicated. We haven't successfully simulated either one in computers yet, but logically emotion would be the easier one to achieve.

So, how would that work? Would it be like programming an artificial animal, like, say a cat, or something?


But that's exactly the problem -- the idea that Data "should" aspire to be something he isn't and feel there's something wrong with him because of his difference, rather than accepting himself for what he is. That kind of intolerance of neurodiversity is unworthy of Star Trek.

Read the second part. It's what Data wanted to do, whether it was for the best or not.

Also, didn't his creator design him for that express purpose? If so, how do you think that would factor into the question of whether he should've tried to emulate people or just develop as a robot?
 
It is my understanding that the approach taken to Star Trek V was driven by assumptions about religious belief that were present, in different ways, in both V and "Bread and Circuses" -- assumptions that hold one group to be superior to another.

As far as I know, Goodman only took that approach because Final Frontier sucked. Not because of its approach to religious beliefs. Just that it sucked. :p
Goodman talks about his reasoning behind his treatment of STV in this post.
 
Also, didn't his creator design him for that express purpose? If so, how do you think that would factor into the question of whether he should've tried to emulate people or just develop as a robot?
Yes, I always had the impression that Data was deliberately created without emotions and with a desire to be human so that he wouldn't turn out like Lore.
 
So, how would that work? Would it be like programming an artificial animal, like, say a cat, or something?

Technically, you could say that some of the robots we've developed now actually do have emotion in a sense, in that they respond to stimuli without conscious intervention. It's at no higher level than that of, say, an insect or something, since there's no self-awareness of the sort you see in more complex lifeforms, but some emergent behavior...
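For a concrete example of that insect-level sort of thing, here's a toy Braitenberg-style agent in Python (the wiring and numbers are invented for illustration): two light sensors wired directly to two motors, with no internal model or deliberation anywhere, yet an observer watching it veer toward lamps would naturally describe it as drawn to light:

# Toy reactive agent in the spirit of a Braitenberg vehicle:
# behavior comes entirely from direct sensor -> motor wiring.
def motor_speeds(left_light: float, right_light: float):
    # Crossed wiring: each sensor drives the opposite wheel,
    # so the agent steers toward the brighter side.
    return (right_light, left_light)  # (left_motor, right_motor)

# Light ahead and to the left: the right wheel spins faster,
# turning the agent toward the light.
print(motor_speeds(left_light=0.8, right_light=0.2))  # -> (0.2, 0.8)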
 
So, how would that work? Would it be like programming an artificial animal, like, say a cat, or something?

That's taking my analogy way too literally. It's just an example to illustrate how absurd it is to treat emotion as some magic fairydust that's uniquely human and that's somehow more difficult to program than actual conscious thought. The sci-fi cliche that emotions can't be programmed is stupid, because they're the closest thing living beings have to programming. You don't learn emotions. You don't choose to feel something. They're hardwired into our brains and they happen automatically in response to stimuli.


Read the second part. It's what Data wanted to do, whether it was for the best or not.

But the show never treated that desire as harmful or a sign of self-loathing. It treated it as an aspiration we were supposed to sympathize with.


Also, didn't his creator design him for that express purpose? If so, how do you think that would factor into the question of whether he should've tried to emulate people or just develop as a robot?

According to "Brothers," Soong created Data because he wanted to leave a legacy, to pass on something of himself to posterity.

After all, why create something other than human just to try to copy humans? That seems like a pointless exercise. As Isaac Asimov once said, "If robots turn out exactly like human beings, it would be a terrible waste; we've got human beings."


Technically, you could say that some of the robots we've developed now actually do have emotion in a sense, in that they respond to stimuli without conscious intervention. It's at no higher level than that of, say, an insect or something, since there's no self-awareness of the sort you see in more complex lifeforms, but some emergent behavior...

Exactly. Emotion is much more basic than self-awareness. We romanticize it as some exceptional thing, but emotion without cognition is merely animal reflex. It's our thoughts and ideals and goals that channel our emotions in more complex ways.
 
And that sums up why I won't touch the bigoted James T. Kirk autobiography with a barge pole.

I'd actually highly recommend the autobiography, I just don't like the one chapter dealing with Final Frontier. (Most of the book does a great job of including important events from Kirk's life from a bunch of different sources and making them feel like they had a real impact on the life of a fictional character, so in summarily dismissing Final Frontier and introducing an elaborate workaround explanation for its existence, the chapter seems out of place just in the context of the book itself.)

TC
 
That's taking my analogy way too literally. It's just an example to illustrate how absurd it is to treat emotion as some magic fairydust that's uniquely human and that's somehow more difficult to program than actual conscious thought. The sci-fi cliche that emotions can't be programmed is stupid, because they're the closest thing living beings have to programming. You don't learn emotions. You don't choose to feel something. They're hardwired into our brains and they happen automatically in response to stimuli.

Okay. (Also, I should probably note that emotions aren't the only thing Data would need to figure out. They're just one thing, and one that the show chose to explore more than, say, the fact that he would have outlived all his friends had Nemesis not happened.)


But the show never treated that desire as harmful or a sign of self-loathing. It treated it as an aspiration we were supposed to sympathize with.

I never said whether it was harmful or not. I was just observing that it was what Data wanted to do.


According to "Brothers," Soong created Data because he wanted to leave a legacy, to pass on something of himself to posterity.

Didn't Data say somewhere that becoming more human-like was part of his programming? Besides, Soong could've had multiple reasons for building Data in the first place.

After all, why create something other than human just to try to copy humans? That seems like a pointless exercise. As Isaac Asimov once said, "If robots turn out exactly like human beings, it would be a terrible waste; we've got human beings."

Well, setting aside the challenge of pulling it off in the first place, nothing would be exactly the same. Say Data did succeed in his quest. He'd still be able to think faster, would live far longer, and would experience stuff differently. So, he wouldn't be exactly like a human.
 
Well, setting aside the challenge of pulling it off in the first place, nothing would be exactly the same. Say Data did succeed in his quest. He'd still be able to think faster, would live far longer, and would experience stuff differently. So, he wouldn't be exactly like a human.

Then in what sense would he have succeeded at his quest? The only way he could know what it is to be like a human would be to be a human; this is the problem of qualia. In fact, it's literally one of the largest and most-cited thought experiments in the philosophy of qualia. The only way he could know whether he is reaching the human experience would be to be a human: if he is not a human, he cannot know whether what he is experiencing is comparable to what a human would experience, and so he cannot know whether he has succeeded or failed. That "ineffable quality to memory" he mentions in "The Measure of a Man" -- he can never know whether what he associates with his own experiences is anything at all like what a human would associate with the same experiences.

Though actually, this makes me wonder how the existence of telepathy in the Star Trek universes (or other similar universes) has impacted the question of qualia in the field of philosophy, and what sort of questions have further evolved from there. I can definitely see a few arguments and counter-arguments that could come out of it. :p
 