Welcome! The TrekBBS is the number one place to chat about Star Trek with like-minded fans. If you are not already a member then please register an account and join in the discussion!

Should Data have been thrown out of Starfleet?

Just because a robot looks more like a human being than, say, an automatic lawnmower doesn't mean that it's any closer to a human being than said lawnmower.
Maybe, but you're talking about non-sentient, non-self-aware devices that exist in reality. This topic is about a fictional person who happens to be a fictional sentient, self-aware android. Comparing Data to an automatic lawnmower is tantamount to comparing your father to a waffle machine; it's basically nonsense.
 

I'd really appreciate it if you left any allusion to my family out of this discussion. I don't think that there's ever a need to get personal. Thanks for not doing that in the future.
 
Discofan, it's a science fiction show, which is similar to a fantasy. There are going to be things that are real in the show but would not be able to exist in real life! Warp drive, extra-terrestrials, and so forth. In this type of discussion we're talking in the context of what is real as defined by the show, and in this case that includes sentient/sapient machines! "Can we have truly intelligent artificial life?" would be a whole other discussion, possibly for a science philosophy group. It would be like if someone was talking about xeno ethics in Star Trek and I replied "Well, aliens don't actually exist, so ... " :)

Sometimes we have to simply suspend our disbelief and wonder "What if?". What if there really were fully intelligent and self-aware machines which we had been using as tools? What would be the moral implications of that? Would it be slavery? I think the Exocomps were a great way to explore this, because they are not anthropomorphic. It's easy, as you said, for people to sympathize with a puppet that looks like a human. But what about something that looks like a screwdriver but nonetheless exhibits signs of conscious thought? This in my mind was a great Star Trek episode!

I understand all that, but I was answering TauCygna, who talked about the real world.

Read TauCygna's post and then my answer to it and you'll see that it is in context.
 
I apologize, Discofan, I did not understand that you were replying to something specific. Due to the separation of the posts I did not connect any context; it appeared to me that you were making a general reply to the discussion, applying the real-world limits of artificial intelligence to what happened with Data and the Exocomps.

I felt that what TauCygna was speaking of earlier was an example of how we currently view intelligent animals, used to illustrate how people in the 24th century might view artificial intelligence. I may have been mistaken, but this was how I initially read the message, and so you can see how I did not recognize your argument as a specific rebuttal.
 

That's really ok. I may not have been clear enough myself. Communication is one of the hardest things to achieve.
 
Two points: First, if a modern-day human puts other humans' lives in danger to save the lives of animals (except on specific missions that take these risks into account), he or she would likely face appropriate charges - even though we respect animals (hopefully), and respect their emotions and intelligence; they are not people.
Second, Data did not murder Lore in Descent II. Lore was disabled/killed in action during a military engagement. After the battle, Data chose not to repair Lore (a passive move) and instead took him apart. In human terms, it would be the same as a DNR.
 
I disagree with the animal comparison. The Exocomps were demonstrating signs of sentient intelligence, which is what made the difference. Self-awareness is a huge deal! :)

It also wasn't about a choice between saving the Exocomps or the crew members. It was about deliberately sacrificing them. These were beings that had the capacity to make the choice for themselves (which they did).
 
even though we respect animals (hopefully), and respect their emotions and intelligence; they are not people.

That is basically what I meant when I tackled the fields of speciesism and anthropocentrism. Laws and so-called "common sense" treat humans as a priority over other animal species, when there is basically no scientific or moral evidence to argue in favor of such decisions. Most vertebrate species do experience sentience and emotions the way we do. They also experience individuality, making them non-human people.
However, our cultures and legal systems don't take those facts into account, pretty much like the researcher didn't want to take the Exocomps' sentience into account.

The topic of Exocomps' rights is a perfect analogy for animal rights :)


Talking about Data dismantling Lore in Brothers and not repairing him in Descent II - yes, those were pretty confusing decisions, coming from someone who stood up for Lal's and the Exocomps' rights. I wonder if his decision was based on the common good, due to Lore's dangerous behaviours? Any thoughts?
 
If robots don't work then we won't make them.
Fair enough, but the subject here is that they've already been made & thereafter in some way achieve consciousness, sentience, intelligence, etc., like the exocomps. In that hypothetical or fictional context, we could still hope that we'd be moral enough to respect it, once it was established, & maybe even grant it the liberty to reproduce itself. So, while we wouldn't make them once they no longer serve a work function for us, they may find the ability & resources to do so themselves, no?
 
I don't think I like the "in the real world" comparison; shouldn't we see Star Trek as reality in the future?

Some things could or couldn't happen just because it's a television series, but I think of it as the real deal: things that could happen in the real future.
 
The episode would have been much more interesting if the writers had had the exocomps refuse to consent to being sent to rescue Picard and La Forge. What if the exocomps had said "screw you, we value our lives more than yours"?

Of course, the writers would have created a dilemma for themselves, and they wouldn't do that to themselves. The writers waited until the very end of the TNG era, i.e. "Nemesis", before they actually allowed a major character, Data, to sacrifice his life to save the lives of the rest of the crew.

That is basically what I meant when I tackled the fields of speciesism and anthropocentrism. Laws and so-called "common sense" treat humans as a priority over other animal species, when there is basically no scientific or moral evidence to argue in favor of such decisions. Most vertebrate species do experience sentience and emotions the way we do. They also experience individuality, making them non-human people.
However, our cultures and legal systems don't take those facts into account, pretty much like the researcher didn't want to take the Exocomps' sentience into account.

The topic of Exocomps' rights is a perfect analogy for animal rights :)
Yes.

Even after Riker may have grudgingly accepted Data's claim that the exocomps were life forms, he and probably everybody else aboard still held to a belief in a hierarchy of life forms, in which one life form was more important than another.

The exocomps were expendable. As long as Picard and La Forge were alright, that was what mattered to them. No one else except Data seemed to lament the loss of some exocomps during the rescue.

It would have been interesting if the exocomps had valued their lives more than Picard and Geordi's.
 
That would've undermined our acceptance of them. Some major components of how intelligent & worthwhile we consider a being are its compassion & selflessness. From a storytelling perspective, that kind of needed to be shown, if we were to accept them as worthwhile intelligent beings, imho

Heck, we even do that amongst ourselves. Those of us we think less compassionate & more selfish are certainly considered lowlier, & those who are selfless & generous are considered noble

Now what would have been interesting is that if in the end, our annoying guest actress of the week had done more than just wise up, & once she realized that they were intelligent & compassionate beings, willing to sacrifice themselves, they could've written in a way for her to sacrifice herself instead, so that none of the exocomps had to die. Sort of a fitting compensation for having enslaved them & erased some of them for noncompliance. It would have made me like her a buttload better than I do too lol
 
What would have been nice is to see an Exocomp make use of its me-time, to which all sentient beings are entitled. What would it do, get drunk? Go watch a play or listen to a concert? What do Exocomps do for fun?
 
Have drag races with other exocomps, lol.

Seriously though. They could tie into the holodeck, & experiment with inhabiting different bodies, humanoid & the like
 
Heck, we even do that amongst ourselves. Those of us we think less compassionate & more selfish are certainly considered lowlier, & those who are selfless & generous are considered noble
Well, one person's selflessness is another person's foolishness. And where some would see selfishness, others would see an intelligent, considered decision.

It's the whole "my philosophy supersedes yours" debate.

If the exocomp were instead a Human, do you feel the scene would have played out the same, or different? Can you (ethically) send a person into danger to save multiple people who are at risk?

If the exocomps had continued to refuse, and the three men had been killed, would Data have still been correct in his actions?
 
Yes, I feel that Commander Data would unequivocally have been correct in his decision. It would have been very sad that the crew members died, but we do not have the right to force another intelligent being to sacrifice its life, even to save our own.
 

It depends on how we define "intelligent". Mere programs can now defeat anyone at chess or at Go. Does that make them intelligent? It doesn't, but for a long time we thought that chess was a good measure of people's intelligence. Now we know better.
 
I am meaning it in the manner of self-aware conscious thought.
How do you know that something is aware? And before answering, think about how easy it is to fake awareness, just as you can play the part of a murderer without being one yourself. A non-aware device could give the same responses to questions or stimuli as a conscious being without being conscious itself.
 