And how exactly is it evident that Data is sentient versus merely the sum of his programming, doing what Soong's code tells him to do? Or does it not matter whether that's the case?
I think that sentence in bold perfectly sums up the prejudice humans can have against sentient machines. Because they're built from different components than we are, and because their minds were actively coded by another person, people tend to assume they're less sentient.
That is the main problem.
Just because Data, or the Exocomps, or many other fictional sentient AIs were programmed doesn't make them any less sentient. We humans, biological animal lifeforms, are programmed too. Our brains don't come blank at birth; they're already wired and pre-scripted for survival behaviours like nursing or searching for our mother's scent. Watch a YouTube video of a two-hour-old baby rooting for its mother's breast: it's obviously a completely programmed behaviour.
As we grow and learn, our brains develop new connections in a way that is strikingly similar to Data's heuristic algorithms. We fixate on things, imprint on things, and learn by imitation, by trial and error, and by emulating behaviours we observed or sometimes read about...
All of this happens because our brains are programmed to develop in such a way, which differs from one person to another (hello, neurodiversity). We also inherit brain patterns from our parents, which explains why some disorders (like bipolar disorder, generalized anxiety, schizophrenia...) and temperament traits are genetically inherited.
So I don't really see what difference it makes whether a person's brain was actively coded by another person or shaped by long-term evolutionary selection. It's an anthropocentric point of view to assume that a programmed AI must be less sentient than a programmed biological lifeform just because the AI follows its scripts. We do the same thing every day; we just don't like to acknowledge it.
I'll even get bolder here and bring up another topic tackled in TMoaM: consent.
Maddox's excuse for refusing to accept Data's refusal is that Data supposedly isn't sentient, and therefore his consent doesn't exist. That's a clever metaphor for the excuses abusers use to make an aggression seem like less of an aggression, just because the victim was this or did that.
Whatever Maddox's point of view on Data's sentience, Data's statement that he didn't want to cooperate should have been enough to stop the whole process. Just because someone's level of sentience or self-awareness isn't exactly the same as ours doesn't make it any less unethical to use and abuse them, especially when they have already said NO.
In this episode, Maddox is certainly not motivated by evil plans, but he is acting as an anthropocentric, speciesist abuser, and he clearly demonstrates a moral prejudice against Data's nature.
Footnote: I'm not saying that to you in particular, Donlago. Just quoting.
Like I said, the real villain of the episode is the JAG.
I don't agree with you. I actually feel that Louvois did as much as she could to make sure Data got a chance to win.
When she tells Picard and Riker that she needs someone to represent Maddox's interests and Riker refuses, she warns them that she would have no choice but to rule against Data. She could have remained silent and simply issued her ruling.
That would have been a villain's move.
Instead, she strongly hinted that there was something she could do, if and only if Riker agreed to represent Maddox. She set up the chess game in a way that gave Data a chance.