In the TNG episodes "Descent," Parts 1 and 2, Lore and his rogue Borg were able to easily deactivate Data's ethical program and turn him into a psychotic villain gleefully torturing Geordi. It was only through some technobabble that the Enterprise crew was able to reactivate it. In First Contact, the Borg Queen reactivates Data's emotion chip to try to get him to join them. This tempted him for all of 0.68 seconds. Presumably Data's ethical program prevented him from joining the Borg for real. The question is: why didn't the Borg Queen just deactivate Data's ethical program? Then a 'Descent-style' Data would be gleefully blowing up the Phoenix and assimilating the Enterprise, just like she wanted. Lore's Borg henchman Crosis was able to deactivate the ethical program easily enough in Descent, and the Borg would know about it because the information would have been in Picard's memory when they assimilated him as Locutus. They also presumably knew about the emotion chip from the assimilated Enterprise-E engineering crew. Yet the Borg only press their advantage on Data through the emotion chip, not through any deactivation of the ethical program. Did Data massively upgrade the security around his ethical program after Descent? Even so, it's strange that the Borg Queen doesn't mention the ethical program at all during any of her "corruption" speeches to Data.
Yeah, the smart move would have been for Data to make it impossible for his emotions or ethics to be 'overridden' so easily again - but then where would we get our great android-gone-wrong Trek stories? The rogue Borg as depicted in Descent I and II were kind of dummies. They weren't part of the larger Collective, so they were at the mercy of Lore's ideology. Lore's plan was to get Data over to his side and then "destroy the Federation." By contrast, the Borg Queen wanted to enslave humanity via time travel. I thought Data was the consolation prize in that, since Picard had already escaped being Locutus. I think the Borg Queen really wanted to convince Data that being Borg was awesome. Maybe offering cybernetic coitus to Data was her version of 'altering his ethical subroutines'.
Ironically, when Data is damaged in Insurrection, the ONLY thing working was his ethical program, and that caused him to start attacking the Son'a and their Federation allies who were going to relocate the Ba'ku. To me, this does sort of imply that Data upgraded the ethical program's security after Descent (and what he did to Geordi) to the point that it's almost impossible to deactivate: even when he's severely damaged, as in Insurrection, the ethical program is the only thing that still functions. From Insurrection:

La Forge: "All I know is he was functioning normally until he was shot. Then, his fail-safe system was activated."
Picard: "Fail-safe?"
La Forge: "His ethical and moral subroutines took over all of his basic functions."

Picard's apparent unfamiliarity with the fail-safe system suggests that it's a relatively new addition to Data. The Memory Alpha article (http://memory-alpha.wikia.com/wiki/Memory_loss_fail-safe_system) shows no mention of it before Insurrection, opening the possibility that Data installed it himself after Descent.
"Descent" doesn't happen until after BOBW, and we have no reason to think that before "Descent" Picard even knew Data's ethics were controlled by a program that could be deactivated - so that knowledge wouldn't have been in Locutus's memory. I like your observation that Insurrection implies Data really beefed up the security around his ethical subroutines. It's also likely that modifying Data to force compliance would be a hollow victory for the Queen: he either joins willingly or is destroyed. Reprogramming him to comply defeats the purpose; as Picard points out, she wants an equal, not a drone.
Data certainly suffered amnesia in "Thine Own Self," several episodes after Descent. Remember "Jayden," the Iceman?

"I was attempting to download the sensor logs from the probe's on-board computer. There was a power surge… I believe the surge overloaded my positronic matrix. After that, I have no memory until this moment… (seeing himself still in Barkonian clothing) …but it appears I had an interesting time." - Data, after returning to the Enterprise
What Star Trek often gets wrong with storylines like Data in Descent, or the EMHs in Equinox, is that deactivating someone's ethics doesn't actually make them an evil villain; it just means they'll gladly do unethical things to accomplish their goals. The Borg Queen could have deactivated Data's ethics, but unless he was already 100% on her side, that could have been just as dangerous to her as to his former shipmates.
I hadn't actually thought about that; good point. In Descent, Data's goal was to experience emotions, to the point that he went into withdrawal if Lore stopped providing them. He went along with torturing Geordi to keep experiencing emotions. By the time of First Contact, Data has likely had his fill of the emotional range (except for anxiety, apparently), and he does not go into withdrawal the moment he deactivates his emotion chip. Emotions are no longer the ultimate goal for Data at the time of First Contact; more likely, destroying the Borg is. If the Borg Queen had actually deactivated Data's ethical program, all it might have accomplished is removing his inhibitions about killing fellow crew members in his fight to stop the Borg: he might have promptly auto-destructed the ship to kill the Borg, regardless of the Enterprise crew still aboard.
I never really liked the whole ethical-subroutine thing for Data. I would have much preferred the idea that his morality was an integral part of his personality that couldn't be separated out or simply turned on and off.
As sad as it is to say, there probably is a biological equivalent in humans. I think I remember reading that certain kinds of brain damage - classically to the frontal lobes, as in the Phineas Gage case - can indeed "deactivate" moral judgment in people...
Well, morality is all about compassion, and about the ability to see that one's own success generally depends on altruism towards the broader community. That is both a great mental challenge and something highly dependent on specialized brain structures dedicated to deciphering social connections and the emotional states of others from scant visual cues such as facial expressions. Damage the ability to read another person's face, and you cripple the ability to formulate meaningful morals that boost the survival of the individual by boosting the community. There isn't any point in being nice to a person whose smile you don't recognize.

The case of Data could have another layer to it, though. A psychopath may lack compassion, but need not be incapable of reading people. Lore would probably have had the means to devise a code of morals for himself, one enabling him to work as part of a society - but he didn't. In order to test-run an android within the society of the Omicron Theta colony, Soong would thus have had to cheat by installing extra social safeguards in Data, at least initially. The very idea might have been to make those removable at a later stage, once Data had fully developed his own social conscience out of general humanoid needs - hence, a "subroutine" that can be independently messed with.

Timo Saloniemi