Because the Borg attacked the Federation (and God knows how many other civilizations) without any provocation whatsoever. The Federation, on the other hand, would have been acting in self-defense if Picard had used the virus.
It's the difference between murder and killing in self-defense. It's a HUGE difference. Legally, murder is harshly (and justly) punished, unlike self-defense. Acting in self-defense is moral.
While there is a difference between self-defense and murder, there are several issues you're not considering here.
One: self-defense requires an immediate threat. That is, a foe which is currently attacking you. It is not 'self-defense' when you attack an enemy that has attacked you in the past but is no longer attacking you. The Borg, however, are an undetermined threat. It's known that they will likely attack again, but it is unknown when. It could be five days, or five hundred years, for all we know.
Two: the Drones of the Borg are incapable of making their own decisions. They are forced to obey due to their nature, and are in effect captives. You're not just killing the Borg, but the innocent people the Borg took control of. These people are simply vessels for the Collective to enact its designs. Why are these innocent captives so unimportant? You'd also be killing billions of them.
True - but, on the other hand, it might work.
The invasive program probably had a very long incubation period, during which time it could spread through the Collective. The Queen was a Borg, after all.
Yes, she was, but she wasn't as tied to the network as the rest of the Borg were. She retained nearly all of her personality (or at least some sort of individual personality), and could act independently. The Invasive Program requires a long incubation period, during which it slowly grows. If it starts to sap resources, it's quite likely it would be recognised as a threat before it truly hampers the Borg, as seen from Hugh's example in his own Cube. It's unlikely, when we consider such examples, that this would truly work. The Collective does have the ability to purge undesired attributes.
According to deontological ethics, you can't get more evil than that - it literally uses all of sentience in the cosmos as a means to an end.
This is actually contingent upon the ability to choose. For instance, a lion on the savannah kills other animals and eats them; it thereby uses those animals as a means to an end. The Collective as a whole, the Borg Queen notwithstanding, does not have the consciousness required to make an ethical choice. They are a force of nature, like the lion. I can agree that the Borg Queen could be considered evil, but the rest of the Collective is influenced and controlled by the Queens. The Collective as a "whole" does not have the ability to choose, and is therefore not 'evil' by Deontological standards.
According to utilitarianism, also, it is immoral - it uses the rest of the universe for its own benefit. The greatest good for the few. It doesn't care about what's good for others - it never once asked a species it was about to assimilate whether that species agreed to be assimilated; it didn't care that its victims didn't want to die.
The Collective is ruled by egoism.
Well, not entirely. It's just a force. It has no real 'conscience' outside of the Borg Queen (who can actually be described as evil, but she is an entity somewhat distinct from the Collective). Likewise, remember that when you're assimilated you don't really 'die'; you simply become a part of the Collective. As well, they don't see it as the greatest good for the few; they see it as adding others to their perfection. This is another problem with Utilitarianism: it is so incredibly subjective. It's hard to determine whether Action A is actually the most beneficial.
You're almost correct, I think. The Collective is ruled by an egoistic being. Consider a computer virus. Someone makes it, and then unleashes it upon the world. It then spreads, destroying computers. Is this virus "evil"? I would say no, because it has no choice in the matter. It is simply an entity which can only operate as it was designed to operate; like the Borg.
From a utilitarian POV:
If the virus works, you save billions of lives.
If it doesn't, the Borg may attack you and kill billions of your people. Or, to be more precise, the Collective will come en masse sooner than it normally would and kill billions. Make no mistake, it will come eventually even if you don't use the weapon.
And you may never again get the chance to use this weapon.
This again leaves the Utilitarian point of view hazy and uncertain. However, let's look at it again.
We're dealing with, as you state, a gamble. It is unlikely that the virus would actually work, so we're really dealing with an issue of time. I agree, eventually the Borg would come; this is why they symbolize the Jungian "Shadow". However, your own argument works against the Utilitarian case. If the virus doesn't work (and, judging from Hugh's example, it wouldn't), the Borg would likely attack sooner rather than later. If the Borg attack sooner, the Federation doesn't stand much of a chance (as seen from examples like Wolf 359). However, if the Borg attack later, after military preparations have been made, the Federation stands a much greater chance.
Now, here's the problem. Didn't you earlier mention that even a 10% chance of having billions die is unacceptable? I'm noticing a degree of inconsistency in your judgment calls. Likewise, in the canon universe the Borg /do not/ attack in large force. The problem here is that you're only seeing one side of the gamble: sure, the virus might work, but on the other hand it also might NOT work, and as seen from the example of Hugh, it's likely /not/ to work.
From my analysis, I find that the properly Utilitarian approach is in line with the Deontological one. By using the drone you risk the immediate extermination of your entire civilization in exchange for an unlikely chance at eliminating another civilization.
If you don't use the drone, billions will certainly die. This is immoral - you sacrifice billions so that you can be moral, "the good guy", whatever - that's selfish.
As stated before, you can see the exact same result BY using the drone. I see no difference here. We're dealing with two potentialities in which billions die: one potentially sooner than the other (using the drone), and instigated by us; the other later, and not instigated by brash action.
As well, you have it reversed. A person doing it so they can be moral is not acting Deontologically; they are acting to self-aggrandize, which is Egoism. A person acting Deontologically on this point is doing it because it's the right thing to do. That is not selfish.
If you use the drone, you're immoral from the start.
Agreed.
As you can see, there is no deontologically moral way out of this situation. The universe doesn't always allow you to have a perfect moral choice. Sometimes, you have 2 bad options and you must choose the lesser evil.
No, there is a Deontological path: not using the drone. See? That's simple. The problem is that you're using Utilitarianism to judge Deontology, thus establishing the a priori assumption that Utilitarianism is the correct moral path. This makes sense, because we live in a Utilitarian society.
As well, didn't Kirk himself say that he doesn't believe in the "no-win" scenario? :P
In DS9:"In the pale moonlight" a similar situation was depicted. What do you think about Sisko's choice?
The thing with DS9 is that it often deals with situations where one has to make compromises with one's ideology. However, W. D. Ross deals with exactly this kind of situation. He holds that, Deontologically, it may be more important to act upon one Duty over another; however, this shouldn't be decided by Consequentialist or Quantitative reasoning. We can't say, for instance, "I should kill these five people to save those six people". In Sisko's situation, he determined that some of his Duties were less important than others in certain cases. Specifically, his Duty to the Federation and the preservation of his ideals were more important than his duty to be honest and not to lie. This is also a Deontological position.