^All very well-stated distinctions here; yet our definitions are based on the biological paradigm of:
1 intelligence : 1 discrete entity
You could conceivably nullify these labels by creating an AI that is unaware of certain aspects of its own existence, by compartmentalizing its processing centers. Would it still be self-aware?
It could be fed completely erroneous sensor data. It could believe it's immobile when it is not, or mobile when it isn't.
It could be out of control of its own responses, or even feed back wrong information to itself. It could be completely out of touch with itself and the world.
Or, it could outpace biological perspectives completely. It could perceive only one color - or beyond the visible spectrum into radio, for example. It could perceive individuals, their statistical likelihoods and their life histories, genealogies, the size of their jackets, boots and sunglasses, and lots more.
It could link and decouple with other like components at will (like Odo's Great Link). The intelligence could function as one entity, or several as needed - like a transforming mega-robot, physically or mentally.
Its S-ness (sapience/sentience/self-awareness) could be attenuated to almost any degree - with or without awareness or control. Can something so easily manipulated really be said to "have" these traits? It may exhibit them, but there's no single "it" there.
We have no choice but to be individuals. AI is not limited by that constraint (though our perception of it may be). Hence, it would be an error to look at a Cmdr Data or EMH and not see that it is both much less than - and much more than - one entity, an individual. Again: hubris.
This raises more philosophical questions, like whether it would value being locked into a single entity.
BTW, philosophy may not produce value directly, but it does indirectly: as in the shaping of a free society that then transforms the whole planet. Yesterday I heard someone say "Why teach Language when you can teach Physics?" A fair question. The reply: "Try teaching Physics without Language." Yeah, when everything's working, you don't "need" it... but when the stuff breaks down, you see that it is actually pretty damned relevant.
Personally I think it comes down to scope of concern (does it matter to me, my family, industry, society, history, species, etc.) - something of ourselves we may project onto others, but don't necessarily perceive accurately in other intelligences. Price of individuality, I suppose. Shifting values for shifting priorities or motives; and lots of assumptions that what matters to us now should always matter to others. You know, Human Bias. But - here's a thought - universities are not in the business of job training.