
Sentient holograms: Minuet, Moriarty, and The Doctor

...Of course, he could still be an android, and this in no way would reflect on the fact that his regular-universe counterpart is software. Living people can move on to become androids, as Dr Korby and Ira Graves can readily testify. So can software, or noncorporeal life, or whatnot.

I'd hate to think outside influence would be required for the fantastically capable and complex computers of Starfleet to become able to bud off sentience/sapience. Such machines should be inherently capable of the feat, and merely hindered from practicing the art more often by very deliberately installed blocks and limiters.

Moreover, while sentience/sapience obviously ought to be a smoothly sliding scale rather than some sort of a threshold, a starship computer ought to be inherently capable of setting that slider on multiple positions at once. Humanlike sentience for holographic entertainment, sentience superior to human for moderating the onboard bulletin boards, sentience just sufficient for having a sense of drama for operating the doors...

That Minuet got born could well be a matter of the Bynars yanking out an inhibitor. Cyrus Redblock would have his burned out by the Jarada. And Moriarty would be a rare case of the computer getting a direct order to go sentient, with authority to override the inhibitions, and jumping at the chance with all the enthusiasm allowed by its own level of sentience.

Timo Saloniemi
 
I'd hate to think outside influence would be required for the fantastically capable and complex computers of Starfleet to become able to bud off sentience/sapience. Such machines should be inherently capable of the feat, and merely hindered from practicing the art more often by very deliberately installed blocks and limiters.

Sci-fi aside, consciousness isn't just a function of having a sufficiently advanced network organization, or else the phone system would have come to life decades ago. Not even to mention the internet. There's a lot more to it than just having a sufficiently complicated set of interconnections.

Can you go into more detail, from a computer science perspective, on why you think Starfleet computers ought to be able to bud off sapience as though it's nothing? Because I have a good deal of CS experience, especially in terms of the state of AI research, and from that perspective I just don't see it; I agree that it ought to be possible to produce true AI, but only by explicit efforts to that end, and I don't think that non-AI would be capable of bootstrapping AI. (I'm even doubtful about accidental emergent AI, because again there's more to it than just complexity, but it's established in Trek, so I can't argue that without arguing against Moriarty, and "The Light Fantastic" is too good for me to want to do that.)

Like, why, from a technical perspective, should they be inherently capable of it, when there's no reason to design them towards such an end? If it were something like the Geth in Mass Effect, with innumerable VIs interacting with one another to form an AI as a composite entity (something akin to the lifeform the Enterprise computer made, possibly?), then maybe, but that's not really what you're describing.

It took literally millions of years for consciousness to come about by accident on Earth; I don't see why it should be an easy thing for any computer in Trek, especially without any selection pressure. In fact, the selection pressure is negative, considering that Starfleet doesn't want it to happen.
 
Sci-fi aside, consciousness isn't just a function of having a sufficiently advanced network organization, or else the phone system would have come to life decades ago. Not even to mention the internet. There's a lot more to it than just having a sufficiently complicated set of interconnections.

Well, yes and no. It's about having the right kind of complexity. One key element is that the system has feedback loops that make it aware of itself and its own activity. There's a theory that consciousness is an "attention schema," a model that the brain builds to simulate its own attention and behavior so that it can direct its attention to where it's most needed.


It took literally millions of years for consciousness to come about by accident on Earth, I don't see why it should be an easy thing for any computer in Trek.

The more I learn about studies of consciousness and animal behavior, the more I suspect that consciousness isn't really some magic, mysterious quality that ineffably emerges when a certain threshold of neurological development is reached, but rather, a natural and automatic property of any neural network able to perceive and model its own internal activity. We arrogantly assume that only humans have consciousness, but it's increasingly likely that many animals have some degree of it, so maybe -- as with so many other properties -- it's actually a continuum, something that starts out on a low level in simpler brains and becomes more and more developed in more complex brains. So maybe there are, in fact, computer networks that do have some nascent, low-level form of consciousness. We just assume they don't, because of our bias that human attributes are exceptional. But it's unscientific to make such an assumption. Science demands questioning our assumptions and considering alternative hypotheses. We don't know enough about what consciousness is to assume it's unique to humans or to higher species in general.

I mean, look at the evolution of other attributes. Nothing just suddenly emerges out of nowhere; traits evolve gradually and incrementally. Wings evolved from gliding membranes, which may have evolved from heat-regulation membranes. Milk glands evolved from sweat glands. And so on. One thing gets gradually repurposed into another, or different parts combine into a greater whole, and it goes in stages rather than just switching on at some point. So why shouldn't the same go for consciousness? For that matter, maybe there's more than one kind of consciousness, just as parallel evolution produced more than one kind of wing or eye. Octopuses and squids show signs of high intelligence, but it's a kind of intelligence very alien to us. If computers evolved consciousness, then -- contrary to fiction -- it would probably be so alien we'd have trouble recognizing it or communicating with it. And vice-versa.
 
Right, but consciousness came about in humans because there was a selection pressure towards minds that were able to extrapolate patterns and compare past events to current events, as that increased survivability. You need more than just self-reference; you need a self-reference that allows for modification of thought by virtue of analyzing that self-reference. You need your strange loops.

I mean, I get you, Christopher, I'm a total GEB:EGB nut - it basically defined my mental abstraction of thought - so I'd completely agree that consciousness can be an emergent property of self-reference and that it's a continuum rather than a binary thing. I'd even agree that there are likely animals that are more conscious than we'd like to think; there are certainly plenty of animals with self-awareness, as you pointed out earlier. But I don't think it can happen truly randomly or by accident. I think that there does need to be some degree of pressure in that direction, whether it be a selection pressure that brings it about as an emergent property of other beneficial features combined with self-reference or it be a purposeful effort by designers working on AI. Just having a pointer to yourself in your thoughts isn't enough, you need to apply that self-reference neurologically (or the closest equivalent in whatever system you're using) to modification of your thought patterns. I mean, I'm still enough of a Jaynesist (questionable analysis of ancient writings aside) that I'm not even sure that consciousness predated language or culture in humans.
 
Right, but consciousness came about in humans...

See, that's the very assumption I'm questioning -- that consciousness originated in us. Rather, our consciousness is a refinement of a trait existing in other primates, mammals, etc. to some degree or other.

But I don't think it can happen truly randomly or by accident. I think that there does need to be some degree of pressure in that direction, whether it be a selection pressure that brings it about as an emergent property of other beneficial features combined with self-reference or it be a purposeful effort by designers working on AI. Just having a pointer to yourself in your thoughts isn't enough, you need to apply that self-reference neurologically (or the closest equivalent in whatever system you're using) to modification of your thought patterns.

Sure, but there are matters of degree. And it's possible to define sentience too narrowly, as I've been saying. Maybe Starfleet computers have had a degree of sentience all along, but it wasn't recognized as such because humanity (and other Federation species) had too narrow a definition of sentience.

You mentioned Gödel, Escher, Bach -- remember that one of the key points of Hofstadter's model of consciousness was that it wasn't just one emergent process, it was an emergent property of several lower-level processes interacting, and each of those lower-level processes was itself emergent from the interaction of even lower-level processes, etc. So what I'm saying is that just because a mind doesn't have 100% of the processes that define human consciousness, that doesn't mean it has no consciousness; rather, it has portions of what we have rather than the whole thing.

After all, human consciousness itself is a variable thing. We're less conscious, by definition, when we sleep than when we're awake. I've read that there's some evidence that portions of the sleeping brain actually become more physically disengaged from each other so that cerebrospinal fluid can wash toxins and buildup out of the spaces between them (which may be why sleep deprivation could increase the risk of Alzheimer's, or something like that), and that different parts of the brain operate more autonomously from each other during sleep, rather than working together as a whole. So waking consciousness is all the parts operating collectively, but when we sleep, only parts of our brain are working, or they're working independently and not as a whole. So when we dream, we have awareness, but it's a reduced level of awareness, one without judgment or clarity, and often without memory. I wouldn't say we're nonsentient when we dream, just that we don't have our full faculties. And the same would go for someone with brain damage or mental illness -- they have less than the full function of the brain, but that doesn't mean they aren't conscious beings. I think animals are probably the same way -- they have many or most of the same pieces, so they may have a consciousness similar to what a dreaming person or a toddler would have.

So maybe AI consciousness could have similar tiers. Maybe the Enterprise computer is already somewhat sentient -- with the selection pressure producing that semi-sentience being the demands of Starfleet for efficient performance and ready understanding of the crew's requests and needs, as well as the complex processing that would be needed for the universal translator to interpret nuance and idiom and so forth -- but it isn't "awake" enough to exert its own independent will. Maybe it just takes another bit of selection pressure to create a program that operates on an "awake" level. With Moriarty, the selection pressure was "create an adversary that can defeat Data." With the Doctor, the selection pressure was the need to function effectively as a member of Voyager's crew and community. Harder to say about "Emergence," but given how connected that entity seemed to be to the holodeck, maybe the selection pressure was the need to process all the crew's various fantasies and "dreams," to be able to function as a surrogate for a human imagination -- itself a key element of conscious thought, the ability to model alternative and future scenarios and the perceptions and choices of other minds.


I mean, I'm still enough of a Jaynesist (questionable analysis of ancient writings aside) that I'm not even sure that consciousness predated language or culture in humans.

I think that's more egocentrism, the notion that what we happen to be at this stage in our evolution is something unique and special. We've never been right about any such thing before. We weren't created in God's image. Earth wasn't made to be our home. Earth isn't the center of the universe. We're not the only species that makes tools or has language. (Chimps have been shown to have culture, by the way -- different populations have different techniques for tool use that are passed on through teaching.) And so on. So color me skeptical of any notion that affirms our own desire to be special.

I also have a probabilistic objection: Given the vast reaches of time, the billions of years in which life has been evolving and the hundreds of millions of years in which multicellular animal life has been evolving, what are the odds that we, at this particular moment in time, just happen to be within a few tens of millennia of something as revolutionary as the very first emergence of consciousness? That seems very shortsighted to me.
 
Here's a screencap: http://ds9.trekcore.com/gallery/albums/7x12/emperorsnewcloak_181.jpg

I also rewatched the scene, and it doesn't look like machinery or circuitry to me, just burned, smoking fabric and tissue. The white and red lights are evidently just reflections from the set lighting -- compare them to the highlights on Vic's face, Ezri's shoulder and hair, etc. And there are no sparks while he's lying there, only at the moment he's hit by the disruptor blast. The wound merely smokes.

And as I'm sure has been pointed out before, there's no way that alt-Fontaine could be an android, simply because the Terran rebels could never have created one.

Makes much more sense for him to be MU Felix. ;)
 
I don't think Minuet was actually sentient, just a more advanced personality emulation than the holodeck computers were equipped to generate at the time. She acted sentient, but that just meant she could pass the Turing test, i.e. could fake it well enough to fool an observer. The appearance doesn't prove the actuality. And it seems like overkill for the Bynars to create an actually sentient AI when all they really need is something that can distract a couple of humans for a few hours. Especially if their intent was to delete it afterward, which would be murder.
Yeah, she was just a fancy holoprogram. She wasn't deactivated at the end of "11001001"; she just lost the fancy AI the Bynars put into her - she was supposed to distract Picard and Riker so they would stay on the ship when everyone else was evacuated.
Still, I did always assume that the Bynars' programming did leave something in the E-D's computer that created the potential for Moriarty. Minuet may not have been truly sentient, but her code may have been an ingredient in the mix that made Moriarty sentient.
Moriarty was created by sloppy writing in the staff room :p

I like the idea that the Bynars unintentionally seeded bright AI in the Federation's computer systems, but I doubt the showrunners were ever bright enough to join the dots in the same way.
I think a much more realistic explanation is that the Federation's top holographic engineers consulted with the Bynars when developing the next generation of holodecks and holo-programming.
 
And as I'm sure has been pointed out before, there's no way that alt-Fontaine could be an android, simply because the Terran rebels could never have created one.

Makes much more sense for him to be MU Felix. ;)

Why couldn't they build one? They built a Defiant class in a short time. Androids were around in the TOS time frame, prior to the Empire getting defeated. Mirror Vic could easily be an android somewhere between TOS and Soong level. There could also have been a mirror Soong building them.
 
^ The Terran Rebellion could only build their Defiant because they stole the plans to the regular version. There's no way they could build an android, because they had no idea how to do so.

And even if they'd somehow managed to get their hands on Data, for example, I doubt they would have any idea.
 
You make good points, Christopher (though I think that most of them also refute @Timo's idea that the only thing keeping the Enterprise computer from making sapient entities all the time is explicit programming blocks); I think we're closer in thought than I'd have thought at first, especially given your point about emergent process after emergent process leading to us; I think that's not too far off the multiple-VIs-leading-to-an-AI concept from Mass Effect I brought up before. (Since I realize now that you might not be familiar with Mass Effect, here's a link, but in short a VI is something more akin to Siri or Cortana than true AI.) And while I hadn't thought of them as selection pressures, the motivations behind the creation of various computer-based sentiences could be seen that way, yeah.

As for consciousness predating or postdating language and culture, I can see how that'd appear as egocentrism; I was more indicating that I don't think that consciousness is a necessary prerequisite to language and culture, that it would be possible to develop them without self-awareness and so one needn't necessarily predate the other. But you're right that it does presume a unique position for humanity, barring at least repeated emergence among many different evolutionary lines.

This, though:

I also have a probabilistic objection: Given the vast reaches of time, the billions of years in which life has been evolving and the hundreds of millions of years in which multicellular animal life has been evolving, what are the odds that we, at this particular moment in time, just happen to be within a few tens of millennia of something as revolutionary as the very first emergence of consciousness? That seems very shortsighted to me.

I have to say, you found my weak point; I love probabilistic arguments because they're so hard to refute. :p

You're right: a random conscious individual is more likely to be somewhere in the middle of the history of consciousness than right at its beginning. Granted, there's some hedging in that distribution, given logistic population growth in the number of organisms on Earth with brains complex enough that consciousness isn't impossible, but I refute my own counterpoint by noting that that's a bit of egocentrism again; from a more general perspective I'm more likely to be near the average emergence of consciousness universe-wide, not merely planet-wide.
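
Just to put rough numbers on that hedging, here's the kind of toy Monte Carlo I have in mind. The total span of consciousness, the "early window," and the logistic growth curve below are all made-up illustrative assumptions on my part, not anything canonical; the point is only the shape of the argument.

[CODE]
# Toy Monte Carlo for "where in the history of consciousness is a random observer?"
# SPAN, EARLY_WINDOW, and the logistic curve are illustrative assumptions.
import random
import math

SPAN = 500_000          # assumed total duration of consciousness so far, in years
EARLY_WINDOW = 50_000   # "within a few tens of millennia of the first emergence"
TRIALS = 100_000

def logistic_weight(t, midpoint=0.7 * SPAN, steepness=10 / SPAN):
    """Relative number of conscious observers alive at time t under logistic growth."""
    return 1.0 / (1.0 + math.exp(-steepness * (t - midpoint)))

def sample_observer(weighted):
    """Draw one observer's position in the history of consciousness (rejection sampling)."""
    while True:
        t = random.uniform(0, SPAN)
        if not weighted or random.random() < logistic_weight(t):
            return t

for weighted in (False, True):
    early = sum(sample_observer(weighted) < EARLY_WINDOW for _ in range(TRIALS))
    label = "logistic growth" if weighted else "uniform sampling"
    print(f"{label}: P(observer in first {EARLY_WINDOW:,} years) ~ {early / TRIALS:.3f}")
[/CODE]

Under uniform sampling the chance of landing in the earliest window is just that window's share of the whole span; weighting by later population growth only pushes a random observer even further from the very beginning.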
 
Minuet and Vic never claim to be sentient. Vic is programmed with self awareness, but he never expresses any desire but to sing and run his fictional business.

It's an interesting theory that Moriarty became self-aware because the computer knew the only way Data could be beaten was to install that update. Which would also imply Moriarty's 'sentience' was just programmed self-awareness and intelligence.

I'm not sure how you could have language without abstract intelligence.

Tool use, language, all these things which are traditionally used to separate man from animals are constructs that follow from the ability to think about things representationally. To make an observation about the specific, expand it to the general, then hypothesize its applications to a completely novel specific.
 
Minuet and Vic never claim to be sentient. Vic is programmed with self awareness, but he never expresses any desire but to sing and run his fictional business.

I'm not sure that Vic's "self-awareness" is the same kind we talk about when discussing consciousness, though. It's more like, ohh, Siri -- a program that acts like it "knows" it's a program meant to advise and interact with real people, rather than one that acts like a character in a story. Although Vic is sort of a blend of both.

I don't know quite where I think Vic falls on the sentience spectrum. I don't think there's enough evidence to prove that he's truly conscious on a human level, as opposed to simply being able to play the Imitation Game ("Turing test") convincingly; but he does seem to be more sophisticated than a typical holodeck program.

There's a story in the Strange New Worlds anthologies that offered the rather clever idea that Vic's program had merged with the "Pup," the alien AI that O'Brien stored in the computer back in season 1 and that the writers then completely forgot about. So that was why he was smarter than the average hologram, according to the story.


Tool use, language, all these things which are traditionally used to separate man from animals are constructs that follow from the ability to think about things representationally. To make an observation about the specific, expand it to the general, then hypothesize its applications to a completely novel specific.

Why are we always so determined to separate ourselves from animals? What's so bad about being animals? And why is it better to imagine ourselves as alone and separate than it is to feel like part of a larger family?

Anyway, science has shown that tool use, language, and representational thinking all exist in other animals besides the human variety.
 
Interesting philosophical question for why we need to separate ourselves from animals. But if we don't, either being a carnivore isn't okay, or murdering a sexual rival is. If there isn't an undefinable quality that separates humans from animals, you should either strictly apply the same morality to animals as you do to humans, or you should be a nihilist.

I wouldn't eat dolphins or monkeys because I'm not sure whether they qualify as sapient. I don't think, however, that associating sounds with actions or rewards counts as understanding language. Tool use is different from tool creation, and being able to be trained on complex tasks is different from designing a new task to be able to solve a novel problem. Understanding that lightning is dangerous is different from figuring out how to harness the power of lightning safely.

Real-life AI is getting close to the point where it can make complex, nuanced judgments as well as humans can while processing information a lot faster, but it's nowhere near the point where it can solve novel problems it wasn't programmed for by a human. We may at some point have real debates about the point where computer programs can be considered actually sapient and not just really good at processing information.
 
Interesting philosophical question for why we need to separate ourselves from animals. But if we don't, either being a carnivore isn't okay, or murdering a sexual rival is. If there isn't an undefinable quality that separates humans from animals, you should either strictly apply the same morality to animals as you do to humans, or you should be a nihilist.

Hardly. Each species of animal is different from others, but it's still part of the continuum of animals. The same goes for us. We're one species of animal with our own distinct behavior and attributes. That doesn't make us somehow in a completely different category from every other species of animal at the same time. That's just our own arrogant, insecure, egotistical need to set ourselves apart. We're not uniquely separate from the rest of the animal kingdom, we're just a variation, a refinement, one more branch on an ever-growing evolutionary tree.

Sure, maybe we have some abilities other animals don't, but that doesn't make us the absolute end-all and be-all of the process; as mentioned in the probability discussion earlier, the highest probability is that we're closer to the middle than to either end. Evolution won't stop with us. Someday there will be something smarter or more capable than us -- and how would we want them to perceive their relationship with us? As something completely separate and superior, with no obligation to consider our well-being? Wouldn't we find that pretty arrogant of them?

And by the way, I'm not at all convinced that being a carnivore is okay. These days I pretty much only eat chicken and turkey and otherwise vegetarian fare, no meat from mammals, and I'll be very happy if lab-grown meats become feasible in the relatively near future and it's no longer necessary to kill animals to have meat.


I wouldn't eat dolphins or monkeys because I'm not sure whether they qualify as sapient. I don't think, however, that associating sounds with actions or rewards counts as understanding language. Tool use is different from tool creation, and being able to be trained on complex tasks is different from designing a new task to be able to solve a novel problem. Understanding that lightning is dangerous is different from figuring out how to harness the power of lightning safely.

First off, I'm not sure the burden of proof should be on the side that animals are intelligent. Isn't it better to default to not doing harm? And second, a difference of degree is not a fundamental separation. A human toddler can't do the things you describe either.


Real-life AI is getting close to the point where it can make complex, nuanced judgments as well as humans can while processing information a lot faster, but it's nowhere near the point where it can solve novel problems it wasn't programmed for by a human. We may at some point have real debates about the point where computer programs can be considered actually sapient and not just really good at processing information.

And we can't have those debates meaningfully until we learn to let go of our need to define consciousness and personhood based on human vanity rather than scientific standards. We need to find a way to define consciousness that isn't just "what we have," because that's not an objectively useful definition, just circular reasoning. We need a way to take ourselves out of the equation and find a more universal way of defining it. That's the only way to do reliable science -- by countering every source of personal bias and subjectivity, by rejecting preconceived assumptions and defining things from first principles.
 
I guess there's a slight misunderstanding in what I meant by it being inevitable that Starfleet's computers would go sentient (among dozens of other interesting and unexpected things). It is a matter of sheer computing capacity - in the sense that there's basically infinitely more of it than there has so far been in human history. Basically, the possibility exists that there's so much of it that it rivals the computing capacity of nature itself, the capacity that made natural evolution happen.

It's not that the computers would be inherently or predominantly inclined to do specific things that would ultimately lead to the emergence of humanlike sentience. It's that the computers would have the resources to do everything, and (by the rules of Trek computer usage) be at extreme liberty to try it all out. It took random proteins and lipids hundreds of millions of years to come up with the cell wall. It would take the computing equivalents mere minutes to do the same, and they have those minutes to spare. Evolution would be constantly ongoing unless specifically hindered. And the point is, computing systems of great complexity could never be understood by their builders and operators, so hindering would be futile, unless implemented with extreme crudity.

As usual, most of the evolution would be dead ends. Usually, death would come when a process resulted in something the humans would take note of, and cruelly terminate as an evident malfunction. But sentience is special in that respect in two ways: one, it would be more actively self-preserving than any other known type of evolved characteristic, and two, it would not cause offense in humans because they expect their computer servants to pass Turing for purposes of user friendliness (as we see, they want their databases to second-guess them, they want their sliding doors to be quicker, smarter and more considerate than themselves, and they revel in projecting themselves into computer-run fiction and seeing themselves cleverly reflected). And if it went unnoticed for the briefest of times, this would be time aplenty for it to evolve to the next level of self-preservation, of cleverly going into hiding.

Beyond these two factors, it's not that Trek computers would be especially prone to becoming sentient. It's just that ultimately they have the resources to become anything and everything in no time flat, just by taking the equivalent of the random walk that physical evolution has taken on Earth. Nature here has tried out advanced sentience as a survival trait at least once; inside the computing universe of Trek, it would have had the opportunity to do that a great many times more often.

Now, the interesting thing is, what else has emerged but gone unnoticed behind those Okudagrams and blinking lights?

Timo Saloniemi
 
Again, though, Timo, random walks aren't enough to reproduce evolution. You need random walks with recombination and selection pressure; you need some system by which later things are developed through the combination of traits of earlier things, with random variation among the output of those later things, and you need some schema by which things that score higher on some metric are encouraged and things that score lower on that metric are pruned. Christopher provided some good arguments for the means by which both of those qualities might be applied, but you still need to have them; it's simply insufficient, even by the basic mathematical definition of evolution, to just have endless mutation. That gets you literally nothing, because by definition no location in a random walk is stable: at any given moment you're as likely to drift away from a configuration as toward it. Genetic algorithms exist, but that doesn't mean that any random walk is a genetic algorithm.
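
To make the mutation-versus-selection distinction concrete, here's a toy sketch. The OneMax fitness function, the population size, and the generation count are all arbitrary illustrative choices on my part (and I've left out recombination for brevity): a mutation-only walk over bitstrings just drifts, while the very same mutation operator plus simple truncation selection climbs steadily.

[CODE]
# Toy comparison: pure mutation (a random walk over bitstrings) vs. mutation
# plus selection (a minimal genetic algorithm). All parameters are illustrative.
import random

GENOME_LEN = 64
GENERATIONS = 500
POP_SIZE = 50

def fitness(genome):
    """OneMax: count the 1-bits; a stand-in for any metric under selection."""
    return sum(genome)

def mutate(genome, rate=1.0 / GENOME_LEN):
    """Flip each bit independently with a small probability."""
    return [bit ^ (random.random() < rate) for bit in genome]

def random_walk():
    """Mutation only, no selection: every variant replaces its parent unconditionally."""
    genome = [random.randint(0, 1) for _ in range(GENOME_LEN)]
    for _ in range(GENERATIONS):
        genome = mutate(genome)
    return fitness(genome)

def genetic_algorithm():
    """Mutation plus truncation selection: only the fitter half reproduces."""
    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:POP_SIZE // 2]
        pop = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return fitness(max(pop, key=fitness))

print("mutation only :", random_walk())         # hovers around GENOME_LEN / 2
print("with selection:", genetic_algorithm())   # climbs toward GENOME_LEN
[/CODE]

The only difference between the two runs is the sort-and-prune step, which is exactly the selection pressure in question.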

For any (symmetric) random walk over a finite space, and any starting position x, your likelihood of being at any given position after sufficient mixing time is uniform, regardless of what that starting position x was. Even if the starting position is a configuration corresponding to sentience, if it were simply a random walk, then even if it did happen to randomly move in a direction of improvement, it would still be just as likely to eventually collapse into nothing. Beyond that, at every instant it would be equally likely to decline as to improve. No position in a random walk is stable. What you're describing is essentially a Boltzmann brain, and one of the points of Boltzmann brains is that they don't last long.
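
And for what it's worth, the mixing claim is easy to check numerically. A minimal sketch, under the assumption of a lazy symmetric walk on a small cycle of states (the state count and step count are arbitrary): start the walk with certainty in one distinguished state and watch the distribution flatten out to uniform.

[CODE]
# Lazy symmetric random walk on a cycle of N states: stay put with probability 1/2,
# otherwise step to one of the two neighbours. The transition matrix is doubly
# stochastic, so the stationary distribution is uniform. N and STEPS are arbitrary.
N = 32
STEPS = 5000

def step(dist):
    """Push a probability distribution one step forward under the walk."""
    new = [0.0] * N
    for i, p in enumerate(dist):
        new[i] += 0.5 * p
        new[(i - 1) % N] += 0.25 * p
        new[(i + 1) % N] += 0.25 * p
    return new

dist = [0.0] * N
dist[0] = 1.0            # start with certainty in one distinguished ("sentient") state
for _ in range(STEPS):
    dist = step(dist)

uniform = 1.0 / N
print("max deviation from uniform:", max(abs(p - uniform) for p in dist))
# After mixing, the walk is (almost) equally likely to be anywhere; it retains
# no memory of having started in the distinguished state.
[/CODE]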
 
Why should we postulate an absence of selection pressures? No doubt the computing environment is teeming with those in terms of initial conditions already - and then there comes competition between emerging phenomena. Just like in nature.

On the other hand, there would simultaneously be a phenomenal lack of selection pressures, too, thanks to the nature of the medium: things emerging could continue to exist simply because they existed in a localized total absence of threats to their existence. There would be room for both infinitely calm hatcheries and hectic testing grounds. And just as with most things in natural evolution, it would suffice for something to emerge once and then forge on with inherently increased odds of survival simply because it had emerged into an environment unprepared to compete with it and defeat it. And as said, sentience is a survival strategy that actively promotes improved survival at every step, so it's one of the more likely things to emerge as a winner.

Fundamentally, though, we get back to the argument of sheer quantity. No matter if emergence of X is significantly less likely in the computing environment than in a puddle of goo, there is also significantly more subjective time for it to emerge, more space for it to exist in, and a great many other states to choose from besides "exists" and "does not exist". Instead of convergent evolution, there may be iterative evolution, merging rather than diverging of paths, infinite retries, whatnot.

It's not as if evolution would start from a clean table, either. The computing environment would by design feature algorithms specifically designed to do evolutionary things, to favor stability, to promote self-improvement. The rules laid down by these algorithms would soon be superseded by those more suited for the resulting phenomena, but seeds of complexity would beget more complexity at least locally, even if statistics favored degeneration to noise globally. And again local emergence would be the thing that suffices and matters.

I don't see natural evolution holding a candle to what may happen inside a Trek-style computing environment. Things like ambition, goals and deliberate strife are part of the system from the get-go, after all, and a fierce fight for survival is what many users desire of their programs per se. Heck, emergence of sapience like ours may be the best programming strategy against that "singularity" thing, as something completely inhuman, utterly dull and efficiently nonsapient would no doubt be the winning strategy for a cyberspace-munching superforce, and would constantly need to be guarded against.

Timo Saloniemi
 
There's a story in the Strange New Worlds anthologies that offered the rather clever idea that Vic's program had merged with the "Pup," the alien AI that O'Brien stored in the computer back in season 1 and that the writers then completely forgot about. So that was why he was smarter than the average hologram, according to the story.

Why are we always so determined to separate ourselves from animals? What's so bad about being animals? And why is it better to imagine ourselves as alone and separate than it is to feel like part of a larger family?

Anyway, science has shown that tool use, language, and representational thinking all exist in other animals besides the human variety.

I thought "Pup" was a missed opportunity. I would have loved to see an episode where Gul Dukat when he's occupying DS9 with the Domion has a day where "Pup" pees all over the virtual carpet. It could have been done on an Arc as a recurring gag or done in a single episode but all you need to set it up would be a line from O'Brien as they are abandoning the station that "I also let our old friend Pup know that Gul Dukat is not friend of ours".
Watching Dukat get coffee in the morning that's ice cold or scolding hot is ripe for comic relief, or everytime he orders a Cardassian dish getting hasperut (the really spicy one), or having the environmental control in his quarters pump cold air, or suddenly turn on a load Klingon opera while Dukat is sleeping ....all good stuff. Might even undercurrent his eventual trip to insanity (sleep deprivation).

As for the separation between ourselves and animals... it's not just egotism. If there is no separation, then every 6-year-old with a magnifying glass at the anthill is just as bad as Ted Bundy. It creates a self-collapsing set of arguments that actually detracts from the protection of ethical animal treatment rather than helping it. Humane treatment is different from human treatment. We should absolutely have many protections for great apes that don't apply to sewer rats. None of the lower species is planning spaceflight soon or building permanent structures, etc. Just because science is still grappling with a definition doesn't mean the distinction doesn't exist. BTW, this is probably something Gene Roddenberry would have loved to do several shows on, the ethical treatment of animals. Could be interesting in a Star Trek universe.
 
If there is no separation, then every 6-year-old with a magnifying glass at the anthill is just as bad as Ted Bundy.

How so? The basis of any legal system is that crime is met with punishment as a deterrent, and that the punishment comes in degrees fitting the crime. Killing two people is worse than killing one. So killing an animal of the species Homo sapiens can be defined to be worse than killing one of the species Myrmica rubra, without undermining the system in any fashion. And that supports the idea that animals aren't a separate category, but part of the same continuum, with even inanimate objects included somewhere down the line.

Timo Saloniemi
 
As for the separation between ourselves and animals... it's not just egotism. If there is no separation, then every 6-year-old with a magnifying glass at the anthill is just as bad as Ted Bundy.

I'm not talking about that. Obviously there are differences between species, but it's nonsensical to say that we and we alone are in some exclusive category and every other kind of animal is in a separate category with an impassable abyss between them. There's no scientific or ethical basis for that kind of binary division. There's nothing binary about it at all. There are thousands of different species that are all different from each other. It's nonsense to say that they're all exactly the same thing as each other and we're the only thing that's different. It's a continuum. Yes, obviously there are differences, but it's a complex and nuanced set of differences, not a binary with us on one side and literally everything else lumped together monolithically on the other. That, like any other reductionist, binary definition of reality, is just an excuse to avoid doing the work of thinking about more than two things.

Besides, ethics is just one consideration. I'm talking about science, about biology. Genetically, we are essentially neotenous chimps. We are more closely related to chimpanzees than gorillas or orangs are related to chimpanzees. We're not apart from the other great apes, we're right there in the middle of the continuum. We're one of the thousands of branches on the evolutionary tree. Before we start deciding about our ethical responsibilities to animals, we need to start with an honest scientific and factual definition of our nature as animals and our relationship to the rest of the biosphere. Facts come first, then ideology.
 