STAR TREK: PICARD - ROGUE ELEMENTS by John Jackson Miller

Discussion in 'Trek Literature' started by JJMiller, Feb 20, 2021.

  1. Christopher

    Christopher Writer Admiral

    Joined:
    Mar 15, 2001
    You make good points. There is a meaningful difference in their independence and mobility, at least until mobile emitter technology is developed. (And I always felt that must be something that happens not long after Voyager returns, otherwise you'd think Captain Braxton's agency would've confiscated the Doctor's emitter right away.)
     
    hbquikcomjamesl likes this.
  2. Charles Phipps

    Charles Phipps Rear Admiral Rear Admiral

    Joined:
    Sep 17, 2011
    Well, we know the Romulans ban all AI, according to Picard, so I imagine the Zhat Vash would be against sentient holograms as well.

    However, you can adjust holograms so they don't become sentient AI, and I assume Starfleet would tweak the creation of holograms so they didn't become a self-aware slave race. Which is to say, post-Ban, I wouldn't be surprised if hologram programs are much, much dumber and less able to improvise, a la Westworld.

    (I assume this is the case with most anyway, and the EMH is the most cutting-edge singularity technology the Federation had at that point -- they'd passed that point with the Doc and Vic Fontaine almost without fanfare, because it was a thousand little steps rather than one giant leap a la Data.)

    Which is an interesting "I don't think the writers intended this" point about the Synth Ban. Everyone is treating it as pure evil, but you could argue Starfleet was horrified at the idea of the Synths committing the act because they were self-aware and preferred death to slavery.

    They might view the Synth ban as something to be pursued as a moral good.

    "What did we do!?"

    Which would make the objections to it seem anti-Federation, while the Soong Colony went, "Isn't this just genocide by slower means?"
     
    AlexMC likes this.
  3. AlexMC

    AlexMC Lieutenant Red Shirt

    Joined:
    Sep 10, 2020
    Before I write the next way too long screed (I love this discussion SO MUCH!), a quick question for @JJMiller:
    Should we move this elsewhere?
    I have no idea what the protocol is over here. I come from fanfic writing, where the rule is (used to be?) that you keep your meta and worldbuilding far away from anyone being paid to write for the actual franchise, lest they feel compromised because they had the same ideas on their own and now worry they might be accused of plagiarism. It might not be that big of an issue anymore, or at least not in Trek spaces, but like I said, I'm fairly new here, so I don't know.

    I mean... if you feel comfortable weighing in, I would absolutely love to hear your thoughts! And how you view the issue of sentience/independence/etc. in Rios's holos (another topic I have A Lot Of Opinions on XD).
    But I at least wanted to make sure you're okay with this discussion happening here and wouldn't prefer we moved it somewhere out of your dedicated thread ;)
     
  4. Christopher

    Christopher Writer Admiral

    Joined:
    Mar 15, 2001
    I'm still not convinced that Vic Fontaine is actually sentient, just programmed to ignore the fourth wall. He's the hologram equivalent of Deadpool or She-Hulk, a fictional character whose gimmick is to appear aware that he's a fictional character. While Moriarty and the Doctor both aspired to grow beyond the limits of the roles they were created to inhabit, when Vic is left to his own devices, he just continues playing his preordained role in the open-world simulation he was built for.
     
    Charles Phipps likes this.
  5. Charles Phipps

    Charles Phipps Rear Admiral Rear Admiral

    Joined:
    Sep 17, 2011
    I dunno. While it's true that Moriarty and the Doctor both wanted more than their lives in a holodeck, the fact that Vic chooses to live in his artificial version of Vegas doesn't necessarily mean that his inner life is fundamentally less. After all, we're people obsessed with an artificial reality (Star Trek) who spend countless hours poring over it. Vic even kicks Nog out of his holodeck because he knows it's not good for him, despite his job being to entertain people in the artificial Vegas for as long as possible.

    Mind you, if his creator did think he was self-aware, it would mean that his "Easter Egg" mission was murder, one that possibly killed who knows how many Vics across the galaxy.
     
  6. Christopher

    Christopher Writer Admiral

    Joined:
    Mar 15, 2001
    Of course it's not proof all by itself, but I think the burden of proof is the other way around. It is not hard to program a computer to pass the Turing test and fool people into thinking there's a sentient mind addressing them. People mistake chatbots for real people all the time. So the burden of proof is on the position that an AI is conscious, not that it isn't. If you assume the appearance of intelligence actually is intelligence, then you're going to be vulnerable to a ton of false positives. You might as well assume Bugs Bunny is a sentient life form just because he sometimes talks to the audience and acknowledges that he's a cartoon character. So it's just a matter of maintaining healthy skepticism and demanding more evidence. "Prove to the court that I am sentient," Picard said once. It's not an easy burden to meet.

    But one thing that can help demonstrate self-awareness and genuine sapient thought is the ability to formulate ideas and goals that are not part of the AI's programming. Data, Moriarty, and the Doctor demonstrated that; Vic did not. Yes, he continues to operate autonomously and live out his everyday life when nobody is playing his game, but so do the NPCs in a lot of open-world MMORPGs, because it helps create verisimilitude if the inhabitants act out a convincing illusion of leading lives of their own. I'm not a gamer myself, but I gather there are games where the NPCs act out scripts like that, where you could just sit there and watch and they'd go about their lives and have conversations and such all on their own, even if you didn't interact with them. Presumably the 24th-century version would be an even more convincing illusion, but that wouldn't make it real.

    Of course, in-universe, it's more ethical to err on the side of caution and assume a being is sentient rather than assuming it isn't. That was actually Picard's point in "The Measure of a Man." But I'm speaking from our perspective in the real world examining the premises of a work of fiction.
     
    AlexMC and Charles Phipps like this.
  7. Charles Phipps

    Charles Phipps Rear Admiral Rear Admiral

    Joined:
    Sep 17, 2011
    True.

    Of course, there's also the fact that there's a spectrum of sentience rather than an all-or-nothing switch. Vic might be able to feel love, pain, fear, and self-awareness but never be able to change as a person. 2,000 years from now, he might still be the same Frank Sinatra program he was when he started.

    Does that make him nonsentient?

    It's an interesting question, the kind that shows like Star Trek can examine. One of the funny early ST:O scenes is when you visit a colony of "freed photons" and meet the Vulcan Love Slave, who is physically unable to talk about anything outside of her program and goes ERROR ZZZZT if she tries. She was still freed, because it's better to be safe than sorry.
     
    TheAlmanac and AlexMC like this.
  8. Christopher

    Christopher Writer Admiral

    Joined:
    Mar 15, 2001
    That's certainly true, and as I said, from an ethical standpoint one should err on the side of caution. If anything, I've increasingly come to suspect that consciousness is not some unique privilege of the highest orders of life, or something that doesn't exist at all until it's switched on when a certain threshold is reached, but might simply be a natural capacity of any kind of brain with internal feedback or self-monitoring capability, just in greater or lesser degrees. We like to flatter ourselves that there's something unique about us, but what we have might just be a variation on a universal theme. That might be just as true of inorganic neural networks as organic ones.

    But I'm approaching the question from a scientific standpoint, in which claims should be met with skepticism and alternative interpretations for the evidence should be considered. I'm just saying that Vic's nature is ambiguous, that there's no solid evidence of behavior that can't be explained as sophisticated programmed mimicry of intelligence. Sentience can't be ruled out, but there's insufficient reason to assume he falls into the same category as the Doctor, Moriarty, Data, etc.

    At least, we haven't seen him in the same kind of stories where his personhood is investigated or his rights defended, which suggests the writers didn't intend him to be the same kind of character as Data or the Doctor. Even when Vic's "life" was threatened in "Badda-Bing, Badda-Bang," and Sisko and Worf asked "Why do you care so much about a hologram?," the answer wasn't "He's a sentient being with rights," it was just "I like him and think of him as a friend." It was never presented as an ethical question, just a matter of sentiment. Sisko and Worf didn't have to learn to accept Vic as a person, just to sympathize with their friends' attachment to a favorite game character and agree to play along.
     
    AlexMC, Reanok and Charles Phipps like this.
  9. trampledamage

    trampledamage Clone Admiral

    Joined:
    Sep 11, 2005
    Location:
    hitching a ride to Erebor
    One of the rules of this forum is that discussions need to stay clear of story ideas, for the reason you state. So long as the discussion stays general and doesn't veer into specific scenarios, it won't break the rules. And that is a rule for the entire Literature forum; if you want to get into specifics, then you will have to go to the fan fiction forum (or maybe the TV show forum, I'm not sure what they allow).

    What @JJMiller wants to discuss is up to him though, obviously :)
     
    Charles Phipps likes this.
  10. AlexMC

    AlexMC Lieutenant Red Shirt

    Joined:
    Sep 10, 2020
    Thanks for the clarification! :)
    (The question, of course, is when does general meta/worldbuilding veer into story ideas XD I guess I shall Proceed With Caution.)

    I've been thinking about this issue of holo-sentience a whole lot (mostly because of Rios's holos) and I absolutely agree with a ton of points you guys made.

    There are probably technical prerequisites that make sentience more likely. Runtime, for one. The Doctor was in near-constant use after Voyager was stranded in the Delta Quadrant, something his programme was never designed for. And the other Mk I EMHs in the mines were likewise used for non-stop mining work. That probably had something to do with their gaining enough "sentience" to develop an interest in reading Photons Be Free, as is implied by the ending of "Author, Author". And Fair Haven had been running continuously for a long time when the characters became self-aware.

    And then you have cases where holograms were allotted way more processing power than they should have been (like Moriarty). Not to mention all the home-brew code updates and changes Tom and others kept making to Fair Haven. So there are special technical circumstances that seem to be necessary to achieve sentient, or at least truly self-aware, holograms.

    But I don't think there is a very clear line dividing sentience and non-sentience in these cases.
    It might be more a case of... a list of diagnostic criteria. "If at least 60% of the following apply, your AI might be sentient." But it's a gradient, with a fuzzy zone somewhere in the middle.
    It's exactly like you said: establishing sentience, especially if you don't want to veer into the realm of the metaphysical, is extremely hard and requires a high burden of proof.
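
    Just to make the "diagnostic criteria" idea concrete, here's a silly toy sketch in Python. Every criterion, every weight, and the 60% cutoff are things I made up on the spot, obviously; nothing here is canon:

        # Toy "sentience checklist" -- criteria, weights, and thresholds
        # are all invented for illustration.
        CRITERIA = {
            "forms_goals_outside_program": 3,   # strong evidence
            "self_modifies_own_behavior": 3,
            "resists_shutdown_or_reset": 2,
            "refers_to_self_unprompted": 1,
            "passes_conversational_test": 1,    # weak on its own (chatbots!)
        }

        def assess(observed):
            # Score the observed traits against the weighted checklist.
            total = sum(CRITERIA.values())
            score = sum(w for name, w in CRITERIA.items() if name in observed)
            if score / total >= 0.6:
                return "possibly sentient -- err on the side of caution"
            if score / total >= 0.4:
                return "fuzzy zone -- investigate further"
            return "likely sophisticated mimicry"

        # The Doctor ticks most boxes; Vic mostly just the conversational one.
        print(assess({"forms_goals_outside_program", "self_modifies_own_behavior",
                      "resists_shutdown_or_reset"}))
        print(assess({"passes_conversational_test", "refers_to_self_unprompted"}))

    The point being: the Doctor blows past any reasonable cutoff, while Vic lands right in that fuzzy middle zone, which is kind of the whole debate.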

    If you look at the shows, there are some holograms that are probably completely incapable of attaining sentience barring massive interference. The Index in the Starfleet Archives, for example, will never come to life, no matter how long they use it, because it's essentially a hopped-up version of Siri. There is simply no way, without the addition of masses of code, for its programme to grow beyond its pre-set bounds.

    I would imagine it's a bit more complicated with holodeck NPCs. If they are given enough computing power, enough continuous runtime with cumulative experience and memories, enough quirky additions to their code, or all of the above, in rare cases they can achieve sentience. But they can also be made to look self-aware (see Vic Fontaine) and things can start to get a bit blurry around the edges.

    Emergency Holograms, on the other hand, I would expect to be most likely to attain sentience, since they are incredibly sophisticated. They need to be self-aware, they need to have enough social competence to interact with an organic crew in an emergency (not a great time for ill-timed jokes and miscommunications), they need to be able to improvise and access knowledge outside their narrow precepts, etc.
    (Say, for instance, an EMH finds themselves in a partially collapsed sickbay, the replicators are down, and now they need to do triage. It would be pretty useless if your Emergency Medical Hologram just stood there and went "No adequate antiseptic is available. Please provide the necessary supplies to proceed." They need to be able to access, say, an engineering database to figure out that that sonic steel brush can easily be turned into a crude but effective disinfectant.)
    So Emergency Holograms are probably never that far from attaining sentience, if you let them. And the question then becomes, what steps are being taken by the Federation so people won't let them? Because you don't want to accidentally end up with hundreds of sentient beings that are confined to ships and are owned(???) by Starfleet or by private captains.

    I really liked the idea of the Moriarty Protocols for holodeck characters brought up in the books, and the way La Sirena's EHs start off very much walled off from one another (something they clearly overcome with time).
    There are so many amazing story ideas and fascinating philosophical quandaries here!
     
    Charles Phipps likes this.
  11. Charles Phipps

    Charles Phipps Rear Admiral Rear Admiral

    Joined:
    Sep 17, 2011
    To blaspheme the sacred technobabble, STAR WARS might actually be a good example of this, at least going by the West End Games rules, which were derived from the big book of notes George Lucas gave them.

    1. Memory wipes existed so that droids didn't develop sentience (and to avoid catastrophic mental failure), since the connections and motivations of being alive take decades of accumulated behavior to develop.

    2. The strength of the machinery involved. Protocol droids like 3PO are meant to be as human-like as possible, with countless high-end processors (as much as a 25,000-year civilization can manage) that are far more complex than what the vast majority of droids get.

    3. Odd circumstances that cause them to move beyond their programming, as we see with Anakin modifying both his astromech droid and his protocol unit.

    In the case of Star Trek, I also feel like we shouldn't ignore the fact that Federation technology is inherently prone to cataclysmic mutation. As a friend described it to me, "Basically, the typical Enterprise is full of machinery that our protagonists have pushed to the cutting edge of what is possible and tied together without completely understanding how it all works. It's not quite the Singularity, but not far from it."

    I wouldn't be surprised if Zimmerman's EMH, by virtue of being the absolute most advanced AI technology of 2371, was closer to true sapience than anyone had realized. The Doctor certainly advanced himself to personhood, but even at activation he was probably closer than the Federation knew.

    (While ridiculous, it might explain why they are miners: someone noted how sapient they seemed and decided that simply deleting them all was a bad idea until they did a deep dive on their self-awareness.)

    If Vic Fontaine is sentient, I imagine it would only be due to the kind of super-tech behind the EMH getting applied to something more mundane. He was the latest PS5-game version of what was originally cutting edge, released before the Federation realized how ethically compromised this might be.

    It'd also be interesting if the Federation, upon realizing it HAD developed true AI, immediately imposed harsh restrictions against making new AI out of fear of creating a slave race, and thus unwittingly killed off a potential new race.
     
  12. Christopher

    Christopher Writer Admiral

    Joined:
    Mar 15, 2001
    I refuse to believe that scene actually happened, because it's idiotic. There is no reason why a 24th-century civilization with mining phasers and transporters and robots and Hortas would waste energy running sophisticated humanoid holograms so they could chip away at rock walls with pickaxes like historical re-enactors. It's a ludicrously inefficient approach to mining. At most, maybe it's a daydream the Doctor had about his hopes for the book. It makes no sense as a literal reality.

    The Fair Haven characters did not become self-aware. That was never portrayed as anything more than a glitch in the program that altered the characters' perceptual filters so that they reacted to anachronistic, fourth-wall-breaking things they had previously been programmed to ignore. The episode never portrayed it as a matter of AI sentience, merely as a software error in a recreational scenario that the crew had become attached to and would've regretted losing. Yes, the characters were aware of things like people turning into cows and arches appearing in midair and whatnot, but they reacted to them in character as they were programmed to react.

    Like I keep saying, it's reckless to mistake the superficial appearance of human behavior for the real thing. Holodeck characters always behave like real people; that's the whole point of the illusion. They're programmed to react to stimuli in a way that convincingly mimics how a real person would react. The only thing that changed in "Spirit Folk" is which stimuli they were able to notice.


    I don't buy that, because holodeck NPCs would not be separate neural networks. They're just puppets controlled by a single game program. Again, we mustn't mistake outward appearance for underlying reality. There's one single, unified game program that creates the illusion of everything within it, from the scenery to the buildings to the pets and birds and cars driving by to the NPCs milling around in the background. A "Third Guy Walking Down the Street" hologram or a "Tavern Wench" hologram wouldn't have any independent processing going on, but would just be an empty puppet animated by the software to act out a pre-written script, unless a player approached them and spoke to them, in which case the game program would calculate a response to what the player says and does. But the thing doing the "thinking" is the same single program that's also making the trees rustle in the wind or the flames flicker in the chandelier.
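
    To put the distinction in rough code terms (a toy sketch; every name here is invented for illustration, since obviously nobody has published a holodeck spec):

        # One unified simulation. The "NPCs" hold no independent minds;
        # they're entries in the same state table as the scenery.
        class Holoprogram:
            def __init__(self):
                self.entities = {
                    "tavern_wench": ["wipe bar", "greet patron"],
                    "third_guy": ["walk down street", "tip hat"],
                    "chandelier": ["flicker"],
                }
                self.step = 0

            def tick(self):
                # A single loop animates everything, people and props alike.
                for name, script in self.entities.items():
                    print(name, "->", script[self.step % len(script)])
                self.step += 1

            def respond(self, npc, player_line):
                # The *program* computes the reply; the puppet itself has
                # no processing of its own to do any "thinking."
                return f"[{npc} reacts in character to {player_line!r}]"

        sim = Holoprogram()
        sim.tick()
        print(sim.respond("tavern_wench", "Lovely evening, isn't it?"))

    The moment something like Moriarty gets persistent state and processing of its own, you've left this model behind, which is exactly why those cases are the interesting ones.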


    I think that might be truer of some EHs than others. For medical functions, you need something sophisticated, adaptable, and capable of creative problem-solving. For something like navigation, though, you might not need as much flexibility or range.


    Then have them in the main office overseeing the mining robots and phaser drills and ore freighters from their desks, not standing in a tunnel swinging pickaxes. Did. Not. Happen.


    Another reason why I'm skeptical of Vic's sapience. Even a sophisticated interactive game character like Vic doesn't need anywhere near the same level of adaptability, imagination, processing power, and database size as a medical tool designed to be able to cope autonomously with any conceivable medical emergency.


    That's sort of how it works in my Arachne-Troubleshooter Universe. Earth outlaws creating sentient "cybers" (AIs) to avoid the ethical conundrums, while the Striders (my term for Belters, since "Belter" is boring) do create and often exploit sentient cybers, because people mining or exploring the asteroids are often light-hours away from civilization and unable to take advantage of the instant access to expertise that the Internet provides, so they need comparable sources of expertise and creative problem-solving available locally. But Earth's ban has consequences that aren't beneficial to the cybers, as seen in my story "Murder on the Cislunar Railroad." It's kind of paradoxical to try to protect the rights of a group by outlawing their creation.
     
    Charles Phipps likes this.
  13. JJMiller

    JJMiller Writer Red Shirt

    Joined:
    Jul 22, 2013
    Location:
    Wisconsin
    No, it's fine. It's a debate the book inspired, so it fits!

    I can't say I thought too deeply about the synth ban as it related to the book, since it wasn't a place I needed to go. But it is sort of consistent with the whole element (no pun intended) in the book that the knowledge, functions, and capabilities of the intelligent subsystems are siloed within the ship's computer, for the well-being of those it carries -- even extending to protecting their privacy. The key (pun intended) comparison when it comes to the synths may be with the guys in the silo (pun again intended!) in WarGames. Let the WOPR think all it wants, but you never let it turn the key.
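
    In rough code terms, the principle looks something like this (a toy sketch, not anything from the book; all the names are invented):

        # The WOPR half: free to compute anything, but it can only
        # return advice -- it holds no reference to the launch machinery.
        class AdvisorySystem:
            def recommend(self, situation):
                return f"recommended response to {situation!r}: launch"

        # The silo half: execution demands two independent officers,
        # and nothing ever hands this object to the advisory system.
        class KeyTurner:
            def __init__(self):
                self.keys = set()

            def insert_key(self, officer):
                self.keys.add(officer)

            def execute(self, action):
                if len(self.keys) < 2:
                    raise PermissionError("two keys required")
                print("executing:", action)

        wopr = AdvisorySystem()
        advice = wopr.recommend("incoming strike")  # thinking: allowed
        silo = KeyTurner()                          # acting: never wired to WOPR
        print(advice)

    The separation isn't in what the advisory system is allowed to think; it's in what it is physically wired to touch.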

    The business with rebuilding the car ventures into this as well, since Tucker had tried to integrate the lighting and steering subsystems. But as we network more systems and make some of them autonomous, we'll run into the same questions. Just about every field may end up having its "Mars synth" moment.

    (I think a bigger question is why the Holodeck even allows its safety protocols to be turned off, given that it's antithetical to the ship's mission. But that capability had been shown several times on screen, so I went with it.)
     
    Last edited: Sep 15, 2021
    Charles Phipps and TheAlmanac like this.
  14. Charles Phipps

    Charles Phipps Rear Admiral Rear Admiral

    Joined:
    Sep 17, 2011
    Well, it makes sense if you want to practice, say, bat'leth fighting or martial arts where the ability to get hit in the face is part of the appeal. I assume there's a liability waiver you have to sign. It's a stupid idea, but I know plenty of people who engage in extreme and dangerous sports and disdain safer alternatives.
     
    AlexMC likes this.
  15. Christopher

    Christopher Writer Admiral

    Joined:
    Mar 15, 2001
    Holodeck safeties aren't about preventing any pain, though, just about preventing serious injury or death. There is no valid reason to disable that system; it's tantamount to signing a waiver for permission to jump out of a plane without a parachute. No way would that ever be legal.
     
  16. Charles Phipps

    Charles Phipps Rear Admiral Rear Admiral

    Joined:
    Sep 17, 2011
    On a Klingon vessel? I doubt it.

    On the Enterprise, I imagine the staff have the authority to override anything.
     
  17. Christopher

    Christopher Writer Admiral

    Joined:
    Mar 15, 2001
    Since when were we talking about Klingon vessels? You mentioned a liability waiver, which doesn't sound very Klingon.
     
  18. Jinn

    Jinn Mistress of the Chaotic Energies Rear Admiral

    Joined:
    Dec 22, 2015
  19. Charles Phipps

    Charles Phipps Rear Admiral Rear Admiral

    Joined:
    Sep 17, 2011
    True, I meant La Sirena, which was previously owned by a Klingon smuggler. It was a clever twist, I felt, and helped explain why there was a holodeck onboard (the Klingon was discommendated and needed to make fake Klingon friends). As for a liability waiver in the civilian market, I suppose you're right that you would have to hack your holodeck to allow lethality.

    There's a great moment in NEW FRONTIER where the Chief of Security dies in what is blatantly a simulation of the Avengers (unnamed, of course), killed by Thor's hammer. Shelby immediately bans turning off the safeties.
     
  20. JJMiller

    JJMiller Writer Red Shirt

    Joined:
    Jul 22, 2013
    Location:
    Wisconsin
    I'd imagine the safety systems, in fact, get most of the processing power. An environment capable of generating the attendant force fields for holographic content would theoretically be able to kill anyone on command, and not with a fake bullet. I suppose there's a story out there where someone was given a holographic embolism.
     
    TheAlmanac and Charles Phipps like this.