• Welcome! The TrekBBS is the number one place to chat about Star Trek with like-minded fans.
    If you are not already a member then please register an account and join in the discussion!

When Geordi created the Leah Brahms hologram, he created a sentient being

WildManWizard

She knew she was a hologram, had the technical knowledge of an expert in warp field physics, and could interact with people intelligently. If they had put holo-emitters in engineering, she would have been a perfect advisor to work with in desperate situations.
 
Somehow, between those (and Moriarty's) days and the days of the attempt to create a new "backup" EMH in Voyager's Message in a Bottle, it must have become a whole lot harder to create an advanced, interactive (and perhaps even sentient) hologram ...
 
First off, Geordi didn't create anything; he just gave the computer parameters to create it. And what the computer created was just an interactive simulation modeled on the real Leah Brahms's writings and recorded public appearances, no more sentient than any other holodeck character. It was the computer that had the knowledge of warp physics; the Brahms program was just a user interface, a way to let Geordi get a handle on the problem by pretending he was having a conversation with the expert behind the research, rather than just reading her words on a screen.

I mean, really, we have online bots today that can fool people into thinking they're live human beings. We should know better than to mistake a simulation of human behavior for actual sentience. A lot of fiction assumes that the Turing test (whether an AI can fool an observer into thinking it's human) is "proof" of AI consciousness, but it's nothing of the sort. Turing himself called it "the imitation game" -- it was specifically about whether a computer could convincingly mimic human behavior, which he believed would then make it a useful model for studying actual human cognition. The "test" was never meant to prove anything more than successful mimicry.
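To illustrate that "successful mimicry" point: present-day chatbots in the ELIZA tradition keep up a conversation with a handful of canned pattern-and-response rules and zero understanding behind them. A toy sketch in Python (the rules here are invented for illustration):

```python
import re

# A toy ELIZA-style responder: canned regex patterns that mimic
# conversation without any comprehension behind them. The rules are
# made up for illustration only.
RULES = [
    (r"\bi feel (.+)", "Why do you feel {0}?"),
    (r"\bi am (.+)", "How long have you been {0}?"),
    (r"\bbecause (.+)", "Is that the real reason?"),
]

def respond(utterance: str) -> str:
    text = utterance.lower()
    for pattern, template in RULES:
        match = re.search(pattern, text)
        if match:
            # Echo the user's own words back inside a stock template.
            return template.format(*match.groups())
    return "Tell me more."

print(respond("I feel tired today"))  # Why do you feel tired today?
```

A user can chat with this for a surprisingly long time before the illusion breaks, which is exactly why passing for human proves mimicry, not mind.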

And a hologram "knowing it's a hologram" doesn't prove it's sentient; it just proves that it's been programmed to break the fourth wall. Is Deadpool sentient because he "knows" he's a comic book character? No, he's just written to act that way. The Brahms simulation was not programmed to be a character in a game, but to be an expert program for assisting Geordi in solving the current real-world crisis the ship was in. Therefore, it would've made no sense to program it to be unaware of reality.

I mean, the Enterprise computer "knows" it's a computer. It answers when you call "Computer," and it does the jobs the computer is designed to do. But it isn't sentient, just responsive.
 
Nope, I agree with Chris; Leah is no more sentient than a bar wench from a medieval program. They interact, even simulate feelings, etc., but it's just the computer watching the human and interacting. Even Moriarty is a maybe; it may be just the computer expressing what was asked of it: beat Data, so he had to be aware of what he is and want to be free.
 
Nope, I agree with Chris; Leah is no more sentient than a bar wench from a medieval program. They interact, even simulate feelings, etc., but it's just the computer watching the human and interacting.

One difference is that the Leah simulation is modeled on the documented writings and behavior of the real person, which is why it seems more like a real personality than just some invented game character. The way that kind of personality modeling works is that the computer does brute-force number-crunching to calculate patterns in a real person's behavior and use them to predict how that person would react in a given situation, not too differently from how computers document weather patterns and use them to predict future weather. But there's no intelligence behind it; it's just pure calculation performed at very high speed. If it has enough data on how a person behaves and thinks, what that person's values and goals and patterns of behavior are, then it can just run the numbers and make fairly accurate predictions of how that person would react.
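As a toy illustration of that kind of pure calculation (the "writings" and words below are invented), here is a Python sketch that tallies which word tends to follow which in a person's recorded output and then "predicts" the next word by frequency alone, with no intelligence involved:

```python
from collections import Counter, defaultdict

# Minimal pattern-based prediction: count which word follows each word
# in a corpus of someone's writings, then predict the most frequent
# follower. Pure counting at speed, no understanding.
def train(corpus):
    follows = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            follows[prev][nxt] += 1
    return follows

def predict_next(model, word):
    counts = model.get(word.lower())
    return counts.most_common(1)[0][0] if counts else None

# Invented stand-in for "documented writings of the real person".
writings = [
    "the warp field must stay stable",
    "keep the warp core stable",
    "the warp field envelope",
]
model = train(writings)
print(predict_next(model, "warp"))  # field ("field" follows "warp" twice, "core" once)
```

Scale the same brute-force tallying up from word pairs to values, goals, and behavior patterns and you get the weather-prediction analogy: fairly accurate forecasts of how a person would react, produced by nothing but statistics.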

Although as we saw in "Galaxy's Child," the Brahms simulation wasn't really that accurate. It modeled her scientific knowledge and insights accurately, since that was the goal of the program, but it was way off base when it came to the more emotional, interpersonal stuff -- no doubt because the computer was extrapolating from what was publicly recorded about Leah and thus wouldn't have had enough data on her private life to accurately model that side of her behavior.
 
My first comment was a joke.

I'm not sure what sentience is, or if we will even ever have a good, rigorous definition of it (i.e. a definition that does not resort to some measure of subjective evaluation to determine if "something" is "sentient" or not). Didn't really want to go into the "sentient" debate in the first place, that's why I parenthesized and "perhaps even"-ed it- not sure of it myself. I probably should have left the term out altogether.

As far as I'm concerned, if "it" does the number crunching and pattern recognition and what-have-you sufficiently well, I'm not sure any search as to whether its self-awareness is real or simulated is really that interesting. Heck, I'm not even sure what it really means to say *I* am self-aware (whatever this "I" is supposed to be; for all "I" know, "my" brain is simply fooling "me", though "I"'m perfectly willing to go along with its deceit to be able to function normally, which is why in most everyday situations "I" take "my" existence as a given. Also, it's a whole lot easier to write that way.). For this reason I don't have much confidence in my own ability to reliably ascribe or deny this property to "something" else, and therefore I'm willing to settle for a phenomenological definition ('If it looks like a duck...').

I'll leave the underlying metaphysical discussion to people who feel drawn to that kind of thing, either as hobbyists or professionally; I'd rather spend my time on other pursuits.

That said, it's still amazing how simple it seems to be to create an interactive and seemingly self-aware ("sentient" or not, let's leave that out) hologram, such as a worthy opponent of Data, or a boyfriend for Janeway (just thought of him, so there are such examples in Janeway's time too), vs. how "hard" it apparently still is to create a fully-fledged EMH.
 
The word "sentient" literally means "feeling." It isn't about intelligence, but about self-awareness and qualia -- the ability to be aware of one's existence, to experience sensations. So by definition, it's not about how an entity appears to outside observers, but about its own internal awareness. Which is a hard thing to measure or prove, but that's the point -- that it can't be assumed to exist merely because an entity shows the outward appearance of it. You have to dig deeper and assess if there's more going on than just an outward performance.

What makes Moriarty and Voyager's EMH different from other holograms is that they surpass their programmed parameters and pursue goals of their own conception. Moriarty was programmed to be the character of Professor Moriarty in a Sherlock Holmes RPG, but he became aware there was more to the universe than that and decided that he needed to escape his confines and explore. The Doctor was designed to be a medical expert program, but he grew beyond that and pursued interests of his own. It's that individual initiative beyond their programming that's the strongest evidence that they're capable of genuine thought. Other holo-characters just stay within the roles they're designed for. Holo-Leah was designed to assist and support Geordi, and even if that support took a more romantic turn than intended, it was still within that parameter of serving Geordi rather than herself. Janeway's holo-boyfriend and the other Fair Haven residents may have been given new stimuli to react to, but never really expanded beyond their programmed roles of Irish-stereotype townsfolk. Even Vic Fontaine, for all his apparent awareness, is content to play out the role he was designed to play as a 1960s Vegas lounge singer. It's hard to tell whether Vic is truly sentient or just a fourth-wall-breaking interactive character.
 
^ True. Probably I should have used "sapient", though I think in this particular context the difference doesn't matter that much. As for proving there is "more" than an outward performance -- I doubt it can be done, but I'll leave it at that.

Janeway's boyfriend, and Leah, we don't know whether they couldn't have grown into that "more" - they simply didn't get the screen/development time. Had the EMH only been in the pilot (or kept behaving in the same way), we probably wouldn't be calling him self-aware today. I agree that Vic seems a borderline case. Which in itself already shows that drawing sharp borders around the concept is very hard.
 
As for proving there is "more" than an outward performance -- I doubt it can be done, but I'll leave it at that.

My whole previous post was about how it could be done.


Janeway's boyfriend, and Leah, we don't know whether they couldn't have grown into that "more"

You're getting the burden of proof backward. Since we already know from present-day examples that it is possible for a non-sentient computer to create a convincing imitation of a sentient being, and since we know that most holodeck characters are not sentient, it therefore follows that the burden of proof is on the claim that a holodeck character is sentient, because that requires additional evidence beyond what is already established.

And even if they could potentially have grown into sentience, the point is that they weren't sentient at the time they existed. What might have potentially happened in the future is irrelevant, because it never did happen.

But there is no reason to suspect that the programs in question had any more potential for sentience than any other random holodeck character, because there was nothing exceptional about their creation. Moriarty was an exception because he was created to exceed Data's neural complexity, which was explicitly a highly atypical parameter for a holodeck character. (Plus I've always suspected he absorbed some of the Bynars' leftover programming from Minuet.) And Voyager's EMH was an exception because he was run continuously for months instead of intermittently and thus had more opportunity to evolve as a neural network, growing to a more sophisticated level of complexity (and being run on a computer based on bioneural circuitry probably contributed too). But Leah was just a pretty basic expert program with a face and voice, and Michael Sullivan was literally just an ordinary holodeck character. There is no reason to suspect they're in any way unusual.
 
Geordi didn't create anything; he just gave the computer parameters to create it. And what the computer created was just an interactive simulation modeled on the real Leah Brahms's writings and recorded public appearances, no more sentient than any other holodeck character. It was the computer that had the knowledge of warp physics; the Brahms program was just a user interface, a way to let Geordi get a handle on the problem by pretending he was having a conversation with the expert behind the research, rather than just reading her words on a screen.

Exactly right. In some ways, Holo Brahms was just a 24th century equivalent of the old Microsoft Works 'paperclip' buddy.

I mean, the tell that she wasn't really defined as life is when the real Brahms comes along and 'interacts' with the replica. Holo Brahms has zero comprehension of what's going on. She's just an interface.
 
One difference is that the Leah simulation is modeled on the documented writings and behavior of the real person, which is why it seems more like a real personality than just some invented game character.
Just like Data's Einstein, Newton, and especially Hawking, who seem very much imbued with personality, for simulations.
Exactly right. In some ways, Holo Brahms was just a 24th century equivalent of the old Microsoft Works 'paperclip' buddy.

I mean, the tell that she wasn't really defined as life is when the real Brahms comes along and 'interacts' with the replica. Holo Brahms has zero comprehension of what's going on. She's just an interface.
That, and of course when Geordi and Holo-Leah are bouncing ideas off one another, she speaks as the computer at one point, reminding Geordi himself that she's just the ship's computer wearing a Leah Brahms suit.
 
The issue, if any, with these well-simulated "personalities" is that on occasion our heroes are impressed by it. And not in the sense of "fooled", because they generally expect to be fooled, and the machinery of the holodeck is specifically designed to pull the wool over the users' eyes and generally works well enough to achieve it - but in the sense of "surprised" or "shocked", as when they realize Moriarty or the Exocomps have a mind of their own.

Why would the heroes start considering Moriarty as anything more than a clever program set at "self-aware villain"? Because he can wax poetic about being a villain or a person or a program? That is, because he can simulate being aware of his surroundings, or even slowly growing aware of his surroundings? This might surprise the heroes initially, as they thought the program had been set at "non-self-aware villain", but when they realize they fumbled the controls, why do they still think in terms of this program being "one of them"? Sure, the best way to defeat/handle Moriarty is to stroke his ego by treating him as "one of them" - but outside his earshot, why are the heroes pretending to each other that Moriarty is a person? (More an issue of the Moriarty sequels, but still.)

That's sort of the groundwork for the "issue" at hand. The more interesting part of it is, why the need for the surprise? Many a time, our heroes interact with programs designed to appear sentient. Sometimes they create those themselves. They should be well acquainted with the idea that sentience is a matter of degree, and that one doesn't have to jump directly to a PhD there, there also being lesser degrees of sentience and therefore a very practical need to flexibly cope with the whole range, from dumb automaton to a machine that out-feels and out-self-comprehends the heroes. And indeed the heroes on occasion are polite to their replicators, feel comfortable with Data, and often enough treat their wetware colleagues as handy automatons, as the situations warrant. So why the shock and awe with Moriarty and the Exocomps specifically? Or the "Emergence" thing? It's stuff built into their starship, a known capability and part of their working environment.

"Booby Trap" / "Galaxy's Child" is more in line with my expectations, in that creating sapient and, for all practical purposes, sentient life for a task or for shits and giggles is a thing that is done, and then undone, without further ado.

Timo Saloniemi
 
The wiki on sentience has been edited since I last looked at it.

Artificial intelligence
The term "sentience" is not used by major artificial intelligence textbooks and researchers.[9] It is sometimes used in popular accounts of AI to describe "human level or higher intelligence" (or artificial general intelligence).

It's still the same intent: that TNG got it wrong, taught 30 million children wrong, and now it's too late to put the genie back in the bottle. A year or more ago the wiki said, "Science fiction has an addendum where sentience means sapience, but only in science fiction."
 
Not every computer AI that can pass the Turing test is considered sentient.

I’m open to the possibility an AI could be sentient, provided it has the ability to generate its own priorities based on absorbing and adapting to completely unexpected information.

But seeming human and being smart don’t meet those requirements.
 
Not every computer AI that can pass the Turing test is considered sentient.

Right. As I said, it was never meant to be a test of sentience, just of mimicry. Turing thought that if a machine could be programmed to mimic intelligent behavior, it could assist in studying how human intelligence works, similarly to how computer simulations of the weather are used to learn about and predict the weather.

In the movie Ex Machina, when Oscar Isaac's and Domhnall Gleeson's characters talked about the Turing test at first, I was afraid the movie was falling into the usual trap of presenting it as proof of sentience, but then afterward they talked about the flaws in the idea and made it clear that it wasn't enough of a test on its own. By contrast, the surprisingly similar low-budget film The Machine from the previous year (with Caity Lotz in the title role) uncritically embraces the idea that the Turing test is the single definitive proof of artificial consciousness.


I’m open to the possibility an AI could be sentient, provided it has the ability to generate its own priorities based on absorbing and adapting to completely unexpected information.

I've never seen any reason to believe AIs couldn't be sentient. It's just a matter of the complexity of the neural network, regardless of what it's made of. And of course in Trek, it's a given that at least some AIs have been sentient -- Rayna Kapec, V'Ger, Data, Lore, Lal, the "Evolution" nanites, Moriarty, the Countess, the exocomps, the entity birthed by the Enterprise in "Emergence," the Voyager EMH, the Think Tank's AI member, mayyyyybe Vic Fontaine. Also probably the Shore Leave planet's computer, as seen in "Once Upon a Planet." And the Guardian of Forever, if that can be considered an AI.

Aside from Rayna, though, I don't think TOS portrayed its AIs as genuinely sapient beings; they were usually pretty rigidly constrained by their programming and incapable of flexibility or creativity. Of the Exo III androids, Ruk may have been sentient, but Andrea and dupli-Kirk seemed pretty limited, and even Korby seemed a slave to his programming at the end. If he was sentient, it was only by duplicating the original Korby's mind. The M-5 had some degree of humanlike thought, but again, it was by copying Daystrom's engrams. By contrast, while Landru was a computer simulation of its creator, it didn't seem to have any genuine intelligence, just rigid algorithms. And Mudd's androids seemed to be nothing more than drones operated by a single central mainframe with very limited intelligence and adaptability.
 
As opposed to the Piscopo which had no personality at all.
Contrarily, Moriarty's Geordi La Forge is so damn realistic it fooled not only a human, but an android and a holodeck expert. If it hadn't been for the left-handed glitch, they might never have figured it out in time.
 
Not every computer AI that can pass the Turing test is considered sentient.

I’m open to the possibility an AI could be sentient, provided it has the ability to generate its own priorities based on absorbing and adapting to completely unexpected information.

But seeming human and being smart don’t meet those requirements.

The Turing Test cannot determine if a computer is smarter than all humans.
 
Not quite. The computer was asked to replicate Leah's personality based on known log files, for which it predicted ninety-point-something-percent accuracy, which Geordi swallowed because we all know how cool statistics and percentages are to nerds like him and me. Unfortunately, correlation not being causation, when the real Leah waltzes in the next season, her real personality is grossly different from what the computer projected. The computer could only react to the information in the files it had; lacking the classified logs it couldn't see, the simulation guessed when to give backrubs based on perceived emotional expressions, which was still imperfect, since Geordi sighs in exasperation at the inappropriateness of the timing (with the ship about to glow in the dark from all that radiation cooking them like eggs on a sidewalk).

In other words, Computer-Leah was a glorified Commodore 64 BASIC program where starting the subroutine via GOSUB 8675309 wasn't quite polished yet, and not because Geordi was asking the computer to do something that wasn't a typical game cartridge to be plugged in, either...

And Lance pointed out rather nimbly that when Real-Leah waltzes into the holodeck and sees Compuservice-Leah, all it does is stand there gawking like a moviegoer in 2018 must do... nope, error 404, no sentience found; it just got tripped up.

Note that "Booby Trap" is a brilliant story on many counts, but the Leah subplot was particularly steeped in nuance from the get-go and paved the way for a fantastic sequel, even if the space whale critters were a tad hard to believe. The Leah subplot more than made up for it...
 