• Welcome! The TrekBBS is the number one place to chat about Star Trek with like-minded fans.
    If you are not already a member then please register an account and join in the discussion!

Are sentient androids really THAT hard to make?

Newspaper Taxi

Fleet Captain
Fleet Captain
The Federation and its scientists have managed to do the following:

* Break the speed of light barrier -- once they even went Warp 10!

* Develop technology that takes a person apart molecule by molecule then puts them back together somewhere else

* Create devices that can produce any item, material, or food you want by assembling molecules the right way

* Build deflector dishes (which can do just about anything)


And so on. If you can do all of those things, shouldn't making a sentient android be relatively simple? It doesn't require breaking anything impossible -- like, you know, the laws of physics. It just takes hundreds of years' worth of space geniuses and space technology.

Was there some behind the scenes reason why robots and androids were discouraged?

EDIT: This was meant to go to General Discussion. Can a mod move it, please? It seems I missed.
 
Honestly, they really shouldn't be, especially in the novels, after someone thought up the "holo-tronic brain."
 
I would think that creating artificial sentient machines presents a lot of story opportunities, since there are a lot of moral implications and built-in conflicts. The creation of those other devices, by contrast, probably didn't have (or the writers didn't envision) many story opportunities with a significant human element. So having the development of artificial life as an ongoing thread throughout the series seems to make sense.
 
In reality, artificial intelligence research has lagged far behind what's been predicted. There are many AI researchers who are skeptical that true artificial sapience will ever be attainable. Of course, there are others who are convinced that it will happen within our lifetimes. But for the purposes of fiction, you can find a legitimate scientific argument to justify either approach.
 
True. And also, there are multiple definitions of artificial intelligence (machines that think and act like humans; machines that think and act rationally; etc.), and I suppose depending on where on that spectrum an AI researcher falls, s/he would be more or less inclined to believe in intelligent emergent behaviour. It's also not clear whether true (or strong) artificial intelligence (where the goals include generalized learning and application of that learning) would be possible without artificial life (where the goals include survival, adaptability and reproduction), since we know that intelligence grew out of simple life on our planet. In many ways, I think that A.I. is an attempt to short-circuit the evolutionary process -- to artificially develop what nature has taken eons to create, in a medium that's malleable (electronics) but perhaps needs to be made even more flexible.
 
The Federation and its scientists have managed to do the following:

* Break the speed of light barrier -- once they even went Warp 10!

* Develop technology that takes a person apart molecule by molecule then puts them back together somewhere else

* Create devices that can produce any item, material, or food you want by assembling molecules the right way

* Build deflector dishes (which can do just about anything)


And so on. If you can do all of those things, shouldn't making a sentient android be relatively simple? It doesn't require breaking anything impossible -- like, you know, the laws of physics. It just takes hundreds of years' worth of space geniuses and space technology.

Was there some behind the scenes reason why robots and androids were discouraged?


I totally agree. It's implausible that AI hasn't been pursued with gusto. We're closer to it NOW than we are to most other Trek concepts.

~String
 
Sentient androids are only rare in the TNG era. On TOS they were a dime a dozen, although most machine intelligences were susceptible to the James T. Kirk Logic Bomb attack. You'd think Microsoft would have patched that bug.
 
In reality, artificial intelligence research has lagged far behind what's been predicted. There are many AI researchers who are skeptical that true artificial sapience will ever be attainable. Of course, there are others who are convinced that it will happen within our lifetimes. But for the purposes of fiction, you can find a legitimate scientific argument to justify either approach.
I think it's hard to justify the former notion. Even if it turns out that sapience requires the same materials as our own brains, we can still--in principle--build designed, artificial systems using those materials.
 
Sentient androids are only rare in the TNG era. On TOS they were a dime a dozen, although most machine intelligences were susceptible to the James T. Kirk Logic Bomb attack. You'd think Microsoft would have patched that bug.

I don't think most of the androids encountered in TOS really qualified as sentient. Of the Exo III androids, Ruk seemed moderately sentient, capable of resisting his programming and acting on what seemed to be emotional impulses; but Andrea was little more than a programmed sex doll and Korby, Brown, and DupliKirk bore the downloaded minds of humans and thus arguably don't really count as pure AI. The Mudd's Planet androids were hardly sentient; they were just drone bodies controlled by a single central computer that was very rigid and limited in its responses. Sargon's androids were nothing more than mechanical bodies for housing incorporeal consciousnesses. Other than Ruk, the only unambiguously sentient android encountered in TOS was Rayna Kapec. And Immortal Coil posited that Noonien Soong was a student of Flint's, so Soong's androids may have been based on the same principles as Rayna. (Note the similarity between Rayna's fate and Lal's.)


I think it's hard to justify the former notion. Even if it turns out that sapience requires the same materials as our own brains, we can still--in principle--build designed, artificial systems using those materials.

Yes, and I've always suspected that Voyager's use of bioneural gel packs may have played a role in the Doctor's sentience. For that matter, Data was originally conceived as being pseudo-organic, kind of like a Moore-BSG Cylon, before he was retconned into a more conventional mechanical man. (Cf. his "Do I not leak?" speech in "The Naked Now.")

Still, one could question whether that would qualify as true artificial intelligence as opposed to merely a convoluted form of procreation. To me, the term "artificial intelligence" implies not merely creating a copy of something that possesses a naturally evolved intelligence (such as synthesizing a human brain), but designing and creating intelligence, the process of thought itself.
 
I never understood why it was hard to create sentient androids in the Trek universe. The EMH has feelings, humor, and even an ego from the very first episode, and that's despite being designed as an emergency medical program only.

Maybe it's just about the size of the brain, and Data is so much admired because he has all these functions in a human-sized brain, while the EMH or Vic Fontaine need a larger computer to run.
 
^The Doctor gave the impression of feeling irritation and impatience in the first episode, but that was because he was programmed to simulate the responses of Lewis Zimmerman. Characters in The Sims can simulate the appearance of emotion, but that doesn't mean they're actually thinking and feeling.
 
Still, one could question whether that would qualify as true artificial intelligence as opposed to merely a convoluted form of procreation. To me, the term "artificial intelligence" implies not merely creating a copy of something that possesses a naturally evolved intelligence (such as synthesizing a human brain), but designing and creating intelligence, the process of thought itself.

True. And this is what AI researchers are trying to do today, albeit in narrow, application-specific domains. True strong/general intelligence might require a medium that's closer to the human brain in flexibility -- one that the designed intelligence "runs" on, or that is itself part of the design process (much as our own learning and growth shape and develop our brains).

AFAIK, thought arises out of neuronal activation in the brain, based on inputs. Rational thought arises out of the practice of having our neurons fire and connect in certain ways and not others (and that's why it's possible for us to be completely irrational, selectively rational, or as logical as a Vulcan). The process starts from the basic needs of survival and reproduction, is refined by education (punishment and reward), and is constantly reinforced or corrected as we live and change our lives.
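Incidentally, the punishment-and-reward refinement described above is loosely what the classic perceptron learning rule does: nudge connection weights whenever the output is wrong. A minimal sketch, purely for illustration (the neuron, training data, and all names here are invented, not from any real AI system):

```python
# A single artificial neuron trained by "punishment and reward":
# the perceptron rule adjusts connection weights whenever the
# neuron's output is wrong, analogous to reinforcement/correction.

def fire(weights, bias, inputs):
    """Neuron fires (1) if the weighted input activation crosses the threshold."""
    activation = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if activation > 0 else 0

def train(samples, epochs=20, lr=0.1):
    """Start with blank weights; correct them after every mistake."""
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            error = target - fire(weights, bias, inputs)  # the "punishment" signal
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# Teach the neuron logical AND purely from examples.
and_samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(and_samples)
print([fire(w, b, x) for x, _ in and_samples])  # → [0, 0, 0, 1]
```

Of course, a single neuron like this is about as far from sapience as a doorbell, which is rather the point of the thread.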
 
I find it somewhat implausible that "organic" should in any way be connected to "sapient", or to "emotion". What could be the connection? All that is needed is a somewhat flexible and complex system, and all sorts of systems can be complex and flexible without featuring so much as a single carbon atom, let alone macroscopic amounts of soft goo.

The extensive mental capabilities of the EMH didn't seem to come as a surprise to anybody in VOY, really. His esoteric hobbies and interests did, but not his ability to think or emote. Data's abilities in those fields didn't seem to surprise anybody, either - rather, his inabilities were the subject of surprised and amused comments.

It doesn't seem all that difficult to assume, then, that sentient and feeling machines are rare not because they are difficult to make, but because they are such old news and of no interest to anybody. Perhaps AI was perfected in the 21st century already, and found useless, with all further development and use abandoned - except when it happened as an emergent property of home appliances.

Timo Saloniemi
 
^Organic intelligence might not be the only kind, but it could be the only kind we would recognize as anything like our own. Our consciousness is something that evolved in an organic species, driven by its survival needs and its biology. So if we define intelligence as something we could identify with and communicate with, it might have to be something that evolved biologically as ours did. A purely electronic intelligence, something with no grounding in cellular biology or the sensory feedback of a living body or the evolutionary incentives that shape behavior and perception, might be too profoundly alien for us to have any common ground with it.
 
Perhaps AI was perfected in the 21st century already, and found useless, with all further development and use abandoned - except when it happened as an emergent property of home appliances.
Or perhaps AI was developed, humanity hit the Singularity, and what we perceive to be Star Trek is, in fact, a simulation running in the Singularity groupmind.
 
A purely electronic intelligence would also be something dangerous to create. Doomsday scenarios like in Terminator, Matrix and I-Robot seem implausible in reality on the face of it, but may be possible. Such an intelligence would have to form its own goals and it cannot be assumed that these goals would be ultimately beneficial for mankind, having little or no common ground with flesh-and-blood intelligence, even if it were "educated" in some way or had a kill switch or had hardwired laws.
 
^Ah, but that lack of commonality is why I think there's little chance of such an apocalypse scenario. Such alien beings would have no common goals or needs with us so we wouldn't be competing with them for anything, and they might not have any interest in dominating us or controlling our lives. They might not even be aware of the physical world in the way that we are, perhaps existing primarily in cyberspace. It's possible that they could do things that were accidentally harmful to us without even realizing it, obligating us to stay out of their way; but I think scenarios of robots pursuing deliberate conquest or genocide are more the stuff of thrillers than plausible futurism.

And it's not I-Robot, it's I, Robot.
 
^The Doctor gave the impression of feeling irritation and impatience in the first episode, but that was because he was programmed to simulate the responses of Lewis Zimmerman. Characters in The Sims can simulate the appearance of emotion, but that doesn't mean they're actually thinking and feeling.

Which leads to the question of when the Doctor stopped acting like he was sentient and started actually being sentient, and how to draw that line. I'd argue that there's a point when the simulation reaches such fidelity that it doesn't really matter. It's not like you couldn't pose the same question to a real person. A sophist or sociopath would easily accuse everyone of faking their emotional responses, and we really have no way to prove otherwise.

Actually, it's probably possible to at least confirm a holodeck character's non-sentience. I imagine they could be deliberately programmed to make their decisions along a series of scripted responses, like an infinitely more complex version of Eliza, rather than using open-ended holistic algorithms that would more closely analogize to human thought.

That could be what separates your Doctors, Moriartys, and Vic Fontaines from your run-of-the-mill Dixon Hill bimbos and Fair Haven boy-toys. So I guess you could probably tell the difference between genuine sentience and a convincing fake with a simple core dump.

But, then again, that only works if there's a situation where the scripted responses would seize up when the program was led somewhere the rails didn't go (which we've seen, as in the early TNG holodeck malfunctions where the characters more or less ignored out-of-character behavior by the program's participants). If, on the other hand, the scripting were robust enough to remain convincing no matter what bizarre, out-of-this-world thing happened to the characters (as we saw in some of the later Voyager holodeck malfunctions, where characters started losing their shit when the crew did crazy futuristic things), then we come back around to the point where I'd argue again: if the outcome is indistinguishable either way, then the methodology is a difference that makes no difference and, thus, is no difference.
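The Eliza-style scripting described above is easy to sketch. Here's a toy version of a "core dump" that would expose a non-sentient character: the entire mind is a fixed keyword table, and anything off the rails gets a canned stall. (All dialogue and names here are invented for illustration, not from any episode.)

```python
# A toy "scripted" holodeck character in the Eliza mold: responses come
# from a fixed keyword table, so a core dump (the table itself) reveals
# there is no open-ended cognition behind them.

SCRIPT = {
    "case": "It's a dangerous town, pal. Watch your back.",
    "dame": "She walked into my office like trouble in heels.",
    "drink": "Bartender! Two more of the usual.",
}

def dixon_hill_npc(line):
    """Return the scripted response for the first matching keyword,
    or fall back to a canned stall when led off the rails."""
    for keyword, response in SCRIPT.items():
        if keyword in line.lower():
            return response
    return "Sorry, pal, I don't follow."  # the rails end here

print(dixon_hill_npc("Tell me about the case."))   # scripted hit
print(dixon_hill_npc("Computer, end program!"))    # off-script: canned stall
```

The seize-up behavior is exactly the fallback branch: out-of-character input produces an in-character non-answer, which is what the early-TNG malfunction scenes look like from the outside.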
 
^@Christopher
I, Robot. Thanks.

IMO, an electronic intelligence doesn't have to be confined to cyberspace. And it would be possible for such an intelligence to compete with humanity for energy or other resources for survival. Sure, I'm not saying they would necessarily be malicious enough to want to conquer or eliminate humanity, but their need for survival might clash with ours (which is how the events of The Matrix started out; with benign robotic intelligences wanting to maintain their own society). Also, it's not necessary for two species that require some of the same resources to have common ground or understanding, especially if one of them is organic and the other isn't.

This is all highly speculative anyway. I'm a bit of a skeptic as regards the Singularity and true AI being born in our lifetimes, even though I am an AI grad student. :)
 