Welcome! The TrekBBS is the number one place to chat about Star Trek with like-minded fans. If you are not already a member then please register an account and join in the discussion!

The scientist planning to upload his brain to a COMPUTER

So the machine intelligences the humans interact with could be plugged into the Matrix too (a discussion for a whole different thread).

In The Matrix Online, the Machine emissary is a sentinel commander of some sort, physically existing in their city, connected to the Matrix.

But his appearance was shaped by the fact that his RSI was forced, trying to imagine a generic human body based on biological data only. So 'he' appeared human, but with the same distinct red eyes of the Machines and oddly disproportionate limbs and muscle mass.

They can connect and move about like the two other races (human and program) but they have the hardest time doing so apparently.
 
 
What an utterly incredible Thread. When we have the Forum Awards, I nominate this for Thread of the Forum.

I will be reading this again, and I am hopeful that I will have some thoughtful contributions to make. In the meantime, I thank each and every Contributor for making this a singular collection of point/counterpoint, compare/contrast, complete/compelling read.

One thing I am now sure of, however, and am very sad about:

So much for transporter technology. It is clear to me from this collection of contributions that it will never, ever Be. We would never be able to get the brain "right."

Another thing that intrigued me upon first read:
Even were we to get the "molecular reproduction/simulation" (I think those were the words) of the brain to perfection, what of the electromagnetic field and vacuum between the molecules?
 
While it is pleasing to see that after a hundred thousand years people are starting to get excited about sonic atmotelepathy, I think quantum pseudo-telepathy is cooler.
 
Sorry for the delayed reply. Been busy.

I'm going to stop you there: I have not granted that PROGRAMS can be conscious.
Nor did I say that you did. To remind you, what I was commenting on was this hypothetical statement that you had made:

To be sure, even a simulation of AI consciousness would not itself be conscious.

That translates to: "Even if an AI were conscious, a simulation of that conscious AI would not be conscious." Clearly, you were entertaining the possibility of a conscious AI, even if only to try to assert a fundamental distinction between a thing and a simulation of it. But, to recap, there you made a fundamental error, because finite discrete processes such as running computer programs can be simulated exactly, by the process of emulation. This exact kind of simulation is not something that can—as far as we know—be done for arbitrary physical processes. And an AI is a kind of computer program, by definition.

So, the hypothetical statement of yours that I re-quoted just above is false. That does not mean that there is a conscious artificial intelligence. It does simply mean, though, that emulations (and therefore such certain kinds of exact simulations) of conscious artificial intelligences would themselves also be conscious, given that by hypothesis the program being emulated is itself conscious. It simply would not be possible to have a conscious artificial intelligence that lacked this property.
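To make the emulation point concrete, here's a minimal sketch (the toy machine and its names are invented for illustration, not anyone's actual AI): a finite state machine run directly, and the same machine run through an interpreter that treats its transition rule as data. Because the process is finite and discrete, the emulated trace matches the direct one bit for bit.

```python
# A toy "program": a finite state machine that counts modulo 4.
def machine(steps):
    state = 0
    trace = []
    for _ in range(steps):
        state = (state + 1) % 4
        trace.append(state)
    return trace

# An emulator: the same transition rule represented as DATA
# (a lookup table), interpreted step by step rather than executed directly.
TRANSITIONS = {0: 1, 1: 2, 2: 3, 3: 0}

def emulate(steps):
    state = 0
    trace = []
    for _ in range(steps):
        state = TRANSITIONS[state]
        trace.append(state)
    return trace

# The emulation is exact: every state in the trace is identical.
print(machine(10) == emulate(10))  # True
```

This is the sense in which a running program, unlike an arbitrary physical process, can be simulated *exactly*: the emulator reproduces the very same sequence of discrete states, not an approximation of it.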

Far from it, I continue to state that consciousness can very well arise from a computer system under the right circumstances. Consciousness, however, is not software, nor can it be reduced TO software, since many of the components that make consciousness can only exist in the context of a healthy functioning brain.
It's unfortunate that you again betray a lack of comprehension of what constitutes a "computer system." If you are saying that consciousness can arise in a computer system, then what you are saying implies that consciousness can be an attribute of software. If you don't mean to be talking about software, you really shouldn't be talking about either computer systems or artificial intelligences at all. Or, you must be talking about something besides a modern digital computer, perhaps some sort of analog computer, hypothetical quantum computer, or something that you've made up in your imagination. Because when someone simply says "computer," it is reasonably taken to mean something of a very specific kind, namely a digital computer whose operation is—and I'll say it again—ultimately describable as a finite, discrete process, malfunctions notwithstanding.

Which, IF THAT WERE TRUE, would only apply to AIs. And that again is assuming that a piece of software in and of itself can actually be conscious across a vast diversity of hardware.
Strangely enough, you got it right this time.

That still would not apply to a simulation of HUMAN consciousness, however, since the simulation is the product of the AI's calculations and not a product of the simulation's interactions with itself.
Well, here you've reiterated something that you've repeatedly hammered on throughout the thread that is quite deserving of refutation. Upthread, you said these, which I'll assume are two previous iterations of the same idea:

This is because the simulated brain doesn't actually do anything that influences its future state, the computer running the simulation does.

What I'm saying is that for the simulation to be conscious, the outputs from the simulator would have to be able to REALLY interact with each other in a way that is both fully consistent with the original and external to the processor. You can't just SIMULATE their interactions by imposing those states from an algorithm, they would have to arise "naturally", directly from each other. A simulation can't do that, EVERYTHING that it does is generated from a single source and there is no interactivity between its components.

To summarize, it seems that you are claiming that a simulation can never behave as a natural process, because the behaviors of the simulated components are dictated entirely by an algorithm instead of by natural interactions with each other.

Now, it may be true that a simulation of a certain natural process can't behave exactly as the natural process itself, but it is non-scientific to ascribe the reason for that as being because the simulated behaviors of the constituents are all determined by a single algorithm instead of mutual interactions, as I'll now show.

By definition, a simulation of a natural process is an approximation of the natural process made relative to all known natural laws that the natural process itself evidently comports to. The form of the algorithm is dictated by the mathematical formulations of those laws and the method of approximation admitted for the purpose of simulation. Therefore, and assuming that the simulation is bug-free, there are only two reasons, either singly or in combination, by which the simulation can differ from natural behavior:
  1. The natural process does not comport exactly to the assumed natural laws. This itself may occur for either of two reasons, either singly or in combination:
    • One or more of the assumed natural laws does not in fact hold for the natural process.
    • There are additional unknown natural laws that influence events.
  2. The method of approximation employed to solve the implied system of differential equations is not exact enough. (Remark: I'm admitting, as a part of the so-called "method of approximation," simplifying assumptions such as neglecting higher-order effects which would encompass ignoring details such as minor deviations from symmetric configurations. As is well known, such simplifying assumptions generally reduce the computational workload at a cost of accuracy.)
The simulation itself being an algorithm is not on the list of reasons why there might be divergence between the simulation and natural behavior. If the simulation being an algorithm is a problem, then it will be a problem arising from one of these causes listed above that makes rendering an algorithm impossible: for example, there might be an infinitude of natural laws having significant effect that cannot be summarized in a finite conjunction of finite expressions, one or more of the natural laws might have a form that does not admit accurate approximation, etc.
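The second listed reason, the method of approximation not being exact enough, can be seen in a few lines. A hypothetical sketch: applying forward Euler to dx/dt = -x and comparing against the exact solution exp(-t). The divergence from nature's behavior shrinks as the step size shrinks, which locates the error in the approximation method, not in the mere fact that the simulation is an algorithm.

```python
import math

# Simulate dx/dt = -x with forward Euler; the exact solution is exp(-t).
def euler(x0, t_end, dt):
    n = round(t_end / dt)  # number of discrete steps
    x = x0
    for _ in range(n):
        x += dt * (-x)     # forward Euler update
    return x

exact = math.exp(-1.0)
for dt in (0.1, 0.01, 0.001):
    err = abs(euler(1.0, 1.0, dt) - exact)
    # The error shrinks roughly in proportion to dt.
    print(f"dt={dt}: error={err:.2e}")
```

Nothing about the algorithm per se prevents convergence; a finer step (or a higher-order method) systematically closes the gap, exactly as reason 2 predicts.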

Although it's perhaps philosophical to say, I think that the following is worth noting. With respect to the idea that an infinitude of natural laws have significant effect that cannot be summarized in a finite conjunction of finite expressions, not only is that idea formulated by a fairly precise mathematical statement, but also that idea could be reasonably argued to be in correspondence with the intuitive idea that there is an ineffable essence to physical reality that cannot be described or even comprehended by humans. Certainly, if you work it the other way, the opposite idea that all natural laws can be summarized in a finite conjunction of finite expressions would seem to admit both the description and comprehension of all natural laws by humans, at least in principle, most especially when the right conjunction of finite expressions has been hit upon. (Remark: I mention this, Crazy Eddie, because I believe that it indicates possible circumstances under which what you evidently believe in would hold, whereby simulations would always ultimately fall short of reality. This is in contrast to the explanation that you give for that behavior, which as I said I do not believe to be sound.)

Actually no one in this thread has proposed a hard definition of consciousness that I've seen. Can you, or anyone, refer me to a post that does so, that I must have missed?

I did a couple of pages ago. I assume it wasn't "hard" enough for your liking or you overlooked it.
If you'd like me to comment on your definition, could you link to it, or better yet quote it, please?

Which is NOT what we're discussing, if the objective is a controlled experiment to test for the genesis of the simulation's behavior. The person conducting the experiment could very well cheat and get the results he wants, but that would tell us nothing.
My point was that your standard under discussion here is itself a cheat, given that even people seem to have comparable and/or similar vulnerabilities.
 
But, to recap, there you made a fundamental error, because finite discrete processes such as running computer programs can be simulated exactly, by the process of emulation.
At which point it is no longer a simulation, but a second instance of the system in question. Put another way: the only kind of computer system that could emulate a conscious AI is a system that is ITSELF capable of consciousness (ergo it can host a conscious AI).

Consciousness can very well emerge from an AI that is running a program emulated from another system. But the PROGRAM isn't conscious, the computer is. Or in terms of the analogy you quoted: The simulation doesn't achieve consciousness, the simulator does.

It's unfortunate that you again betray a lack of comprehension of what constitutes a "computer system." If you are saying that consciousness can arise in a computer system, then what you are saying implies that consciousness can be an attribute of software.
It is unfortunate that you betray a lack of comprehension for the content of my posts. At the risk of being redundant, consciousness being an emergent property of programming activity (as opposed to brain activity in biological systems) means that consciousness is a property of hardware, not software. This is such because the software cannot display any meaningful activity independent of its hardware; strictly speaking, "software" is a mathematical abstraction that describes the interactions of hardware components within that system.

To say that consciousness can be an attribute of software is a bit like saying that memory can be an attribute of language.

To summarize, it seems that you are claiming that a simulation can never behave as a natural process, because the behaviors of the simulated components are dictated entirely by an algorithm instead of by natural interactions with each other.
Drop the word "natural" from that paragraph and you're close to the mark. A better word might be "genuine" or "causal." The point being, the simulated components never actually interact with each other, or with anything else for that matter, as their states are produced as outputs from the simulator's program. They are purely effects with no causal power.

You may also want to drop the condescending smartass tone and consider for a moment that you might not be the only person in this thread with a formal background in computer science.:bolian:

My point was that your standard under discussion here is itself a cheat, given that even people seem to have comparable and/or similar vulnerabilities.
Yes, and exploiting this vulnerability against a PERSON would generate a specific type of reaction that can be used to determine whether he is conscious or not (e.g. various ways of evaluating whether or not a patient is catatonic or in a persistent vegetative state). Testing against a simulated consciousness would elicit a reaction that could be consistent with the behavior of the simulator, or consistent with the behavior of the simulation independently. It should be easy to tell which.
 
You said, "(A)nd an AI is a kind of computer program, by definition."

I am not sure I agree with that. Isn't an AI a former collection of programs and hardware that has become self-aware; in effect, conscious?
 
That sounds horrific. It could very well feel like being blind, deaf, mute and paralyzed. I wouldn't want my brain uploaded into a computer even if it was just a copy, not for anything in the world.
I highly doubt that if it were possible to upload a mind, it would be like being in a sensory deprivation tank. We know that in those cases, your brain just starts making stuff up to sense. I think they would have to figure out a way to simulate the senses in a virtual world (like the Matrix) or put you in a synthetic body (like in Ghost in the Shell).

Otherwise it would just be boring.
You'd lose any track of time without any sensory input. I'm not sure if the mind would actually be active at all. And not sure if you would be able to dream or hallucinate anything. So you probably wouldn't even know of anything until someone plugged sensors into the system.

Actually... how would you keep the artificial brain active? What controls the activity, what decides which region is active, when you are dreaming, or receiving sensory input, etc.? The natural brain hardware does all of that by itself in a way. But the artificial brain would need to emulate that behaviour, and the question is how. It's not as simple as "upload the mind, and it works", I guess.
 
It is unfortunate that you betray a lack of comprehension

Don't worry, I'm done engaging you on this.

You said, "(A)nd an AI is a kind of computer program, by definition."

I am not sure I agree with that. Isn't an AI a former collection of programs and hardware that has become self-aware; in effect, conscious?
There can't be any "former" about it.

If it's ever exactly a "collection of programs and hardware," then that's all it will ever be.

Describing it as a "collection of programs and hardware" means that its behavior could be represented with perfect fidelity by a single monolithic program. If the system lacks this property, then it is not a digital computer system. Theoretically, the only program you ever really need is a universal machine (in the software sense). The operation of a universal machine is to emulate other programs, though, where the emulated programs and their data are represented as data, so it still makes sense to speak of a multitude of possible programs, but that's just a point of view.
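For a sense of what a "universal machine (in the software sense)" looks like in miniature, here's a sketch under obvious simplifying assumptions: one interpreter whose entire job is to run other programs, where those programs are represented as plain data. The three-operation language is invented purely for illustration.

```python
# A minimal "universal machine": one interpreter that runs any program
# expressed as data. The instruction set is a made-up toy language.
def run(program, x):
    """program: list of ('add', n), ('mul', n), or ('neg',) tuples."""
    for op in program:
        if op[0] == 'add':
            x += op[1]
        elif op[0] == 'mul':
            x *= op[1]
        elif op[0] == 'neg':
            x = -x
    return x

# Two different "programs", both just data fed to the same interpreter.
double_plus_one = [('mul', 2), ('add', 1)]
negate = [('neg',)]

print(run(double_plus_one, 5))  # 11
print(run(negate, 3))           # -3
```

One fixed program (`run`) suffices to realize a multitude of behaviors; which "program" is running is just a matter of which data you hand it, which is the point-of-view remark above.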
 
That sounds horrific. It could very well feel like being blind, deaf, mute and paralyzed. I wouldn't want my brain uploaded into a computer even if it was just a copy, not for anything in the world.
I highly doubt that if it were possible to upload a mind, it would be like being in a sensory deprivation tank. We know that in those cases, your brain just starts making stuff up to sense. I think they would have to figure out a way to simulate the senses in a virtual world (like the Matrix) or put you in a synthetic body (like in Ghost in the Shell).

Otherwise it would just be boring.
You'd lose any track of time without any sensory input. I'm not sure if the mind would actually be active at all. And not sure if you would be able to dream or hallucinate anything. So you probably wouldn't even know of anything until someone plugged sensors into the system.

Actually... how would you keep the artificial brain active? What controls the activity, what decides which region is active, when you are dreaming, or receiving sensory input, etc.? The natural brain hardware does all of that by itself in a way. But the artificial brain would need to emulate that behaviour, and the question is how. It's not as simple as "upload the mind, and it works", I guess.
I would imagine you'd keep it active in the same way our brains keep active by giving it something to do. By default it would need sensory input or why would anyone sign up for it? You might as well freeze your head like Walt Disney did according to urban legend.

If I uploaded my brain, I'd plan on actually doing things. Not sit on a shelf waiting for the robot revolution to happen. Maybe go create a digital universe where I control every aspect and the only limit is my imagination or find a way to transmit myself across the actual universe at the speed of light. Maybe read more Charles Stross novels and mine them for ideas.
 
I highly doubt that if it were possible to upload a mind, it would be like being in a sensory deprivation tank. We know that in those cases, your brain just starts making stuff up to sense. I think they would have to figure out a way to simulate the senses in a virtual world (like the Matrix) or put you in a synthetic body (like in Ghost in the Shell).

Otherwise it would just be boring.
You'd lose any track of time without any sensory input. I'm not sure if the mind would actually be active at all. And not sure if you would be able to dream or hallucinate anything. So you probably wouldn't even know of anything until someone plugged sensors into the system.

Actually... how would you keep the artificial brain active? What controls the activity, what decides which region is active, when you are dreaming, or receiving sensory input, etc.? The natural brain hardware does all of that by itself in a way. But the artificial brain would need to emulate that behaviour, and the question is how. It's not as simple as "upload the mind, and it works", I guess.
I would imagine you'd keep it active in the same way our brains keep active by giving it something to do. By default it would need sensory input or why would anyone sign up for it? You might as well freeze your head like Walt Disney did according to urban legend.

If I uploaded my brain, I'd plan on actually doing things. Not sit on a shelf waiting for the robot revolution to happen. Maybe go create a digital universe where I control every aspect and the only limit is my imagination or find a way to transmit myself across the actual universe at the speed of light. Maybe read more Charles Stross novels and mine them for ideas.

It's been theorized that one of the uses for putative "brain uploading" would actually be interstellar space travel: humans upload their minds to a database which is then sent on a thousand-year-long voyage, and those minds are then uploaded to robotic bodies (or re-patterned onto cloned bodies) to begin the expedition. Call it a very extreme form of suspended animation.

Just to be clear, though, even if you COULD upload your brain into a synthetic system, you would still be You, because your existing brain is still experiencing consciousness and has a point of view different from that of the synthetic. You will never actually know what it is like to be an uploaded personality, and you can never know for sure that the copy is even conscious in the same way you are (and I again maintain that it wouldn't be unless the cyberbrain had certain fundamental things in common with a meatbrain).
 
It is unfortunate that you betray a lack of comprehension

Don't worry, I'm done engaging you on this.

You said, "(A)nd an AI is a kind of computer program, by definition."

I am not sure I agree with that. Isn't an AI a former collection of programs and hardware that has become self-aware; in effect, conscious?
There can't be any "former" about it.

If it's ever exactly a "collection of programs and hardware," then that's all it will ever be.

Describing it as a "collection of programs and hardware" means that its behavior could be represented with perfect fidelity by a single monolithic program. If the system lacks this property, then it is not a digital computer system. Theoretically, the only program you ever really need is a universal machine (in the software sense). The operation of a universal machine is to emulate other programs, though, where the emulated programs and their data are represented as data, so it still makes sense to speak of a multitude of possible programs, but that's just a point of view.

Thank you, and I understand your answer well. Wow, it sure is something to get one's mind around (no pun intended). I can see us getting to the computing power and speed needed, in the distant future, but I still wonder about the "intangible"?!
 
I think the biggest hurdle will be during the transfer, when a window pops up and tells them Windows has performed an illegal operation and must now shut down. Or the Blue Screen of Death. Or maybe a ransomware virus; way more than a penny for your thoughts!
 
You'd lose any track of time without any sensory input. I'm not sure if the mind would actually be active at all. And not sure if you would be able to dream or hallucinate anything. So you probably wouldn't even know of anything until someone plugged sensors into the system.

Actually... how would you keep the artificial brain active? What controls the activity, what decides which region is active, when you are dreaming, or receiving sensory input, etc.? The natural brain hardware does all of that by itself in a way. But the artificial brain would need to emulate that behaviour, and the question is how. It's not as simple as "upload the mind, and it works", I guess.
I would imagine you'd keep it active in the same way our brains keep active by giving it something to do. By default it would need sensory input or why would anyone sign up for it? You might as well freeze your head like Walt Disney did according to urban legend.

If I uploaded my brain, I'd plan on actually doing things. Not sit on a shelf waiting for the robot revolution to happen. Maybe go create a digital universe where I control every aspect and the only limit is my imagination or find a way to transmit myself across the actual universe at the speed of light. Maybe read more Charles Stross novels and mine them for ideas.

It's been theorized that one of the uses for putative "brain uploading" would actually be interstellar space travel: humans upload their minds to a database which is then sent on a thousand-year-long voyage, and those minds are then uploaded to robotic bodies (or re-patterned onto cloned bodies) to begin the expedition. Call it a very extreme form of suspended animation.

Just to be clear, though, even if you COULD upload your brain into a synthetic system, you would still be You, because your existing brain is still experiencing consciousness and has a point of view different from that of the synthetic. You will never actually know what it is like to be an uploaded personality, and you can never know for sure that the copy is even conscious in the same way you are (and I again maintain that it wouldn't be unless the cyberbrain had certain fundamental things in common with a meatbrain).
Why is that a problem for so many people? A version of You lives on and as far as it is concerned it is You. There are just two different versions of You, one biological and one digital.
 
That's why you don't use Windows for something so important.

A sad Mac looks so much better on your digital gravestone. And it kinda resembles one.
 
I would imagine you'd keep it active in the same way our brains keep active by giving it something to do. By default it would need sensory input or why would anyone sign up for it? You might as well freeze your head like Walt Disney did according to urban legend.

If I uploaded my brain, I'd plan on actually doing things. Not sit on a shelf waiting for the robot revolution to happen. Maybe go create a digital universe where I control every aspect and the only limit is my imagination or find a way to transmit myself across the actual universe at the speed of light. Maybe read more Charles Stross novels and mine them for ideas.

It's been theorized that one of the uses for putative "brain uploading" would actually be interstellar space travel: humans upload their minds to a database which is then sent on a thousand-year-long voyage, and those minds are then uploaded to robotic bodies (or re-patterned onto cloned bodies) to begin the expedition. Call it a very extreme form of suspended animation.

Just to be clear, though, even if you COULD upload your brain into a synthetic system, you would still be You, because your existing brain is still experiencing consciousness and has a point of view different from that of the synthetic. You will never actually know what it is like to be an uploaded personality, and you can never know for sure that the copy is even conscious in the same way you are (and I again maintain that it wouldn't be unless the cyberbrain had certain fundamental things in common with a meatbrain).
Why is that a problem for so many people? A version of You lives on and as far as it is concerned it is You. There are just two different versions of You, one biological and one digital.

It wouldn't be a problem for me, except insofar as I am not always easy to get along with so me and DigiMe would have to set some ground rules first and then we would probably be able to work together pretty efficiently.

It's just that I will never actually know what it is like to be incarnated as a copied personality, and there will always be that little doubt in the back of my mind as to whether or not the copy of me is really a conscious thinking person or just a behavior pattern mapped onto a really clever AI.
 
It's been theorized that one of the uses for putative "brain uploading" would actually be interstellar space travel: humans upload their minds to a database which is then sent on a thousand-year-long voyage, and those minds are then uploaded to robotic bodies (or re-patterned onto cloned bodies) to begin the expedition. Call it a very extreme form of suspended animation.

Just to be clear, though, even if you COULD upload your brain into a synthetic system, you would still be You, because your existing brain is still experiencing consciousness and has a point of view different from that of the synthetic. You will never actually know what it is like to be an uploaded personality, and you can never know for sure that the copy is even conscious in the same way you are (and I again maintain that it wouldn't be unless the cyberbrain had certain fundamental things in common with a meatbrain).
Why is that a problem for so many people? A version of You lives on and as far as it is concerned it is You. There are just two different versions of You, one biological and one digital.

It wouldn't be a problem for me, except insofar as I am not always easy to get along with so me and DigiMe would have to set some ground rules first and then we would probably be able to work together pretty efficiently.

It's just that I will never actually know what it is like to be incarnated as a copied personality, and there will always be that little doubt in the back of my mind as to whether or not the copy of me is really a conscious thinking person or just a behavior pattern mapped onto a really clever AI.

Not that it'd matter if it/he/whatever was sent on an interstellar voyage. Once it leaves, it's Bon Voyage! I highly doubt you'd see him again. I don't think you'll ever even know if he and the crew reach their destination.

Unless we make incredible advances in interstellar flight in the next few decades.
 