• Welcome! The TrekBBS is the number one place to chat about Star Trek with like-minded fans.
    If you are not already a member then please register an account and join in the discussion!

The scientist planning to upload his brain to a COMPUTER

This is a good point, but I wonder if it would be better to say that we have evolved in the personal sense, or moved on, rather than died, if you see what I mean. The "new" us has as its base all of the previous usses (I think I just made up a word). We wouldn't be what we are today without having been what we were before.
 
...
What are you even basing this on? If we copy a person so perfectly that the copy believes that they are that person and seem self-aware, then how are they not self-aware?

For two major reasons:

First, emulating/simulating a thing is very different from making the thing itself. You can create a flight simulator as complex, precise, and realistic as you want; it will never BE A PLANE!
Making a simulation of a person is a different feat from creating an identical copy. It would be more like recreating a particular flight in exact detail. It still isn't the original flight, but if you could somehow enter the digital replica you would never know the difference. It's only different because we know the origin of the second version. We're not talking about creating a mere mask of a person; this is everything that makes a person unique being copied. Otherwise, what would even be the point of it?

Second, self-awareness, whatever it is, is obviously something extremely complex and elaborate. Before we can reproduce it, we have to understand exactly how it works and why it appears in some creatures and not in others. (We know for certain that some creatures, though possessing a working brain, are not self-aware.)

Before we get to that stage, we are about as likely to create self-awareness as to create a working car by randomly throwing mechanical pieces up in the air and hoping that they'll fall into place.
No one other than the OP thinks we're close to this. But just because we don't know how something works does not mean that it's impossible to understand. It's possible that in time we will learn how consciousness works and be able to recreate it. We don't currently know how complicated it is. It may be impossible, or there may really be nothing special going on: we're just a collection of instincts and processes reacting to stimuli that, through some mutation, acquired a false sense of self.

But really until we know more, it's just speculation at best.
 
This is possible, but keep in mind that it wouldn't be the persons themselves but copies, which in all likelihood wouldn't even be self-aware. They would mimic self-awareness to a T, but deep down they would be as dead as a stone.

Actually we have no way to know that. We do not even have a theoretical framework for how this would be accomplished. I tend to think, on a philosophical level, that even if we master artificial intelligence to the point of self-awareness, a remote brain upload would at best be a copy of the person being uploaded, someone you yourself could actually have a conversation with. However, I think it could be as alive as anything if self-awareness = alive.

However, I am not as philosophically certain that replacing portions of the brain, a bit at a time with perfectly functioning pieces would result in just a copy. Rather, I think it is possible that it could still end up being you. There would be no loss of continuity in your consciousness and whatever changes this would cause would be more like the changes caused by your brain aging and maturing than a change that makes it not 'you'. In both cases though, I think the possibility for self awareness is there. I don't see why not.
On a philosophical, totally unproven level, I think that the me of, say, one year ago (just an example, not an actual figure) is dead / no longer exists; he's been replaced gradually by the current me, one little bit at a time, so slowly that he didn't notice it until it was too late. I don't believe in a continuity of the ego. In people with a serious degenerative illness like Alzheimer's it is obvious: they are not the people we used to know. In us it is not so obvious, but that doesn't make it any less real.

So don't fear death, because you've died already many times since the day you were born and you don't even know it.



I certainly agree that a purely physical change to the brain can change or destroy a person.

Also, the me of now is a different me than I was the last time I was in this conversation on this board a few years back, but it is an evolving me with a continuity from point A to B to where I am now. I think that if we could, hypothetically, extend life for hundreds of years and slowly replace very small portions of your brain over a span of, say, 200 years (an arbitrarily long number), then the resulting you would be no different from the you that results when your body naturally replaces itself over time.
 
We can remove half the brain in seizure patients. Based on the children it's been done to, it doesn't really affect their memory or personality. Slowly replacing portions of the brain with artificial replacements could actually work. Have various sections slowly take over the functions of the remaining biological portion until it can be completely replaced. You may never notice any difference, at least until you can pick up wifi.
 
Makes you wonder if the same method might work on some of our half-brained politicians. :D
 

Considering that our half-brained politicians don't consider you and me their constituents, no. They are corrupt, paid off, and they answer to the rich and powerful, so unless you mean some sort of mind control, don't count on it. :p
 
I've actually countered both those arguments multiple times.
And your counters were found to be flawed every single time. For example:

When one "S" curve finishes, it continues to the next paradigm.
Which doesn't address the fact that the NEW paradigm of exponential growth may not have anything whatsoever to do with the power of computers. It could just as easily be an exponential growth in the power density of rechargeable batteries. You keep assuming that the paradigm shift would reestablish the previous growth curve of computing when there is zero reason to believe it would do anything of the kind (it's unlikely that it would, actually, given that it is a PARADIGM SHIFT rather than a "momentary interruption of an orderly pattern").

On the paradigm issue, I continually mentioned the upper limits of exponential growth. I said that it is finite but didn't have the time to find and post data, so I made a special effort to find it:

Firstly, Intel has repeatedly pushed back the year in which Moore's law would end, now into the 2020-2030 range. Another prediction is 2040. Lawrence Krauss pushed that possible date to 600 years from now based on "rigorous estimation of total information processing capacity of any system in the universe." So as Kurzweil states: "There are limits to the power of information technology, but these limits are vast." He estimates the capacity of matter and energy to support computation at up to 10^70 cps in one kg of matter, and 10^90 cps for the universe, which matches independent analysis. He predicts we will reach those limits in the 22nd century. Even if we use a smaller percentage of the total available matter in the solar system, at 1/20th of 1% we still get values that easily allow for the power needed for a Singularity. In other words, we don't need to come close to maxing it out!
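For scale, the quoted figures can be sanity-checked with some quick arithmetic. This is a rough sketch only: the 10^70 cps-per-kilogram limit comes from the quote above, and the ~10^16 cps per human brain is Kurzweil's own functional-emulation estimate, not a measured value.

```python
# Back-of-envelope check of the quoted capacity figures.
KG_LIMIT_CPS = 1e70    # quoted ultimate capacity of 1 kg of matter (cps)
BRAIN_CPS = 1e16       # Kurzweil's estimate for emulating one human brain
POPULATION = 1e10      # round figure: ~10 billion people

fraction_used = 0.0005  # "1/20th of 1%" from the quote
available = fraction_used * KG_LIMIT_CPS
all_human_brains = BRAIN_CPS * POPULATION

print(f"available:  {available:.1e} cps")         # 5.0e+66
print(f"all brains: {all_human_brains:.1e} cps")  # 1.0e+26
print(f"headroom:   {available / all_human_brains:.1e}x")  # 5.0e+40
```

Even 1/20th of 1% of a single kilogram's claimed limit would exceed every human brain on Earth combined by some forty orders of magnitude, which is the sense in which "we don't need to come close to maxing it out."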
You do realize this quote you just cut and pasted doesn't contain any actual DATA, right? These are suppositions based on vague generalities. Even Intel's "estimates" are exactly that, and you didn't even bother to directly reference them.

A post with the following link relevant to software issues:

A govt report from advisors on the rate of software growth, disproving the software misconception
It found that the QUANTITY of software applications is growing at an exponential rate. The power and capabilities of those applications (not to mention their efficacy) is another question entirely.

The implication of Moore's law isn't merely that computers are getting more complex. It's that they're getting MORE POWERFUL for a given size. The government study you cited found the exact opposite is true of software: software applications are growing in number and in complexity even as their capabilities remain relatively unchanged. In fact, as a lot of users have begun to suspect, it's increasingly the case that newer software applications are actually LESS powerful than their predecessors, because their core programming -- that is, what the application has been designed to do -- has become bogged down with extra features that add complexity to the overall package and draw greater resources than the activity in question actually requires. We're fast approaching the point where you can't even run a basic word processor without high-speed internet, a cloud server, and at least 3GB of unused RAM.

This has also been explained to you as part of the discussion as to why we probably shouldn't be so impressed by the explosion of recorded data on storable medium, given how much of that data consists of cat gifs and twitter posts.

We've already proved consciousness is derived from the brain, so your statement that personality can't be uploaded is supposition.
No, it's a plain statement of fact, as your own statement directly implies:
Consciousness is derived from the brain.

Consciousness is not known to be derived from computers, ergo there is no evidence that a simulation of a brain would actually achieve consciousness.

Similarly:
Alcohol is derived from fermentation.

Alcohol is not known to be derived from computers, ergo there is no evidence that a simulation of fermentation would actually produce real alcohol.

These are facts, not supposition. Furthermore, I already conceded that AIs will get better and better at emulating humans and that a high-fidelity imitation of human consciousness is perfectly achievable even with EXISTING technology. With clever enough programming, you can get an AI to imitate a human, or even for that matter a SPECIFIC human. Simulated consciousness, like simulated alcohol, is far from impossible.

But that is NOT brain-uploading. It is not, in fact, even the most efficient use for an AI, and is unlikely to ever be anything more than a really creepy, off-putting novelty for trans-humanists.
 
Science fiction is here to remind us that sometimes the impossible is precisely what happens, and all the things people thought WOULD happen turn out to be red herrings.

You completely missed my last paragraph, didn't you? And science fiction doesn't "remind us" of anything, history does.
You have it backwards: history doesn't remind us of anything; it is what other things (like science fiction, sometimes) remind us of. For example, a politician in his discourse will remind us of some historical moment: "Four score and seven years ago..."
Which is an HISTORICAL reference, not a scientific or a fictional one, hence you will usually find that quote -- and the rest of the speech, for that matter -- in a HISTORY book, not a science fiction book.

Also, science fiction is, by its very nature, fiction. None of what is depicted in science fiction novels will ever ACTUALLY happen, because it is fiction. Something SIMILAR to those stories may occur eventually, but we would once again read about those things in history books, not science fiction books.

H.G. Wells may have written a science fiction book about the use of atomic weapons, but he never wrote anything about the bombings of Hiroshima and Nagasaki and even less about the long term effects of radiation poisoning.
 

The only way to create an actual copy of a person the way the OP describes is to replicate the brain itself and then pattern that brain to carry the same mental states as the original on both an electrical and a biochemical level. This WOULD be "brain uploading" in the sense that you are loading patterns from one brain into another. In fiction, I've dabbled with the idea of biomimetic "artificial brains" that function similarly enough to the Real McCoy that they can be used as replacement tissue in traumatic brain injuries; a brain constructed ENTIRELY of this artificial tissue would be an artificial brain and could theoretically be patterned with the characteristics of an existing person.

Computers can't do this, however, because computers are not brains. It's not a question of "too hard to do" or of human understanding or anything. It's like if I hand you a bucket of sand and ask you to build a snowman. You can't do it. Not because of any lack of snowman-building skill or sand-shoveling skill, but because building a snowman requires the presence of SNOW, which I have not given you.
 
To quote TOS: "Brain and brain, what is brain?"

The notion of p-code or some kind of brain dump running on a hypothetical CPU seems a long way away.

If the VM is buggy or wrong (even if it's supplied with a perfect copy of a brain), the outcome might follow the laws of increased entropy and bad-things-happening, or some form of the aging process.

Humans can deal naturally with things like paradox, which is something we know binary computers cannot cope with without hacks, so I'm not sure future tech could cover enough of the fuzzy logic required to emulate what humans take for granted.

Quantum computers may help a great deal with those types of problems, and holographic storage may be dense enough to emulate our own neural net. Even if the hardware is built and ready I'm not sure humans alone could build an elegant programming language capable of constructing the modeling required to approach the question of 'what is brain?' let alone the answer...

Turning that process over to some automated AI brain scanner sounds like the solution, but if a computer is asking the question "what is brain?" won't the resulting outcome be more artificial than what we humans seem to desire? Plus how *do* we know if the machines know what they're doing, really? If they are almost as smart as humans at that point, then they're probably just as clueless as we are, too. :-P

"You take chicken, for example, maybe they couldn't figure out what to make chicken taste like, which is why chicken tastes like everything"

A disciplined mind like Spock's may be more suitable for a brain-to-computer transfer, or at least provide a cleaner example to base a computerized brain model on. Human minds will need a lot more discipline and a lot of Shaolin-monk-like meditation before they're ready for digital.

Personally, instead of brain transfer I'd opt for more stable, external upgrades. I'd love to see some form of digital storage that can be accessed by our noggins for recalling boring things like large tables of numbers and where I left my keys.
640k should be enough for everybody!

Oh, and check out movies like Ex Machina, Transcendence, The Machine, etc. for more ideas; those are the last few flicks I saw on the topic. The subject seems to be coming back into popularity again.
 
Personally, instead of brain transfer I'd opt for more stable, external upgrades. I'd love to see some form of digital storage that can be accessed by our noggins for recalling boring things like large tables of numbers and where I left my keys.
640k should be enough for everybody!
I'm telling you now, we're going to get to that point at least a century before we get anywhere near brain transfer/uploading. The technology for artificial augmentation of a living brain isn't that far off NOW, and we understand -- at least conceptually -- how to do it. More importantly, the technology only needs to work properly the way it's designed, and the brain can be trained/conditioned to interface with it properly.
 

Yes, but sometimes it takes years and even decades for harmful side effects to become noticeable, and by then it's often too late for the trailblazers.
 
Which is why they're still in clinical trials right now, so they can study the potential side effects as they begin to manifest.

Actually those trials have been ongoing for a number of years and are making some pretty rapid progress.
 
I think the statement upthread that it is wrong to expect the emergence of consciousness from some advanced computer capable of massive parallel processing is correct. Far more likely, in my opinion, is the whole-brain emulation that Kurzweil predicts, within the virtual environments predicted by Nick Bostrom. Molecule-level MRIs of living brains that are then modeled and deciphered, given virtual bodies similarly the product of such MRIs, and given virtual universes within which to exist. Twenty years, forty years... I don't know when. But there are no physical obstacles to this kind of emulation that I know of. If reality will be emulated to that degree of fidelity, why wouldn't the beings created within it manifest the same characteristics as beings in the reality being emulated? So the question becomes one not of creating an artificial intelligence out of some powerful computer, but of high-fidelity scanning of existing physical structures to which we attribute the quality of consciousness, and then having the raw computing power to model what is scanned.
 
I think the statement upthread that it is wrong to expect the emergence of consciousness from some advanced computer capable of massive parallel processing is correct. Far more likely, in my opinion, is the whole-brain emulation that Kurzweil predicts, within the virtual environments predicted by Nick Bostrom. Molecule-level MRIs of living brains that are then modeled and deciphered, given virtual bodies similarly the product of such MRIs, and given virtual universes within which to exist.
This requires an enormously high degree of scanning fidelity, modeling fidelity, and a sufficiently reliable software environment to retain those scans. While possible on a purely conceptual level (though not with an MRI or any diagnostic technology that currently exists), this still wouldn't give rise to genuine consciousness. It would give rise to a simulation of what the computer predicts a genuinely conscious being with such-and-such parameters would do next, given such-and-such a circumstance. That is, you would be able to scan the brain of John, put John in a simulated room, place a naked Winona Ryder in front of John, and the computer would, knowing everything there is to know about John, be able to tell you exactly how John is going to react to this situation.

The reason it will never derive genuine consciousness is that the only meaningful elements of the simulation are its inputs (the brain-scan data and environmental factors being simulated) and its outputs (the behavior being simulated). For all practical purposes the simulation is no different from a recording, in that it only captures the BEHAVIOR of John and displays those behaviors in repeatable format. At the end of the day it's just a highly complicated algorithm whose operational logic is that of a digital system and not, as you would imagine, the logic of a functional human brain. You could even put Sim John through a Turing Test all the same, and he'd probably pass it, but you can know as a matter of fact that the data process that creates Sim John's answers is not the same as the original John's, even if their answers are identical.

To expand on the recording analogy: I make a video of me asking you questions and you make a video of you answering those questions. Even if we play those videos in synch, side by side, on two separate TV screens, the image of me is not actually talking to the image of you.

why wouldn't the beings created within it manifest the same characteristics as beings in the reality being emulated?
Because there's no practical reason to emulate ALL of the characteristics of an existing human being. Unless, of course, you're planning to conduct social experiments of a type that would be unethical if you did them to real people; even in that case, a much lower-fidelity model would still be acceptable.

The practical uses for super-intelligent AIs include things that do not require humanlike intelligence or even, for that matter, sentience. When you consider that ultra high-fidelity brain scan emulators are also going to be stupendously expensive and difficult to develop, you then have to think of some real-world application that would justify that expense beyond mere transhumanist fascination.

So the question becomes one not of creating an artificial intelligence out of some powerful computer, but of high fidelity scanning of existing physical structures to which we attribute the quality of consciousness and then having the raw computing power to model what is scanned.
And why would we bother to do this in the first place, when artificial intelligence is a LOT easier to create, a lot more efficient at what we want it to do, and a lot less expensive to work with?
 
And why would we bother to do this in the first place, when artificial intelligence is a LOT easier to create, a lot more efficient at what we want it to do, and a lot less expensive to work with?

I am not going to presume to answer for the future, but I assume those cheap AIs you mention might be interested in doing some of that unethical experimentation you also mention, just to better understand from whence they came. Or my grandchildren might want to run ancestor simulations. Or the same kind of folks that play Sim City now would play a much more sophisticated form of Sim City a hundred years from now.

As far as fidelity goes, isn't our universe "granular" if you take the Planck scale to be a void? We have precisely one planet rendered with exquisite detail (when observed), with a practically uncrossable gulf between our star and others. There are characteristics of what we call reality that might be characterized as a low-fidelity model, from a certain perspective. That's the key: just what you consider "high fidelity" has a lot to do with your capabilities.

And as for the eventual cost of ultra-high fidelity brain scan emulators, I don't know. If you'd asked me in 1985 how much a hand held computer phone with millions of times the storage and speed of my Sanyo MBC 555 would eventually cost, I'd have said the same thing.
 
And why would we bother to do this in the first place, when artificial intelligence is a LOT easier to create, a lot more efficient at what we want it to do, and a lot less expensive to work with?

I am not going to presume to answer for the future, but I assume those cheap AIs you mention might be interested in doing some of that unethical experimentation you also mention, just to better understand from whence they came.
lol what?

They would actually get better results by googling "history of AI research" and then reading in depth the biographies and/or autobiographies of the individual researchers themselves.

But that proceeds from the incredibly bizarre assumption that AIs would have any reason to "understand from whence they came." Unless somebody is developing a computerized philosophy teacher, such understanding is totally irrelevant to the tasks they are programmed to carry out.

Or my grandchildren might want to run ancestor simulations.
Which, again, would be more easily accomplished by a well-programmed AI.

Or the same kind of folks that play Sim City now would play a much more sophisticated form of Sim City a hundred years from now.
Which, AGAIN, would be more easily accomplished by a well-programmed AI.

This is almost like suggesting that some day production companies are going to do away with television shows and replace them with shape-shifting androids that give live performances in your living room. It's actually a LOT simpler to just film and broadcast a performance. More importantly, with the inevitable growth of computing power it may soon be simpler to use CG animation than actors, especially once you can get a computer to render a truly lifelike human form in a truly lifelike environment.

Just to be clear on something: nobody is ever going to develop an AI with the intention of acting out a part in a screenplay. They WILL, however, develop an AI that is capable of writing a screenplay, based on the data the AI collects on just what kind of storylines sell best with audiences and what parameters (budget, subject matter, run time, rendering capabilities) it has to work with.

This AI may very well be asked to write a screenplay examining the origins of machine intelligence and to predict what the future may hold for AI evolution. A smart AI will probably download the entire collected works of Isaac Asimov and Robert Heinlein and dozens of others, indexing references to AI and machine intelligence and cross-reference themes from sci-fi/action movies of the 20th and 21st centuries to come up with a combination of winning story elements; it'll run a use-case analysis to avoid (what have become) tired cliches and it'll scan research articles and metadata to look for effective ways of inverting/subverting/modifying existing tropes in order to produce original material. At some point it'll produce a rough draft of its screenplay for the producers, who will like some of its ideas and ask for a rewrite on others.

The one thing such an AI is never going to do, at ANY level of sophistication, is examine its origins just for curiosity's sake. Even were it to do so, that "origin data" would be just that: data. Information it can use to accomplish its task.

As far as fidelity goes, isn't our universe "granular" if you take the Planck space to be a void?
Yes, but the universe doesn't suffer compression losses or single-bit errors when moving objects from one place to the next. Nor does the universe have a finite bandwidth for data transmission or a finite capacity for storage. So there is a practical upper limit to how much fidelity you need in order to simulate a particular thing.

When a child jumps into a snowbank, that action has billions upon billions of effects, from the macroscopic to the subatomic and everything in between. If your goal is to model the interaction between toddlers and snowbanks, then the toddler and the snowbank only need to be modeled with enough fidelity to capture the macroscopic behavior of the snowbank and the toddler who dives into it. If your goal is to model the behavior of individual snowflakes, then that's the lower limit of your simulation's rendering. You could, with a large enough computer, probably model the behavior of individual atoms within the snowflakes and the individual atoms within the toddler to get an almost-realistic simulation of that interaction, but even THAT simulation cannot account for all possible interactions since its data set grows more limited the further down you go in scale.

Now consider that the simulation you're talking about only has precise data on human brains. That means you can model the states of the scanned brain down to the limit of the scanner's resolution (which is inevitably much lower than reality, the uncertainty principle being what it is). Since NOTHING ELSE in that simulation is so precisely modeled, for all practical purposes the only thing you've simulated is a disembodied brain in a digital jar (like the "Betas" in Alastair Reynolds' "Revelation Space" novels. It's a good way of preserving the knowledge and experience of living people, but everyone knows Betas aren't real people).

And as for the eventual cost of ultra-high fidelity brain scan emulators, I don't know. If you'd asked me in 1985 how much a hand held computer phone with millions of times the storage and speed of my Sanyo MBC 555 would eventually cost, I'd have said the same thing.
And yet the Sanyo MBC 555 cost a little under a thousand dollars when it was first released. The capabilities of such machines have increased a thousandfold over the years, but the class they belong to -- desktop computers -- hasn't gotten any cheaper.

The kinds of systems that could handle the simulations you're talking about would be the futuristic equivalents of IBM's Watson or Deep Blue. And yes, it would (and will) be highly interesting to see those computers managing to produce realistic simulations of existing people along with all the knowledge and experience they once possessed. But by that time, nobody will be wondering anymore if the simulated personalities are genuinely conscious or not, since the AIs that created them clearly aren't.
 