• Welcome! The TrekBBS is the number one place to chat about Star Trek with like-minded fans.
    If you are not already a member then please register an account and join in the discussion!

The scientist planning to upload his brain to a COMPUTER

^ The required fidelity of a simulation may turn out to be much lower than expected. For example, you "see" with your brain, not with your eyes. The brain blanks out the rapid movements of the eyes, known as "saccades," producing a smooth perception of the world. The low frame rate of 24 fps film—even when converted through 3:2 pulldown to higher-refresh-rate video—may be why some people feel that film is "more natural." The staccato motion just at the threshold of perception may be similar to saccades.

I've read many VFX white papers and interviews with VFX artists. One common concept is that there can be "too much detail" in an image. For example, the 8"-high action figures used in the cloud cars for THE EMPIRE STRIKES BACK, or the mine-car figures in INDIANA JONES AND THE TEMPLE OF DOOM, looked like low-detail papier-mâché dolls in person, yet looked like full-scale people in the on-screen effects. Dolls with too much detail tended to look fake. The VFX artists learned that the mind fills in lots of detail. All they had to do was provide "enough" detail to set the brain in motion.
 
^ The required fidelity of a simulation may turn out to be much lower than expected. For example, you "see" with your brain, not with your eyes. The brain blanks out the rapid movements of the eyes, known as "saccades," producing a smooth perception of the world. The low frame rate of 24 fps film—even when converted through 3:2 pulldown to higher-refresh-rate video—may be why some people feel that film is "more natural." The staccato motion just at the threshold of perception may be similar to saccades.

All very good points, which is part of the reason I believe that ultra-lifelike simulations of real people can be just as easily accomplished with a cleverly designed AI. Come to think of it, that was the basic premise of "Caprica" a few years back: some computer genius with shadowy funding developed what was essentially a search engine that could hunt down and collect all the data relevant to a person's life and compile it into a duplicate personality that more or less approximated the original. An interesting plot point was the fact that the simulated personalities didn't have identities of their own until their originals were killed; the last piece of data they assimilated was the circumstances of the original's death, after which they started to DRAMATICALLY diverge from the original pattern.

This, again, mirrors the "Betas" in the Revelation Space novels. It's not actually that hard to create an AI that behaves like a person, it's just significantly harder to create an AI that behaves like a SPECIFIC person.

I've read many VFX white papers and interviews with VFX artists. One common concept is that there can be "too much detail" in an image. For example, the 8"-high action figures used in the cloud cars for THE EMPIRE STRIKES BACK, or the mine-car figures in INDIANA JONES AND THE TEMPLE OF DOOM, looked like low-detail papier-mâché dolls in person, yet looked like full-scale people in the on-screen effects. Dolls with too much detail tended to look fake.
Uncanny valley.

Actually, it's probably the case that AI simuloids would only be socially acceptable to the extent that they DO NOT attempt to pass themselves off as the real McCoy. As soon as people find out that there's a weird creepy digital version of themselves sneaking around, the screenwriting AIs will start turning out the "Killed and replaced by evil robot doppelgänger" screenplays and the industry will take a nosedive.
 
http://www.cnn.com/2015/01/21/tech/mci-lego-worm/

Some small though rather significant steps

Called the Open Worm Project, the research brings together scientists and programmers from around the world with the aim of recreating the behavior of the common roundworm (Caenorhabditis elegans) in a machine.
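For anyone wondering what "recreating the behavior in a machine" means at the level of code, here's a toy sketch (my own illustration, NOT OpenWorm's actual model, which uses far richer biophysics) of a leaky integrate-and-fire neuron, a drastically simplified version of the units such simulations are built from:

```python
# Toy leaky integrate-and-fire neuron -- an illustration only, not
# OpenWorm's real code. The membrane potential leaks toward zero,
# integrates input current, and "spikes" when it crosses a threshold.

def simulate_neuron(input_current, dt=1.0, tau=10.0, threshold=1.0):
    """Return the time steps at which the neuron fires."""
    v = 0.0          # membrane potential
    spikes = []
    for t, i in enumerate(input_current):
        v += dt * (-v / tau + i)   # leak toward 0, integrate input
        if v >= threshold:         # fire and reset
            spikes.append(t)
            v = 0.0
    return spikes

# A constant drive produces regular spiking:
print(simulate_neuron([0.15] * 50))  # -> [10, 21, 32, 43]
```

OpenWorm's point is that C. elegans has only 302 neurons, so even with much more detailed models than this one, the whole nervous system is within reach of ordinary computers.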

Academic Journal Features 'Mind Uploading' For The First Time(circa 2012)

http://www.33rdsquare.com/2012/06/academic-journal-features-mind.html

Just a matter of time folks.

RAMA
 
I've actually countered both those arguments multiple times.
And your counters were found to be flawed every single time. For example:

When one "S" curve finishes, it continues to the next paradigm.
Which doesn't address the fact that the NEW paradigm of exponential growth may not have anything whatsoever to do with the power of computers. It could just as easily be an exponential growth in the power density of rechargeable batteries. You keep assuming that the paradigm shift would reestablish the previous growth curve of computing power when there is zero reason to believe it would do anything of the kind (it's unlikely that it would, actually, given that it is a PARADIGM SHIFT rather than a "momentary interruption of an orderly pattern").


You do realize this quote you just cut and pasted doesn't contain any actual DATA, right? These are suppositions based on vague generalities. Even Intel's "estimates" are exactly that, and you didn't even bother to directly reference them.

A post with the following link relevant to software issues:

A govt report from advisors on the rate of software growth, disproving the software misconception
Found that the QUANTITY of software applications is growing at an exponential rate. The power and capabilities of those applications (not to mention the efficacy thereof) are another question entirely.

The implication in Moore's law isn't merely that computers are getting more complex. It's that they're getting MORE POWERFUL for a given size. The government study you cited found the exact opposite is true of software: software applications are growing in number and in complexity even as their capabilities remain relatively unchanged. In fact, as a lot of users have begun to suspect, it's increasingly becoming the case that newer software applications are actually LESS powerful than their predecessors, because their core programming -- that is, what the application has been designed to do -- has become bogged down with extra features that add complexity to the overall package and draw greater resources than the core activity actually requires. We're fast approaching the point where you can't even run a basic word processor without high-speed internet, a cloud server, and at least 3 GB of unused RAM.

This has also been explained to you as part of the discussion as to why we probably shouldn't be so impressed by the explosion of recorded data on storable medium, given how much of that data consists of cat gifs and twitter posts.

We've already proved consciousness is derived from the brain, so your statement that personality can't be uploaded is supposition.
No it's a plain statement of fact, as your statement directly implies:
Consciousness is derived from the brain.

Consciousness is not known to be derived from computers, ergo there is no evidence that a simulation of a brain would actually achieve consciousness.

Similarly:
Alcohol is derived from fermentation.

Alcohol is not known to be derived from computers, ergo there is no evidence that a simulation of fermentation would actually produce real alcohol.

These are facts, not supposition. Furthermore, I already conceded that AIs will get better and better at emulating humans and that a high-fidelity imitation of human consciousness is perfectly achievable even with EXISTING technology. With clever enough programming, you can get an AI to imitate a human, or even for that matter a SPECIFIC human. Simulated consciousness, like simulated alcohol, is far from impossible.

But that is NOT brain-uploading. It is not, in fact, even the most efficient use for an AI, and is unlikely to ever be anything more than a really creepy, off-putting novelty by trans-humanists.

Yes, the quote (not cut and pasted, btw) was a generality. In order to get the full details I suggest reading the book, as I have suggested many times. There are a variety of other articles I've looked at that I could have posted, but didn't, so as not to be as link-heavy as I have been; but basically the numbers for ultimate computing power are quite high, and the statement I made remains the same. The upper level of exponential growth has a limit, but it is extremely high. This is not really open for dispute, as both physicists and mathematicians agree with Kurzweil on this point.

Your other answers are wrong on several counts, not the least which is the consciousness claim, which recently has been proven. Look it up.

You are also wrong on the software claim you make; the government did in fact report on the increasing sophistication, complexity, and exponential nature of software. Here I defer to Kurzweil in his counter to Paul Allen (wherein he also counters many of the claims YOU make):

Allen then goes on to give the standard argument that software is not progressing in the same exponential manner of hardware. In The Singularity Is Near, I address this issue at length, citing different methods of measuring complexity and capability in software that demonstrate a similar exponential growth. One recent study (“Report to the President and Congress, Designing a Digital Future: Federally Funded Research and Development in Networking and Information Technology” by the President’s Council of Advisors on Science and Technology) states the following:

“Even more remarkable—and even less widely understood—is that in many areas, performance gains due to improvements in algorithms have vastly exceeded even the dramatic performance gains due to increased processor speed. The algorithms that we use today for speech recognition, for natural language translation, for chess playing, for logistics planning, have evolved remarkably in the past decade … Here is just one example, provided by Professor Martin Grötschel of Konrad-Zuse-Zentrum für Informationstechnik Berlin. Grötschel, an expert in optimization, observes that a benchmark production planning model solved using linear programming would have taken 82 years to solve in 1988, using the computers and the linear programming algorithms of the day. Fifteen years later—in 2003—this same model could be solved in roughly one minute, an improvement by a factor of roughly 43 million. Of this, a factor of roughly 1,000 was due to increased processor speed, whereas a factor of roughly 43,000 was due to improvements in algorithms! Grötschel also cites an algorithmic improvement of roughly 30,000 for mixed integer programming between 1991 and 2008. The design and analysis of algorithms, and the study of the inherent computational complexity of problems, are fundamental subfields of computer science.”

I cite many other examples like this in the book. [3]
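As a quick sanity check on the arithmetic in the quoted passage, the two contributing factors do multiply out to the combined figure:

```python
# Decomposing the ~43-million-fold speedup cited in the PCAST report:
hardware_gain = 1_000     # faster processors, 1988 -> 2003
algorithm_gain = 43_000   # better linear-programming algorithms
total_gain = hardware_gain * algorithm_gain
print(f"{total_gain:,}")  # 43,000,000 -- the "factor of roughly 43 million"

# Equivalently: 82 years of 1988-era solving compressed to about a minute
minutes_1988 = 82 * 365.25 * 24 * 60
print(f"{minutes_1988 / total_gain:.2f} minutes in 2003")  # ~1.00 minute
```

The numbers are internally consistent, which is separate from the question (disputed below) of what the 82-year baseline actually measured.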

RAMA
 
Your other answers are wrong on several counts, not the least which is the consciousness claim, which recently has been proven. Look it up.

The point you seem to be missing is that "biological robot" proponents believe in a purely materialistic world, while another camp believes that there is more to "mind" than the electro-chemical processes of the brain. You can quote all the computing power diagrams you want—no one doubts that a high performance AI will eventually be developed. But neither camp is likely to sway the other any time in the foreseeable future.

I'd be most interested in any claim that consciousness has been conclusively "proven" to be solely of the brain. Many biologists once believed that DNA was a "blueprint" for an organism. (It's definitely a data storage and handling system, but much more than a "blueprint.")

"Mind is more than brain" proponents also have empirical evidence for such belief. And we can be just as sketchy when debating on a sci-fi forum. ("Look it up.")
 
Yes, the quote (not cut and pasted btw) was a generality.
Accepted.

Where's your DATA?

In order to get the full details I suggest reading the book as I have suggested many times.
If you're referring to Kurzweil's books, you'll have to be more specific; I've read SEVERAL of his books but stopped considering them credible sources after their predictions repeatedly failed (sort of like Gerard K. O'Neill's "The High Frontier").

You've peddled quite a few books on this forum, but they all have one thing in common: "possessed of highly abstracted reality. Lovely visions, little data."

There are a variety of other articles that I've looked at...
I'm not looking for articles. I'm looking for DATA.

See, you're very excited about what you read, and you are very good at seeking out reading material that excites you. But at some level you really need to accept that you being excited about something and you knowing about something are two completely different things.

Unless, of course you're a science fiction writer, in which case they are interchangeable.

Your other answers are wrong on several counts, not the least which is the consciousness claim, which recently has been proven.
As a matter of fact, it has:

New Scientist said:
ONE moment you're conscious, the next you're not. For the first time, researchers have switched off consciousness by electrically stimulating a single brain area.

Scientists have been probing individual regions of the brain for over a century, exploring their function by zapping them with electricity and temporarily putting them out of action. Despite this, they have never been able to turn off consciousness – until now.

Although only tested in one person, the discovery suggests that a single area – the claustrum – might be integral to combining disparate brain activity into a seamless package of thoughts, sensations and emotions. It takes us a step closer to answering a problem that has confounded scientists and philosophers for millennia – namely how our conscious awareness arises.

Many theories abound but most agree that consciousness has to involve the integration of activity from several brain networks, allowing us to perceive our surroundings as one single unifying experience rather than isolated sensory perceptions.

Consciousness is derived from the brain. It thus follows that a non-brain thought engine will not produce consciousness.

You are also wrong on the software claim you make, the goverment did in fact report on the increasing sophistication, complexity, and exponential nature of software...
Reading the original again, I'm AGAIN seeing that their report covers both the size and quantity of software apps. They made no attempt to measure the processing throughput of those applications with respect to the processing power of their given platform, nor COULD they, since that kind of test would require an experimental setup and could not be conducted meta-study style.

It again remains the fact that both the size and quantity of software applications ARE increasing exponentially (a direct consequence of the exponential growth of computers), while the QUALITY of those applications does not strongly correlate with that growth.

Here I defer to Kurzweil
Which you should probably stop doing if you want anyone to take you seriously, especially considering that the government report containing the Grötschel quote originally obtained that figure from a PowerPoint presentation. In the PowerPoint -- as well as in the lecture that accompanies it (which I have had on file for a couple of years, though I honestly do not remember why) -- Grötschel points out that a COMPUTER would be able to solve that model in about 28 days. 82 years is the figure for a human doing it by hand.

More to the point, Grötschel's presentation covers the practical use of algorithmic methods in modeling and problem solving. The government report YOU cited doesn't address algorithmic process -- which is purely mathematical and is NOT progressing exponentially -- nor does it analyze the relationship between improved processing algorithms (e.g. search engines, speech recognition, data mining, etc) and software capability. They are simply counting beans and mining for quotes, much as you are doing now.

And I've said it before, I'll say it again: you're better off making your OWN mistakes instead of mindlessly repeating Kurzweil's.
 
Great posts, Crazy Eddie. You have incredible patience.

The key point Singularitans miss constantly is that computers can only do what they are programmed to, and the state-of-the-art in AI research is absolutely nowhere near "creating human-like consciousness in a computer." People who promote Singularity theology assume computers will be capable of novelty simply by virtue of being very powerful and there is simply no reason to believe this.

There are times when computers do things their programmers don't initially understand, but upon analysis, it is always determined that the computer behaved exactly as programmed and in no way behaved "creatively." It simply sorted through its available data with its imbued code and solved the tasks it was given according to those constraints.

The bottom line is, computers aren't magic, programming them isn't magic, and people who actually work closely with them for a living see Singularity nonsense for what it is.
 
Great posts, Crazy Eddie. You have incredible patience.

The key point Singularitans miss constantly is that computers can only do what they are programmed to, and the state-of-the-art in AI research is absolutely nowhere near "creating human-like consciousness in a computer." People who promote Singularity theology assume computers will be capable of novelty simply by virtue of being very powerful and there is simply no reason to believe this.

There are times when computers do things their programmers don't initially understand, but upon analysis, it is always determined that the computer behaved exactly as programmed and in no way behaved "creatively." It simply sorted through its available data with its imbued code and solved the tasks it was given according to those constraints.

The bottom line is, computers aren't magic, programming them isn't magic, and people who actually work closely with them for a living see Singularity nonsense for what it is.

Indeed, this is the issue I have with the singularity hypothesis myself. Making a computer appear to be conscious is actually quite a bit of heuristic sleight of hand and clever designing. Chatbots, etc.; I've written some pretty damn convincing ones over the years just to fuck with people.

From the outside looking in, you're tempted to wonder if the chatbot doesn't really possess some rudimentary understanding of human language, somewhere in the fuzzy space between lines of code; but if you slice it open and actually READ the code and see the subroutines that are used to generate those responses, suddenly the magic is gone. It's not consciousness, it's just math; it's an elaborate card trick performed on a microprocessor.
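For anyone who wants to see that "card trick" laid bare, here is a minimal ELIZA-style sketch (a hypothetical toy of my own, not any poster's actual bot) showing the pattern-and-template approach the classic chatbots used:

```python
import random
import re

# A minimal ELIZA-style chatbot: regex pattern -> canned reply templates.
# Reading the rules makes it obvious there is no "understanding" here --
# just string matching and slot-filling.
RULES = [
    (r"i feel (.*)",  ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"i think (.*)", ["What makes you think {0}?"]),
    (r"(.*)\?$",      ["Why do you ask?", "What do you think?"]),
    (r".*",           ["Tell me more.", "I see. Go on."]),
]

def respond(message):
    text = message.lower().strip()
    for pattern, templates in RULES:
        m = re.match(pattern, text)
        if m:  # first matching rule wins; echo captured text into the reply
            return random.choice(templates).format(*m.groups())

print(respond("I think computers are magic"))
# -> "What makes you think computers are magic?"
```

From the outside, the echoed fragments can feel like comprehension; from the inside, it's four regexes and a fallback.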

This is the point I was making upthread, actually, and I would say it's at least partially based on experience: it is not actually that hard to program a machine to simulate consciousness, if that's what you're into. But genuine consciousness of the type that is generated by a human brain cannot be generated by a computer. This has nothing to do with the limitations of computers or the complexity of brains, but merely because computers are not brains.

Put another way: You cannot upload your brain to a computer for the same reason you cannot build a working spaceship entirely out of legos. There are things a real spaceship needs in order to function in space, just as there are things a real brain needs in order to generate consciousness.
 
First of all thank you for taking the time to do all this Eddie. And secondly, thank for you saving the topic and bringing some actual hard info into it.
 
Consciousness is derived from the brain. It thus follows that a non-brain thought engine will not produce consciousness.

I'm assuming that by "non-brain thought engine" you mean anything that is not a living, conscious, biological brain. But if a molecule-resolution scan of this living, conscious brain can be accomplished, and then modeled in a virtual environment, what prevents the vaunted claustrum, to which is attached so much hope for solving a seemingly unsolvable question, from being modeled as well? If consciousness is a purely physical process, then why wouldn't an accurate model of the part of the brain that supposedly orchestrates consciousness be able to, well, orchestrate consciousness?

I could buy into the argument if the dualists are right and consciousness has one foot in the brain and one in some other reality altogether. Hard to simulate that. But that isn't what I understand you to be saying here.
 
Consciousness is derived from the brain. It thus follows that a non-brain thought engine will not produce consciousness.

I'm assuming that by "non-brain thought engine" you mean anything that is not a living, conscious, biological brain. But if a molecule-resolution scan of this living, conscious brain can be accomplished, and then modeled in a virtual environment, what prevents the vaunted claustrum, to which is attached so much hope for solving a seemingly unsolvable question, from being modeled as well?
Nothing whatsoever.

It's just that modeling the claustrum -- and the entire brain along with it -- will only result in a MODEL of consciousness. That is, it will be able to reproduce the external behavior of consciousness, but none of the genuine internal processes that cycle through it to form the EXPERIENCE of being conscious. Because the model is based on an algorithmic process that calculates independently the next appropriate state of every element in the simulation, there is no "stream of consciousness" taking place; the simulation is just a series of discrete states that are calculated by a computer from one moment to the next.

To be absolutely clear here: the SIMULATION isn't actually doing anything; it isn't thinking, it isn't responding, it isn't acting or reacting. The COMPUTER is in control of the simulated elements, and the COMPUTER is making the simulation act in a way consistent with the simulation's parameters. No matter how detailed the simulation is, it will never have personality states of its own.
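Schematically, the stepwise process being described is just an ordinary simulation loop (a toy sketch, not an actual brain model): the computer calculates the entire next state from the current one, and nothing "flows" between steps except stored numbers.

```python
# Schematic of what a brain simulation actually executes: the computer
# advances a state vector from one discrete step to the next.

def step(state, update_rule):
    """Compute the entire next state from the current one."""
    return [update_rule(x, state) for x in state]

def run(initial_state, update_rule, steps):
    state = initial_state
    history = [state]
    for _ in range(steps):
        state = step(state, update_rule)  # nothing persists between steps
        history.append(state)             # except what the computer stores
    return history

# Toy update rule: each element relaxes halfway toward the overall mean
rule = lambda x, s: 0.5 * x + 0.5 * (sum(s) / len(s))
print(run([1.0, 0.0, 0.0], rule, 3)[-1])
```

Whether such a sequence of computed snapshots could ever amount to experience is, of course, exactly the philosophical question being argued.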

Visualize this from the point of view of a SENTIENT computer. The AI is self-aware and knows it has been asked to simulate the brain states of a human being. From the AI's point of view, this is a lot like asking a human being to act a part in a movie. On some level, Chris Pine really "becomes" James T. Kirk when he slips into that role and uses his characterization skills to make that character do things and say things on screen (the end result being a performance in the finished product of the movie), but James T. Kirk is not and will never be a real person with genuine consciousness of his own. James T. Kirk is actually a simulation produced by a combination of Chris Pine's acting skills and Bad Robot's VFX/editing systems. By the exact same token, a molecule-scale simulation of Chris Pine's brain will never produce a conscious simulation of Chris Pine. It will produce more or less exactly the same thing the REAL Chris Pine produced: a very convincing moving picture of a non-real person.

If consciousness is a purely physical process, then why wouldn't an accurate model of the part of the brain that supposedly orchestrates consciousness be able to, well, orchestrate consciousness?
Because a computer model is a mathematical process, NOT a physical one.
 
The Kardashians? Wow. If you say so. Zhuangzi, Plato, Descartes and a few others along the way have contemplated the question as well. As far as your assessment of relative intellectual depth, I'll have to take your word on what you're reading these days. There only need to be two turtles.
 
Nor will school yard 'I know you are...' quips avail you in trying to avoid Crazy Eddie's points. Your question was simply a rabbit hole of extreme solipsism. And appeals to hoary wisdom won't change that.
 
Nope. I'm just tired of all the certainty being thrown around in this thread -- on both sides of the question. What is consciousness? Do you know? I mean KNOW? If you don't, don't behave as you do. A ton of words have been written in this thread without answering the core question of whether a scientist can even "upload his brain to a computer." Pompous strutting by know-it-alls who pretend to know what isn't yet known, and who draw conclusions on the basis of things whose basis can't yet be ascertained. Be a little more fucking respectful of the opinion with which you disagree, and don't think you can cow everyone into submission with half-facts spoken double loud.
 
Nope. I'm just tired of all the certainty being thrown around in this thread -- on both sides of the question. What is consciousness? Do you know? I mean KNOW? If you don't, don't behave as you do.
The sciences work fine and need no recourse to metaphysical conundrums.

A ton of words have been written in this thread without answering the core question of whether a scientist can even "upload his brain to a computer".
Your stridency seems to indicate you have an answer one way or the other. Yet you fail to be convincing. Or was that solipsistic paradox your idea of a clever response to both sides? One doesn't require koans in the sciences, either.

Pompous strutting by know-it-alls who pretend to know what isn't yet known, and who draw conclusions on the basis of things whose basis can't yet be ascertained. Be a little more fucking respectful of the opinion with which you disagree, and don't think you can cow everyone into submission with half-facts spoken double loud.
Well, that wasn't terribly convincing.
 