• Welcome! The TrekBBS is the number one place to chat about Star Trek with like-minded fans.
    If you are not already a member then please register an account and join in the discussion!

Are We On The Brink of Creating a Computer With a Human Brain?

Robert Maxwell,

Do you believe in the concept of materialism?

More or less, yes. There are no doubt aspects of the physical universe we have yet to understand, but I don't believe in gods or anything of that sort. Anything--even consciousness--can be explained scientifically, given enough study.
 
Let's attack this from another point of view.

As we can see, as time goes on the simulated brains become more and more sophisticated. Today a slow-running partial brain, tomorrow a faster-running partial animal-level brain. At what point does the simulation become powerful enough to be conscious? It's not like the researchers are going to say, "Right! Today we successfully ran a chimp brain simulation; tomorrow we crank up the Stephen Hawking simulation!"

More than likely it will be more like: "OK, we're up to 10 to the power of X neurons in the current simulation and getting good results; one more order-of-magnitude increase and we'll equal the number of neurons in the human brain. Let's see what happens!"
 
One often overlooked factor is that the human nervous system is effectively part of the brain. That means the input alone can be incredibly complicated, to say nothing of the processing.

Just trying to simulate the inputs to the model is going to be a major challenge in itself.
 
The honest answer is, we won't know if it's conscious unless and until we reach that point, and it all depends on us having a fully accurate model to begin with--which we cannot be entirely certain we do.
 
Robert Maxwell,

More or less, yes.

That's good. Otherwise I'd have figured I was just talking to a wall. If you had said you did not believe in materialism, I would have ignored anything you said after that point in this particular topic, because IMHO it doesn't have a place here.

There are no doubt aspects of the physical universe we have yet to understand

Of course

but I don't believe in gods or anything of that sort.

Agreed. My outlook on the issue is that it would have been nice if there were one, but there isn't, and it's just that simple. I should note that I really don't harbor animus towards people who are religious (unless they were trying to kill me) or who believe in a god -- I don't go around calling believers stupid, crazy, weak-minded, delusional, and all that. I've known many people who did, and it always struck me as caustic and offensive, and not even remotely persuasive.

I should note that even though I do not believe in god, I don't think people should try to play god.

Anything--even consciousness--can be explained scientifically, given enough study.

Technically, IMHO we already have sufficient evidence to explain consciousness as a product of the central nervous system. Some people don't want to listen to this fact, but the evidence is already conclusive.

The honest answer is, we won't know if it's conscious unless and until we reach that point, and it all depends on us having a fully accurate model to begin with--which we cannot be entirely certain we do.

You have people of reasonable intellect raising the ethical issue, so it's not a variable that is completely unknown...


CuttingEdge100
 
Actually, it is an unknown, because we don't know at what point a simulation might attain consciousness--or even that it can do so in the first place.

We may realize we are getting close to such a point, and then would be a better time to grapple with the ethical questions, but at the moment your criticisms are more akin to people talking about landing on the moon and you worrying about the implications of warp drive. It's such a far-fetched scenario at this point that it's not really worth discussing in any practical terms.

It's interesting from a philosophical standpoint, but let's not kid ourselves: science is nowhere near the scenario you're talking about. And by "nowhere near," I mean we're at least decades, if not centuries, away from it. AI is one of those fields where the next big breakthrough is always "just around the corner." To put that in perspective, AI proponents have been saying so since the '60s. And only now--50 years on--have we gotten to the point where we can simulate the mental capacity of an ant. Think on that for a while.
 
Robert Maxwell,

We may realize we are getting close to such a point, and then would be a better time to grapple with the ethical questions,

Actually we're getting fairly close now. Even if we can simulate a badly damaged rat-brain, that's still an amazing feat. We might not be at the point where we can simulate a properly functioning rat or human brain, but we aren't terribly far away.

I think it would be good to begin considering the ethics of this now rather than later.

at the moment your criticisms are more akin to people talking about landing on the moon and you worrying about the implications of warp drive.

Going to the moon vs warp-drive? I think that's a *very* gross exaggeration...


CuttingEdge100
 
Consciousness, then, is mostly good for situations that require particular concentration, or for decisions that demand up-front analysis. It is more of a long-term faculty, trying to account for the big picture. I'm not aware of any animals that have the slightest idea about long-term planning.

That damn ant who wouldn't give me any of his food. :(
 
Robert Maxwell,

We may realize we are getting close to such a point, and then would be a better time to grapple with the ethical questions,
Actually we're getting fairly close now. Even if we can simulate a badly damaged rat-brain, that's still an amazing feat. We might not be at the point where we can simulate a properly functioning rat or human brain, but we aren't terribly far away.

I think it would be good to begin considering the ethics of this now rather than later.

We are not "fairly close now." Did you read anything I said? We are--at a minimum--decades away from having to worry about this. It also rests on a LOT of assumptions, such as the computer model being complete enough to feature emergent properties such as consciousness. We do not know if the model is accurate enough to do that.

With a very powerful supercomputer, they are simulating a brain the size of an ant's, at least 100 times slower than the real thing works. That is not "fairly close." There is a very long way to go.

What ethics would you propose, anyway? "Thou shalt not make a machine in the likeness of a human mind"?

at the moment your criticisms are more akin to people talking about landing on the moon and you worrying about the implications of warp drive.
Going to the moon vs warp-drive? I think that's a *very* gross exaggeration...


CuttingEdge100

No, I'd say it's pretty accurate. You're worrying about something that is quite far off. You keep insisting it's right around the corner. As someone who actually works with computers, understands how they work, and has looked at this research (among others), I'm telling you it's not. If this was really something to worry about right now, there would be scientists coming out asking about the ethical implications of this work. Where are they?
 
Okay, coming back to this to make the point a little clearer. Let's assume this particular simulation has linear complexity, meaning the processing time increases by a set amount as the number of neurons grows. In computer science, this is a pretty good case. You want something that is at least no worse than linear complexity. Logarithmic or even constant complexity is preferred, but rarely possible. So, let's just assume it's linear.

Since they can simulate 10,000 neurons "two orders of magnitude" more slowly than the real thing, it is minimally 100 times slower than reality. So, let's divide 10,000 by 100 and assume that it could accurately simulate 100 neurons in real-time. Again, this is best-case we're talking about.

Next, let us assume Moore's Law holds--the notion that computing power doubles every 18 months. Doing the math, it would take about 27 cycles (486 months) or 40.5 years to reach the point where it could simulate 11,000,000,000 neurons in real-time--the number of neurons in a human brain.

However, this rests on some very generous assumptions, such as the continued progression of computer power, and that the per-unit processing power required for 11,000,000,000 neurons won't be any higher than it was for 10,000. It may be possible to make efficiency gains to offset any disadvantages, but what I'm saying here is that 40 years is a highly optimistic estimate from extrapolating the available data.
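The extrapolation above can be sketched as a quick back-of-envelope calculation. This is only a sanity check of the post's own arithmetic under its stated assumptions (linear complexity, a 100x slowdown on 10,000 neurons, Moore's Law doubling every 18 months, and roughly 11 billion neurons in a human brain); none of these figures are measurements.

```python
import math

# Assumptions taken from the post, not from measured data:
simulated_neurons = 10_000        # neurons in the current simulation
slowdown = 100                    # "two orders of magnitude" slower than real time
target_neurons = 11_000_000_000   # rough neuron count of a human brain
months_per_doubling = 18          # Moore's Law, as stated above

# Best case: linear complexity means a 100x slowdown on 10,000 neurons
# is equivalent to simulating 100 neurons in real time.
realtime_neurons = simulated_neurons / slowdown

# Number of compute doublings needed to close the gap.
doublings = math.ceil(math.log2(target_neurons / realtime_neurons))
months = doublings * months_per_doubling

print(doublings, months, months / 12)  # 27 doublings, 486 months, 40.5 years
```

This reproduces the figures in the post: 27 doublings, 486 months, about 40.5 years -- and as noted, that is the optimistic end of the estimate.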
 
Robert Maxwell,

We are not "fairly close now." Did you read anything I said? We are--at a minimum--decades away from having to worry about this.

At most we are a decade or two away probably...

What ethics would you propose, anyway? "Thou shalt not make a machine in the likeness of a human mind"?

What I would propose? I think it should be considered unethical, if not outlawed to create an artificially sentient being for experimentation purposes.

No, I'd say it's pretty accurate. You're worrying about something that is quite far off.

Honestly I think it's a decade away. I've even heard scientists say that by 2020 or 2030 we'll have the capability.
 
Honestly I think it's a decade away. I've even heard scientists say that by 2020 or 2030 we'll have the capability.

In 1960, scientists were saying we'd have intelligent machines in 10 years or so. A few of them are always saying that.

One day they may be right, but it won't be in 2020.
 
Robert Maxwell,

We are not "fairly close now." Did you read anything I said? We are--at a minimum--decades away from having to worry about this.
At most we are a decade or two away probably...

What ethics would you propose, anyway? "Thou shalt not make a machine in the likeness of a human mind"?
What I would propose? I think it should be considered unethical, if not outlawed to create an artificially sentient being for experimentation purposes.

No, I'd say it's pretty accurate. You're worrying about something that is quite far off.
Honestly I think it's a decade away. I've even heard scientists say that by 2020 or 2030 we'll have the capability.

Yeah, the scientists saying these things are always looking for endorsement and grant money, you know. :)

Read my previous post, which tries to put some numbers to this issue.
 
Honestly I think it's a decade away. I've even heard scientists say that by 2020 or 2030 we'll have the capability.

In 1960, scientists were saying we'd have intelligent machines in 10 years or so. There was a lot of excitement about AI when it was new, with everyone certain we'd have the thing licked in no time.

Well, it didn't turn out that way. And here's the thing--no one is even trying anymore. Not for real intelligence. The goal these days is to push AI to be better at making basic decisions, such as whether a given image contains a car or a boat. This has nothing to do with intelligence in the human sense--it's just about mathematical patterns, state space search, and logical inference.

There are still a few researchers in the "if we mimic humans maybe we'll get similar results" mindset, but the vast majority have given up on that angle as fruitless, and decided it's more useful to push computers towards the stuff they're actually good at, rather than trying to force them to be something they're not.

I just checked around the website of the group doing this simulation. From their Frequently Asked Questions page:
What computer power would you need to simulate the whole brain?


The human neocortex has many millions of NCCs. For this reason we would need first an accurate replica of the NCC, and then we will simplify the NCC before we begin duplications. The other approach is to convert the software NCC into a hardware version - a chip, a blue gene on a chip - and then make as many copies as one wants.
The number of neurons varies markedly in the neocortex, with values between 10-100 billion in the human brain down to millions in small animals. At this stage the important issue is how to build one column. This column has 10-100,000 neurons depending on the species and particular neocortical region, and there are millions of columns.
We have estimated that we may approach real-time simulations of an NCC with 10,000 morphologically complex neurons interconnected with 10^8 synapses on an 8-12,000 processor Blue Gene/L machine. To simulate a human brain with around millions of NCCs will probably require more than proportionately more processing power. That should give an idea how much computing power will need to increase before we can simulate the human brain at the cellular level in real time. Simulating the human brain at the molecular level is unlikely with current computing systems.
To what extent will the computer give the same response as actual living brain tissue?


Our goal is not to build an intelligent neural network, but to replicate in digital form the NCC as accurately as possible. We will perform similar experiments on the virtual NCC as on the actual NCC, and keep doing this until the virtual NCC behaves, in as many ways as we can measure, precisely the same as the actual NCC. Once this replica is built, we will be able to do experiments that normally take us years and are prohibitively expensive or too difficult to perform. This will greatly accelerate the pace of research.
Do you believe a computer can ever be an exact simulation of the human brain?


This is neither likely nor necessary. It will be very difficult because, in the brain, every molecule is a powerful computer and we would need to simulate the structure and function of trillions upon trillions of molecules as well as all the rules that govern how they interact. You would literally need computers that are trillions of times bigger and faster than anything existing today. Mammals can make very good copies of each other, we do not need to make computer copies of mammals. That is not our goal. We want to try to understand how the biological system functions and malfunctions so that this knowledge can benefit mankind.
http://bluebrain.epfl.ch/page18924.html#1
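The FAQ's own numbers give a rough lower bound on the hardware gap. A quick sketch, under loose assumptions: one NCC of ~10,000 neurons needs roughly 10,000 Blue Gene/L processors (midpoint of the quoted 8-12,000), and I'm taking "many millions of NCCs" at the low end of one million; since the FAQ says scaling is "more than proportionately" expensive, treat this as a floor, not an estimate.

```python
# Assumptions (interpreting the FAQ, not quoting it):
procs_per_ncc = 10_000   # midpoint of the quoted 8-12,000 processor range
ncc_count = 1_000_000    # low end of "many millions of NCCs"

# Linear scaling gives a lower bound; the FAQ says the real cost
# grows "more than proportionately", so the truth is worse than this.
lower_bound_procs = procs_per_ncc * ncc_count

print(f"{lower_bound_procs:,} processors, minimum")  # 10,000,000,000 processors, minimum
```

Ten billion processors as a floor, against a machine of around ten thousand, is another way of seeing why "fairly close" is a stretch.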

It's a highly impressive project, no doubt. But I don't think we need to be talking about ethics at this point.
 
Lindley,

It's a highly impressive project, no doubt. But I don't think we need to be talking about ethics at this point.

Why not? I think it would be nice to discuss the ethical ramifications of the issue before the actual problem comes to pass...


CuttingEdge100
 
If we do manage to create a sentient intelligence, yes, we should make an effort to respect it as an intelligent being. However, if we fail to recognize it as such, it won't be the end of the world. It's not like we're immediately a horrible species because we made a mistake. Accidentally killing an artificial intelligence in the course of research is not the same as deliberately killing someone for no good reason--something humans do all the time.

So, let's hope any such intelligence is communicative enough to say, "Wait a minute! Don't kill me, bro!"
 
Robert Maxwell,

The purposes for creating a sentient being are not just to understand how the brain works, they also want to be able to simulate the effect of various brain diseases on this simulated brain.

This would be tantamount to giving birth to a baby and then giving it a disease to see how it runs its course. This has been done in the past by certain doctors lacking medical ethics, and it was viewed as totally repugnant.

As for shutting off the simulation and either never turning it back on, or deleting the software and destroying the hardware: that would be morally tantamount to murder. Creating it, experimenting on it, and then doing that would be like a mother giving birth to a child in order to experiment on it, then killing it. If a mother did that, she would get the harshest penalty possible in the state where she resides, which in certain states could include death.

As for your statement that killing a simulation of a sentient being would not be as morally repugnant as killing a human, that's simply not true. If you had a sentient being, it wouldn't matter if it was a computer or a human being. It's sentient. That's the issue.

CuttingEdge100
 