• Welcome! The TrekBBS is the number one place to chat about Star Trek with like-minded fans.
    If you are not already a member then please register an account and join in the discussion!

Are We On The Brink of Creating a Computer With a Human Brain?

Sure, why not? It could bring tremendous scientific benefits, including helping us understand the nature of our own consciousness. There are various fields that could benefit, and I don't think we should allow a specious argument from morality to stop it. Even supposing we managed to create a self-aware computer, it's not a human being. Considering we don't have a problem killing real, live animals, it is peculiar that we would object to shutting off a synthetic intelligence.
 
Queen of the Borg,

Agreed


Robert Maxwell,

Because a perfect copy of a human brain would have the characteristics of a human brain and it would be tantamount to killing a human

Oh yeah we shouldn't allow an argument from ethics and morality to get in the way of research. Are you kidding me?


Sojourner,

Agreed
 


Are you agreeing on the irony of your views in this thread in light of the OP?
 
Robert Maxwell,

Because a perfect copy of a human brain would have the characteristics of a human brain and it would be tantamount to killing a human

Oh yeah we shouldn't allow an argument from ethics and morality to get in the way of research. Are you kidding me?

For the millionth time, a perfect copy would be made of actual brain tissue. No one here has advocated that. You cannot make a "perfect copy" out of bits and bytes. You just can't. It wouldn't be a copy, then; it would be a substitute, a simulation, a facsimile.
 
Robert Maxwell,

If you made the simulation accurate enough that it would work like a human brain and would possess consciousness
 
You have no way of knowing that. Neither does anyone else. We shouldn't halt research on the prospect of a very remote possibility.
 
How can you not?

Granted, if we could create a design capable of converging ever-closer to the functioning of a real human brain, such that with sufficient time and computing power it could become arbitrarily similar in function and in input and in output, then after some point one would expect it to become essentially human.

But that's a fantasy. Not even remotely possible given our current understanding. Furthermore, it's circular reasoning: "If we can make an artificial human, it would be an artificial human."

In order to make any predictions about the likelihood of true AI coming out of a given experiment, you have to delineate the experiment in far more concrete terms than just "make something modeled after a human brain." That isn't specific enough to say what the outcome will be. Given the billions of possible interpretations of that phrase, and the very small number of them (if any) which would result in true AI, I don't see how (given that vague description) one can call the likelihood of true AI resulting anything more than "remote".

Anything more would be so incredibly speculative that it isn't worth discussing until you or someone else comes up with a link to these folks' actual proposal document. Assuming it's not classified, of course.
 
Lindley,

Say I took a computer and made a device that mimicked its exact layout and the way it functioned -- how would it *not* function just like the original?

A human brain is basically a computer, a computer made out of protein and fat, but still basically a computer. If you copy its configuration and functioning, why would it *not* act just like the original?
 
That's the wrong question to ask.

The right one is: If a 1950s engineer got his hands on a 2009 high-end laptop and he wanted to reproduce it, and he tried to duplicate its exact layout and the way it functioned using components of the time, is he likely to be able to play Crysis?

The answer, of course, is: maybe. I rather doubt it, actually. But he'd certainly have a much better chance of doing so than we would of duplicating the exact workings of a human brain using silicon and copper.

As I said before, the hypothetical is circular since it presumes the possibility of creating a perfect simulation in the first place. That just isn't technically practical with today's tech, and there's no way of knowing when it will be since we don't even fully understand the thing we're trying to copy.
 
Lindley,

That's not really an accurate assessment.

Have you ever heard of a concept called "Reverse engineering"?

Here's roughly how it works: you take apart a piece of engineering, copy it in every way imaginable, then assemble what you copied exactly the way the original piece of engineering was constructed.

Basically, the idea is that if you copy it close enough it works the same way.
 
Lindley,

That's not really an accurate assessment.

Have you ever heard of a concept called "Reverse engineering"?

Here's roughly how it works: you copy the way another piece of engineering works, with the idea that if you copy it closely enough, it works the same way.

We have been trying to reverse engineer the human brain for some time now. Problem is, without a good metaphor for how the brain works, it is very difficult to implement any kind of accurate simulation.

The problem is not simulating the electrochemical processes within the brain. That stuff all falls within physics and we more or less understand it. What we don't have a good understanding of is how this massive collection of neurons, interacting through chemicals and electricity, gives rise to the emergent property of consciousness. We just don't know how that works, at all.
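To make that "we more or less understand the physics" level concrete, here is a minimal sketch of a single simulated neuron using a leaky integrate-and-fire model. This is my own illustration of the kind of electrochemical dynamics being discussed, not the model the Blue Brain project actually uses (they use far more detailed compartmental models); all parameter values are arbitrary.

```python
# Minimal leaky integrate-and-fire neuron: an illustrative sketch of the
# "well understood physics" level of simulation. Parameters are arbitrary
# illustration values, not taken from any real project.
def simulate_lif(current=1.5, steps=1000, dt=0.1,
                 tau=10.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Return spike times (ms) for a neuron driven by a constant input current."""
    v = v_rest
    spikes = []
    for i in range(steps):
        # membrane potential decays toward rest and is pushed up by the input
        dv = (-(v - v_rest) + current) * (dt / tau)
        v += dv
        if v >= v_thresh:        # threshold crossed: record a spike, reset
            spikes.append(i * dt)
            v = v_reset
    return spikes

spikes = simulate_lif()
print(len(spikes), "spikes in", 1000 * 0.1, "ms")
```

Simulating one such neuron is trivial; the hard part, as the post says, is how billions of them wired together give rise to consciousness, which no equation above touches.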

Psychologists are actually very interested in the work done by AI researchers, because producing a reasonable model of a human brain opens up a lot of research possibilities. However, a model is not the real thing--that's why it's a model. We aren't even close to having a good model, much less a prototype of the real thing.
 
Robert Maxwell,

Well, apparently the "Blue Brain" project did a pretty amazing job with the Rat Brain. Have you even read about their work?

They copied half a brain in every detail and ran it in a simulation. They even said they actually detected brain activity that would be considered consistent with thoughts. They copied it sufficiently accurately that the simulation was able to fucking think...
 
Are humans, in fact, conscious? Or is the concept a social construct to control behavior? Perhaps the belief in consciousness is one of those hard-wired concepts that somehow improved species survival, much like group belief in god(s) or "there must be something more". It need not be true to improve survival, just as it improved survival to believe there is always a lion in the grass when movement is perceived.

Perhaps consciousness is a conceit. As something undefinable, it certainly cannot be duplicated or simulated. Once you put solid definitions to it, you realize other species possess it to some degree, yet we have no issues turning them "off" as needed.

Certainly, consciousness is not a binary thing; it must exist on a spectrum. Even among humans, we certainly are not equally conscious, are we? When an AI matches the least of us, is that enough to consider?

We are roughly the same humans as existed 10, 20 thousand years ago. Perhaps consciousness comes with more free time. Then, we spent all our time surviving; consciousness was not an issue. Now we have a lot of time to contemplate our navels... so consciousness seems related to free time, in which case our pets are who we should be watching, specifically when we make apes pets/servants ;)
 
Robert Maxwell,

Well, apparently the "Blue Brain" project did a pretty amazing job with the Rat Brain. Have you even read about their work?

They copied half a brain in every detail and ran it in a simulation. They even said they actually detected brain activity that would be considered consistent with thoughts. They copied it sufficiently accurately that the simulation was able to fucking think...

I went to their site and had a look at their information.

What they have right now is a model that functions at the level of a severely brain-damaged mouse. Over 8000 CPUs working together are capable of simulating a small part of a mouse brain, two orders of magnitude slower than the real thing. Yep, sounds to me like we're on the verge of simulating a human brain!

It's starting to look like you're the one who didn't do your research. Electrical impulses--simulated or not--are not our baseline for intelligence, or even life itself.

If it's artificial life you're worried about, by the way, you might want to be aware that Richard Dawkins created an artificial life program back in the '80s that displayed some interesting emergent properties. His artificial creatures didn't think, but they did reproduce and evolve.
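"Reproduce and evolve" is a surprisingly small amount of machinery. The following toy sketch is not Dawkins's actual biomorphs program, just a minimal selection loop of my own invention: bit-string genomes mutate and are selected toward an all-ones target.

```python
import random

# Toy illustration of "reproduce and evolve" -- NOT Dawkins's biomorphs,
# just a minimal mutate-and-select loop. Genomes are 20-bit strings;
# fitness is the count of 1-bits; the fitter half survives each round
# and reproduces with random bit-flip mutation.
random.seed(42)

def fitness(genome):
    return sum(genome)

def mutate(genome, rate=0.05):
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    survivors = population[:15]          # elitism: best half survives intact
    population = survivors + [mutate(p) for p in survivors]

best = max(fitness(p) for p in population)
print("best fitness after 50 generations:", best, "out of 20")
```

Nothing in that loop "thinks", but the population reliably climbs toward the target, which is the sense in which simple artificial creatures can evolve.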

In any case, just because you want to call these simulated impulses "thoughts" doesn't mean they are. Their system simulates about 10,000 neurons. For comparison, that's about how many an ant has. And, as I said above, it's two orders of magnitude slower than the real deal. That means, at a minimum, it's 100 times slower than a real one. So, pardon me if I'm not very worried about something that has the intellectual capacity of a crippled insect.
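The scale argument above can be made concrete with back-of-envelope arithmetic. The 10,000 neurons and the roughly 100x slowdown come from this thread; the ~86 billion neuron count for a human brain is an outside, commonly cited estimate and is an assumption here.

```python
# Back-of-envelope gap between the simulation described above and a human
# brain. sim_neurons and sim_slowdown are the figures from this thread;
# human_neurons is an outside estimate (~86 billion) and an assumption.
sim_neurons = 10_000
sim_slowdown = 100          # runs ~100x slower than real time
human_neurons = 86_000_000_000

neuron_gap = human_neurons / sim_neurons
combined_gap = neuron_gap * sim_slowdown
print(f"Neuron count gap: {neuron_gap:,.0f}x")       # 8,600,000x
print(f"Combined gap (count x speed): {combined_gap:,.0f}x")
```

On these assumptions the simulation is short of real-time human scale by nearly nine orders of magnitude, which is the quantitative version of the "crippled insect" point.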

There is, however, great scientific value to be found in such a model, as it can help us understand the workings of our own brains, and of brains and nervous systems in general. Your brand of alarmism is really unhelpful, and as someone else said, it seems to be informed by B-movies more than real science.

Are humans, in fact, conscious? Or is the concept a social construct to control behavior? Perhaps the belief in consciousness is one of those hard-wired concepts that somehow improved species survival, much like group belief in god(s) or "there must be something more". It need not be true to improve survival, just as it improved survival to believe there is always a lion in the grass when movement is perceived.

Cogito, ergo sum, I suppose. :) There is some debate over exactly what consciousness is, to be sure. One of the more interesting theories I've read about is that of the "user illusion." Basically, your conscious awareness lags about half a second behind reality, that being the time it takes for your body to sense a stimulus and have it processed by your brain.

What we think of as our consciousness doesn't always have a say in what we do--there are times when the brain reacts too quickly or can make decisions in the absence of conscious awareness. Think reflexes. Have you ever experienced highway hypnosis? I have. It's actually kind of interesting to think you can turn over the mechanics of a mundane, repetitive task to a lower level of brain function, leaving your consciousness free to do something else.

That's why I think of it as a kind of "subprocess," rather than a "superprocess." It doesn't really control all the other activities in your brain, but rather observes and interacts with them. You can think of it almost like the user interface of a computer, which is actually where the term "user illusion" comes from.

Perhaps consciousness is a conceit. As something undefinable, it certainly cannot be duplicated or simulated. Once you put solid definitions to it, you realize other species possess it to some degree, yet we have no issues turning them "off" as needed.

Since we understand consciousness so poorly, I would agree that we'd have a very hard time simulating it. It seems like an emergent property, but at what point it emerges from a simulated collection of neurons, we have no idea. It's possible it won't emerge at all. There is some thought that the brain itself is more akin to a quantum computer, and therefore there are processes involved that we can't even model accurately, much less simulate on a significant scale.

I agree, though, that if we have no problem killing animals--for food, research, or any other purpose besides self-defense--it does come off as a bit reactionary to worry about "killing" a simulated ant.

Certainly, consciousness is not a binary thing; it must exist on a spectrum. Even among humans, we certainly are not equally conscious, are we? When an AI matches the least of us, is that enough to consider?

Good point. Not only does it exist on a spectrum, there are those whose consciousness operates very differently from the rest of us. Schizophrenics, people with autism, etc. all interact with the world from a different frame of reference. And there are indeed those whose brains don't work so well, limiting their understanding of the world and their ability to control their interactions with it.

I suspect a large amount of what we do is pretty automatic. Smacking the alarm clock when you wake up in the morning to turn it off, taking a shower--how many of us really give that much thought to hygiene processes? Odds are it's very automatic, and you can mentally "check out" for the duration because it just doesn't require any real concentration.

Consciousness itself is mostly good, then, for situations that require particular concentration or making decisions that require up-front analysis. It is more of a long-term creature, trying to account for the big picture. I'm not aware of any animals that have the slightest idea about long-term planning.

We are roughly the same humans as existed 10, 20 thousand years ago. Perhaps consciousness comes with more free time. Then, we spent all our time surviving; consciousness was not an issue. Now we have a lot of time to contemplate our navels... so consciousness seems related to free time, in which case our pets are who we should be watching, specifically when we make apes pets/servants ;)

I did see one theory that is almost like Darwinism in reverse--instead of nature selecting those who are the "fittest," nature actually selects those that are the most efficiently lazy. If you think about it, a creature that can minimize the amount of time it spends hunting for food is indeed rewarded for being lazy. If you can provide for yourself in 1/10 the time of the next species, you have a distinct advantage. This also gives you time to rest and relax, and if you have a surplus of energy--play. And animals do play.

Humans, I think, have taken this notion to quite an extreme. :)
 
Thanks for the thoughtful reply, Maxwell. I do tend to shotgun my thoughts. I don't disagree with any particular point of your response, except maybe...

'Survival of the fittest' is a great short-form statement for evolution, but the "for the situation" part is often forgotten, and fittest, as you point out, can mean many, often contradictory, things depending on the situation. I would think "efficiency" could work to a species' advantage, unless that efficiency is too great, wiping out the food source. Being overly cautious can be a survival trait, or a killer, again based on the situation. I bring these up because we developed many traits as hunter/gatherers based on false information that improved survivability, and consciousness may be one of those.

It seems your main issue is that this experiment was orders of magnitude slower and smaller than a human brain. I think we already have the capability to "brute force" this problem just by adding more power, but not the will (or the $) to do so. Even so, it will take less iron to brute-force this solution every year. At the same time, folks will be looking for elegant solutions, and the breakthrough will occur when the two approaches join. I've read somewhere the timeline for this is about 2040.
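The brute-force timeline can be sanity-checked with a quick calculation. Assume a combined performance gap of ~8.6e8x (the 10,000-neuron, 100x-slower figures from this thread combined with an outside ~86-billion-neuron estimate, so the gap itself is an assumption) and a Moore's-law-style doubling of compute every two years:

```python
import math

# Sketch of the "brute force will get there" argument. The ~8.6e8x gap
# and the 2-year doubling period are both assumptions, not figures from
# any cited study.
gap = 8.6e8
doubling_period_years = 2.0

doublings_needed = math.log2(gap)
years = doublings_needed * doubling_period_years
print(f"{doublings_needed:.1f} doublings, roughly {years:.0f} years")
```

Under these assumptions the gap takes roughly 60 years of hardware scaling alone to close, so a 2040 date would require either faster-than-Moore hardware progress or exactly the algorithmic "elegant solutions" the post mentions.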

While it is interesting, I don't think there have been any definitive studies indicating any quantum computing in the human brain, or that things on a quantum level even influence "real life" (as it were). But time, again, will tell. It may be there actually is a Loki, and quantum mechanics are his joke on physicists.

We don't need to know the inner workings of the brain in order to simulate it. We only need to understand the input and the output, and duplicate those. As I said earlier, early on we will simply brute-force this and break it down into smaller pieces. During this effort, two things will occur: 1) patterns will become apparent, and more elegant algorithms will be developed; and 2) brute force will become smaller and faster, allowing more of it to be applied to the problem.

It will happen, and sooner than most expect. But, will we recognize it, or argue it away?
 