• Welcome! The TrekBBS is the number one place to chat about Star Trek with like-minded fans.
    If you are not already a member then please register an account and join in the discussion!

Are We On The Brink of Creating a Computer With a Human Brain?

Maxwell,

If you produce an exact copy of a human brain you will cross that point.


CuttingEdge100

As FordSVT said, unless it's a biological replica you're in very fuzzy territory here.

To put it bluntly, we are nowhere near artificial sentience. Progress in AI over the last 50 years, in terms of creating truly human-like intelligence, has been abysmal. We've managed to create expert systems that are very good at what they do--data mining, playing chess, stuff like that. But non-deterministic systems that can approach problem-solving the way humans do? Not even close. In fact, most of the last 50 years was spent on the dead-end of expert systems. Emergent, evolutionary systems have gained some traction in the past decade or so but they are still infantile and have no practical use.
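To make the expert-systems point concrete: these systems are just deterministic rule-following. Here's a toy forward-chaining sketch (the rules and fact names are invented purely for illustration, not from any real system) -- note how it can only ever conclude what its hand-written rules already contain:

```python
# A minimal forward-chaining rule engine: rules fire mechanically whenever
# their conditions are all present in the fact set. Nothing here learns,
# generalizes, or reasons outside what its author wrote down.
RULES = [
    ({"has_feathers", "lays_eggs"}, "is_bird"),
    ({"is_bird", "cannot_fly"}, "is_penguin_candidate"),
]

def forward_chain(facts, rules):
    """Apply rules repeatedly until no new fact can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain({"has_feathers", "lays_eggs", "cannot_fly"}, RULES)
print(sorted(derived))
```

It's very good at its narrow job and utterly incapable of anything else, which is the 50-year dead end in a nutshell.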

If we ever do manage to create a truly sentient AI, we'd be far more interested in duplicating it than destroying it.
 
Maxwell,

If you produce an exact copy of a human brain you will cross that point.


CuttingEdge100

An exact copy of a human brain would be a biological human brain grown in a vat, which wouldn't be very useful outside of medicine. Anything less isn't an exact copy; and an exact copy is not superior to our own brains, and has all their strengths and weaknesses. You might as well pull one out of a hobo. Human brains are not circuit boards and logic gates; there are many chemical, electrical, and quantum-level processes going on in our brains that do not map directly onto things you can do with silicon, aluminum, gold, and copper.

Which is a good point: if there is something innate about the unknown biological factors at play in our minds that creates "consciousness," we might never create truly sentient beings, merely ones that can simulate it. Not that there would be a way to tell, and not that there could be anything but a philosophical discussion about it. If it looks like a duck and quacks like a duck....

But computer intelligence doesn't necessarily need to be identical to human intelligence and sentience in order to be what we would consider outwardly sentient, and perhaps superior to ourselves at many or even all mental functions. Being able to weigh a million different possible outcomes to a situation, drawing on a myriad of data inputs, in a single moment of time would lead to startling intellectual results.
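For a sense of scale, exhaustively enumerating outcomes is something machines already do trivially for toy problems. A quick sketch (tic-tac-toe here, chosen purely because it's small enough to brute-force completely):

```python
def count_games(board=(" ",) * 9, player="X"):
    """Recursively count every distinct complete game of tic-tac-toe
    reachable from a position -- brute-force outcome enumeration."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals
    for a, b, c in lines:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return 1                  # game already won: one finished outcome
    if " " not in board:
        return 1                      # board full: a draw
    total = 0
    nxt = "O" if player == "X" else "X"
    for i, cell in enumerate(board):
        if cell == " ":
            total += count_games(board[:i] + (player,) + board[i + 1:], nxt)
    return total

print(count_games())  # 255168 complete games from the empty board
```

A quarter of a million complete games checked in well under a second -- though of course enumerating outcomes is nothing like understanding them, which is the whole debate here.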
 
FordSVT,

Yeah, but in order for the simulation to work they would have to give the brain activity. Once you give it activity that matches a regular human, you have regular human consciousness and everything
 
^If you give it some memories and sensory input to work with, maybe. Without it you just have a computer without an operating system.
 
We don't have any idea how closely we'd need to approximate the workings of a human brain before consciousness emerges. The degree of approximation required might well be far beyond our present science.
 
FordSVT,

Yeah, but in order for the simulation to work they would have to give the brain activity. Once you give it activity that matches a regular human, you have regular human consciousness and everything

I think you're missing the point, and what you just wrote is so vague as to be scientifically meaningless. "Give it activity"? "Matches a regular human consciousness and everything"? Friend, we don't even know what human consciousness IS, so it's rather premature to definitively say what we can and cannot do with computers.

A perfect copy of a human brain would be a human brain. A simulation of a human brain is a simulation. It's not perfect and it can't be said with 100% certainty that a computer sufficiently advanced to mimic the electrical wiring of the human brain would behave like a human brain. We just don't know enough about consciousness to say that for certain. Maybe you're right and maybe you're not, but if you are it's not because of what you claim to know to be true at this point in time.

As I said, maybe we'll build machines sufficiently "smart" to gather inputs, relate to them, and formulate what we would consider to be intelligent outcomes, but how we go about building consciousness into a machine is still a scientific unknown. A computer might be able to formulate a brilliant new political philosophy based on 3000 years of history but not be able to appreciate beauty or have any self-motivation, desire, or consciousness.
 
As much as I'd love to say otherwise, computer science is just nowhere near the level CuttingEdge fears. We're at least decades, and more likely centuries away, unless we have some kind of massive breakthrough rather than incremental improvement.
 
FordSVT,

If a simulation was sufficiently advanced to mimic the wiring of a human brain, why would it *not* behave just like a human brain?

Consciousness is not something that's magic, contrary to what many people would like to believe
 
^Because it hasn't been taught to behave like a human brain. Just because you have the wiring, doesn't mean you have the program.

How much of our brain is shaped by the information we receive and how it is received?

And "consciousness" does not equal "activity".
 
FordSVT,

If a simulation was sufficiently advanced to mimic the wiring of a human brain, why would it *not* behave just like a human brain?

Consciousness is not something that's magic, contrary to what many people would like to believe

I never said it was magic, maybe it's biological and quantum in nature and not able to be simulated by circuit boards as we know them.

You're not really listening, you seem scared to death that Judgement Day is around the corner and the robots are going to start harvesting our flesh any time now.
 
FordSVT,

If a simulation was sufficiently advanced to mimic the wiring of a human brain, why would it *not* behave just like a human brain?

Consciousness is not something that's magic, contrary to what many people would like to believe

You're right, consciousness is not "magic," however it does seem to arise out of self-reference. We are conscious because we believe ourselves to be so. In general, we do not doubt that we're conscious, sentient beings.

A computer, on the other hand, is nowhere near capable of this. We aren't conscious because we're told we are, we know we are. Unless a computer can determine for itself that it's conscious, it isn't.
 
Robert Maxwell,

We are conscious and sentient because of our brain's structure and its activity, which is of course based partially on activity from within and on received input
 
The problem is that this "activity" in the brain you speak of is largely unknown to us and we have no idea when we will figure it all out.
 
Considering the ethical ramifications of possibly creating a sentient being trapped in a computer solely to satisfy one's curiosity, I think it would be better to use less extreme but highly capable brain-imaging technology (fMRIs and such) to help us map the brain, rather than creating a simulation of a whole human brain.
 
You seem to have a very B-movie view of what's technologically possible at present.

I mean, how would one even begin trying to design a simulation without a complete mapping?
 
Lindley,

Dunno, but the guys behind the Blue Brain Project, who simulated half a mouse brain, are planning on making a simulation of a human brain.

I think the potential drawbacks are so morally repugnant that they outweigh the benefits. There is nothing that says that any and all scientific experiments are necessarily ethical.


CuttingEdge100
 
Dunno, but the guys behind the Blue Brain Project, who simulated half a mouse brain, are planning on making a simulation of a human brain.

That's the simplistic layman's version of what they're doing. One might even call it "sensationalized". I'd have to see a white paper or at the least a technical summary before I'd assume that what they're doing has even the remotest chance of leading to computer intelligence in anything broader than a machine learning sense.
 
BTW: For the record, in case anybody is wondering, I am not opposed to abortion and am pro-choice, so long as the fetus is aborted within the first trimester, or solely to save the mother's life -- I thought I should say this so this topic will not degenerate into pro-life vs. pro-choice...
I find this statement ironic in the sense that this thread has basically devolved into the crux of the abortion argument: When does consciousness begin?

Double irony in that you are pro-choice for humans but pro-life for machines!

For the record, I, too, am pro-choice under the same conditions as you.
 
This discussion makes me glad I'm reading GEB. :lol:

I'm actually at odds with some of the ideas presented, such as the notion that meaning exists without a consciousness to point it out, but I am trying to keep an open mind.

One of the premises of GEB would seem to be, however, that an adequate simulation of consciousness or intelligence is as good as the real thing. In the end, who are we to decide that something isn't "conscious" because it's made up of circuits and wires? We're made of meat and we don't doubt our own consciousness.

The essential component of any consciousness is a concept of self. And the self is aware of its own existence, is ever-changing, and explores its limitations through the experiences available to it. In other words, if we want an AI that's not blind, deaf, and dumb, we need to offer a vast array of truly interactive experiences to it. Hook up cameras, microphones, sensory devices of all kinds.

Alternatively, we could have it inhabit an entirely virtual world, though our virtual models tend to be woefully inadequate compared to the real world.

Experience is required to develop a sense of self, which is required to be conscious (by our definition). The "easiest" experiment we could do is to make a robot with arms and cameras, put it in front of a mirror, and just let it move. The moment it realizes it is looking at itself, it is conscious. It is capable of recognizing itself as an entity. It may be a limited and minimal consciousness (a "small soul" as Hofstadter would call it) but it would qualify.
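The mirror experiment boils down to contingency detection: does the thing I see move exactly when I command my own limbs to move? A minimal simulated sketch of that idea (the rates, threshold, and setup are all made up for illustration -- a real robot would need actual vision and motor control):

```python
import random

def mirror_test(steps=1000, seed=42):
    """Toy contingency detection: score how often observed motion coincides
    with the agent's own motor commands. A mirror image is perfectly
    contingent on those commands; an unrelated agent moving in view is not."""
    rng = random.Random(seed)
    commands = mirror_hits = stranger_hits = 0
    for _ in range(steps):
        i_moved = rng.random() < 0.3          # agent issues a motor command
        stranger_moved = rng.random() < 0.3   # a stranger moves independently
        if i_moved:
            commands += 1
            mirror_hits += 1                  # the mirror copies every move
            if stranger_moved:
                stranger_hits += 1            # only coincidental overlap
    return mirror_hits / commands, stranger_hits / commands

mirror_score, stranger_score = mirror_test()
print(mirror_score)    # 1.0: perfectly contingent -> classify as "self"
print(stranger_score)  # chance-level (~0.3) -> classify as "other"
```

Of course, passing a statistical check like this is a far cry from the robot "realizing" anything -- which is exactly the simulation-versus-the-real-thing question being argued in this thread.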

Agree completely. Given the limits of our knowledge about how intelligence and consciousness work, I doubt the experiment would succeed, but since it might, it should be left running for a while, anywhere from a few months to a few years.

Human infants aren't very self-aware either--indeed, our progeny are quite stupid for some time after activation.
 