
Are We On The Brink of Creating a Computer With a Human Brain?

CuttingEdge100

Commodore
Are We On The Brink of Creating a Computer With a Human Brain?
URL: http://www.dailymail.co.uk/sciencetech/article-1205677/Are-brink-creating-human-brain.html
by Michael Hanlon

There are only a handful of scientific revolutions that would really change the world. An immortality pill would be one. A time machine would be another.

Faster-than-light travel, allowing the stars to be explored in a human lifetime, would be on the shortlist, too.

To my mind, however, the creation of an artificial mind would probably trump all of these - a development that would throw up an array of bewildering and complex moral and philosophical quandaries. Amazingly, it might also be within reach.

For while time machines, eternal life potions and Star Trek-style warp drives are as far away as ever, a team of scientists in Switzerland is claiming that a fully-functioning replica of a human brain could be built by 2020.

This isn't just pie-in-the-sky. The Blue Brain project, led by computer genius Henry Markram - who is also the director of the Centre for Neuroscience & Technology and the Brain Mind Institute - has for the past five years been reverse-engineering the mammalian brain, the most complex object known in the Universe, using some of the most powerful supercomputers in the world.

So if you build something that works exactly like a brain, consciousness, at least in theory, will follow.

In fact, several teams are working to prove this is the case by attempting to build an electronic brain. They are not attempting to build flesh and blood brains like modern-day Dr Frankensteins.

They are using powerful mainframe computers to 'model' a brain. But, they say, the result will be just the same.
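For the curious, here is what 'modelling' a neuron in software looks like at its very simplest: a toy leaky integrate-and-fire neuron, vastly cruder than the detailed, ion-channel-level models a project like Blue Brain actually uses. All parameter values here are illustrative assumptions, not Blue Brain's.

```python
import numpy as np

def simulate_lif(input_current, dt=0.001, tau=0.02, v_rest=-65.0,
                 v_threshold=-50.0, v_reset=-70.0, resistance=10.0):
    """Return the membrane-voltage trace and spike times for an input current trace (nA)."""
    v = v_rest
    voltages, spike_times = [], []
    for step, i_in in enumerate(input_current):
        # Voltage leaks back toward rest and is pushed up by the input current.
        v += (-(v - v_rest) + resistance * i_in) * (dt / tau)
        if v >= v_threshold:              # threshold crossed: the neuron "fires"
            spike_times.append(step * dt)
            v = v_reset                   # ...and resets
        voltages.append(v)
    return np.array(voltages), spike_times

# Drive the model neuron with a constant 2 nA current for one simulated second.
volts, spikes = simulate_lif(np.full(1000, 2.0))
print(f"{len(spikes)} spikes in 1 s of simulated time")
```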

Two years ago, a team at IBM's Almaden research lab, working with the University of Nevada, used a BlueGene/L supercomputer to model half a mouse brain.

Half a mouse brain consists of about eight million neurons, each of which can form around 8,000 links with neighbouring cells.

Creating a virtual version of this pushes a computer to the limit, even machines which, like the BlueGene, can perform 20 trillion calculations a second.

The 'mouse' simulation was run for about ten seconds at a speed a tenth as fast as an actual rodent brain operates. Nevertheless, the scientists said they detected tell-tale patterns believed to correspond with the 'thoughts' seen by scanners in real-life mouse brains.
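As a rough sanity check on those figures, here is the back-of-the-envelope arithmetic; the 1 ms timestep and the three operations per synapse update are assumptions for illustration only, not numbers from the IBM team.

```python
# Back-of-envelope estimate of the half-mouse-brain simulation's cost.
neurons = 8_000_000
synapses_per_neuron = 8_000
total_synapses = neurons * synapses_per_neuron             # ~6.4e10 connections

steps_per_simulated_second = 1_000                         # assumed 1 ms timestep
ops_per_synapse_update = 3                                 # assumed cost per update
ops_per_simulated_second = total_synapses * steps_per_simulated_second * ops_per_synapse_update

machine_ops_per_second = 20e12                             # "20 trillion calculations a second"
slowdown = ops_per_simulated_second / machine_ops_per_second

print(f"{total_synapses:.1e} synapses to update")
print(f"roughly {slowdown:.0f}x slower than real time")    # ~10x, consistent with 'a tenth as fast'
```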

It is just possible a fleeting, mousey, 'consciousness' emerged in the mind of this machine. But building a thinking, remembering human mind is more difficult. Many neuroscientists claim the human brain is too complicated to copy.

Markram's team is undaunted. They are using one of the most powerful computers in the world to replicate the actions of the 100 billion neurons in the human brain. It is this approach - essentially copying how a brain works without necessarily understanding all of its actions - that will lead to success, the team hopes. And if so, what then?
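Scaling the same crude estimate up to a human brain shows why this is such a leap; the synapse count per neuron is simply carried over from the mouse figures above as an assumption (real human neurons average several thousand connections each).

```python
# How much bigger is the human problem than the half-mouse model quoted above?
human_neurons = 100_000_000_000          # "100 billion neurons"
synapses_per_neuron = 8_000              # assumption carried over from the mouse figures
human_synapses = human_neurons * synapses_per_neuron       # ~8e14 connections

half_mouse_synapses = 8_000_000 * 8_000                    # ~6.4e10 connections
print(f"~{human_synapses / half_mouse_synapses:,.0f}x the half-mouse model")   # ~12,500x
```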

Well, a mind, however fleeting and however shorn of the inevitable complexities and nuances that come from being embedded in a body, is still a mind, a 'person'. We would effectively have created a 'brain in a vat'. Conscious, aware, capable of feeling, pain, desire. And probably terrified.

And if it were modelled on a human brain, we would then have real ethical dilemmas. If our 'brain' - effectively just a piece of extremely impressive computer software - could be said to know it exists, then do we assign it rights?

Would turning it off constitute murder? Would performing experiments upon it constitute torture?


My opinion on the matter is that consciousness is a product of neuronal activity. That said, if you create a simulation of a human brain, it will behave just like a human brain and will possess consciousness in the way that we experience it.

I'd have to agree with the author's assessments that shutting it off would constitute murder, and experimenting on it would be morally no different than performing experiments on humans.

In fact, creating a conscious, sentient entity solely for the purpose of experimentation strikes me as being as immoral as a mother giving birth to a child just so she can experiment on it. Shutting it off would be tantamount to the mother then killing the baby once she is done with her experiment. Could you imagine how outraged people would be if a parent did that? Could you imagine how a jury would react?

It's hard to believe that a person could consider this to even be remotely ethical...


What are your opinions on the matter?


CuttingEdge100
BTW: For the record, in case anybody is wondering, I am not opposed to abortion and am pro-choice, so long as the fetus is aborted within the first trimester or solely to save the mother's life -- I thought I should say this so this topic will not degenerate into a pro-life vs. pro-choice debate...
 
I remember having this argument with an AI specialist. It doesn't matter what the materials are, whether it's wetware or software; if the end result is the same, then it's immoral to "turn off" a sentient being, and so on. I think they're setting themselves up for a big disappointment come 2020. It just strikes me that the brain is more than just a collection of algorithms. For example, the formation of synaptic connections at the dendritic spines could in theory develop from quantum superpositions undergoing state-vector reduction at the one-graviton level, supplied by the electric field generated by neurons. The retina also operates in a similar fashion, as has been indicated by experiments involving the perception of photons. The interaction between the quantum and classical worlds can also be seen in the formation of quasicrystals, where superpositions of states are reduced to one alternative, giving rise to peculiar geometric patterns. Ergo, I believe that due to the interplay between the classical and quantum mechanical worlds, and apart from certain philosophical issues, there is a non-algorithmic property to consciousness. Maybe that will emerge as a global emergent property in the software they're using, but in order to correctly model the human brain their program will need to be self-modifying based on reactions.
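To make that last point concrete, here is a minimal sketch of what "self-modifying based on reactions" usually means in these simulations: a plasticity rule (Hebbian, in this toy example) that lets connection strengths rewrite themselves in response to activity. It is purely illustrative and far simpler than the rules a serious cortical model would use.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 50
weights = rng.normal(0.0, 0.1, size=(n_neurons, n_neurons))       # connection strengths
learning_rate = 0.01

def step(activity, weights):
    """Propagate activity once, then strengthen connections between co-active neurons."""
    new_activity = np.tanh(weights @ activity)                     # simple firing-rate model
    weights += learning_rate * np.outer(new_activity, activity)    # Hebb's rule: fire together, wire together
    weights *= 0.999                                               # mild decay keeps weights bounded
    return new_activity, weights

activity = rng.random(n_neurons)
for _ in range(100):
    activity, weights = step(activity, weights)
print("mean |weight| after 100 steps:", round(float(np.abs(weights).mean()), 4))
```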
 
So if you build something that works exactly like a brain, consciousness, at least in theory, will follow.
And who knows how, exactly, a human brain works?

I'd have to agree with the author's assessments that shutting it off would constitute murder
Why? Could it not be turned back on and resume where it left off?

...and experimenting on it would be morally no different than performing experiments on humans.
Some people say the same thing about experimenting with mice, but I don't believe that either.

In any event, I think we probably have many years of research to go before we fundamentally understand how the brain does its thing. But time will tell, and I'm not holding my breath.

---------------
 
It's going to be hell when you go to shut down your computer and it begs you to stop. Then, when you continue, it asks if it will dream.
 
John Titor,

I remember having this argument with an AI specialist. It doesn't matter what the materials are, whether it's wetware or software; if the end result is the same, then it's immoral to "turn off" a sentient being

Exactly


Scott HM,

And who knows how, exactly, a human brain works?

We don't know exactly how the human brain works, but even with what we know, it is obvious that what we call consciousness and thought is the product of complex neuronal activity and complex feedback loops.

If you copy it exactly, it works exactly the same way.

Why? Could it not be turned back on and resume where it left off?

Well, shutting it off and then reactivating it would be equivalent to knocking it unconscious, I guess. However, if you turned it off and never turned it back on, or turned it off and then deleted the software or destroyed the hardware -- that would be murder.
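In software terms, "turning it off and turning it back on" amounts to checkpointing and restoring the simulation's state. A minimal sketch, assuming a hypothetical brain_sim object whose state is picklable:

```python
import pickle

def checkpoint(state, path="brain_checkpoint.pkl"):
    """'Power off': persist the full simulation state so nothing is lost."""
    with open(path, "wb") as f:
        pickle.dump(state, f)

def resume(path="brain_checkpoint.pkl"):
    """'Power on': restore the state and carry on exactly where it left off."""
    with open(path, "rb") as f:
        return pickle.load(f)

# Usage sketch (brain_sim is hypothetical):
# checkpoint(brain_sim.state)
# brain_sim.state = resume()
```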


Sojourner,

Has anyone started PETAI yet? :techman:

People for Ethical Treatment of A.I.? Hahaha, that is a good one.


Helen
 
Good luck to them. Even if they don't achieve their goal, a neural network that complex will have some very interesting machine-learning properties I'm sure.
 
Hmm-mmm...

What of our human brain now? Are we close to transferring our memories, emotions, everything into a robotic body? It sounds like they're trying to crack the code of the brain.
 
For while time machines, eternal life potions and Star Trek-style warp drives are as far away as ever, a team of scientists in Switzerland is claiming that a fully-functioning replica of a human brain could be built by 2020.
The human brain has yet to be fully mapped or fully understood. His efforts will be futile.
 
And who knows how, exactly, a human brain works?
We don't know exactly how the human brain works, but ...If you copy it exactly, it works exactly the same way.
I don't see the point in copying it exactly. We already have human brains, they don't need to be invented.

Why? Could it not be turned back on and resume where it left off?
if you turned it off and never turned it back on, or turned it off then deleted the software or destroyed the hardware -- that would be murder.
I disagree, but as I said, I don't think we'll have to worry about that any time soon.

By the way, considering how unpredictable humans are, it's somewhat doubtful that creating a true artificial intelligence would be a wise thing to do. And if you really believe such an intelligence would be due 'human rights', is it even ethical to create it at all, since it won't really be anything but a 'lab rat'?

---------------
 
By the way, considering how unpredictable humans are, it's somewhat doubtful that creating a true artificial intelligence would be a wise thing to do.

It would only be unwise if we programmed it to have a virtual limbic region
 
I would practically equate intentionally shutting down an AI that is self-aware and exhibits all the signs of sentience, and deleting its software or destroying its hardware, with murder.
It's really no different from a human, apart from the fact that it has a different type of body to live in.
We are essentially biological machines, and our brains operate in ways similar to computers.
However ... I must also stress that many people in a court, for example, would likely not see it in such a capacity.
Take a look at how poorly informed the courts are today about technology, and how often they make rulings that make little to no sense.
Unless the general population undergoes serious education on numerous matters, I find it doubtful that any kind of rights would be extended to machines.
 
And if you really believe such an intelligence would be due 'human rights', is it even ethical to create it at all, since it won't really be anything but a 'lab rat'?

Nice Catch-22 you've built there. If we turn it on and it's sentient, it's immoral to have constructed it in the first place. But if we don't turn it on, we won't know.
 
This discussion makes me glad I'm reading GEB (Gödel, Escher, Bach). :lol:

I'm actually at odds with some of the ideas presented, such as the notion that meaning exists without a consciousness to point it out, but I am trying to keep an open mind.

One of the premises of GEB would seem to be, however, that an adequate simulation of consciousness or intelligence is as good as the real thing. In the end, who are we to decide that something isn't "conscious" because it's made up of circuits and wires? We're made of meat and we don't doubt our own consciousness.

The essential component of any consciousness is a concept of self. And the self is aware of its own existence, is ever-changing, and explores its limitations through the experiences available to it. In other words, if we want an AI that's not blind, deaf, and dumb, we need to offer a vast array of truly interactive experiences to it. Hook up cameras, microphones, sensory devices of all kinds.

Alternatively, we could have it inhabit an entirely virtual world; however, our virtual models tend to be woefully inadequate compared to the real world.

Experience is required to develop a sense of self, which is required to be conscious (by our definition). The "easiest" experiment we could do is to make a robot with arms and cameras, put it in front of a mirror, and just let it move. The moment it realizes it is looking at itself, it is conscious. It is capable of recognizing itself as an entity. It may be a limited and minimal consciousness (a "small soul" as Hofstadter would call it) but it would qualify.
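A rough sketch of how that mirror experiment might be scored in code: issue random motor commands and check whether the motion seen in the camera image is contingent on them. The robot interface (move_arm, observe_motion) is hypothetical; this only illustrates the contingency-detection idea, not Hofstadter's argument.

```python
import random

def mirror_test(move_arm, observe_motion, trials=200, threshold=0.9):
    """Return True if what the robot sees in the mirror tracks what it does."""
    matches = 0
    for _ in range(trials):
        command = random.choice([True, False])   # sometimes move, sometimes hold still
        if command:
            move_arm()
        seen_motion = observe_motion()           # did the camera detect movement?
        if seen_motion == command:
            matches += 1
    contingency = matches / trials
    # A high contingency score is the (crude) cue for tagging the mirror image as "self".
    return contingency >= threshold
```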
 
Sojourner,

Actually, according to the simulation of the mouse brain, they did detect what would be considered "thoughts" on the scan.

If you created an exact copy of a human brain, it would behave just like a human brain. There isn't a catch-22 here. We just shouldn't build it.


CuttingEdge100
 