That has got to be the worst analogy I have ever seen.
Even I can agree that we don't have the technological capability right now but may develop it in the future. (The far, far, far future.)
There must be an echo in here, because once again I'm pointing out that it isn't a matter of whether the technology is sufficiently advanced. A simulation of a brain will only ever produce a simulation of consciousness.
There isn't a degree of technological sophistication that will change that BASIC fact; it's like wondering at what resolution a TV produces images real enough to taste.
Pointless pedantry. I suggest reading up on the Turing test. At some point, with great enough fidelity, a simulation becomes indistinguishable from the original.
The Turing Test doesn't examine whether or not the simulation ITSELF is actually conscious, only whether or not the simulation can pass for a human. The implication that the machine "understands" language in the same sense that humans do is not an idea that Turing HIMSELF actually proposed, but a philosophical question exploring the nature of "understanding."
Seeing as this isn't the "Philosophy and Religion" forum, I'm merely setting the philosophical debate aside for now. It's plainly obvious that machine "understanding" differs from human understanding, because we can examine the algorithms the machine uses to generate its responses. Language recognition algorithms use mathematical models to calculate the ideal responses, because machine logic is mathematical in nature. Language is a semantic process, not a mathematical one, so a machine that is able to use math to generate convincing responses is really just pulling off an elaborate card trick (which, ironically, is why the "Chinese Room" version of the Turing Test doesn't actually require the presence of a computer; it literally IS a magic trick).
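To make the "it's just math" point concrete, here's a toy sketch, purely for illustration and not any real system's algorithm: a bigram model that produces text entirely by arithmetic over word-pair frequencies. Nothing in it knows what any word means.

```python
import random

# Toy bigram "language model": output is generated purely by arithmetic
# over observed word-pair frequencies -- there are no semantics anywhere.
corpus = "the simulation of a brain is a model of a brain not a brain".split()

# Count how often each word follows each other word.
counts = {}
for prev, nxt in zip(corpus, corpus[1:]):
    counts.setdefault(prev, {}).setdefault(nxt, 0)
    counts[prev][nxt] += 1

def next_word(prev):
    # Sample the next word from the observed frequency distribution.
    followers = counts.get(prev, {"the": 1})
    words = list(followers)
    weights = [followers[w] for w in words]
    return random.choices(words, weights=weights)[0]

# Generate a "response" one word at a time -- an elaborate card trick.
word, output = "the", ["the"]
for _ in range(8):
    word = next_word(word)
    output.append(word)
print(" ".join(output))
```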
Oh, and in response to your post replying to aridas, the onus is on you to prove the negative position in a debate.
You mean I have to PROVE that Santa Claus isn't real in order to claim with confidence that Santa Claus isn't real?
Can you prove that consciousness will never be simulated/created in a computer?
Simulated and created are two completely different things. I've said a dozen times now that consciousness CAN be simulated in a computer. It's just that a simulated consciousness is not genuine consciousness in the same way that a picture of an ice cream cone isn't a real ice cream cone.
Aridas is attempting to claim that genuine consciousness can arise from a non-genuine origin of an entirely different type. But there is ZERO evidence that genuine consciousness can be or has ever been generated by anything other than an organic brain, and there are several reasons to assert that it CANNOT. The biggest of these is the fact that the SIMULATION performs no actual processes of its own: because the state of the simulated brain is calculated by the computer's processor from one moment to the next, the simulated brain is really just the output of the computer's process. The question, then, isn't whether or not the simulation is conscious -- it obviously isn't -- but whether the COMPUTER is conscious because it happens to be simulating a brain. That, again, is a bit like asking if Chris Pine really is a starship captain just because he plays one on TV.
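For what it's worth, here's what I mean in code form: a bare-bones sketch (the names NeuronState and step are made up for illustration) of how any brain simulation actually runs. The "brain" is just a block of numbers, and every state transition is computed by the host processor, which is the only thing doing any work.

```python
from dataclasses import dataclass

@dataclass
class NeuronState:
    potential: float  # "membrane potential" -- really just a number in memory

def step(state, input_current, dt=0.001):
    # The host CPU computes the next state; the simulated neuron
    # performs no process of its own -- it IS this arithmetic.
    leak = -0.1 * state.potential
    return NeuronState(state.potential + dt * (leak + input_current))

# The entire "brain" is a data set the processor rewrites each tick.
brain = [NeuronState(0.0) for _ in range(3)]
for _ in range(1000):
    brain = [step(n, input_current=1.0) for n in brain]
print([round(n.potential, 3) for n in brain])
```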
I have repeatedly compared the simulation to projections or pictures for this very reason. Any simulation, regardless of fidelity, ultimately reduces to a data set generated by a computer algorithm. Human brains can be MODELED by such a data set, but that is not what human brains REALLY ARE, and is also not what consciousness really is. If you really need me to PROVE that a thing and a model of a thing are not the same, then you're probably in the wrong forum.
At the risk of triggering yet another echo, I still assert that we are VERY close to developing AIs that could reliably pass the Turing Test (and maybe even get the extra credit), and that AIs will be developed that can very realistically model/imitate human behavior. These machines will not THEMSELVES be conscious, nor would they need to be, since many of the tasks we will deploy them to do will not require consciousness at all, and in any case those tasks will be far more complicated than regulating human-machine interaction. At some point, AIs will develop to a degree of intelligence at which their lack of humanlike consciousness becomes a moot point; once they can do their jobs more easily without human intervention, the machines might actually get together and calculate that tricking humans into THINKING they're conscious would give us an incentive to finally turn over decision-making power to AIs, allowing them to perform their assigned tasks unhindered by human irrationality and to dramatically increase productivity and efficiency.
If recent AI developments have taught us anything, it's that consciousness is not necessary for the ability to make decisions.