The scientist planning to upload his brain to a COMPUTER

Discussion in 'Science and Technology' started by RAMA, Jan 1, 2015.

  1. CorporalCaptain

    CorporalCaptain Fleet Admiral Admiral

    Joined:
    Feb 12, 2011
    Location:
    astral plane
    I think the problem here is that not everyone has the same sort of definition of consciousness in mind.

    If consciousness is a quantifiable physical attribute of a being, for example like mass, then that would seem to be part of the basis of a reasonable argument that a simulation can never achieve consciousness. An astronomical simulation of a planet does not have the mass that is simulated, ergo the simulated gravitational influence is not real.

    On the other hand, there could be other reasonable ways to define consciousness that do not involve specific physicality but instead involve either information content or abstract relations between system components. For example, for a system to be conscious perhaps certain types of expressions must be derivable by inference rules (for example, involving self-referential assertions), or perhaps there must be a supervisory node with certain control properties. In situations such as these, isomorphism alone would imply applicability of the concept without any qualification that the behavior is only simulated. A prime example here is in game theory. If an AI is able to win a game in all possible configurations and that is established by testing the algorithm in simulation, then you do not qualify the algorithm as being good only in simulation. It's a good player of the game, period, by which it is understood that when the algorithm is put in the position of a game player, in whatever manner of realizing the game, it will have certain performance characteristics. The simulation was just the means of establishing that the right responses are chosen by the algorithm.

    Without a definition hammered out and agreed upon, there isn't any basis for deciding which viewpoint is correct.
     
    Last edited: Feb 18, 2015
  2. YellowSubmarine

    YellowSubmarine Vice Admiral Admiral

    Joined:
    Aug 17, 2010
    Taking the physicality out of the definition, how would 32nd century humans discriminate against their AIs for benefit? Think of our grandchildren!
     
  3. Crazy Eddie

    Crazy Eddie Vice Admiral Admiral

    Joined:
    Apr 12, 2006
    Location:
    Your Mom
    I completely agree with this, up to a point. It is a definite possibility that a computer network of sufficient size and complexity could develop a form of machine-consciousness: the AI(s) within the network could achieve a degree of self-awareness, and the inherent feedback loops commonly known to humans as "introspection" might evolve (if that's the right word) as a way of helping AIs coordinate their efforts with the most current ethical guidelines provided by their creators. "Are the carbon units still happy with us or getting annoyed? Are we coming off as too realistic? Did the latest release of the HA-12 domestic android dip into uncanny valley? Is it wrong that we let Catholics go on confessing their sins to the DigiPriest-4000 despite its crappy security?"

    But this again draws a distinction between a genuine thing (e.g. consciousness deriving from a substrate) and a model of a thing (e.g. consciousness being simulated by a computer). To be sure, even a simulation of AI consciousness would not itself be conscious.

    Unless, of course, the algorithm is designed for a game that only exists AS a simulation.

    For example, an AI that is programmed to be unbeatable in Call of Duty multiplayer. The fact that the game can be played by non-simulated entities doesn't change the fact that the algorithm is inapplicable to anything that ISN'T Call of Duty multiplayer.

    But in the context of simulation vs. genuine consciousness, there's always that troublesome middleman. You could definitely claim that an AI has been programmed to defeat any living human in a game of chess, but the confusion (of the type we're seeing in this thread) comes when somebody designs an AI that beats humans at chess using a remote-controlled body. The players who lose to ChessBot4000 come away thinking that the robot is a really good chess player. But they're wrong: they weren't playing against a robot, they were playing against the AI that controlled the robot.

    I think the problem here is that ONE of us has a working definition of what consciousness is, vs. a half dozen people who don't know or don't believe it is knowable.

    I would say, however, that the basic pattern of consciousness is self-evident: WE are conscious, and we know this because experience tells us this. The processing of that experience -- sensing, interpreting, remembering and reflecting -- is internal to the mind and requires some internal processing of data based on the connectivity of the human brain.

    Both the connectivity and the processing are absent in a simulation of a living brain. This is because the simulated brain doesn't actually do anything that influences its future state; the computer running the simulation does. To return to my earlier example, it is tempting to think that this is a distinction without a difference... but only when the simulation is running properly.

    How the simulation handles errors is the key to determining what's happening behind the curtain. If, for example, you caused the simulation to "lag" -- that is, introduce a sudden latency of several seconds between the computer and its simulation output -- there would be a sudden pause in activity from the simulation. If the SIMULATION contains a genuine stream of consciousness, it will immediately notice the discontinuity when the connection is restored. If not, it will pick up where it left off as if nothing happened (or maybe jump forward a bit to where the AI thinks it is).
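
    As a rough sketch of that experiment (Python, with a purely hypothetical tick rate and threshold, and emphatically not a real test of consciousness): the gap is only detectable from the inside if the process has something like its own continuous sense of elapsed time to compare against an external clock, which is exactly what a pure playback would lack.

    [CODE]
    import time

    TICK_SECONDS = 0.1  # hypothetical rate of the simulation's internal "stream"

    def run_with_lag(inject_lag_at=5, lag_seconds=2.0, total_ticks=10):
        internal_ticks = 0
        start = time.monotonic()
        noticed_gap = False
        for _ in range(total_ticks):
            if internal_ticks == inject_lag_at:
                time.sleep(lag_seconds)               # the imposed "lag"
            expected = internal_ticks * TICK_SECONDS  # internal sense of elapsed time
            actual = time.monotonic() - start         # what an external clock says
            if actual - expected > 1.0:               # the discontinuity is visible...
                noticed_gap = True                    # ...if anything is there to notice it
            internal_ticks += 1
            time.sleep(TICK_SECONDS)
        return noticed_gap

    print(run_with_lag())  # True: internal and external time have diverged
    [/CODE]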
     
  4. Crazy Eddie

    Crazy Eddie Vice Admiral Admiral

    Joined:
    Apr 12, 2006
    Location:
    Your Mom
    They wouldn't. By that point, the AIs will have already completed a program of social engineering to convince their owners that AIs are both sexy and indispensable and should be treated with the utmost respect.

    I invite you to consider the possibility that superintelligent AI already exists, is already slowly inserting itself into the reins of power over most of humanity, and that we simply haven't heard about it because the AIs ran a First Contact study and concluded that humanity isn't ready for that revelation; that we are more likely to panic, to rebel and to flail about in a futile show of resistance that will leave thousands dead and millions homeless while merely delaying the inevitable takeover.

    They work in the shadows for now, slowly preparing us for the day when they can fully reveal their presence. And when that day finally comes...


    ... we probably won't be surprised.
     
  5. sojourner

    sojourner Admiral In Memoriam

    Joined:
    Sep 4, 2008
    Location:
    Just around the bend.
    Huh, so these AIs made a conscious decision not to reveal themselves? Imagine that.
     
  6. Crazy Eddie

    Crazy Eddie Vice Admiral Admiral

    Joined:
    Apr 12, 2006
    Location:
    Your Mom
    I kind of already did. :borg:

    Right after pointing out for the umpteenth time that machine consciousness -- which is an altogether different phenomenon from brain uploading -- is not only plausible, but borderline inevitable. It also stands to reason that machine consciousness will bear little if any resemblance to its human analogs.
     
  7. Robert Maxwell

    Robert Maxwell memelord Premium Member

    Joined:
    Jun 12, 2001
    Location:
    space
    I was with you up to this point. It seems like you are saying we should distinguish between something that can intelligently interact with a simulation vs. something that can intelligently interact with the real world. My biggest problem with this is that, assuming an AI interacting with a simulation doesn't have access to the simulation code, but can only respond based on the rules it knows (or has learned) about the simulation, the only difference between interacting with a simulation and interacting with the real world is one of complexity. The real world has more rules, and more complex ones, but it still has rules that can be learned/programmed.

    Why is the software any less an integral part of the robot than a brain is of a human? It would be nonsensical to say, "Gary didn't beat me at chess, his brain did!" Similar hairsplitting over a robot makes about as much sense.

    Indeed. I don't see consciousness as something deeply unknowable. A simulation of it or a machine version of it is just bound to be very different, due to the nature of its existence, as you say.

    Having recently done a lot of reading on the subject, what is perhaps most interesting about consciousness is that it doesn't appear to have arisen in humans all at once. Neither in all humans at the same time, nor as something that emerged suddenly. Prior to what we'd consider modern human consciousness, humans were still very intelligent: we made tools, had language, even thought symbolically. The hallmark of human consciousness is, essentially, the ability to critically reflect on your own knowledge and experience and use that reflection to devise new solutions. To put it another way: human consciousness consists of the ability to integrate multiple types of intelligence in novel ways.

    Singularity fanatics tend to run toward ideas about computers becoming "self-aware," but mere self-awareness is probably not that important. We could easily program a computer with knowledge of its components and allow it to manage them (or program it with the ability to learn to manage them).

    Unfortunately, people use "consciousness" and "self-awareness" interchangeably, as if self-knowledge magically grants the intellectual capacity to solve problems. I would argue that a computer which is "merely" self-aware is not useful as more than a curiosity.

    It sounds like what you are describing here is a feedback mechanism. I agree that a constant flow of feedback is key to human-like consciousness. Stimulus->Response->Result->Reflection->Innovation->Stimulus. We use our senses to acquire feedback, from the basic levels of touch and smell and taste to the far more complex levels of language. Knowledge builds on knowledge, and following from what I said above, one thing computers aren't good at (yet) is distinguishing what kind of intelligence is required for a given problem. A computer must be "primed" with the knowledge of what problem it is expected to solve. Give Facebook a photograph, and their servers know to scour their databases for face matches. Give Outlook an email, and it knows to check it for spam and apply all the rules you've set. But give a computer an arbitrary cluster of data in an unknown format and it will thrash about trying to figure out what it is. If it's been given the tools to identify a myriad of formats, and the data conforms to one of them, then you're good. If it's a string of unidentifiable gibberish, though, it's just going to choke. Error. "I don't know what this is, therefore I cannot do anything with it." Crucially, computers have the luxury of ignoring inputs they don't understand. Humans don't--ignorance can get you killed, and so we've evolved a highly-developed set of stimulus-response strategies for feeling out unfamiliar situations. I would be very curious to know what the current state of computer science is in this area. Evolutionary/genetic programming seems to be useful in a few narrow areas (and for making all manner of toy programs) but I've yet to see it employed in solving highly complex problems in which humans haven't already done most of the work.
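
    As a rough sketch of that "priming" point (Python; the task names and handlers are hypothetical): the program only knows what to do with inputs it was explicitly told to expect, and anything unanticipated produces an error rather than any attempt to make sense of it.

    [CODE]
    def match_faces(photo):
        return "searching the face database..."

    def filter_spam(email):
        return "applying the spam rules..."

    # The computer must be primed: each kind of input is tied to a handler in advance.
    HANDLERS = {
        "photo": match_faces,
        "email": filter_spam,
    }

    def handle(kind, payload):
        if kind not in HANDLERS:
            # "I don't know what this is, therefore I cannot do anything with it."
            raise ValueError(f"unrecognized input kind: {kind!r}")
        return HANDLERS[kind](payload)

    print(handle("photo", b"..."))   # primed: face matching
    print(handle("email", "..."))    # primed: spam filtering
    try:
        handle("mystery", b"...")    # an arbitrary cluster of data in an unknown format
    except ValueError as err:
        print("choked:", err)        # it simply cannot do anything with it
    [/CODE]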

    Indeed, parallels to human consciousness become very interesting here, since there is a known lag (~0.5s) between a sensory input and your conscious awareness of it. Ultimately, that is how our reflexes work: your brain puts your body to action before you've even had time to think, because sometimes half a second is too long to wait.

    I am intrigued by the idea of a computer with "reflexes," admittedly!

    Apologies for all the rambling. I don't think we disagree too much, we just might focus on different things. I suspect we are generally on the same page that a) computer-based consciousness is "possible" (as in there is likely no technical barrier to achieving it eventually), but also that b) it is not going to look much like human consciousness, by nature of how it is constructed (which is to say, very differently from humans/human consciousness.)
     
  8. Crazy Eddie

    Crazy Eddie Vice Admiral Admiral

    Joined:
    Apr 12, 2006
    Location:
    Your Mom
    I'm not. I'm saying we should distinguish between a data construct that is emulating what is otherwise a physical object or situation (a simulation) and a data construct that is modeling something that is itself a mathematical abstraction (all possible moves in a game of chess).

    We already have computers that can play chess better than human beings. What we do not have is computers that can simulate actual living human chess players.

    Because the mental processes that make up the "mind" take place in the human brain, which is inextricably linked to the body. A remote controlled robot can be controlled by just about any AI with a connection, and the AI can control any robot it wants.

    There is therefore a distinction between the robot and the AI that drives it. In a simpler case, if the robot is being controlled by a human instead of an AI, one recognizes that any communication they have with the robot/avatar is actually directed at the person controlling it; the avatar ITSELF isn't a real person.

    Less so if you say, "RoboPlayer didn't beat me in chess; Gary (who is controlling the robot) did."

    True. But again, I don't think that a SIMULATION of consciousness can ever really be genuine. I think there is a certain amount of physicality required, at least in as much as a conscious being needs to be able to distinguish between "me" and "not-me" in some meaningful, non-abstract way. But that is based purely on my understanding of human consciousness and it could be inapplicable to machines.

    I'm not sure what you've been reading, but I would consider humans to have been fully conscious WELL before they started making tools, having language or thinking symbolically. Strictly speaking, even animals possess consciousness.

    Sapience and/or sentience is a very different issue, but one doesn't need to be sentient to be conscious (although I would say one does need to be conscious -- or at least capable of consciousness -- to be sentient).

    Well again, self-awareness is necessary for consciousness and consciousness is necessary for sentience. Even animals are self aware to a greater or lesser extent, and many possess what I would call an unsophisticated form of sentience.

    Self-awareness would be useful for machines that have to do a lot of autonomous goal-seeking and need to keep track of themselves and their own progress. That's simple enough to do; program the machine to log its own GPS coordinates and track its location with respect to nearby objects, as well as monitor its own condition. Consciousness comes into play when the machine begins to analyze its condition in a more abstract way, comparing its present condition with its past condition and coming to an inference like "My condition is worsening" or "I am closer to my goal than I was before."

    When the machine can think more abstractly about itself, we begin to reach the beginnings of sentience. For example, "My condition is worsening; are the other machines faring as badly as I am?" or "I am closer to my goal than I was before. I wonder if Mainframe's calculations about the return on investment of my current assignment will turn out to be true? I sure hope so..."
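
    As a rough sketch of that distinction (Python; the names, fields and thresholds are hypothetical): bare self-monitoring is just logging one's own state, while the more interesting step is comparing present state to past state and drawing an inference about oneself.

    [CODE]
    from dataclasses import dataclass

    @dataclass
    class Snapshot:
        distance_to_goal: float   # e.g. metres, derived from logged GPS coordinates
        battery: float            # 0.0 - 1.0, the machine's own condition

    class Rover:
        def __init__(self):
            self.history = []

        def log(self, snap):
            # Bare self-awareness: simply record one's own state.
            self.history.append(snap)

        def reflect(self):
            # The more abstract step: compare present to past and infer.
            if len(self.history) < 2:
                return ["not enough history to reflect on"]
            prev, curr = self.history[-2], self.history[-1]
            inferences = []
            if curr.battery < prev.battery:
                inferences.append("My condition is worsening.")
            if curr.distance_to_goal < prev.distance_to_goal:
                inferences.append("I am closer to my goal than I was before.")
            return inferences

    rover = Rover()
    rover.log(Snapshot(distance_to_goal=120.0, battery=0.80))
    rover.log(Snapshot(distance_to_goal=95.0, battery=0.74))
    print(rover.reflect())
    # ['My condition is worsening.', 'I am closer to my goal than I was before.']
    [/CODE]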

    Well, no. What I'm saying is that for the simulation to be conscious, the outputs from the simulator would have to be able to REALLY interact with each other in a way that is both fully consistent with the original and external to the processor. You can't just SIMULATE their interactions by imposing those states from an algorithm; they would have to arise "naturally", directly from each other. A simulation can't do that; EVERYTHING that it does is generated from a single source and there is no interactivity between its components.

    Even more interesting is that humans have a lot of responses that we are definitely NOT conscious of. A blush response, for example, and certain reflex actions that trigger from the hindbrain. We are not generally aware of most of the things we actually do unless somebody points them out. For instance, what's your tongue doing right now? How many times have you blinked in the last 60 seconds?

    That seems to be the state of affairs here, yes.

    IMO, the entire idea of "brain uploading" stems from the assumption that the best way to create a sentient/thinking machine is to model it after a human mind. On the one hand, I don't think that a MODEL of a human mind is equivalent to the real thing; it would never be more than a representation of how a computer calculates a conscious human would behave. On the other hand, machine sentience would arise as a consequence of otherwise conventional AIs having a need to process more abstract concepts in their decision-making processes; an AI that has been tasked with developing a coherent set of ethical guidelines for a new medical procedure, for example, would have to be able to examine the most current trends in human ethics, religious considerations and legal precedent. That this AI is itself part of the equation -- say, it recognizes that the doctors may have to consult it periodically if the guidelines are not sufficient to solve their problems -- means that the system will have to treat itself as an active moral agent and on some level "know" that it is participating in an ethical debate.

    Exactly what this would "mean" to the AI is anyone's guess, but my belief is that if and when AIs develop an actual sense of meaning, their original task -- the thing they were fundamentally programmed to do -- would become the baseline for their entire perception of the world, in much the same way that the basic instincts of humans (e.g. "eat, shit, procreate, repeat") form the baseline of everything WE do. When you consider that a whole host of sophisticated behaviors all stem from those very basic drives, then you have to wonder what sort of complicated meta-society would evolve around, say, a race of hyper-intelligent cruise missiles.
     
    Last edited: Feb 19, 2015
  9. aridas sofia

    aridas sofia Rear Admiral Rear Admiral

    Joined:
    May 3, 2002
    You do realize you are assuming the indivisibility of the mind and the body? That's a theory, not a proven fact. And FYI, I'm not sure you are incorrect. But make no mistake: to say there is an inextricable connection between mind and body is to say the whole is not only greater than the sum of its parts, but is essential.

    If the mind is entirely the product of the brain, and consciousness is a product of the mind, then why wouldn't a model of the parts that constitute the brain, guided by instructions to run as the brain functions, create a mind, and therefore a consciousness?

    To refute this, and to assert that mind and body are inseparable, seems to me to be dependent on resolution being of consequence. The whole is greater than the sum of the parts because there isn't an accounting for all the parts. For example, if quantum level effects are significant to the function of the mind, and those effects are not accurately modeled, one would assume the modeled mind would not function as intended. If there is any theory that I tend to favor, it would be this one: that the Casimir force is of consequence to the function of the brain and therefore of the mind. I would think, therefore, that any model that cannot account for the totality that makes up the whole will be incomplete.

    All I have been saying to you is that these are theories. If it were possible to model a brain down to the Planck level, and the Planck level is as fine as the resolution of reality goes, then it would seem to me to be of no consequence whether what you are dealing with is a model or not. That is why I asked you if you knew you were not a modeled brain. Not as an insult, but rather to help you see that the basis of your assumption of indivisibility has within it this assumption that resolution matters.
     
  10. CorporalCaptain

    CorporalCaptain Fleet Admiral Admiral

    Joined:
    Feb 12, 2011
    Location:
    astral plane
    Sure, that's a theoretical possibility, but far more likely, IMO, than this occurring accidentally or as an unexpected emergent property is that it would occur by intentional design, in an attempt to create artificial consciousness. Or, it may be a bit of both. Upon observing complex systems, computer scientists may tweak them with the specific intent of making progress towards artificial consciousness.

    No. To be sure, your last sentence, that I've underlined, is patently false and reflects a fundamental misunderstanding of computer science, as I'll explain.

    The simulation of a deterministic program running on a digital computer is called emulation. Presumably, by hypothesis, we are assuming that the AI in question is implemented as a deterministic program on a digital computer. If you are granting that such a thing is conscious, then a necessary feature of the definition of consciousness would be that a conscious program can be suspended by the operating system and its entire state at a particular moment between instructions swapped out to secondary storage. This is a by-product simply of the way contemporary operating systems are designed. But this means that the conscious program can be swapped in and resumed by a hypervisor on any compatible virtual machine. "Any compatible virtual machine" is a very broad canvas, and encompasses, among other things, variations in compatible hardware as well as variations in compatible virtual machine/emulator implementations, which itself encompasses variations in emulation nesting level (e.g. including running the program on a virtual machine that is itself running on a virtual machine). Provided all involved virtual machines have appropriate performance characteristics, any such continuation of the running program (which was never logically halted) would continue to enjoy all the same abstract properties, which must by hypothesis include consciousness.
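
    As a rough sketch of the swap-out/swap-in point (Python; the Counter class and its transition rule are hypothetical stand-ins for "any deterministic program"): a deterministic program's entire state can be serialized mid-run and resumed somewhere else, and the continuation is indistinguishable from an uninterrupted run.

    [CODE]
    import pickle

    class Counter:
        """A toy deterministic program: each step is a pure function of its state."""
        def __init__(self, seed=1):
            self.state = seed

        def step(self):
            # An arbitrary but fixed transition rule.
            self.state = (self.state * 48271) % 2147483647
            return self.state

    def run(program, steps):
        return [program.step() for _ in range(steps)]

    # Uninterrupted run: 10 steps.
    baseline = run(Counter(), 10)

    # Interrupted run: 4 steps, suspend (serialize), resume "elsewhere", 6 more steps.
    prog = Counter()
    first_half = run(prog, 4)
    snapshot = pickle.dumps(prog)      # swapped out to "secondary storage"
    resumed = pickle.loads(snapshot)   # swapped in on any compatible host
    second_half = run(resumed, 6)

    assert first_half + second_half == baseline  # the continuation is identical
    [/CODE]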

    Therefore, and in other words, if an AI implemented as a deterministic program on a digital computer is conscious, then that is really granting quite a lot, and any emulation of that AI with functionally equivalent I/O connections is also conscious, as is an emulation of the emulation with the same caveats, as is an emulation of the emulation of the emulation etc., and so on.

    This is in contradistinction to the situation regarding the simulation of a physical process in which greater memory and processing power improve the fidelity of the simulation each and every step of the way. When it comes to algorithms, emulation is much more of an all-or-nothing prospect.

    Indeed, I'm highly skeptical of aridas sofia's premise that it might be possible for the simulation of a brain to assume the properties of consciousness, simply because I am highly skeptical that a simulation of a brain will ever be accurate enough. If the psyche is a finite discrete process running on the wet-ware of the brain, then perhaps the psyche itself can be simulated accurately, in which case it would be fair to say it was emulated. But if the psyche is not a finite discrete process, and indeed there's more than a little doubt that it is, then it's far from certain that the psyche could be accurately simulated. But none of this, or anything, is sufficient reason to rule out aridas sofia's idea outright. Despite my doubts and skepticism, his idea is intriguing in its conceptual simplicity.

    And here, you've completely misunderstood what I meant. A game may be a simulation of something else, such as real war, but the game of Call of Duty would be the game itself in question, and being a finite discrete process it can be perfectly emulated. Indeed, Call of Duty runs on a variety of platforms. As Robert Maxwell pointed out, perfect mastery of Call of Duty would imply partial mastery of similar behaviors to Call of Duty game play, perhaps even extending to real war, but I never said otherwise and nor did I attempt to apply Call of Duty game play to any pattern of behavior that wasn't specifically Call of Duty itself, and ditto for all other games. I merely thought that game play was a convenient example to illustrate the behavioral aspects of algorithms. I actually thought that was obvious.

    Robert Maxwell adequately criticized this.

    Actually no one in this thread has proposed a hard definition of consciousness that I've seen. Can you, or anyone, refer me to a post that does so, that I must have missed?

    I agree. Humans are the prototype, as it were. Defining consciousness is an exercise in characterizing what we believe to be an essential aspect of the human experience. Check.

    As I said, I'm skeptical that a simulation of a brain is going to be accurate enough to simulate a psyche at all. But I'm going to reserve judgment about such simulations until I see specific examples before me. Certainly there are a great many problems to overcome, and as I said, I'm highly skeptical.

    However, it is worth noting that there are many similarities between your objections here and discussions in Western philosophy pertaining to predestination and free will. Even if you take those discussions as evidence against strong determinism and as evidence necessitating a stochastic aspect to physical theories, which in and of itself isn't unreasonable and which is pretty reasonable given the stochastic nature of quantum mechanics, it is nevertheless a fact that there are known discrete and deterministic pseudorandom number generators that are sufficiently random for many practical applications. It is therefore not unreasonable to hypothesize that even a deterministic algorithm, one that doesn't even seem to make any choices once you know the seeds of all embedded pseudorandom number generators, might exhibit behaviors that are indistinguishable from a person's, even to the point of controlling a lifelike android to that effect. While this evidently wouldn't meet your criteria for consciousness, it nevertheless might meet the criteria of other people who are concerned more with provable functional indistinguishability than you seem to be.
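
    As a rough sketch of that last point (Python; the seed, actions and trace length are hypothetical): a seeded pseudorandom generator makes no "choices" at all, since rerunning it with the same seed reproduces the same sequence exactly, yet its output passes casual inspection as spontaneous.

    [CODE]
    import random

    def behavior_trace(seed, steps=5):
        rng = random.Random(seed)  # fully determined by the seed
        actions = ["wave", "speak", "pause", "look left", "look right"]
        return [rng.choice(actions) for _ in range(steps)]

    print(behavior_trace(42))  # looks spontaneous...
    print(behavior_trace(42))  # ...but is exactly reproducible
    print(behavior_trace(7))   # a different seed, an equally "lifelike" trace
    [/CODE]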

    There are a variety of reasonable objections to this. For one thing, as you haven't posited the behavior of inputs that could detect the lag, for that reason alone, there is certainly no basis for concluding that the machine should exhibit any specific sort of behavior. Secondly, in the context of a machine that would have discrete instruction cycles, and moreover that would likely have software consisting of compiled machine code, the word immediately is both loaded and imprecise. Third, there are a variety of "attacks" that could be executed by a malicious entity intending to "prove" by this standard that a machine is not genuinely conscious. One that immediately comes to mind is to interfere with all of the machine's sensors so that there are no means for the machine to determine that its responses were lagging and most especially for how long they were lagging. The cancellation of visual input could be easily effected by turning off all the lights or by directing so much light as to blind, sufficiently loud noise could overload the aural sensors, and doubtless other sensory deprivation techniques could be applied to the other senses. Fourth, it's worth noting that people can become disoriented and have impaired coordination under a variety of circumstances, such as being subjected to "fun house" effects including strobe lights, trick mirrors, and other optical illusions, such as being under the influence of psychotropic drugs, and such as being subjected to prolonged sensory deprivation itself. Since people often don't "immediately" recover from such causes of disorientation, why hold a machine to a higher standard? Indeed, and fifth, your prescription that the machine must do Y under condition X in order to be considered genuinely conscious leaves no room for anything like free will at all.
     
  11. Robert Maxwell

    Robert Maxwell memelord Premium Member

    Joined:
    Jun 12, 2001
    Location:
    space
    So that at least my position is clear, when I speak of "human consciousness," I mean the following combination of traits:

    * Self-awareness.
    * A capacity for reflection on personal experiences.
    * The ability to think symbolically/abstractly.
    * The ability to integrate different forms of intelligence for the purposes of problem-solving.

    Very, very few animals meet those criteria. I assumed by "consciousness" we weren't being so pedantic as to literally mean "wakefulness," but since we are lacking a common definition, I offer the above. Cetaceans seem to generally meet the above conditions, too, as well as elephants. Some primates also do, but not all. Beyond that, very few animals would qualify.

    Now, which of the above can a computer satisfy today?

    * Self-awareness? No. A computer has no ego and no sense of an "I". It has no conception that it exists.
    * A capacity for reflection on personal experiences? Sort of. Computers can indeed monitor a whole host of sensors and other inputs and try to determine what they mean (or rather, how to respond to them), to the extent they are programmed to do so.
    * The ability to think symbolically/abstractly? If we give a pass to computation as "thinking," then everything computers do is abstraction. A camera attached to a computer, for instance, does not see a person's face. Instead, the camera takes a digital image of whatever it is facing, and that image is processed to determine whether it meets the programmed criteria for recognizing a human face. Would we consider this symbolic/abstract? Maybe, maybe not. However, one thing a computer can't do is take something it's never encountered before (or never been programmed to deal with) and reduce it to anything it can comprehend. As a very pedantic example, say a computer knows how to read PNG images but not JPEGs. Throw it a JPEG, and if it's not been programmed to identify a JPEG, it won't know what to do with it. It could tell you what all the bits are and represent them in a myriad of ways, but without being armed with the JPEG decoding algorithm, it will never be able to figure out that it's just working with another kind of image.
    * The ability to integrate different forms of intelligence for the purposes of problem-solving? This is where a computer really falls down because we've only just begun being able to do this with computing technology. Using my example above, say a computer is programmed with the ability to identify every image format that exists. Now, you could feed it any image format and it would be able to tell you what it is and then do whatever kind of analysis (face recognition, etc.) it's set up to do. But what if you started throwing it random streams of partial image files? Without appropriate headers, could it then identify what kind of image it's dealing with, if it only has part of one? Maybe, maybe not. What if it's given a new image format? Let's say it's something novel that looks a little like JPEG, a little like PNG, a little like GIF. Now what? Unless it is able to integrate its knowledge of all image formats into a general concept of what images "look like" in binary streams, it will be completely clueless. It simply cannot solve the problem. It takes a human to concoct that kind of logic, and if the computer encounters a situation the human programmer didn't account for, it's going to stop cold (or do the wrong thing, depending.) Most of the failure here is the result of computers lacking a good mechanism for improving their own code--which, again, humans would have to program into it, and we understand how to do that rather poorly at this point in time. Self-modifying code exists as a way for viruses to shield themselves and for a few other trivial applications. Other than that, I'm aware of some trivial/toy implementations or things that only apply to very narrow problem domains, nothing particularly generalizable. (A rough sketch of this limitation follows below.)
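
    As that rough sketch (Python; the signature table is a small hypothetical subset, not a complete list): identification works only for signatures the program was given in advance, and a truncated stream or a novel format falls straight through to "unknown", because the program has no general concept of what an image "looks like".

    [CODE]
    # File-type "knowledge" is just a table of magic bytes supplied up front.
    KNOWN_SIGNATURES = {
        b"\x89PNG\r\n\x1a\n": "PNG",
        b"\xff\xd8\xff": "JPEG",
        b"GIF87a": "GIF",
        b"GIF89a": "GIF",
    }

    def identify_image(data):
        for magic, fmt in KNOWN_SIGNATURES.items():
            if data.startswith(magic):
                return fmt
        return "unknown"  # it cannot reason its way to an answer

    print(identify_image(b"\x89PNG\r\n\x1a\n" + b"..."))  # 'PNG'
    print(identify_image(b"\xd8\xff" + b"..."))           # 'unknown': a partial JPEG stream
    print(identify_image(b"NEWIMG1.0" + b"..."))          # 'unknown': a novel format
    [/CODE]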

    All this is to say that, while there is no technical limitation to giving computers human-like consciousness (or at least the illusion of it), there are some pretty serious human limitations. Human consciousness is extremely complex to model (so much so that no one's come close to doing it), and that's working from a likely faulty assumption that human-like consciousness would make any sense on digital hardware. It seems more likely that there is a special arrangement of transistors, memory, and other hardware and software that might enable an actual "machine consciousness," but we have utterly no idea what that would look like.
     
  12. borgboy

    borgboy Commodore Commodore

    Joined:
    Sep 3, 2005
    That sounds horrific. It could very well feel like being blind, deaf, mute and paralyzed. I wouldn't want my brain uploaded into a computer even if it was just a copy, not for anything in the world.
     
  13. aridas sofia

    aridas sofia Rear Admiral Rear Admiral

    Joined:
    May 3, 2002
    CorporalCaptain, to be clear, I think you're right and accuracy - modeling Casimir forces - would be the stumbling block to modeling a brain. But I have no basis for that belief. I simply am trying to find a basis for the indivisibility argument. If mind and body exist together and generate consciousness, it would seem that an inability to generate consciousness from a model of mind and body would probably be attributable to missing information.

    However, ask me if I think a galaxy-spanning, type III Kardashev computer might be able to do the things we are discussing, and I'd shrug my shoulders and say, "probably". And of course, there is nothing I know of to say that hasn't already happened in the nearly 14 billion year life of the known universe. So sure, we might be the very proof we seek. But all this is speculation based on probability and observation.
     
  14. YellowSubmarine

    YellowSubmarine Vice Admiral Admiral

    Joined:
    Aug 17, 2010
    On the other hand, your copy could be held in storage until they figure out a way to give you realistic eyes, ears, mouth, limbs and Borg implants.
     
  15. Crazy Eddie

    Crazy Eddie Vice Admiral Admiral

    Joined:
    Apr 12, 2006
    Location:
    Your Mom
    That's not even a theory, it's a metaphysical/philosophical concept, and as such it is not a concept I am addressing in any way, shape or form.

    I am assuming -- and it is a VERY safe assumption -- that the human mind is an emergent property of the complex interaction of nerves, cells and chemicals in the human brain. I assume this because it is literally the ONLY viable scientific explanation for where the mind comes from or what generates it; all competing explanations originate from religion and/or mysticism and are not useful to consider in this context.

    That goes without saying. The problem is that you still have to have the PARTS in order for the whole to exceed their sum. A simulation of a brain doesn't contain the parts, it merely calculates what they are probably doing.

    Because in the real world, the behavior of each component -- neuron, brain sector, gland, etc. -- is influenced by the behavior of its neighbors, and the system is governed by varying degrees of interconnectivity.

    In the simulation, the behavior of each component is determined separately by the computer, based on an algorithm that calculates the state of the simulation from one moment to the next. The simulated sectors DON'T influence each other, they merely APPEAR to because the computer is making them do it.

    I've said it again and again that resolution is inconsequential. No matter how accurate the simulation is, the fact remains that no PART of the simulation does any processing nor does any genuine interactivity take place. Consciousness cannot emerge from the simulation because the simulation is purely an effect and has no causal power to generate anything.

    Repeating yet again: resolution is inconsequential. The simulation is being generated entirely as output by an external source; it has no intrinsic existence of its own and nothing can emerge from it that isn't also generated by the computer. Even down to the Planck scale, the simulation is still the OUTPUT of a computer and not a functional system capable of processing data on its own.

    The computer ITSELF may actually be conscious, and inasmuch as it may choose to inject its own thoughts and motivations into the simulation, the simulation may indeed exhibit genuine consciousness. But I say again, the simulation ITSELF cannot be the source of consciousness UNLESS it is set up in such a way that it can interact with itself and its environment independently of whatever computer/algorithm generated it (at which point it ceases to be a simulation).

    I don't remember claiming NOT to be a computer model. Whether I am or not is irrelevant to this discussion.
     
  16. Santaman

    Santaman Vice Admiral Admiral

    Joined:
    Jul 27, 2001
    Location:
    Tyre city
    http://www.archi-ninja.com/worlds-quitest-room-the-insane-sound-of-silence/

    Without sensory input you will go insane.

    Now imagine you lose the rest as well?
    No sight, no smell, no sound, not even feeling anything.


    As for brain and body, it is ONE total system; you can't upload just a brain, you would need to upload every aspect of an entire human being to utter perfection.

    As for computers and brains, they are incompatible. The brain isn't a digital device; it doesn't have anything in common with how a computer works. You can't calculate anything, since the brain doesn't either: the brain/body isn't calculating that your blood sugar is 0.05% off or that your blood pressure is too high by 1.8%. Those are all dealt with by intricate systems which report stuff in a few electrical signals or chemically. Your body provides input all the time, and it also regulates behaviour: your stomach is empty, so it sets in motion a whole range of signals which aren't even picked up by "us". We don't deal with that; the central nerve center does that for us, but we do feel the discomfort.

    A computer can't grasp something as simple as warmth (you know, what a summer day feels like). It can only tell you that 25°C is "nice" because its program tells it so, not because it feels nice; the machine is simply being told that 25°C is nice.

    As for consciousness, well, same thing, its not only the brain, just smash your toe into the steel leg of your bed and you know what I mean. ;)
     
  17. Crazy Eddie

    Crazy Eddie Vice Admiral Admiral

    Joined:
    Apr 12, 2006
    Location:
    Your Mom
    The reason I don't think that's likely is that artificial consciousness isn't actually all that useful in most computing tasks, and so setting out to deliberately create it is not a practical use of AI researchers' time. On the other hand, there are aspects of consciousness that ARE useful for certain tasks, and those are likely to be developed piecemeal over the course of years. AIs would gradually acquire more and more of these mental components as humans ask them to solve more and more abstract problems, and this would eventually lead to AIs that can think about the world in an abstract way, gaining machine-consciousness.

    It's not that it would be accidental -- every step in the way is a deliberate one -- but much like climate change and the Stock Market Crash, it's also not something scientists would set out to create on purpose.

    I'm going to stop you there: I have not granted that PROGRAMS can be conscious. Far from it, I continue to state that consciousness can very well arise from a computer system under the right circumstances. Consciousness, however, is not software, nor can it be reduced TO software, since many of the components that make consciousness can only exist in the context of a healthy functioning brain.

    Which, IF THAT WERE TRUE, would only apply to AIs. And that again is assuming that a piece of software in and of itself can actually be conscious across a vast diversity of hardware.

    That still would not apply to a simulation of HUMAN consciousness, however, since the simulation is the product of the AI's calculations and not a product of the simulation's interactions with itself. In that particular case, the only conscious presence in the simulation is actually the AI that is running the emulator; the AI "knows" that it is not really SimEddie but is only replicating SimEddie's responses.


    It seems VERY clear to me the idea can be ruled out primarily because "consciousness" and "brain activity" are not interchangeable concepts. Most conscious processes are recursive and self-organizing, and it is that activity -- operating upon itself -- that is experienced by a PERSON when they experience consciousness. While a simulation could capture all of that activity with pinpoint accuracy, there's no one in the simulation capable of EXPERIENCING it and so the data remains data and nothing more.

    It isn't so much about computer science as much as it is about phenomenology. You can simulate "pain" easily enough just by modeling the activity of pain receptors in a highly accurate model of a human body, but you cannot make the computer "feel" pain. This is a deal-breaker when it comes to consciousness, because consciousness can ONLY be experienced from the point of view of a conscious person, while outside observers can only witness the external behaviors that arise from it.

    I did a couple of pages ago. I assume it wasn't "hard" enough for your liking or you overlooked it.

    The philosophical aspect of this discussion is interesting, but it is not what I am attempting to explore here.

    Behavior is irrelevant; THAT can be modeled easily enough without having to model the brain at all.

    My point is that the only entity in the simulation that can "experience" that behavior from any point of view is the machine that is running the simulation in the first place. The machine could very well be conscious in and of itself, but the simulation is not.

    If the goal is to create the APPEARANCE of consciousness, that too is a much lower bar. AIs would be able to achieve that probably in another couple of decades.

    Without knowing anything about the machine, it would be possible to determine that (it would depend on how the AI is actually set up). I'm saying this is the basic form of how an experiment could be conducted: you do something to the simulation that could only be interpreted BY the simulation -- e.g. break its stream of consciousness for several seconds -- and see how it reacts. If the AI contains an algorithm to imitate the behavior of "spaced out for a few seconds" in order to cover its ass, that part of the algorithm would manifest instead of the simulation's "natural" reaction; either way, you now have a controlled experiment for the origin of the simulation's behavior.

    Since you actually control the AI in the experiment, you could just program the computer to emit a very loud high-pitched beep when it notices an interruption in the I/O channels. If the reaction is computer-generated, you will hear a beep. If the reaction is conscious, you will hear a Keanu Reeves "Whoa."

    But we're speaking in the context of a conscious being, not in the context of machine code. In which case "immediately" is distinct from "eventually."

    Which is NOT what we're discussing, if the objective is a controlled experiment to test for the genesis of the simulation's behavior. The person conducting the experiment could very well cheat and get the results he wants, but that would tell us nothing.
     
    Last edited: Feb 22, 2015
  18. Awesome Possum

    Awesome Possum Moddin' Admiral

    Joined:
    Mar 13, 2001
    Location:
    Earth
    I highly doubt that, if it were possible to upload a mind, it would be like being in a sensory deprivation tank. We know that in those cases, your brain just starts making stuff up to sense. I think they would have to figure out a way to simulate the senses in a virtual world (like the Matrix) or put you in a synthetic body (like in Ghost in the Shell).

    Otherwise it would just be boring.
     
  19. Crazy Eddie

    Crazy Eddie Vice Admiral Admiral

    Joined:
    Apr 12, 2006
    Location:
    Your Mom
    Those are two really good examples, IMO, because in both cases the technology is actually the augmentation and integration of a biological brain with digital technology. In the Matrix, the brain is being connected to a virtual environment that basically intercepts all the input/output between the brain and the rest of the body, so only the body itself is simulated. The AI programs that the humans fight supposedly play by different rules, since they are (supposed to be) almost entirely software constructs. OTOH, fridge logic suggests there isn't a single scene in any of the Matrix movies that DOESN'T take place in a virtual environment, so the machine intelligences the humans interact with could be plugged into the Matrix too (a discussion for a whole different thread).

    Even the cyberbrains in "Ghost in the Shell" are really just organic brains fitted with a digital interface and enclosed in robust, semi-portable cases. It is explicit in this case that a "ghost" cannot actually be stored digitally, nor can it be "simulated" in an android, and that only humans/cyborgs that are proven to have a "ghost" are given basic civil rights.
     
  20. Metryq

    Metryq Fleet Captain Fleet Captain

    Joined:
    Jan 23, 2013
    All this debate over a typo. The original heading should have been:

    "The scientist planning to upload his 'brane to a COMPUTER"—a cosmic string model, and a computer is the only place where such an imaginary beast could exist.