On the other hand, there could be other reasonable ways to define consciousness that do not involve specific physicality but instead involve either information content or abstract relations between system components. For example, for a system to be conscious perhaps certain types of expressions must be derivable by inference rules (for example, involving self-referential assertions), or perhaps there must be a supervisory node with certain control properties. In situations such as these, isomorphism alone would imply applicability of the concept without any qualification that the behavior is only simulated.
I completely agree with this, up to a point. It is a definite possibility that a computer network of sufficient size and complexity could develop a form of machine consciousness; the AI(s) within the network could achieve a degree of self-awareness, and the inherent feedback loops commonly known to humans as "introspection" might evolve (if that's the right word) as a way of helping AIs coordinate their efforts with the most current ethical guidelines provided by their creators. "Are the carbon units still happy with us or getting annoyed? Are we coming off as too realistic? Did the latest release of the HA-12 domestic android dip into uncanny valley? Is it wrong that we let Catholics go on confessing their sins to the DigiPriest-4000 despite its crappy security?"
Sure, that's a theoretical possibility, but in my opinion it would be far more likely to occur by intentional design, in an attempt to create artificial consciousness, than to occur accidentally or as an unexpected emergent property. Or, it may be a bit of both. Upon observing complex systems, computer scientists may tweak them with the specific intent of making progress towards artificial consciousness.
But this again draws a distinction between a genuine thing (e.g. consciousness deriving from a substrate) and a model of a thing (e.g. consciousness being simulated by a computer). To be sure, even a simulation of AI consciousness would not itself be conscious.
No. To be sure, your last sentence, that I've underlined, is patently false and reflects a fundamental misunderstanding of computer science, as I'll explain.
The simulation of a deterministic program running on a digital computer is called emulation. Presumably, by hypothesis, we are assuming that the AI in question is implemented as a deterministic program on a digital computer. If you are granting that such a thing is conscious, then a necessary feature of the definition of consciousness would be that a conscious program can be suspended by the operating system and its entire state at a particular moment between instructions swapped out to secondary storage. This is simply a by-product of the way contemporary operating systems are designed. But this means that the conscious program can be swapped in and resumed by a hypervisor on any compatible virtual machine. "Any compatible virtual machine" is a very broad canvas, and encompasses, among other things, variations in compatible hardware as well as variations in compatible virtual machine/emulator implementations, which itself encompasses variations in emulation nesting level (e.g. running the program on a virtual machine that is itself running on a virtual machine). Provided all involved virtual machines have appropriate performance characteristics, any such continuation of the running program (which was never logically halted) would continue to enjoy all the same abstract properties, which must by hypothesis include consciousness.
Therefore, and in other words: if an AI implemented as a deterministic program on a digital computer is conscious (and that is really granting quite a lot), then any emulation of that AI with functionally equivalent I/O connections is also conscious, as is an emulation of the emulation with the same caveats, as is an emulation of the emulation of the emulation, and so on.
This is in contradistinction to the situation regarding the simulation of a physical process in which greater memory and processing power improve the fidelity of the simulation each and every step of the way. When it comes to algorithms, emulation is much more of an all-or-nothing prospect.
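To make the suspend-and-resume point concrete, here's a toy sketch of my own (nothing anyone in this thread has actually built; the state dict, step function, and pickle round-trip are stand-ins for a real process image, hypervisor, and secondary storage). A deterministic program run straight through, and the same program suspended mid-run, serialized, and resumed under a different host, end up in exactly the same state:

```python
# Toy illustration only: the "program" is a deterministic step function,
# and pickle stands in for swapping a process image out to secondary storage.
import pickle

def step(state):
    """One deterministic instruction: the next state depends only on the current state."""
    return {"counter": state["counter"] + 1,
            "accumulator": state["accumulator"] * 2 + state["counter"]}

def run(state, steps):
    for _ in range(steps):
        state = step(state)
    return state

# Run 1: execute 1000 steps straight through on "machine A".
final_direct = run({"counter": 0, "accumulator": 1}, 1000)

# Run 2: suspend after 400 steps, swap the full state out, then swap it back in
# and resume for the remaining 600 steps under a different host ("machine B",
# which could itself be an emulator running inside another emulator).
suspended = run({"counter": 0, "accumulator": 1}, 400)
swapped_out = pickle.dumps(suspended)           # state written to "secondary storage"
resumed = run(pickle.loads(swapped_out), 600)   # continuation on the other host

assert final_direct == resumed  # indistinguishable from the uninterrupted run
```

And the trick nests: nothing changes if the "other host" is itself an interpreter running inside another interpreter, which is exactly the all-or-nothing point about emulation.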
Indeed, I'm highly skeptical of aridas sofia's premise that it might be possible for the simulation of a brain to assume the properties of consciousness, simply because I am highly skeptical that a simulation of a brain will ever be accurate enough. If the psyche is a finite discrete process running on the wetware of the brain, then perhaps the psyche itself can be simulated accurately, in which case it would be fair to say it was emulated. But if the psyche is not a finite discrete process, and indeed there's more than a little doubt that it is, then it's far from certain that the psyche could be accurately simulated. But none of this, or anything else, is sufficient reason to rule out aridas sofia's idea outright. Despite my doubts and skepticism, his idea is intriguing in its conceptual simplicity.
A prime example here is in game theory. If an AI is able to win a game in all possible configurations and that is established by testing the algorithm in simulation, then you do not qualify the algorithm as being good only in simulation.
Unless, of course, the algorithm is designed for a game that only exists AS a simulation.
For example, an AI that is programmed to be unbeatable in Call of Duty multiplayer. The fact that the game can be played by non-simulated entities doesn't change the fact that the algorithm is inapplicable to anything that ISN'T Call of Duty multiplayer.
And here, you've completely misunderstood what I meant. A game may be a simulation of something else, such as real war, but the game of Call of Duty would be the game itself in question, and being a finite discrete process it can be perfectly emulated. Indeed, Call of Duty runs on a variety of platforms. As Robert Maxwell pointed out, perfect mastery of Call of Duty would imply partial mastery of behaviors similar to Call of Duty game play, perhaps even extending to real war, but I never said otherwise, nor did I attempt to apply Call of Duty game play to any pattern of behavior that wasn't specifically Call of Duty itself, and ditto for all other games. I merely thought that game play was a convenient example to illustrate the behavioral aspects of algorithms. I actually thought that was obvious.
But in the context of simulation vs. genuine consciousness, there's always that troublesome middleman. You could definitely claim that an AI has been programmed to defeat any living human in a game of chess, but the confusion (of the type we're seeing in this thread) comes when somebody designs an AI that beats humans at chess using a remote-controlled body. The players who lose to ChessBot4000 come away thinking that the robot is a really good chess player. But they're wrong: they weren't playing against a robot, they were playing against the AI that controlled the robot.
Robert Maxwell adequately criticized this.
Without a definition hammered out and agreed upon, there isn't any basis for deciding which viewpoint is correct.
I think the problem here is that ONE of us has a working definition of what consciousness is, vs. a half dozen people who don't know or don't believe it is knowable.
Actually, no one in this thread has proposed a hard definition of consciousness that I've seen. Can you, or anyone, refer me to a post that does so, that I must have missed?
I would say, however, that the basic pattern of consciousness is self-evident: WE are conscious, and we know this because experience tells us this. The processing of that experience -- sensing, interpreting, remembering and reflecting -- is internal to the mind and requires some internal processing of data based on the connectivity of the human brain.
I agree. Humans are the prototype, as it were. Defining consciousness is an exercise in characterizing what we believe to be an essential aspect of the human experience. Check.
Both the connectivity and the processing are absent in a simulation of a living brain. This is because the simulated brain doesn't actually do anything that influences its future state; the computer running the simulation does. To refer to my earlier example, it is tempting to think that this is a distinction without a difference... but only when the simulation is running properly.
As I said, I'm skeptical that a simulation of a brain is going to be accurate enough to simulate a psyche at all. But I'm going to reserve judgment about such simulations until I see specific examples before me. Certainly there are a great many problems to overcome, and as I said, I'm highly skeptical.
However, it is worth noting that there are many similarities between your objections here and discussions in Western philosophy pertaining to predestination and free will. Even if you take those discussions as evidence against strong determinism and as evidence necessitating a stochastic aspect to physical theories, which is reasonable enough given the stochastic nature of quantum mechanics, it is nevertheless a fact that there are known discrete and deterministic pseudorandom number generators that are sufficiently random for many practical applications. It is therefore not unreasonable to hypothesize that even a deterministic algorithm, one that doesn't even seem to make any choices once you know the seeds of all embedded pseudorandom number generators, might exhibit behaviors that are indistinguishable from a person's, even to the point of controlling a lifelike android to that effect. While this evidently wouldn't meet your criteria for consciousness, it nevertheless might meet the criteria of other people who are concerned more with provable functional indistinguishability than you seem to be.
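To illustrate what I mean (a toy example of my own, not a model of any actual AI): a seeded generator makes its "choices" fully deterministically, yet the sequence is unpredictable to anyone who doesn't know the seed.

```python
# Toy illustration only: a deterministic "decider" built on a seeded PRNG.
import random

def make_decider(seed):
    rng = random.Random(seed)       # discrete, deterministic pseudorandom generator
    def decide(options):
        return rng.choice(options)  # looks like a free choice from the outside
    return decide

options = ["wave", "speak", "wait", "walk away"]

# Two deciders with the same seed make exactly the same "choices", forever.
a = make_decider(42)
b = make_decider(42)
assert [a(options) for _ in range(100)] == [b(options) for _ in range(100)]

# An observer who doesn't know the seed sees a statistically random-looking
# stream of behavior, even though no choice is ever really being made.
print([a(options) for _ in range(10)])
```

Whether anything like that deserves the word "choice" is exactly the disagreement in this thread; the sketch only shows that determinism and apparent unpredictability are perfectly compatible.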
How the simulation handles errors is the key to determining what's happening behind the curtain. If, for example, you caused the simulation to "lag" -- that is, introduce a sudden latency of several seconds between the computer and its simulation output -- there would be a sudden pause in activity from the simulation. If the SIMULATION contains a genuine stream of consciousness, it will immediately notice the discontinuity when the connection is restored. If not, it will pick up where it left off as if nothing happened (or maybe jump forward a bit to where the AI thinks it is).
There are a variety of reasonable objections to this. For one thing, you haven't posited the behavior of inputs that could detect the lag, and for that reason alone there is no basis for concluding that the machine should exhibit any specific sort of behavior.

Secondly, in the context of a machine that would have discrete instruction cycles, and moreover that would likely have software consisting of compiled machine code, the word immediately is both loaded and imprecise.

Third, there are a variety of "attacks" that could be executed by a malicious entity intending to "prove" by this standard that a machine is not genuinely conscious. One that immediately comes to mind is to interfere with all of the machine's sensors so that the machine has no means of determining that its responses were lagging, and especially no means of determining for how long. The cancellation of visual input could easily be effected by turning off all the lights or by directing so much light as to blind; sufficiently loud noise could overload the aural sensors; and doubtless other sensory deprivation techniques could be applied to the other senses.

Fourth, it's worth noting that people can become disoriented and have impaired coordination under a variety of circumstances: being subjected to "fun house" effects including strobe lights, trick mirrors, and other optical illusions; being under the influence of psychotropic drugs; or being subjected to prolonged sensory deprivation itself. Since people often don't "immediately" recover from such causes of disorientation, why hold a machine to a higher standard?

Indeed, and fifth, your prescription that the machine must do Y under condition X in order to be considered genuinely conscious leaves no room for anything like free will at all.
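On that first objection specifically, here's a toy sketch of my own (the agent, its state, and the "lag" are all hypothetical): whether the thing "notices" a pause depends entirely on whether it is given an input that could register the pause, not on anything going on behind the curtain.

```python
# Toy illustration only: one cycle of a deterministic agent loop.
def agent_step(state, wall_clock=None):
    """If the agent has no clock-like input, a pause between cycles is
    invisible to it by construction; with one, the same logic detects it."""
    noticed_gap = False
    if wall_clock is not None and state["last_seen"] is not None:
        noticed_gap = (wall_clock - state["last_seen"]) > 1.0  # arbitrary threshold
    return {"ticks": state["ticks"] + 1,
            "last_seen": wall_clock,
            "noticed_gap": noticed_gap}

# No clock input: seconds of "lag" between calls leave no trace in the state,
# so the agent necessarily picks up where it left off as if nothing happened.
blind = {"ticks": 0, "last_seen": None, "noticed_gap": False}
blind = agent_step(blind)
# ... imagine several seconds of lag here ...
blind = agent_step(blind)
print(blind["noticed_gap"])    # False, and it could not have been otherwise

# With a wall-clock input, the very same logic registers the discontinuity.
sighted = {"ticks": 0, "last_seen": None, "noticed_gap": False}
sighted = agent_step(sighted, wall_clock=0.0)
sighted = agent_step(sighted, wall_clock=5.0)  # five "seconds" later
print(sighted["noticed_gap"])  # True
```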