Sorry for the delayed reply. Been busy.
I'm going to stop you there: I have not granted that PROGRAMS can be conscious.
Nor did I say that you did. To remind you, what I was commenting on was this hypothetical statement that
you had made:
To be sure, even a simulation of AI consciousness would not itself be conscious.
That translates to: "Even if an AI were conscious, a simulation of that conscious AI would not be conscious." Clearly, you were entertaining the possibility of a conscious AI, even if only to try to assert a fundamental distinction between a thing and a simulation of it. But, to recap, there you made a fundamental error, because finite discrete processes such as running computer programs can be simulated exactly, by the process of emulation. This exact kind of simulation is not something that can—as far as we know—be done for arbitrary physical processes. And an AI is a kind of computer program, by definition.
So, the hypothetical statement of yours that I re-quoted just above is false. That does not mean that there is a conscious artificial intelligence. It does mean, however, that emulations (and therefore certain kinds of exact simulations) of conscious artificial intelligences would themselves be conscious, given that by hypothesis the program being emulated is conscious. It simply would not be possible to have a conscious artificial intelligence that lacked this property.
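Since the notion of exact emulation keeps coming up, here is a toy sketch of what it means. Everything in it (the "machine," its instruction set, the function names) is my own illustrative invention, not anything from the thread: two different implementations of the same discrete machine produce bit-identical state traces. That is the sense in which emulation is exact, and it has no known analogue for arbitrary continuous physical processes.

```python
# Toy discrete "machine": state is an accumulator; the program is a
# fixed list of (opcode, argument) pairs executed cyclically.
PROGRAM = [("add", 3), ("mul", 2), ("add", -1)]

def run_native(acc, steps):
    """Run the machine directly, with the semantics hard-coded."""
    trace = [acc]
    for i in range(steps):
        op, arg = PROGRAM[i % len(PROGRAM)]
        acc = acc + arg if op == "add" else acc * arg
        trace.append(acc)
    return trace

def run_emulated(acc, steps):
    """Emulate the same machine through a dispatch table -- a different
    implementation, but with identical (not approximate) semantics."""
    dispatch = {"add": lambda a, x: a + x, "mul": lambda a, x: a * x}
    trace = [acc]
    for i in range(steps):
        op, arg = PROGRAM[i % len(PROGRAM)]
        acc = dispatch[op](acc, arg)
        trace.append(acc)
    return trace

# The traces agree exactly, step for step, not merely approximately.
assert run_native(1, 12) == run_emulated(1, 12)
```

The point of the sketch is only this: because the process is finite and discrete, the emulator reproduces every intermediate state exactly, which is precisely what cannot be guaranteed when simulating an arbitrary physical process.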
Far from it, I continue to state that consciousness can very well arise from a computer system under the right circumstances. Consciousness, however, is not software, nor can it be reduced TO software, since many of the components that make consciousness can only exist in the context of a healthy functioning brain.
It's unfortunate that you again betray a lack of comprehension of what constitutes a "computer system." If you are saying that consciousness can arise
in a computer system, then what you are saying implies that consciousness can be an attribute of software. If you don't mean to be talking about software, you really shouldn't be talking about either computer systems or artificial intelligences at all. Or, you must be talking about something besides a modern digital computer, perhaps some sort of analog computer, hypothetical quantum computer, or something that you've made up in your imagination. Because when someone simply says "computer," it is reasonably taken to mean something of a very specific kind, namely a digital computer whose operation is—and I'll say it again—ultimately describable as a finite, discrete process, malfunctions notwithstanding.
Which, IF THAT WERE TRUE, would only apply to AIs. And that again is assuming that a piece of software in and of itself can actually be conscious across a vast diversity of hardware.
Strangely enough, you got it right this time.
That still would not apply to a simulation of HUMAN consciousness, however, since the simulation is the product of the AI's calculations and not a product of the simulation's interactions with itself.
Well, here you've reiterated something that you've hammered on repeatedly throughout the thread, and it deserves refutation. Upthread, you said the following, which I'll take to be two earlier iterations of the same idea:
This is because the simulated brain doesn't actually do anything that influences its future state, the computer running the simulation does.
What I'm saying is that for the simulation to be conscious, the outputs from the simulator would have to be able to REALLY interact with each other in a way that is both fully consistent with the original and external to the processor. You can't just SIMULATE their interactions by imposing those states from an algorithm, they would have to arise "naturally", directly from each other. A simulation can't do that, EVERYTHING that it does is generated from a single source and there is no interactivity between its components.
To summarize, you seem to be claiming that a simulation can never behave as a natural process does, because the behaviors of the simulated components are dictated entirely by an algorithm rather than by natural interactions with each other.
Now, it may be true that a simulation of a certain natural process can't behave exactly as the natural process itself does, but it is unscientific to ascribe that divergence to the simulated behaviors all being determined by a single algorithm instead of by mutual interactions, as I'll now show.
By definition, a simulation of a natural process is an approximation of that process, made relative to all known natural laws that the process evidently obeys.
The form of the algorithm is dictated by the mathematical formulations of those laws and the method of approximation admitted for the purpose of simulation. Therefore, and assuming that the simulation is bug-free, there are only two reasons, either singly or in combination, by which the simulation can differ from natural behavior:
- The natural process does not comport exactly to the assumed natural laws. This itself may occur for either of two reasons, either singly or in combination:
- One or more of the assumed natural laws does not in fact hold for the natural process.
- There are additional unknown natural laws that influence events.
- The method of approximation employed to solve the implied system of differential equations is not exact enough. (Remark: I'm admitting, as a part of the so-called "method of approximation," simplifying assumptions such as neglecting higher-order effects which would encompass ignoring details such as minor deviations from symmetric configurations. As is well known, such simplifying assumptions generally reduce the computational workload at a cost of accuracy.)
The simulation itself being an algorithm is not on the list of reasons why the simulation might diverge from natural behavior. If the simulation being an algorithm is a problem, then the problem must arise from one of the causes listed above rendering an accurate algorithm impossible: for example, there might be an infinitude of natural laws having significant effect that cannot be summarized in a finite conjunction of finite expressions, one or more of the natural laws might have a form that does not admit accurate approximation, and so on.
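To make the second cause on my list concrete, here is a minimal sketch (the decay example and names are mine, chosen only for illustration) of simulating exponential decay, dx/dt = -x, with the simplest approximation method, explicit Euler. Notice that the divergence from the exact solution comes entirely from the coarseness of the method: refine the step size and the error shrinks, even though a single algorithm still dictates every state.

```python
import math

def euler_decay(x0, t_end, n_steps):
    """Approximate the solution of dx/dt = -x at t_end using
    explicit Euler steps -- the crudest standard method."""
    dt = t_end / n_steps
    x = x0
    for _ in range(n_steps):
        x += dt * (-x)  # one Euler step of the assumed natural law
    return x

exact = math.exp(-1.0)                  # true value of x(1) for x0 = 1
coarse = euler_decay(1.0, 1.0, 10)      # crude approximation
fine = euler_decay(1.0, 1.0, 10_000)    # refined approximation

# The error is a property of the approximation method, not of the
# process being algorithmic: refining the method reduces it.
assert abs(fine - exact) < abs(coarse - exact)
```

If "being driven by a single algorithm" were itself the source of the divergence, refining the step size could not systematically close the gap; but it does, which is the behavior my list of causes predicts.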
Although it's perhaps philosophical to say, I think the following is worth noting. The idea that an infinitude of natural laws have significant effect that cannot be summarized in a finite conjunction of finite expressions is not only a fairly precise mathematical statement; it could also reasonably be argued to correspond to the intuitive idea that there is an ineffable essence to physical reality that cannot be described or even comprehended by humans. Working it the other way, the opposite idea, that all natural laws can be summarized in a finite conjunction of finite expressions, would seem to admit both the description and the comprehension of all natural laws by humans, at least in principle, most especially once the right conjunction of finite expressions has been hit upon. (Remark: I mention this,
Crazy Eddie, because I believe that it indicates possible circumstances under which what you evidently believe in would hold, whereby simulations would
always ultimately fall short of reality. This is in contrast to the explanation that you give for that behavior, which as I said I do not believe to be sound.)
Actually no one in this thread has proposed a hard definition of consciousness that I've seen. Can you, or anyone, refer me to a post that does so, that I must have missed?
I did a couple of pages ago. I assume it wasn't "hard" enough for your liking or you overlooked it.
If you'd like me to comment on your definition, could you link to it, or better yet quote it, please?
Which is NOT what we're discussing, if the objective is a controlled experiment to test for the genesis of the simulation's behavior. The person conducting the experiment could very well cheat and get the results he wants, but that would tell us nothing.
My point was that the standard you're proposing here is itself a cheat, given that even people seem to have comparable vulnerabilities.