
Random Numbers

Re: Random Numbers

Personally, I believe that Penrose may be on the right track in his hypothesis that microtubules play a role in creating consciousness. I don't agree, however, that this precludes the possibility of creating an AI -- it just makes it more technologically difficult.

I think Penrose is onto something there, too - I've been trying to remember who came up with that - thanks for the post!

And I also agree with you that it doesn't preclude AI - Inman Harvey at Sussex has done some amazing things with genetic algorithms on Field Programmable Gate Arrays (FPGAs), demonstrating physical FPGAs solving problems for which they lack the requisite number of gates to solve classically. The FPGAs found ways to use the physical properties of the chip itself to solve problems - somehow. And some of that work was done more than 13 years ago.

http://www.cogs.susx.ac.uk/users/inmanh/Publications%20by%20Inman%20Harvey.html
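
To give a rough idea of what "evolving" hardware actually involves, here's a toy genetic-algorithm loop in Python. To be clear, this isn't anyone's actual code from Sussex - the fitness() function is a made-up stand-in for configuring a real FPGA and measuring how well it behaves - but the evolve/evaluate/select cycle is the core of the technique:

# Toy genetic algorithm over 100-bit "configurations", just to show the
# evolve/evaluate/select loop used in evolvable-hardware work.
# fitness() is a placeholder: in the real experiments the bitstring
# configures a physical FPGA and fitness is measured on the chip itself.
import random

GENOME_LEN = 100
POP_SIZE = 50
GENERATIONS = 200
MUTATION_RATE = 0.02

def fitness(genome):
    # Placeholder objective (count of 1-bits). The real question would be
    # "how well does the configured chip perform the target behaviour?"
    return sum(genome)

def mutate(genome):
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in genome]

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

def evolve():
    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[:POP_SIZE // 5]  # keep the top 20% as parents
        offspring = [mutate(crossover(random.choice(parents), random.choice(parents)))
                     for _ in range(POP_SIZE - len(parents))]
        population = parents + offspring
    return max(population, key=fitness)

best = evolve()
print("best fitness found:", fitness(best))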
 
Re: Random Numbers

Could you provide a more specific link to that research?

Check these for some interesting stuff on learning and complex interactions between environment and learning. Some of these are quite old - the Discover Magazine article is a good start:

http://discovermagazine.com/1998/jun/evolvingaconscio1453

http://www.cogs.susx.ac.uk/users/inmanh/brain_cog.pdf

http://www.cogs.susx.ac.uk/users/inmanh/ER2001.pdf

http://www.cogs.susx.ac.uk/users/inmanh/IzquierdoHarveyECAL07.pdf
 
Re: Random Numbers

Well, I haven't read any of those in detail, but from what I did see it sounds like it's just talking about genetic algorithms. Neat, certainly, and useful for approximating solutions to NP-Hard problems, but not likely to produce a true AI any time soon.
 
Re: Random Numbers


I wouldn't be too sure. In the original 1997 work (done by Adrian Thompson, Harvey's colleague at Sussex), some unusual things turned up:

1) He could use a GA to evolve a circuit that solved a problem, then copy the configuration digitally to another, identical FPGA, and it wouldn't work. Similarly, the evolved circuit wouldn't work in all of the different areas of the same FPGA.

2) He could evolve a circuit that worked, but would then fail if the temperature changed - for no apparent reason.

3) Some of the evolved circuits used fewer gates than should have been sufficient for the mathematical problems they solved. He also taught one to recognize two different tones - even though it lacked any sort of reference clock - which is a neat trick.

Somehow, these solutions used emergent properties of the silicon, rather than just the integrated logic gates, to compute. In other words, simple input/output logic gates produced non-linear responses that could not be predicted, which hints at the function of a neuron...
 
Re: Random Numbers

You are referencing very specific things here. I'll ask again for a link to the paper discussing these specific results. Not a list of related stuff: just the specific paper with these results.
 
Re: Random Numbers

OK - from the Discover interview with the two of them, which is the easiest place to find all of these things together:

http://discovermagazine.com/1998/jun/evolvingaconscio1453

I was looking for the best way of letting evolution loose on the electronics, he says. So it’s not told anything about what’s good and what’s bad or how it achieves the behavior. Evolution just plays around making changes, and if the changes produce an improvement, then fine. It doesn’t matter whether it’s changing the circuit design or using just about any weird, subtle bit of physics that might be going on. The only thing that matters to evolution is the overall behavior. This means you can explore all kinds of ways of building things that are completely beyond the scope of conventional methods. I allow evolution to write all the design rules.

With this laissez-faire philosophy, Thompson has evolved a circuit that distinguishes between two tones, two electric signals that, if fed into a stereo speaker, would produce two notes. One has a frequency of 1 kilohertz, the other 10 kilohertz. If you were going to hear them, says Thompson, they sound like medium high pitched and very high pitched. To make it difficult, he gave evolution only a tiny part of the fpga to play with—of the 4,096 cells (or logic elements) available on the chip, evolution would be allowed to use only 100, with no clock and no timing components. Thompson chose this particular problem because differentiating between two tones is a first step toward speech recognition, and because it is so difficult. The logic elements in the fpga work very quickly, on the scale of a billionth of a second, while the tones come with a frequency a million times slower, making the task something like trying to evolve a human who could tell the difference between a year and a decade while using only the second hand of his watch and without counting to himself as he did it.

...
Strangely, Thompson has been unable to pin down how the chip was accomplishing the task. When he checked to see how many of the 100 cells evolution had recruited for the task, he found no more than 32 in use. The voltage on the other 68 could be held constant without affecting the chip’s performance. A chip designed by a human, says Thompson, would have required 10 to 100 times as many logic elements—or at least access to a clock—to perform the same task. This is why Thompson describes the chip’s configuration as flabbergastingly efficient.

It wasn’t just efficient, the chip’s performance was downright weird. The current through the chip was feeding back and forth through the gates, swirling around, says Thompson, and then moving on. Nothing at all like the ordered path that current might take in a human-designed chip. And of the 32 cells being used, some seemed to be out of the loop. Although they weren’t directly tied to the main circuit, they were affecting the performance of the chip. This is what Thompson calls the crazy thing about it.

Thompson gradually narrowed the possible explanations down to a handful of phenomena. The most likely is known as electromagnetic coupling, which means the cells on the chip are so close to each other that they could, in effect, broadcast radio signals between themselves without sending current down the interconnecting wires. Chip designers, aware of the potential for electromagnetic coupling between adjacent components on their chips, go out of their way to design their circuits so that it won’t affect the performance. In Thompson’s case, evolution seems to have discovered the phenomenon and put it to work.

It was also possible that the cells were communicating through the power-supply wiring. Each cell was hooked independently to the power supply; a rapidly changing voltage in one cell would subtly affect the power supply, which might feed back to another cell. And the cells may have been communicating through the silicon substrate on which the circuit is laid down. The circuit is a very thin layer on top of a thicker piece of silicon, Thompson explains, where the transistors are diffused into just the top surface part. It’s just possible that there’s an interaction through the substrate, if they’re doing something very strange. But the point is, they are doing something really strange, and evolution is using all of it, all these weird effects as part of its system.

In some of Thompson’s creations, evolution even took advantage of the personal computer that’s hooked up to the system to run the genetic algorithm. The circuit somehow picked up on what the computer was doing when it was running the programs. When Thompson changed the program slightly, during a public demonstration, the circuit failed to work.

All the creations were equally idiosyncratic. Change the temperature a few degrees and they wouldn’t work. Download a circuit onto one chip that had evolved on a different, albeit apparently identical chip, and it wouldn’t work. Evolution had created an extraordinarily efficient, utterly enigmatic circuit for solving a problem, but one that would survive only in the environment in which it was born. Thompson describes the problem, or the evolutionary phenomenon, as one of overexploiting the physics of the chips. Because no two environments would ever be exactly alike, no two solutions would be, either.

... Not that the COGS folks were claiming anything as extraordinary as AI - in fact, they seemed reluctant to talk about it....

This brings us to the crucial question: Could you actually end up with a thinking machine as conscious as you or I?

It’s not so difficult to imagine how such a thing might happen. It would begin with the evolution of a series of circuits specialized to sift through and process more and more multimedia information. As the speed increased, along with the amount of information processed, evolution would happen upon and make use of sophisticated processing strategies, like anticipation, or an integrated control system that would ask, What should be done next? and coordinate all the computer’s thought processes to come up with an answer. In doing so, it would probably have to differentiate between all the myriad fpgas and circuits that constituted its self and the external world to which it had to react. Now, with a sense of self forming, it might even evolve a higher level of autostimulation; it might begin to use the language in which it communicated with its programmers to communicate with itself. The result might be not just a sense of self but the inner voice to go with it.

Of course, as Harvey points out, it makes little sense to ask whether such a machine is really conscious, because the only thing that matters—or at least the only thing that can be observed or tested—is whether it acts conscious. Still, it could be argued that this path from simple circuits to computational cognition and consciousness seems almost as inevitable as the path from single-celled organisms to you and me. But that’s the catch. Only the rare student of evolution believes that human consciousness was inevitable and not some perverse accident of nature. Harvey and Thompson consider the computer version of consciousness along the same lines. Evolution comes up with a lot of special-purpose tricks, says Harvey, but the special-purpose tricks of these machines may have very little in common with the special-purpose tricks that humans picked up during our own evolutionary history. Thus he and Thompson set the betting line at a definitive maybe: maybe we’ll eventually evolve a machine that appears as conscious as a human and can’t be fooled into betraying its silicon soul. But even if we do, it won’t be soon. As Harvey says, Evolution isn’t any swift magic. The more components required and the more connections among them (making them more like real neurons), the longer it would take to run the genetic algorithms. So perhaps evolving a conscious machine would be possible, says Thompson, when he finally agrees to discuss it, but it would probably take a hell of a long time. More to the point, he adds, I don’t think we know how to do it, even if we didn’t have the problem of us dying during the experiment.

Again, their point was not to try to make weird things happen - they were trying to do quite the opposite. They expected to understand the algorithms they found, but they didn't. These were just odd things they noted on their way to trying to make more efficient algorithms.

Because Thompson seems to be a pragmatist at heart, he has temporarily given up on evolving chips that could do anything fancier than distinguish between two sounds (in a follow-up experiment, he evolved a chip that could tell apart two spoken words, stop and go). Instead he’s trying to deal with what he calls the perverseness or the robustness problem. He has a grant from the British government to evolve circuits that can work in a wide range of environments and on more than a single fpga. He’s doing this by evolving circuits on five fpgas simultaneously. These five are all from Xilinx, but from different manufacturing plants, and a few are the equivalent of factory seconds. He is also drastically changing the temperature over those chips, so evolution will be encouraged to come up with a circuit that a chip designer would call extraordinarily robust. He’s trying for a circuit that works regardless of defects in the chips or damage incurred in shipping, a circuit that works in a tropical rain forest or in the depths of space. In other words, one that will run anywhere, maybe on anything.

Their work has continued with robotic evolution, but I maintain that the "mistakes" they made 13 years ago are some of the most interesting mistakes I've ever heard of in computers....
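
One practical aside: the robustness fix described at the end of that article amounts to judging each candidate circuit by how it performs in every environment at once. One simple way to encode that (my sketch, not their published method) is to score a candidate by its worst performance across chips and temperatures, so evolution can't lean on the quirks of one piece of silicon. evaluate_on() here is a fake placeholder:

# Worst-case fitness across several chips and temperatures - a sketch of
# the robustness idea, not the Sussex group's actual code.
import random

CHIPS = ["chip_A", "chip_B", "chip_C", "chip_D", "chip_E"]
TEMPERATURES_C = [5, 25, 45]

def evaluate_on(genome, chip, temperature):
    # Fake stand-in score so the sketch runs end to end; the real version
    # would program a physical FPGA at this temperature and test the task.
    return (hash((tuple(genome), chip, temperature)) % 1000) / 1000

def robust_fitness(genome):
    # A candidate only scores well if it works on every chip at every temperature.
    return min(evaluate_on(genome, chip, t)
               for chip in CHIPS
               for t in TEMPERATURES_C)

example = [random.randint(0, 1) for _ in range(100)]
print("worst-case score:", robust_fitness(example))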
 
Re: Random Numbers

An interesting editorial, but it's hard to trust the wording of someone other than the expert, especially in a magazine, since misinterpretation is so easy in technical matters. If this Thompson fellow has published any peer-reviewed academic papers on the subject, that's what I'd be most interested in.

The discussion in the article can be mostly attributed to the classic over-fitting problem, which is well-known in all machine learning algorithms. If it's more than that, then a discussion of it on a more technical level will be interesting to read.
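
(For anyone who hasn't run into the term, here's a toy Python illustration of over-fitting - nothing to do with FPGAs specifically. A very flexible model can match its training points almost perfectly and still do badly on points it hasn't seen, which is roughly analogous to a circuit that only works on the exact chip and temperature it was evolved on.)

# Over-fitting in miniature: fit polynomials of low and high degree to
# noisy samples of a sine wave, then compare training error against error
# on points the fit never saw.
import numpy as np

rng = np.random.default_rng(0)

def truth(x):
    return np.sin(2 * np.pi * x)

x_train = np.linspace(0.0, 1.0, 10)
x_test = np.linspace(0.03, 0.97, 10)
y_train = truth(x_train) + rng.normal(0, 0.1, x_train.size)
y_test = truth(x_test) + rng.normal(0, 0.1, x_test.size)

for degree in (3, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train error {train_err:.4f}, test error {test_err:.4f}")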
 
Re: Random Numbers


Here's the one that got the ball rolling:
http://www.informatics.sussex.ac.uk/users/inmanh/iscas96.pdf

Here's a neat one from Thompson about using GAs to self-repair complex systems (some of these are from the NASA/DoD Annual Workshops on Evolvable Hardware):
http://www.informatics.sussex.ac.uk/users/adrianth/OLTS2004/paper.pdf

A cool paper on exploiting the weird physics of evolving systems to create simple new nanocircuits - a "single electron NOR gate":
http://www.informatics.sussex.ac.uk/users/adrianth/cta00/paper.pdf

You can find more here:
http://www.cogs.susx.ac.uk/users/adrianth/ade.html
 
Robert Maxwell

If you want to equate quantum mechanics with classical mechanics via some 'hidden variables', you'll have to do more than just negate Heisenberg's uncertainty; you'll have to do away with superposition (a particle being in ALL its possible states simultaneously until it's measured), entanglement, virtual particles and a few other "minor phenomena".
All of this while respecting the experimental evidence.

So far, you've just made the claim and failed to back it up at all. Well, let's see your proof.

Where did I say anything about "hidden variables"? This is the second time in this forum you have distorted my words, and I really don't appreciate it.

No, I don't have any mathematical proofs for my position. What I have said is that perhaps our understanding of quantum physics is currently limited, so the available data we have points to random behavior, even if it isn't actually random.

A random number sequence with a sufficiently complex algorithm behind it would be indistinguishable from "true" randomness. This could apply to quantum mechanics as well as computer science.
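
To make that concrete, here's a quick Python sketch. A textbook xorshift generator is completely determined by its seed, yet it still looks random to simple statistical checks. (A generator this simple does fail stronger test batteries - the point is only that "deterministic" and "apparently random" are not mutually exclusive.)

# xorshift32: a real (if weak) pseudorandom generator - fully deterministic,
# yet its output passes naive checks for uniformity and independence.
def xorshift32(seed):
    state = seed & 0xFFFFFFFF
    while True:
        state ^= (state << 13) & 0xFFFFFFFF
        state ^= state >> 17
        state ^= (state << 5) & 0xFFFFFFFF
        yield state

gen = xorshift32(2463534242)
samples = [next(gen) / 2**32 for _ in range(100000)]  # map to [0, 1)

mean = sum(samples) / len(samples)
low_pairs = sum(1 for a, b in zip(samples, samples[1:]) if a < 0.5 and b < 0.5)
print(f"sample mean: {mean:.4f} (0.5 for an ideal uniform source)")
print(f"consecutive pairs both below 0.5: {low_pairs / (len(samples) - 1):.4f} (0.25 ideally)")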
 

"A random number sequence with a sufficiently complex algorithm behind it" can only exist in nature/physics if this 'sufficiently complex algorithm' describes an as of yet unknown physical phenomenon aka a hidden variable.
In physics, mathematics/algorithms ALWAYS describe physical phenomena.

You WERE talking about a hidden variable, Robert Maxwell - you were just giving it another name.
 

Calling it a "variable" is a gross oversimplification.
 
Didn't someone discover a year or two ago that Bell's Theorem doesn't hold if the local hidden variables use noncommutative arithmetic -- for example, quaternions? Don't have a reference, sorry.
 

There have been lots of challenges to Bell's Theorem (which is good, because that's what science is supposed to do). Each one has either failed experimentally or required a mechanism that is not observed in nature.

One challenge showed mathematically that Bell's inequality could be violated with local hidden variables - but only if the hidden variables themselves could communicate faster-than-light, or send information backward in time.
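
For anyone who wants the actual numbers: in the usual CHSH form of Bell's inequality, any local hidden variable theory is limited to |S| <= 2, while quantum mechanics predicts |S| = 2*sqrt(2) ≈ 2.83 for a spin singlet measured at the standard angles. A quick check in Python:

# CHSH combination S for a spin-singlet pair, where the quantum correlation
# between measurements along angles a and b is E(a, b) = -cos(a - b).
import math

def E(a, b):
    return -math.cos(a - b)

a1, a2 = 0.0, math.pi / 2              # Alice's two detector settings
b1, b2 = math.pi / 4, 3 * math.pi / 4  # Bob's two detector settings

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(f"|S| = {abs(S):.3f} (local hidden variable bound: 2, quantum maximum: {2 * math.sqrt(2):.3f})")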

Most of these seem to stem from a belief that something about quantum mechanics "feels" unreal.

However, I feel that people make way too much of the "bizarreness" of quantum mechanics. Our senses are so dull that we assume (because our sofa doesn't seem to move about the place) that all objects MUST have a definite position and momentum. But if you could see each atom in your sofa jumping about, and perceive it at the atomic (or even sub-atomic) level, you'd probably not think anything was even remotely odd about QM.

After all, one of the things you don't see in an atom is an electron spontaneously falling like a stone into the nucleus. If an electron were really free to behave with no uncertainty, you should expect to see that happen all the time. As Feynman pointed out in one of his famous lectures, you won't see it happen, because if the electron did fall into a proton, it would have a definite position and momentum with respect to the proton.

I think QM weirdness is kind of over-stated. It's weird to us, but only because of our gross size and terribly inadequate senses....
 
Speaking of random numbers, I study playing Cash 5 (numbers 1-34). I notice that, more often than not, certain numbers come up more than others, and certain numbers coincide with the first digit of the last set of numbers drawn. I have been able to predict at least 3/5 and 4/5 from draw to draw using a set of 12 numbers. The biggest problem is wheeling 12 numbers in a cost-effective way to hit 4/5 or 5/5; getting 3/5 in any given draw is relatively easy.
 
I've often wondered if genetic algorithms (discussed previously) might solve the lottery problems (which I have also noticed are fairly un-random).

Of course, if it worked, I'm sure someone would have done it by now - right?
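
Before throwing a GA at it, it would be worth checking whether the draws really are un-random. The standard first look is a chi-square test on how often each ball comes up. Here's a sketch - the draw history below is simulated, since I don't have the real Cash 5 data in front of me; with real data you'd load the actual draws instead:

# Chi-square check: do some balls come up more often than chance allows?
# The draws are SIMULATED stand-ins for a real Cash 5 history (5 balls
# drawn from 1-34 without replacement).
import random
from collections import Counter

POOL = list(range(1, 35))
N_DRAWS = 1000

random.seed(42)
draws = [random.sample(POOL, 5) for _ in range(N_DRAWS)]

counts = Counter(ball for draw in draws for ball in draw)
expected = 5 * N_DRAWS / 34  # expected appearances of each ball in a fair game

chi2 = sum((counts[ball] - expected) ** 2 / expected for ball in POOL)
print(f"chi-square statistic: {chi2:.1f} with 33 degrees of freedom")
print("(for a fair game, values above roughly 47 should happen only about 5% of the time)")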
 
I think it's safe to say even "quantum uncertainty" isn't truly random

If you believe that there is no randomness, then is it your opinion that free will doesn't exist either?

Because I believe that randomness and free will do exist, and that they are in fact the same thing. I don't think that classical processes can be responsible for either.

Randomness and free will are not the same thing. In fact, randomness and free will have nothing to do with each other.

What you're describing is just another type of determinism. The only difference between what you're describing and classical determinism is that our choices would be determined by truly random events, and would therefore be unpredictable, no matter how much information we had beforehand.

But that's not "free will" in a metaphysical sense--or indeed, in any meaningful sense. Free will, in the sense you seem to be using that term, is incompatible with both causal determinism and causal indeterminism.

Either our choices are determined, or they are not. If they are determined, then they follow inexorably from previous events, and are not free. But if they are not determined, then they are nothing but blind, random happenings, and are not free.

The only way out of this dilemma is to embrace compatibilism--the view that "free will" is nothing more than "freedom of action"--the freedom to choose what you would normally choose, and to do what you would normally do, in the absence of coercion or duress.

This type of free will is compatible with determinism, and is, in fact, the philosophical position on which our legal system is based.

What is more, the Libet experiments suggest that there are no conscious choices. We make choices unconsciously, become consciously aware of them, and then rationalize them afterwards. Obviously, an unconscious choice cannot be considered 'free': that's like saying our hearts beat 'freely.' This would be true whether they're caused by random events, or not.
 

Eloquently put. Thank you for articulating what I have been trying to say here. :) At least with regard to "free will." I am probably dead wrong about quantum randomness. :lol:
 