For a non-biological neural network, what would a dream state be? For us humans, dreams seem to have several effects on the conscious mind. One is that they continue to mull over problems we encounter in our waking world. The body seems to require sleep, a period when it partially shuts down, perhaps to concentrate some processes on healing, revitalizing, and cell growth and regrowth. Some parts of the brain remain active, and dreams seem to represent much of that activity.
This could easily be part of the learning process. Practice and repetition have been demonstrated to strengthen synaptic connections by building thicker myelin around them. "...myelin sheath allows electrical impulses to transmit quickly and efficiently along the nerve cells. If myelin is damaged, these impulses slow down." (MedlinePlus.gov)
Dreams may benefit learning by continuing a mental practice of a learned concept. For non-biological neural networks, does the repetition of a neural path make it more efficient? If not currently, can we design a neural network that can improve itself through practice? And if so, wouldn't we just build that efficiency into the network from the start? Perhaps there are reasons to keep most pathways less efficient, with only a few reinforced to a higher level.
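Just to make "practice" concrete, here's a rough Python sketch of the idea that repeated use strengthens a connection, loosely the way myelination speeds a nerve. Everything in it, the units, the path, the learning rate, is made up for illustration, not a claim about how any real network is built.

```python
import numpy as np

# A minimal sketch of "practice makes a path more efficient":
# each time two units along a path fire together, the weight between
# them grows (a Hebbian-style update), loosely analogous to building
# thicker myelin. All names and numbers are illustrative assumptions.

n_units = 6
weights = np.full((n_units, n_units), 0.1)   # weak, uniform connections
learning_rate = 0.05

practiced_path = [0, 2, 5]                    # the "learned concept"

def practice(path, repetitions):
    """Repeatedly co-activate consecutive units along a path."""
    for _ in range(repetitions):
        for pre, post in zip(path, path[1:]):
            # Weight grows with repetition, capped so it can't grow forever.
            weights[pre, post] = min(1.0, weights[pre, post] + learning_rate)

practice(practiced_path, repetitions=10)

print(weights[0, 2], weights[2, 5])   # strengthened by practice
print(weights[1, 3])                  # an unused connection stays weak
```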
Maybe the higher efficiency represents memory. If a stimulus to the network is repeated, the possible responses of that network are limited only by the complexity of the network. By this I mean that the more complex the network, the greater the number of possible paths that stimulus can take across it. Some paths could be reinforced over others through a "flag" system that indicates success or failure; call this the dopamine response. Repeating the successful path without repeating the unsuccessful ones would be achieved by somehow improving the ease with which the successful path is followed over the unsuccessful paths.
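Here's a small sketch of what that "flag" could look like in code: a scalar success signal decides whether the path just used gets easier to follow next time. The three paths, the reward rule, and the step size are all assumptions invented for the example.

```python
import random

# Sketch of the dopamine "flag": after a response, a success signal
# reinforces the path that was just used, so it becomes proportionally
# more likely to be followed the next time the stimulus appears.

random.seed(1)

# Three candidate paths the same stimulus could take across the network.
path_strength = {"path_a": 1.0, "path_b": 1.0, "path_c": 1.0}

def choose_path():
    # Stronger paths are proportionally more likely to be followed.
    total = sum(path_strength.values())
    weights = [s / total for s in path_strength.values()]
    return random.choices(list(path_strength), weights=weights)[0]

def dopamine_flag(path):
    # Pretend only path_b actually solves the problem.
    return 1.0 if path == "path_b" else 0.0

for _ in range(200):
    path = choose_path()
    reward = dopamine_flag(path)
    # Reinforce only on success; unsuccessful paths are left alone rather
    # than erased, so they remain available for later exploration.
    path_strength[path] += 0.1 * reward

print(path_strength)   # path_b ends up far easier to follow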
There's always the possibility that the first successful path isn't the only path to success, or even the best one, so the neural network should be able to explore pathways that either weren't explored with the first stimulus, or didn't prove fully successful but could still lead to success, in case some minor anomaly caused the more successful path to be chosen first. Thus, it might be a good idea to reinforce successful pathways gradually and by degrees, rather than "hard code" the first successful pathway into the response to stimuli from the beginning.
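That trade-off between sticking with what worked and still trying other routes is easy to show in a toy example. The sketch below reinforces gradually (a small step size) and explores occasionally, so a genuinely better path discovered later can still overtake the one that happened to succeed first. The reward values, exploration rate, and step size are illustrative assumptions.

```python
import random

# "Reinforce gradually rather than hard code": a small step size plus
# occasional exploration keeps alternative paths alive, so a later,
# better path can still win over the first one that succeeded.

random.seed(2)

true_reward = {"first_success": 0.6, "better_path": 0.9}  # unknown to the network
estimate = {"first_success": 0.0, "better_path": 0.0}

epsilon = 0.1     # chance of trying a non-preferred path (exploration)
step = 0.05       # gradual reinforcement instead of an all-or-nothing commit

for _ in range(1000):
    if random.random() < epsilon:
        path = random.choice(list(estimate))            # explore
    else:
        path = max(estimate, key=estimate.get)          # exploit the current favorite
    reward = true_reward[path] + random.gauss(0, 0.1)   # noisy success signal
    estimate[path] += step * (reward - estimate[path])  # move only part of the way

print(estimate)   # the genuinely better path ends up preferred despite a later start
```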
By treating memory as the reinforced pathways across the neural network, the memory capacity would grow explosively with the network's complexity. Humans, having billions of brain cells and trillions of synaptic connections, would mathematically have a nearly unbounded memory capacity. Considering how much information and how many responses even one year of life's experiences can contain, it's a damn good thing we can remember so much. It's just not always easy to impress those memories into our brains.
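A quick back-of-the-envelope calculation shows why path-based capacity explodes with size. Assume a toy layered network where every unit in one layer connects to every unit in the next; the layer sizes here are made up, and real brains are wired nothing like this neat stack.

```python
# Count distinct paths from a single input unit through a fully connected
# stack of layers. The layer sizes are illustrative assumptions only.

def distinct_paths(layer_sizes):
    """Number of distinct paths from one input unit to the output layer."""
    paths = 1
    for size in layer_sizes[1:]:
        paths *= size
    return paths

print(distinct_paths([10, 10, 10]))   # 100 paths in a tiny network
print(distinct_paths([100] * 5))      # 100,000,000 in a modest one
print(distinct_paths([1000] * 7))     # 10**18 -- already astronomical
```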
Having too many reinforced pathways may just set you right back to the beginning, where every pathway responded to stimuli equally well; thus, knowing too much becomes the same as forgetting everything.
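One way to make that point concrete: if every path ends up equally strong, the choice among them is uniform again and carries no information about what was learned. The path counts and strengths below are made up purely to illustrate the idea.

```python
import math

# If all paths are reinforced equally, picking among them is no better
# than guessing: the selection entropy is back at its maximum.

def selection_entropy(strengths):
    """Shannon entropy (bits) of picking a path in proportion to its strength."""
    total = sum(strengths)
    probs = [s / total for s in strengths]
    return -sum(p * math.log2(p) for p in probs if p > 0)

fresh_network   = [1.0] * 8                                       # nothing learned yet
trained_network = [5.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]        # one favored path
over_trained    = [5.0] * 8                                       # everything reinforced

print(selection_entropy(fresh_network))    # 3.0 bits: pure guessing
print(selection_entropy(trained_network))  # lower: the network "remembers"
print(selection_entropy(over_trained))     # 3.0 bits again: back to guessing
```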
-Will