• Welcome! The TrekBBS is the number one place to chat about Star Trek with like-minded fans.
    If you are not already a member then please register an account and join in the discussion!

Which Sci-fi future will we reach?

I do agree that organic computing will be essential to implement the things we take for granted--creativity, intuition, pattern-recognition, etc. But we also need a leap in that area. Neural networks haven't exactly lived up to the hype.

About neural networks - what is the cause of their inefficiency?
The hardware, or the algorithms used?

It's basically a hardware limitation. Our brains consist of tens of billions of neurons. Modeling their behavior for a non-trivial problem requires massive amounts of storage and processing power. Our neural processing is inherently and massively parallel. A single nerve signal--say, a visual stimulus--can cascade down through your brain and find a match instantaneously, because it essentially searches all of your memories simultaneously, coming back up to your conscious mind with that spark of recognition. A similar operation is absurdly tedious and slow on a computer.

What we need are massively-parallel computers. If you think of each neuron in your brain as its own little processor, then it quickly becomes clear why our technology is so limited in this regard. A computer with billions of CPUs in it is not currently feasible, so we model them programmatically. Emulating hardware processes in software always guts performance.
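The software-emulation point can be shown with a toy sketch (purely illustrative, no particular library): each simulated "neuron" is just a weighted sum plus a threshold, and the loop visits them one at a time, serializing what a real brain does in parallel.

```python
# Purely illustrative sketch: a "layer" of simulated neurons. Each neuron is
# just a weighted sum plus a threshold, and the loop visits the neurons one
# at a time -- software serializing a massively parallel process.
def step(x):
    return 1.0 if x >= 0.0 else 0.0

def layer_output(inputs, weights, biases):
    outputs = []
    for w_row, b in zip(weights, biases):  # one loop iteration per "neuron"
        activation = sum(w * x for w, x in zip(w_row, inputs)) + b
        outputs.append(step(activation))
    return outputs

# Two toy neurons reading the same three inputs:
print(layer_output([1.0, 0.0, 1.0],
                   [[0.5, 0.5, 0.5], [-1.0, 0.0, 0.0]],
                   [-0.6, 0.2]))  # -> [1.0, 0.0]
```

A brain fires all of its neurons at once; the loop above is exactly the performance-gutting emulation in question.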

So, while we do have some reasonably powerful neural networks, they are either very slow (because they are so large and cumbersome) or they are only good for the narrowly-defined task for which they've been trained (such as driving a vehicle to follow a designated line on the road.) The slow ones are impractical for everyday use, and the purpose-specific ones have the drawback of only being good for a narrow range of tasks and so might not be able to respond appropriately to a previously-unencountered situation.
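To see how narrow a purpose-specific system can be, here is a rough sketch of a line follower (hypothetical numbers, nothing like a real controller): a proportional steering rule that handles exactly this one task.

```python
# Hypothetical sketch of a purpose-specific system: a proportional "line
# follower" that steers based only on its lateral offset from the line.
# The gain value is arbitrary, chosen for illustration.
def steer(offset, gain=0.5):
    # positive offset = drifted right of the line, so steer left (negative)
    return -gain * offset

position = 2.0                   # start 2 units off the line
for _ in range(10):
    position += steer(position)  # each step halves the offset
print(round(position, 4))        # -> 0.002 (back on the line)
```

It is competent at exactly this and nothing else; hand it any previously-unencountered situation and it has no response at all.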

Humans are very good at improvising on the spot, which is a major key to our intelligence. Computers just aren't, largely because of the hardware they are built on, which is deeply deterministic.
 
Robert Maxwell

An interesting thing about neural networks is that you can simulate the neurons and their base algorithms, then the neural network learns, and at the end, you have no idea what's going on inside it - essentially, the 'processing' is in a foreign language you don't understand in the least.
And you are the person who made the neural network - you should know it intimately, but you don't. Like a ghost in the machine.
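A toy demonstration of that opacity (a hypothetical example, assuming a single simulated neuron trained with the classic perceptron rule): after training on the AND function, the learned numbers classify correctly, yet nothing in them reads as "AND".

```python
import random

# Hypothetical toy example: one simulated neuron trained on the AND function
# with the classic perceptron rule. After training it classifies correctly,
# but the learned numbers themselves read as gibberish -- nothing in them
# says "AND".
random.seed(0)
w = [random.uniform(-1, 1) for _ in range(2)]
b = random.uniform(-1, 1)
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

for _ in range(100):  # repeated passes over the four cases
    for x, target in data:
        out = 1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else 0
        err = target - out
        w = [wi + 0.1 * err * xi for wi, xi in zip(w, x)]
        b += 0.1 * err

print(w, b)  # opaque numbers: correct, but unreadable even to their maker
```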

You mentioned our current software is deterministic. However, artificial neural networks also operate on deterministic principles (natural neural networks may include quantum effects).
It seems to me that the difference is not so much determinism, but...options. A standard program gives a computer only a limited number of options. A neural network, on the other hand, has learned a multitude of options - some of which are dead ends, some of which are efficient; and the more time the neural network spends on a problem, the more options it generates through trial and error.
Essentially, the difference between our software and neural networks is learning.
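The trial-and-error idea can be sketched with the simplest possible learner, a random hill-climber on a toy problem (all numbers arbitrary): most trials are dead ends and get discarded, occasional improvements are kept, and the longer it runs, the more options it has explored.

```python
import random

# Sketch of learning as option-generation: a random hill-climber on a toy
# problem (all numbers arbitrary). Most trials are dead ends and get
# discarded; the occasional improvement is kept.
random.seed(1)

def score(x):
    return -(x - 3.0) ** 2      # toy problem: the best "option" is x = 3

best = 0.0
for _ in range(1000):           # each pass is one trial
    candidate = best + random.uniform(-0.5, 0.5)
    if score(candidate) > score(best):  # keep improvements...
        best = candidate                # ...discard dead ends
print(best)  # ends up very close to 3.0
```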
 
Robert Maxwell


Software neural networks do "learn," but their learning capacity is gimped by our limited technology. We do not have the storage space and processing power to adequately model something as complex as a human brain. Right now, I think we've gotten up to the level of insects and small animals, in terms of the number of neurons we can simulate in an acceptable time frame.
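Some rough arithmetic behind the scale problem (the inputs - about 86 billion neurons, on the order of 7,000 synapses each, 4 bytes per stored connection weight - are common ballpark estimates, not exact figures):

```python
# Rough arithmetic behind the scale problem. The inputs (about 86 billion
# neurons, on the order of 7,000 synapses per neuron, 4 bytes per stored
# connection weight) are ballpark estimates, not exact figures.
neurons = 86_000_000_000
synapses_per_neuron = 7_000
bytes_per_weight = 4

total_bytes = neurons * synapses_per_neuron * bytes_per_weight
petabytes = total_bytes / 1e15
print(f"{petabytes:.1f} PB just to store the connection weights")  # -> 2.4 PB
```

And that is storage alone, before any of the processing needed to update those weights.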

This is why we focus mostly on domain-specific neural networks that learn how to accomplish simple tasks, rather than general-purpose ones that can learn all sorts of things. The latter requires a vast number of inputs--think about how many nerve endings the human body has, providing information to your brain--and how quickly any system can learn is directly correlated with the diversity and quantity of its available inputs.
 
Robert Maxwell

Maybe, when humanity invents the first physical (not simulated) artificial neuron, a quantum leap in AI will follow.

Of course, such a neuron would be essentially a nanite - but a rather simple one.
 
Robert Maxwell


You're confusing things here. A nanite is not a neuron--or even a simulated one. Your notional nanite is just a tiny "worker bee," a nano-scale machine that can perform a specific task. It has no intelligence of its own, and a massive group of nanites with individualized programming isn't quite the same thing as a neural network. Some central authority would be controlling the nanites, and that system might use a neural network of some kind, but not necessarily.
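That division of labor - a central authority deciding, nanites merely executing - can be sketched like this (class and task names are made up for illustration):

```python
# Hypothetical sketch of the "central authority" pattern: the nanites are
# dumb workers that execute whatever task they are handed, while all the
# decision-making lives in a separate controller (which may or may not be
# a neural network). Class and task names are made up for illustration.
class Nanite:
    def perform(self, task):
        return f"done: {task}"  # no intelligence of its own, just execution

class Controller:
    def __init__(self, swarm):
        self.swarm = swarm

    def dispatch(self, tasks):
        # the controller decides who does what; the nanites decide nothing
        return [n.perform(t) for n, t in zip(self.swarm, tasks)]

swarm = [Nanite() for _ in range(3)]
results = Controller(swarm).dispatch(["repair", "scan", "weld"])
print(results)  # -> ['done: repair', 'done: scan', 'done: weld']
```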

Not to say nanites aren't cool, just don't confuse your concepts. :p
 
Robert Maxwell

When I said "humanity invents the first physical (not simulated) artificial neuron", I did NOT mean only one will be built.
Of course, billions will be built and then assembled into a structure (much as transistors are on a processor).

And this artificial neuron will be a nanite, being a physical, molecular-level structure that performs a specific task (forming connections with other artificial neurons).
This task will be simpler than, say, replicating itself, which is why I said "Of course, such a neuron would be essentially a nanite - but a rather simple one."

So - no confusion of concepts involved, Robert Maxwell.
 
Then you are misusing the term "nanite," which is used to refer to nanoscale robots.

What you are talking about is more accurately called a nanoscale network, which could be modeled after a neural network or any other kind of network.
 

A robot is a machine which is able to do a task on its own.
A nanorobot (nanite) is a nanoscale machine that can do a task on its own, at nanoscale resolution.

A nanoscale network is made of artificial neurons, which, as per the definition of the concept, are nanites.
 

The key concept here is that the nanite can do something on its own. If it is part of a neural network, then it is not "on its own"--it is part and parcel of the larger neural network. The network itself wouldn't exist if not for the presence of a sufficient number of neurons, artificial or otherwise.

A separate neural network could indeed control a group of nanites, but it would not be composed of nanites. You could think of the nanites as more like sensory organs or limbs. But their innate independence from a larger body seems to be a key characteristic. That does not preclude cooperation between a large number of nanites, but I would argue that if the nanites become inseparable--for instance, they are required to stay together for the functioning of a neural network--then they cease to be nanites and are instead nano-scale processors or memory modules.

But since we're dealing with technical jargon regarding purely hypothetical machinery it's safe to say it's all a matter of opinion and it doesn't matter what anybody calls it until someone actually creates it. :lol:
 
Fascinating conversation. We don't seem to have reached ANY kind of conclusion, but it's quite interesting.
 
It's all a moot point,
these Neurons and Nanites.
Technology will fail,
and we'll all be Luddites.

:)
 
Robert Maxwell

Physical artificial neurons or nanites - my original point was that the task such a neuron has to perform (forming connections to other neurons) is easier than, let's say, manipulating the surrounding environment or replicating (the 'traditional' tasks for nanites).

This means we should have artificial neurons before we have self-replicating nanites. Of course, this 'before' means in a few decades, at the earliest.

After we have artificial neurons, will we be able to create AIs, or will we encounter other difficulties (I'm referring to, for example, the fact that natural neurons are more complex than appears necessary, having structures that seem to perform no relevant function)?
 
1. IF aliens never show up and interfere with our culture(s)......

2. IF some damn fool doesn't invent The Matrix/Skynet/HAL/the Grey Goo....

3. IF we don't get smacked into by an asteroid "the sahz of Texas, Mr. Prezd'nt!"....

4. We'll end up like Orwell's "1984", because people are becoming more and more and more apathetic. Just look at those poor fuckers in "Metropolis", trudging off to their dreary mechanical jobs...... :( :( :(
 
Robert Maxwell


I would point out that nanites are not necessarily self-replicating. It's a nice feature, but more likely you'll need a machine to generate them for you: a universal assembler. Self-replication is a more difficult task in the sense that the nanites will have to have full instructions for replicating themselves built-in.

I would agree that we would be better able to create artificial neurons than self-replicating nanites, though. If we can come up with a good design for an artificial neuron, we should be able to replicate it infinitely.

I have never heard that neurons are more complex than necessary. They are certainly very complex, but which components of that complexity do you consider to be superfluous? We have a rather poor understanding of how neurons work. We understand the basics but it's still a very wide-open area of research.
 
Robert Maxwell


I would point out that nanites are not necessarily self-replicating. It's a nice feature, but more likely you'll need a machine to generate them for you: a universal assembler. Self-replication is a more difficult task in the sense that the nanites will have to have full instructions for replicating themselves built-in.

Robert Maxwell, you're telling me nothing I don't already know:
"manipulating the surronding environment or replicating (the 'traditional' tasks for nanites)."


I have never heard that neurons are more complex than necessary. They are certainly very complex, but which components of that complexity do you consider to be superfluous? We have a rather poor understanding of how neurons work. We understand the basics but it's still a very wide-open area of research.

About natural neural network complexities:
http://en.wikipedia.org/wiki/Neural_network

So - your opinion is that, if we can create an artificial neuron (similar to the virtual ones we have today), we'll be able to create an AI - or will the artificial neuron prove too simplistic?
 
Robert Maxwell


As long as the artificial neuron functions similarly enough to the real thing, and as long as we can provide a steady stream of varied stimuli, I see no reason why an artificial neural network could not learn and become useful for solving problems. It's sort of a fuzzy line to determine at which point you could call it "artificial intelligence." It is already artificial, but it's measuring the "intelligence" part that will be difficult. I would say that we could reasonably call something intelligent if it manages to solve a new problem that it has not previously been trained to solve, with no outside help.
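That "new problem" test can be illustrated with a toy example (hypothetical, using a single perceptron-style neuron): train on three of the four OR cases, then check the case it never saw. Passing one held-out input is of course nowhere near intelligence; it only illustrates the idea of responding correctly to something outside the training set.

```python
# Toy illustration of the "new problem" criterion: train a perceptron-style
# neuron on three of the four OR cases, then test the case it never saw.
train = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1)]
held_out = ([1, 1], 1)          # never shown during training

w, b = [0.0, 0.0], 0.0
for _ in range(25):             # classic perceptron updates
    for x, target in train:
        out = 1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else 0
        err = target - out
        w = [wi + 0.1 * err * xi for wi, xi in zip(w, x)]
        b += 0.1 * err

x, target = held_out
prediction = 1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else 0
print(prediction == target)     # -> True
```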
 
It doesn't matter in which direction our technology goes, life will imitate art.

There will always be a ship named Enterprise.
 