We've had many threads on AI (artificial intelligence). I suppose one might define AI as the ability to produce a new, and perhaps unexpected, third thing from any two givens. And by "new" I mean something completely novel, not merely a datum already in the system spat out at random, whether or not it follows from the givens.
Judging from my own reading and the various opinions expressed in this forum, the consensus is that we are a long way from achieving AI, and some believe it may never be possible.
With proper programming, a machine need not be independently "intelligent" to be useful. Collation and analysis of data, often drudge work, can accelerate scientific and technical developments by putting the right info in front of a mind that does have the "spark" for those "this bath water is too hot*" moments.
AI aside, I'm curious about the current state of "learning machines." As I understand the concept, such a machine is like a self-adjusting clutch, refining itself for optimum efficiency at a defined task. Nothing truly new ever comes out of the system, but the machine's actions "evolve" with the environment.
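To make that concrete, here's a toy sketch in Python. It's entirely my own construction, and the efficiency() function is made up; it just stands in for whatever the machine can measure about its own performance. The loop nudges one setting at random and keeps the nudge only if the measurement improves:

import random

def efficiency(engagement):
    # Hypothetical stand-in for the task being optimized: peak efficiency when
    # the "clutch" engages at 0.7, falling off on either side, plus a little
    # measurement noise.
    return 1.0 - (engagement - 0.7) ** 2 + random.uniform(-0.01, 0.01)

def self_adjust(steps=200, step_size=0.05):
    setting = 0.2                            # arbitrary starting point
    best = efficiency(setting)
    for _ in range(steps):
        trial = setting + random.uniform(-step_size, step_size)
        trial = min(max(trial, 0.0), 1.0)    # never leaves the allowed range
        score = efficiency(trial)
        if score > best:                     # keep the tweak only if it helps
            setting, best = trial, score
    return setting, best

setting, best = self_adjust()
print(f"settled near engagement={setting:.2f}, efficiency={best:.2f}")

That's the whole trick: it gets better at the one task it was given, but nothing ever comes out of it that wasn't already inside its parameter range.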
Now the curve ball. I've read many documented accounts of "micro-evolutionary" change, and the effect is quite real. However, we seem to be missing a few pages on "macro-evolutionary" change. Carefully controlled and documented scientific attempts (with fruit flies, for example) to breed a totally new species have failed. For centuries, breeders have found that a given organism will "stretch" to a certain point before "snapping back" to the basic pattern, like a rubber band. Micro-evolution, yes; macro-evolution, no.
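For what it's worth, here's an equally toy simulation of that rubber band. Again, it's my own construction and not a real genetics model: it simply assumes each offspring is pulled a fraction PULL of the way back toward a fixed BASELINE. Under selective breeding the trait average shifts; relax the selection and it drifts back:

import random

BASELINE = 10.0   # the "basic pattern" the trait is assumed to be anchored to
PULL = 0.1        # assumed strength of the pull back toward that baseline

def next_generation(population, select_upward):
    if select_upward:
        # Directional selection: breed only from the top half.
        parents = sorted(population)[len(population) // 2:]
    else:
        parents = population
    # Offspring inherit a parent's value, nudged toward the baseline, plus
    # a small random mutation.
    return [p + PULL * (BASELINE - p) + random.gauss(0, 0.2)
            for p in random.choices(parents, k=len(population))]

pop = [random.gauss(BASELINE, 1.0) for _ in range(200)]

for _ in range(100):                  # generations under selective breeding
    pop = next_generation(pop, select_upward=True)
print(f"after selection:  mean trait = {sum(pop) / len(pop):.1f}")

for _ in range(200):                  # selection relaxed
    pop = next_generation(pop, select_upward=False)
print(f"after relaxation: mean trait = {sum(pop) / len(pop):.1f}")

Whether real organisms are anchored like that is exactly the open question, of course; the snap-back here is baked into the assumption, not discovered by the program.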
This is not an argument for "intelligent design"; "god" is an assumption that explains nothing and opens up more questions than it answers. Occam's razor cuts it off. However, I do believe the Darwinian model of evolution is flawed: we're still missing some major pieces.
The analogy here is that observed micro-evolutionary changes are like learning machines: we understand the mechanisms of both well enough. But just as we can't demonstrate macro-evolution, we're still unable to explain our own intelligence, or "replicate" it with AI.
Thoughts?
* "'Eureka' is Greek for 'this bath water is too hot.'"
—Doctor Who