Crassmass Eve wrote:
When you say "many of them believe the inevitability" of a takeover of AI; I don't believe you. There's a huge AI department at the university I work in and what they're doing is trying to get programs to learn. By learn I mean become aware of their environment, react to its parameters, remember those parameters and then work within those parameters. That's a long way from composing the Liebestod from Tristan und Isolde. In fact it's never going to happen.
Yup, lots of people in the AI/robotics field lament how long it's taken to get where we are, but two things mitigate that. 1) Human biological evolution takes place over millions of years; AI has been worked on for mere decades out of that timescale. 2) The growth is exponential, meaning it compounds in rapid succession rather than along the linear timeline we humans usually perceive in everyday life. So the "slow" progress (which is actually lightning fast on a biological or even geological timescale) will mean such predicted AI in a few decades.
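To make the linear-vs-exponential point concrete, here's a minimal sketch. The numbers are purely hypothetical assumptions for illustration: a unitless "capability" factor, a two-year doubling period (Moore's-law-like), and linear growth that adds one unit per period.

```python
# Toy comparison of linear vs. exponential growth over six decades.
# All figures are hypothetical and only illustrate the shape of the curves.

def linear_growth(periods: int) -> int:
    """Capability after `periods` steps of constant additive growth."""
    return 1 + periods

def exponential_growth(periods: int) -> int:
    """Capability after `periods` steps of repeated doubling."""
    return 2 ** periods

# Six decades with an assumed two-year doubling period = 30 periods.
periods = 30
print(linear_growth(periods))       # 31
print(exponential_growth(periods))  # 1073741824 (roughly a billion-fold)
```

Under those (assumed) parameters, sixty years of linear progress yields a 31x gain while exponential progress yields about a billion-fold gain, which is why early decades of exponential growth look deceptively flat.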
Yours is not an unusual reaction, because humans generally can only think of machines or intelligence as products independent of other things, and that will not be the case in the future. If you bring theism or human-centrism into it, there is going to be quite a knee-jerk reaction. Trust me, if the "takeover" is true, you'll want to be an AI, and it may not have to be a war: the machines may supplant us simply by out-adapting, out-competing, and out-evolving us.
As for the actual material feasibility of your "impossible" task, there is a lot of source material on the subject; Hans Moravec's work is available all over the internet for free. A key work explaining why the human brain is quantifiable and how computer technologies are improving (interestingly, a predicted 3D chip was just reported in Wired Magazine the other day) is The Singularity Is Near.