Wow, I had no idea this was going to cause such a ruckus! You people are really tightly wound, aren't you? In any case... let me clarify a few things, since there are a host of misconceptions in the comments following my post.
Firstly, obviously I don't post in this forum that much anymore. Aside from it not really covering a lot of the topics that matter most in science or tech (people still talking about flying cars and the latest bug in Windows and such), the technology news and events are now moving too fast for me to post about properly; they've outstripped my ability to cover them. If I tried on my own, even the headlines would probably fill the entire forum, and I couldn't do that. So if I post, it's either going to be a big breakthrough or simply something curious and fun.
Someone mentioned something about dumping links, but I always come back to check on them eventually to see what the response is. Often I'm surprised (like this time).
So to the first misconception:
As someone who works every day creating and using AI, I don't recognize any of these people as experts in AI. Here's my list of people who are actual experts in the field of AI.
The article is quite detailed about the method used and its goal. They gathered up 5,000 people who are involved with AI in some capacity, not just those who work on it in the lab... though those are there as well. It wasn't meant to be a scientific poll:
5,000 AI experts, organizations, specialists, influencers, and practitioners who have Twitter accounts. Then, we analyzed the connections between all of those and found the 2000 most-watched by their peers. This set of 2000 is important. But who among them is most important?
Next, we created a group of people to act as our panel of judges. 90 people and organizations in that set of 2000 credible AI specialists who are (1) most connected to their peers, (2) followed by at least 1,000 people in total, and (3) most focused on AI in the people they choose to follow.
So basically it's a popularity contest; it's not spelling out WHO is the top 10 in the AI lab. But the list is important in the sense that the popularizers are the ones who actually influence people outside the lab and even in government, and who inspire discourse among their peers and supporters as well as among those who supply the money, including those at Google, Elon Musk, etc.
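Just to make their method concrete, here's a rough Python sketch of the kind of filter they describe above. To be clear, this is my own illustration, not their code: the function name, the data structures, and the 0.5 "AI focus" cutoff are all assumptions I made up to show the idea.

```python
# A minimal sketch of the selection process described in the quote, assuming a
# hypothetical follow graph of the form {account: set of accounts it follows}.
# None of these names or thresholds come from the article; they are guesses.

def pick_judges(follow_graph, seed_accounts, follower_counts, ai_focus):
    """Approximate the article's filter: start from ~5,000 seed accounts,
    keep the 2,000 most-watched by their peers, then choose ~90 judges who
    are (1) well connected to peers, (2) followed by at least 1,000 people,
    and (3) mostly follow AI accounts themselves."""
    # Rank seeds by how many *other seeds* follow them ("most-watched by peers").
    peer_watchers = {
        a: sum(1 for b in seed_accounts if b != a and a in follow_graph.get(b, set()))
        for a in seed_accounts
    }
    top_2000 = sorted(seed_accounts, key=lambda a: peer_watchers[a], reverse=True)[:2000]

    judges = [
        a for a in top_2000
        if follower_counts.get(a, 0) >= 1000   # followed by at least 1,000 people
        and ai_focus.get(a, 0.0) >= 0.5        # mostly follows AI accounts (threshold is my guess)
    ]
    # Keep the 90 judges most connected to their peers.
    return sorted(judges, key=lambda a: peer_watchers[a], reverse=True)[:90]
```

However they actually computed it, the point is the same: the ranking measures attention within the network, not research output.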
You might be working in the field, but you are, arguably, not even remotely as influential as any one of the people in the top 10 or the 90 judges.
I also don't know how many of the AI researchers you named are actually on Twitter and included in the list, but I count at least five in the top 10 who are involved in AI research or AI companies, and at least one who is a top-level researcher in strong AI.
OP is false. These are the people talking about how it could happen, but aren't the ones making it happen.
Incorrect: knowing about it, understanding it, spreading information about it, and then putting money where their mouth is actually DOES make it happen. It leads to things like Singularity University, start-ups, and appropriations. It's self-fulfilling.
There's a great article by Ben Goertzel which argues that if we actually put the money into research on the supporting technologies, the Singularity could happen in 10 years, not 2045, which really demonstrates where cognition comes into play. Yes, I did post this link before.
I also read another article which lists 5,000 (yes, 5,000) AI researchers and their answers about predicting the Singularity and strong AI. I probably should have posted it, but I'll have to find it some other time.
Singularity theorists get a lot of things wrong, but they're right about one thing: there IS a point at which AIs will become better at writing software than humans; that point is slowly approaching, but we're not there yet.
Not really. It's been a while now, but a lot of the criticisms are easily refuted, which I tried to do with you in the past. The rest is speculation (though more educated speculation than in the past) because, as you say, a lot of this is prediction...
"Slowly approaching" is relative. In human terms maybe 2,000 years seems like a long time, but it's not in geologic time, or even on the timescale of life on Earth. I'm guessing you're not taking accelerating change into account in your "slowly" comment, but while even two decades from now may seem far away to us, it's really quite rapid in terms of human development on Earth. If you predict strong AI and a Singularity by 2045, then AIs would certainly be writing their own software by that point, and that gives you a good sense of when it may happen.
Saying there's no way to understand an AI is a cop-out answer, as it's no different from declaring "It's magic!" And that's what these so-called singularity theorists are doing too. They don't even try to define what intelligence is.
That's just what the math tells us. As many experts have suggested, once a computer has surpassed us (hypothetically, if you will), you won't be able to fathom something operating above your own cognitive complexity. Humans will no longer be the inventors, or the top rung of the evolution of intelligence.
I've found that people who actually know things about how computers work tend to be more skeptical of this Singularity nonsense, because if you understand what a computer actually does, internally, you know there's nothing the least bit magical about it.
There's a reason for this: often people working in the field don't have a very good overview of the big picture in their own field. They're so involved in the day-to-day dealings with funding, research, and the step-by-step problems they need to solve that they don't really see the implications (there's also a term for this kind of thinking, though I don't recall it at the moment). I do think we are seeing more researchers coming around to the idea and then offering suggestions (usually very sober, intellectual discourses on how to fail-safe AI before it gets to a Singularity) on how to bypass it, so that people like Elon Musk in his hysteria (despite funding massive research into AI lately) will calm down.
You can also find many refutations of the Singularity criticisms online. I've only provided a handful over the years, on points such as brain complexity, software development lag, etc. Kurzweil himself devoted a whole chapter of his second book to this.
Some of the critics have been high-profile figures in the computer industry (like Steve Wozniak, Paul Allen, Jaron Lanier, et al.). Personally I do feel the refutations are satisfying and well explained (in fact I think he makes Paul Allen seem silly)... and ultimately it's hard not to notice that many of the harshest critics object to the implications of the Singularity rather than to whether it could actually happen! Others, like Bill Joy, believe in Kurzweil's timeline but think it will always wind up dystopian.
The strongest criticisms often come from neurologists, but even here, we've seen those in this field create start-ups to re-create the human brain.
I'm not too fond of AIs, especially the ones that like Austrian accents...
This is a fantastic and humorous quote to end on... This is the fundamental reason most people, experts or not, balk at the change implied by the mathematics and potential of the Singularity. Some people simply can't fathom a human/AI hybrid mind, or feel that anything that artificial is inhuman or even satanic (there are plenty of those).
I'm sure I missed a few things, but I'm sure I'll hear about it.
RAMA