• Welcome! The TrekBBS is the number one place to chat about Star Trek with like-minded fans.
    If you are not already a member then please register an account and join in the discussion!

The top 10 AI elite

Wow, I had no idea this was going to cause such a ruckus! You people are really tightly wound, aren't you? In any case, let me clarify a few things, since there are a host of misconceptions in the comments following my post.

How about don't make drive-by threads if you don't like getting this kind of response?

Firstly, obviously I don't post in this forum that much anymore. Aside from it not really covering a lot of the topics that matter most in science or tech (people are still talking about flying cars and the latest bug in Windows and such), technology news and events are now moving too fast for me to post about properly; they have outstripped my ability to cover them. If I tried on my own, even the headlines would probably fill the entire forum, and I couldn't do that. So if I post, it's either going to be a big breakthrough or simply something curious and fun.

So you're the arbiter of what "matter(s) most in science or tech"? And was this thread "curious and fun"? Because I'm not seeing it.

Incorrect, knowing about it, understanding it, spreading information about it, then putting money where their mouth is actually DOES make it happen. It leads to things like the Singularity University, start-ups, and appropriations. It's self-fulfilling.

Self-fulfilling is another way of saying "magical thinking." Which, you know, doesn't work. Talking about problems, by itself, does not solve them. Throwing money at them might--or it might not. Depends on what you are trying to do and whether that's even feasible.

I guarantee you that people popping off on twitter are not making our AI future happen, though.

There's a great article by Ben Goertzel which argues that if we actually put the money into research on the supporting technologies, the Singularity could happen in 10 years, not in 2045, which really demonstrates where cognition comes into play. Yes, I did post this link before.

How much money?

Not really; it's been a while now, but a lot of the criticisms are easily refuted, which I tried to do with you in the past. The rest is speculation (though more educated speculation than in the past) because, as you say, a lot of this is prediction.

Easily refuted, and yet you won't bother.

"Slowly approaching" is relative. In human terms maybe 2000 years seems like a long time, but it's not in geologic time, or even in the time of life on Earth. I'm guessing you're not taking into account accelerated change in your "slowly" comment, but while even 2 decades from now may seem far away to us, it's really quite rapid in the sense of human development on Earth. If you predict Strong AI and a Singularity by 2045 then they would certainly be writing their own software and get a good sense when it may happen.

Thanks for the equivocations, I guess?

There's a reason for this: often, people working in a field do not have a very good overview of the big picture of their own field. They're often so involved in the day-to-day dealings with funding, research, and the step-by-step problems they need to solve that they fail to see the implications (there's also a term for this kind of thinking, though I don't recall it at the moment). I do think we are seeing more researchers coming around to the idea and then offering suggestions (usually very sober, intellectual discourses on how to fail-safe AI from getting to a Singularity) on how to bypass it, so that people like Elon Musk in his hysteria (despite his funding massive research into AI lately) will calm down.

Spoken like someone who doesn't know much about computer science or software research at all. It's kind of a requirement to keep up on this stuff to stay competitive in the industry.

You can also find many refutations of Singularity criticisms online. I've only provided a handful over the years, on topics such as brain complexity, software development lag, etc. Kurzweil himself devoted a whole chapter of his second book to this.

You might help your credibility if you stop citing cranks like Kurzweil.

Some of the critics have been high profile in the computer industry (like Steve Wozniak, Paul Allen, Jaron Lanier, et al.). Personally, I do feel the refutations are satisfying and well explained (in fact, I think he makes Paul Allen seem silly), and ultimately it's hard not to notice that many of the harshest critics dislike the implications of the Singularity rather than disputing whether it could actually happen! Others, like Bill Joy, believe in Kurzweil's timeline but think it will always wind up dystopian.

You likely find them satisfying because they agree with your pre-existing conclusions.
 
Who said I had a problem with it? I said "surprised". It's a much more interesting thread than most here.

Self-fulfilling in the sense that awareness of it is spurring investment and advancement? Apparently you didn't understand this? What does magic have to do with it?

Yes, fun as opposed to boring and in-the-box. The post about who the 5,000 "elite" follow isn't meant to suggest that following people on Twitter is going to bring about a strong AI; it was simply a fun exercise to see who they are following and whether those people are influential. Apparently most of them are, though of course they all could have been following George Lucas as well.

Kurzweil has enough credentials, and enough work within the industry, that listing them would make your suggestion laughable. Even those who disagree with him generally think he is a genius with something worth discussing. I suppose you would rather be discussing jet packs?

RAMA

 
So basically it's a popularity contest; it's not spelling out WHO the top 10 in the AI lab are. But the list is important in the sense that the popularizers are the ones who actually influence people outside the lab and even in government, and they inspire discourse among their peers and supporters as well as those who supply the money, including those at Google and Elon Musk, etc.
Calling those people "the A.I. Elite" is a bit like calling Kanye West "a key figure in the Congressional Black Caucus."
 
AI isn't something magical. It's really just somewhat advanced statistics and probability. And as such, it can't do the miraculous things like become super intelligent. It's as simple as that.
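To make the "advanced statistics and probability" point concrete, here is a minimal toy sketch (with made-up training sentences, not any real system): a naive Bayes spam filter, one of the classic "AI" techniques, which does nothing but count words per class and apply Bayes' rule. There is no understanding involved, only tallying and probability.

```python
# Toy naive Bayes classifier: "intelligence" as word counting plus Bayes' rule.
# The training data below is invented purely for illustration.
from collections import Counter
import math

spam = ["win money now", "free money offer", "win a free prize"]
ham = ["meeting at noon", "lunch at the office", "project meeting notes"]

def train(docs):
    counts = Counter(w for d in docs for w in d.split())
    return counts, sum(counts.values())

spam_counts, spam_total = train(spam)
ham_counts, ham_total = train(ham)
vocab = set(spam_counts) | set(ham_counts)

def log_prob(words, counts, total):
    # Laplace smoothing so an unseen word doesn't zero out the estimate.
    return sum(math.log((counts[w] + 1) / (total + len(vocab))) for w in words)

def classify(text):
    words = text.split()
    spam_score = log_prob(words, spam_counts, spam_total)
    ham_score = log_prob(words, ham_counts, ham_total)
    return "spam" if spam_score > ham_score else "ham"

print(classify("free money"))     # words frequent in the spam class
print(classify("meeting notes"))  # words frequent in the ham class
```

Whether you find that "magical" or not, scaled-up versions of this kind of probabilistic machinery are what powers much of what gets marketed as AI.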
 
Incorrect, knowing about it, understanding it, spreading information about it, then putting money where their mouth is actually DOES make it happen.
No it does not, because software engineering doesn't work that way.

Developing new AIs that have practical applications, improving those AIs, and learning lessons from those AIs' performance to build on the next generation of systems, is what makes "it" happen.

Maybe it doesn't make the SINGULARITY happen, but that's primarily because the Singularity (or at least, your vision of it) is 90% hype with equal parts of science fiction and messianic zeal making up the rest. Software engineering does not require "spreading the word" about new developments to secure funding or a business case. You either convince investors that your software works, or you go home and cry.
 
Three interesting articles from January, with many AI experts chiming in.

http://futureoflife.org/2016/01/27/are-humans-dethroned-in-go-ai-experts-weigh-in/

http://futureoflife.org/2016/01/12/the-future-of-ai-quotes-and-highlights-from-todays-nyu-symposium/

The esteemed Max Tegmark (MIT physicist) comes up with a computational approach for assessing a promising measure of consciousness, something that might help brain emulation. The original published paper follows as well.

http://www.huffingtonpost.com/max-tegmark/lets-measure-consciousnes_b_8979504.html

http://arxiv.org/pdf/1601.02626.pdf

I also did some more research on how much agreement there is about when AGI will appear. The latest and largest poll matches up pretty well with earlier polls among experts: 10% by 2022, 50% by 2040, 90% by 2075. Already half agree it would arrive before Kurzweil's predicted Singularity. I believe that share will keep growing.
 
The proof is there, and it doesn't matter whether it's hardware or software; both grow exponentially, which I've demonstrated before on this BBS. The reason we've come out of the "AI Winter" is a lot of grinding research and results, spread by proponents, popularized, and then invested in. As the original article and others state, AI investment is soaring.

I still think the slowdown had to do with lack of money and immediate application. That's changing.

RAMA
 


AI Winter sounds like a line from a post apocalyptic scifi book/movie with someone talking about the aftermath of the AI war.

Seriously though, I wonder how much of that soaring investment is because investors now see a future and a demand for smarter computers, cars, drones, and machines to replace humans in a lot of low- to mid-level jobs?
 
The proof is there
The "proof" comes when it actually happens. Until then it's just a theory based on far-fetched speculative possibilities.

both grow exponentially, which I've demonstrated before on this BBS
You've CLAIMED it many times, always without seriously addressing the glaring holes in your logic.

You're here to "spread the word" as an apostle of the coming Singularity, yes? Do you want your testimony to be convincing or do you want to be respected for being right the first time around? If it's the former, you're going to have to do something to address the logical flaws in your reasoning and also acknowledge the many alternate scenarios that lead to a version of the singularity far different than what you envision.

The reason we've come out of the "AI Winter" is because a lot of grinding research and results, spread by proponents, popularized then invested in.
No, the reason we came out of the "AI Winter", if such a thing can even be said to exist, is because the computing power and programming languages of modern information technology both progressed to a point where practical AI development became more affordable. A business case was made for those systems and they were marketed to people who saw their benefit, particularly financial companies, advertisers, data miners and analytical systems. Even if you "spread the word" beyond the community of industrialists who have a practical use for those technologies, the ones who ARE using them for real things will be forever ahead of the curve. And for the foreseeable future, the most advanced AIs on the planet will STILL be expert systems and analytical programs designed to produce useful data for a profit.

Hobbyists can do a lot of amazing things, but investors aren't interested in bringing about the singularity, they're looking for something they can patent and eventually market.
 
I just saw that and think it is simply an example of bad programming by Microsoft. I don't think it means much more than that.
 
That's the thing most people still don't understand about AI. Regardless of how "smart" an AI may appear, the smartness is still designed by a human being. Which means, there are always going to be a huge number of assumptions that get built into the AI design.

Case in point: AlphaGo lost one match because it was designed by its makers to counter only moves a human might make. When the Go grandmaster Lee made a move no human had ever made before, AlphaGo did not know how to counter it and lost the match.

These days, the AI designer still has to be a very smart person with a PhD in AI. Which raises the question: which is the smart one, the human designer or the AI?
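The point about designer assumptions can be illustrated with a deliberately crude toy (entirely hypothetical rules; this is not how AlphaGo actually works): an agent whose "skill" is a lookup table of the moves its designer anticipated. Anything outside that table exposes the baked-in assumption.

```python
# Toy rule-based agent: its competence is exactly its designer's foresight.
# The moves and counters below are invented for illustration only.
KNOWN_COUNTERS = {
    "attack left": "defend left",
    "attack right": "defend right",
    "advance center": "reinforce center",
}

def respond(opponent_move):
    # Design assumption baked in: every opponent move appears in the table.
    counter = KNOWN_COUNTERS.get(opponent_move)
    if counter is None:
        # The unanticipated case the designer never handled.
        return "no idea"
    return counter

print(respond("attack left"))      # an anticipated move gets a counter
print(respond("sacrifice flank"))  # a novel move falls outside the design
```

Learned systems replace the explicit table with statistical generalization, but the same principle applies: behavior on inputs far outside the training distribution reflects the designers' assumptions, not some independent intelligence.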
 