In fact, RAMA, here's a teaching example for future reference.
New York Times: Exciting breakthroughs in Deep Learning.
Read the article if you haven't already. Exciting stuff. Key takeaways:
- Improvements in pattern recognition and speech recognition
- Improvements in computer cognition
- Considerable improvement in machine grasping of complex behaviors (natural conversation, learning from mistakes/trial and error, etc).
If you get really excited about this article, though, there are two things in it that are easy to miss. The first:
Artificial intelligence researchers are acutely aware of the dangers of being overly optimistic. Their field has long been plagued by outbursts of misplaced enthusiasm followed by equally striking declines.
In the 1960s, some computer scientists believed that a workable artificial intelligence system was just 10 years away. In the 1980s, a wave of commercial start-ups collapsed, leading to what some people called the “A.I. winter.”
The second is even more important to what we were just discussing:
One of the most striking aspects of the research led by Dr. Hinton is that it has taken place largely without the patent restrictions and bitter infighting over intellectual property that characterize high-technology fields.
“We decided early on not to make money out of this, but just to sort of spread it to infect everybody,” he said. “These companies are terribly pleased with this.”
I do not mean to dampen your optimism, RAMA, only to point out that optimism alone is not evidence and that realism has to be accounted for. The painful reality is that the development of technology is often hindered by other things, mostly involving money. I happen to know that even the speech recognition algorithms that eventually went into developing Siri and similar apps were originally developed in the early 1990s. They took so long to become a working application not because of limitations in the technology, but because the original developers got swindled into a bad merger by Goldman Sachs and lost the rights to their own technology, unable to do any meaningful work on it for over fifteen years. The technology wound up getting picked up by Apple and its development partners only after successive mergers and acquisitions steered the original patents into the hands of someone capable of using them.
Point is, it doesn't take a big disaster or a nuclear war to forestall the Singularity. All it really takes is one poor business decision, or one greedy hedge fund manager signing the wrong contract at the wrong time, to screw it up for everyone. The putative sentient AI could end up strangled in its crib just because Cisco Systems decides it isn't marketable and pulls its funding at the critical threshold of self-awareness; the team disbands, work stops, and Cisco holds onto the rights to the research data, unwilling to fund further research but equally unwilling to sell it to someone who IS.
That happens a LOT in this business, and it's not something Singularity theorists even BEGIN to take seriously when they make these sorts of predictions (which is exactly why Kurzweil's predictions about speech recognition technology were so disastrously wrong). Until we get to the point where meaningful AI development can efficiently bypass the profit motive without sacrificing effectiveness -- IOW, until/unless the SOFTWARE curve begins to show exponential growth in pace with hardware -- the conditions for the Singularity cannot be met.

In this case, the obstacle is that only a few humans on the entire planet are even qualified to do that kind of research, and there are huge limits to how efficiently that kind of education can be distributed to the people who are less likely to care about the profit motive and more likely to develop strong AI systems. As I've said many times, commercial projects aren't going to do it -- there's very little market incentive to develop machine sentience of any kind -- but there are a lot of places in the developing world where the development of a supergenius artificial mind would have certain advantages, not least of which would be increased access to education (schools and universities require far more infrastructure and investment than pre-programmed expert systems) and dramatically increased productivity.
Until we start seeing these kinds of breakthroughs coming out of the developing world -- or at least being directly shared with the developing world on a partner basis -- this isn't Singularity news, it's just ordinary progress.