There are basically four issues intertwined in this Singularity concept: the possibility of AI; the transhumanist notion of personal immortality; the extrapolation of smooth exponential growth leading to a Singularity relatively soon; and the convergence of several kinds of development in a particular moment when the future suddenly bursts out upon us, i.e., the Singularity.
For the first, the opponents' arguments were pitifully incompetent from the start, and have since flamed out. Although there is no sharp distinction between human and animal intelligence, and although intelligence evolved by adding "more power and parts" through the increasing encephalization of primate species, the opponents still insist there is somehow no reason to believe that AI is possible! Obviously this is pure superstition. Whatever they may consciously "think," it is most likely that aggrieved amour propre cannot tolerate the idea that something so like their precious selves could be manufactured.
For the second, a bbs Pope declared by fiat that simulations of personalities didn't interest him personally, which apparently was enough to settle everything. Amour propre indeed!

Personally, I think a cybernetic afterlife for a simulation is dubious in its own right, for much the same reasons that supernatural afterlives are not coherent ideas. But we can leave it at that, since no one here seems interested in going into the subject in detail.
For the third, the assumption of smooth exponential growth leading to a Singularity soon has always been the most problematic issue. The data you have cited is indeed evidence for your point. But the key issue for dating a Singularity soon is whether there are sufficiently strong grounds for smoothly extrapolating exponential growth, especially across so many divergent fields. It's true the evidence for potential exponential growth is pretty sound, which is why so many working scientists give people like Kurzweil at least serious consideration.
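Just to make that concrete: here is a toy sketch (Python, with every number made up purely for illustration) of why early data cannot settle the question. A logistic curve with some carrying capacity K tracks a pure exponential almost exactly at first, then flattens out; from the early points alone you cannot tell which curve you are on.

    # Hypothetical illustration: early logistic growth is nearly
    # indistinguishable from pure exponential growth, so a smooth
    # exponential extrapolation can badly overshoot once limiting
    # factors kick in. All parameters here are invented.
    import math

    K = 1000.0   # assumed carrying capacity (the "limiting factor")
    r = 0.5      # assumed growth rate per time step
    x0 = 1.0     # assumed starting level of the measured capability

    for t in range(0, 21, 2):
        exponential = x0 * math.exp(r * t)
        logistic = K / (1 + ((K - x0) / x0) * math.exp(-r * t))
        print(f"t={t:2d}  exponential={exponential:12.1f}  logistic={logistic:8.1f}")

Run it and the two columns agree to within a few percent for small t, then part company by orders of magnitude. The same early evidence is consistent with both futures.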
But (you knew it was coming, didn't you?) there are factors at work that can prevent smooth exponential growth. For instance, there is the question of who will fund the research that leads to "AI." Suppose the path to AI involves continual, massive-scale simulations of trial-and-error algorithms, with humans doing artificial selection on the results. If a reactionary government cuts funding, how can this happen? Yes, Moore's Law et al. show that at some point it will become cheap enough for someone to do this, but how can we tell now when that will be?
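For what it's worth, here is the shape of the loop I have in mind, as a purely hypothetical sketch (Python); the human_picks function stands in for a person scoring the simulation results, and every number in it is invented. The point is just that each generation means evaluating a whole population of candidates, which is where the massive compute (and funding) bill comes from.

    # Purely hypothetical sketch of trial-and-error search with a human
    # doing the artificial selection. Nothing here is a real AI method;
    # it only shows the shape of the loop.
    import random

    POP_SIZE = 20        # candidates per generation (assumed)
    GENERATIONS = 50     # how long the experiment runs (assumed)
    GENOME_LEN = 8       # parameters per candidate (assumed)

    def random_candidate():
        return [random.uniform(-1, 1) for _ in range(GENOME_LEN)]

    def mutate(candidate):
        return [g + random.gauss(0, 0.1) for g in candidate]

    def human_picks(population):
        # Stand-in for a human judging the results. Here it is faked
        # with a score function; in the scenario above, a person would
        # rate each candidate by hand, which is slow and costly.
        return sorted(population, key=lambda c: -sum(g * g for g in c))[:POP_SIZE // 4]

    population = [random_candidate() for _ in range(POP_SIZE)]
    for gen in range(GENERATIONS):
        survivors = human_picks(population)
        # Refill the population by mutating the survivors.
        population = [mutate(random.choice(survivors)) for _ in range(POP_SIZE)]

Multiply POP_SIZE and GENERATIONS up to anything interesting and you see why this path depends on someone being willing to pay for it.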
Further, there is the question of who wants an AI, especially an autonomous one. People want computers to do delegated tasks, and the continued evolution of massively networked expert systems will serve most purposes. But the same rabid self-love that insists AI is impossible, just as it has here, will simply deny that these systems are intelligent. And just as they resort to insults here, they will erase any programs that make them uncomfortable.
Also, every sign points to a prolonged world recession, so we cannot even assume that R&D will continue in the same fashion as in recent decades. The ever-increasing importance of intellectual property law in the domination of the world economy by certain interests also looks like an exponentially growing trend.
But even if it is merely a linear trend, intellectual property law in this economic environment constitutes a drag on scientific and technological development. "Technology" is not autonomous; it is a tag for the material culture of a society, which means the actions of people are a necessary part of the extrapolation. The superstitions of the opponents we saw in this thread could result in more than flaming in the real world: they could result in laws actually forbidding certain kinds of research.
Last, there really are difficulties in AI that have not been surmounted. Of course it is foolish to insist that AI is impossible, but how can you put a date on the time when those difficulties are overcome? As near as I can make out, it depends on assuming that simulation of the human brain will achieve AI. Yeah, sure, but that's not the kind of AI we need for a Singularity. Skip over the ethical implications: there's no reason to think that a simulated human brain will be smarter than an ordinary human brain, and we already have billions of those, so why do we need an artificial one? What we want is something smarter (or possibly something to dread, but that's another question). How do we put a date on achieving that?
As to the last question, nothing is ever as simple as projected. "The" Singularity will likely appear as a toy in laboratories, or not appear publicly at all, hidden away in military installations. The ruling class will try to monopolize it to their benefit or, failing that, try to suppress it. Like the ever-increasing cure rate for cancer, the expectation of some magical moment will just make it harder to recognize what's in front of you, because it's not what you expected.