Again, I don't deny that it's possible to improve the performance of the brain in certain ways. But the study mentioned in that article I linked to suggested that such improvements would come at a cost, that there would be tradeoffs for any gain, and that eventually you'd reach a point of diminishing returns. It's just wishful thinking to assume the brain can be augmented without limit, or that any system can be scaled upward without limit. That's Kurzweil's fundamental mistake: the failure to recognize that not everything can be extrapolated forward indefinitely.
Moore's Law is not an inviolable law of nature, just a description of a process Moore observed in his time. Moore himself never expected it to apply indefinitely into the future; in fact, the cutoff point at which he assumed it would cease applying is already in our past. So you can't automatically assume that computer capacity will continue to scale up indefinitely just because it did so in the past, and you sure as hell can't assume that there are no obstacles to bringing that same unlimited amplification to the human brain, because there are countless other variables you'd need to factor into that equation.
I think Singularity advocates sometimes forget that the Singularity is supposed to be a point beyond which our ability to extrapolate the future fails, because we don't have enough information to make any intelligent conjectures. So to claim certainty about what the Singularity will mean is self-contradictory.