> Famous last words: the Singularity isn't going to happen.

Those words are neither famous, nor were they anyone's "last".
> The biggest problem is all the "near" stuff; if it were near, we should at least see signs of it.

Kids, teenagers, and now 20-somethings are growing more and more dependent on smartphone technology and social media for every aspect of their day-to-day lives. Combine that with pseudo-AIs like Siri and Alexa, and the basic infrastructure for a singularity-like revolution is already in place.
> Evolutionary algorithms could probably increase the utility and innate intelligence of machine-generated code, although the resulting code might be almost impossible to understand (as happens when such algorithms are used to design electronic circuits).

I forgot that AIs are already basically doing this on the hardware side, so on a microscale we're already a step in that direction. I think the next step will actually be an AI-based debugger that can use predictive algorithms and read context well enough to find missing close-parentheses and logical errors in code, then correct them without human intervention. Think of it as a programmer's spellcheck. That evolves into more subtle forms of debugging: detecting and correcting the causes of memory leaks, incorrect data types, etc. That isn't that big of a leap, actually, and it would save software developers a shitload of time and money.
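A crude first step toward that "programmer's spellcheck" needs no AI at all. This is a minimal sketch (the function name and messages are my own, for illustration) that just tracks bracket nesting and reports the first mismatch:

```python
def check_parens(code):
    """Report the first unbalanced bracket in a string of code, if any."""
    pairs = {')': '(', ']': '[', '}': '{'}
    stack = []  # (character, index) of every bracket still open
    for i, ch in enumerate(code):
        if ch in '([{':
            stack.append((ch, i))
        elif ch in pairs:
            if not stack or stack[-1][0] != pairs[ch]:
                return f"unexpected '{ch}' at index {i}"
            stack.pop()
    if stack:
        ch, i = stack[-1]
        return f"'{ch}' opened at index {i} is never closed"
    return None  # balanced

print(check_parens("f(g(x)"))  # → '(' opened at index 1 is never closed
```

The "predictive" part the post imagines — choosing *where* the missing parenthesis should go — is the genuinely hard step this sketch punts on.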
I was writing code-generating algorithms more than 30 years ago. Surprising that it seems to be a fairly uncommon feature. Add a genetic algorithm, put the two in a feedback loop together with a code search/learning algorithm, and set a goal defining the abilities of the code that you want to evolve, perhaps? Could that evolve a self-learning AI?
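As a toy illustration of that goal-driven feedback loop — selection pressure toward a fixed target, not real code synthesis — here is a minimal genetic algorithm. The target string, character set, and all parameters are invented for the example:

```python
import random

# Toy goal: evolve a string that matches a fixed "specification".
TARGET = "print('hello')"
CHARS = "abcdefghijklmnopqrstuvwxyz'() "

def fitness(candidate):
    # Score = number of positions that already match the goal.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.05):
    # Randomly replace characters at the given per-character rate.
    return "".join(random.choice(CHARS) if random.random() < rate else c
                   for c in candidate)

def evolve(pop_size=200, generations=500):
    pop = ["".join(random.choice(CHARS) for _ in TARGET)
           for _ in range(pop_size)]
    best = pop[0]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)      # rank against the goal
        best = max(best, pop[0], key=fitness)
        if fitness(best) == len(TARGET):
            break                                # goal reached
        survivors = pop[:pop_size // 5]          # truncation selection
        pop = [mutate(random.choice(survivors)) for _ in range(pop_size)]
    return best

best = evolve()
```

The interesting (and unsolved) part is replacing `fitness` here with something that scores the *abilities* of running code rather than its resemblance to a known answer.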
> Could that evolve a self-learning AI?

Perhaps - brains are evolved, self-learning matter constructs, after all. We already have manually programmed self-learning AIs. Where they fall short, in my opinion, is in having no wide sensory involvement with the real world, no instincts inherited from Darwinian selection in hostile environments, and no self-assigned goals. However, those factors might prove dangerous if implemented without safeguards. "Ooh, I can solve all the world's problems by uploading all human minds and converting all available matter to computronium." Heaven or hell, anyone?
> I was writing code-generating algorithms more than 30 years ago. Surprising that it seems to be a fairly uncommon feature.

It's more common than you think. The trekbbs message board uses some of those very same algorithms to convert your posts into properly formatted HTML. The problem is, machine-generated code has the same features as machine-manufactured products: it's great when you want to automate a simple, repetitive task like converting a string of text into a document based on some preset parameters, but if I took something complex like, say, a Word document and told a machine to convert it into HTML, the result would be hilariously messy.
> It's more common than you think. The trekbbs message board uses some of those very same algorithms to convert your posts into properly formatted HTML. The problem is, machine-generated code has the same features as machine-manufactured products: it's great when you want to automate a simple, repetitive task like converting a string of text into a document based on some preset parameters, but if I took something complex like, say, a Word document and told a machine to convert it into HTML, the result would be hilariously messy.

My code generation was for control system identification, design, and simulation, but it was pretty static and dumb, as you describe, and couldn't adapt and learn. Given that the skill set required to create even simple code-translation and code-generation programs is probably limited to a small subset of the human population at most, I think that the AI level you describe is some way off, never mind bright or superbright AIs.
You might remember that we USED to be able to do this with older word processors. Word, WordPerfect, and even Pages could convert .doc to HTML pretty easily. But the formatting codes of word-processor documents and their markup languages keep getting more and more complicated, and conversion to HTML just isn't that simple anymore (and the apps that do a not-completely-terrible job of it mainly accomplish this by cranking out an HTML document crammed with superfluous <span> tags nested three or four deep).
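The "simple repetitive task" case really is just a handful of pattern substitutions. Here is a minimal sketch of a BBCode-to-HTML pass of the kind a forum might run on each post — the rules are generic BBCode, not trekbbs's actual implementation:

```python
import re

# Each rule maps one flat BBCode tag to its HTML equivalent.
RULES = [
    (re.compile(r"\[b\](.*?)\[/b\]", re.S), r"<strong>\1</strong>"),
    (re.compile(r"\[i\](.*?)\[/i\]", re.S), r"<em>\1</em>"),
    (re.compile(r"\[url=(.*?)\](.*?)\[/url\]", re.S), r'<a href="\1">\2</a>'),
]

def bbcode_to_html(post):
    # Apply each substitution in turn: fine for flat, well-formed markup,
    # hopeless for the arbitrarily nested structure of a word-processor file.
    for pattern, replacement in RULES:
        post = pattern.sub(replacement, post)
    return post

print(bbcode_to_html("[b]Famous[/b] last words"))
# → <strong>Famous</strong> last words
```

The contrast with the Word-to-HTML case is exactly the thread's point: fixed substitutions scale with the number of rules, not with the complexity of the input's structure.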
It's not a problem with algorithms; it's a problem of heuristics: as soon as we come up with an algorithm that works for code generation, new forms of code get released that make the algorithm no longer useful. What we need is a process where a computer can analyze the structure of the programming language, choose which solutions are appropriate for what it wants to do, and then implement those solutions in a way that yields the desired result. So not just a code library, but the ability to operate on elements of the code library in a way that is consistent with the internal logic of the programming language itself.
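A small piece of this already exists: a program can parse source into the language's own structural elements rather than treating it as a string of characters. A minimal sketch using Python's standard `ast` module (the snippet being analyzed is made up for the example):

```python
import ast

source = """
def total(prices):
    return sum(prices)
"""

# Parse the text into the language's own structure, then ask structural
# questions of it instead of doing string matching.
tree = ast.parse(source)

# Every function defined, and every plain function call, in the snippet.
defined = [n.name for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)]
called = [n.func.id for n in ast.walk(tree)
          if isinstance(n, ast.Call) and isinstance(n.func, ast.Name)]

print(defined)  # ['total']
print(called)   # ['sum']
```

This gives the "operate on elements consistently with the language's internal logic" part; the unsolved part is choosing *which* elements achieve a stated goal.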
In essence, the computer needs to "understand" the programming language and how to use it. That's why I pointed out that the real stepping stone for machines is not their ability to generate code, but their ability to find ERRORS in code and correct them. Call it the Dunning-Kruger-Turing test: the skill set you need to come up with the right answer is the same skill set you need to recognize a wrong one.
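The "recognize a wrong answer" half already exists in embryonic form: a language's own parser can locate an error; it's the proposing-a-fix half that would need the AI. A sketch in Python (the function name and the `"<post>"` placeholder filename are invented for the example):

```python
def locate_error(source):
    """Return (line, parser message) for the first syntax error, else None."""
    try:
        compile(source, "<post>", "exec")
        return None                    # parses cleanly
    except SyntaxError as err:
        return (err.lineno, err.msg)   # where, and what the parser thinks

print(locate_error("print('ok'"))      # reports a syntax error on line 1
```

The exact message varies by Python version, but the location is stable — which is precisely the gap between *detecting* a missing parenthesis and *correcting* it.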
> My code generation was for control system identification, design, and simulation, but it was pretty static and dumb, as you describe, and couldn't adapt and learn. Given that the skill set required to create even simple code-translation and code-generation programs is probably limited to a small subset of the human population at most, I think that the AI level you describe is some way off, never mind bright or superbright AIs.

And that right there is the bottleneck. The number of people who know how to create systems that would lead to a singularity-like machine intelligence is shockingly low. The other part of the problem is that the people who WANT to do it are a completely different group from the people who COULD, and they have nothing in common socially or economically and are rarely even in the same room together.
The subset of that subset that might be able to develop the AI level you describe probably numbers in the low tens of people at most, even if they are interested in the problem.
> Is understanding required, and what do we mean by the term? Is it anything more than adaptive pattern matching with randomly inserted speculative cognitive leaps that might or might not lead anywhere?

It's pattern matching, for sure, plus the ability to assign meaning to groups of symbols and then operate on each group as if it were a discrete object.
> I'd be more inclined to believe stuff like this if humans weren't going around eating Tide Pods.

Gotta keep those internal components squeaky clean.