
The Singularity is Near

  • For the Motion
    Votes: 2 (16.7%)
  • Against the Motion
    Votes: 10 (83.3%)
  • Total voters: 12
The Singularity is apocalyptic, neo-religious bullshit.

Go ahead, "envision" the future. We'll all have a good laugh later.
 
The biggest problem is all the "near" stuff; if it were near, we should at least see signs of it. The only thing glued to technology is the kids these days, but I've been laughing my ass off at those, because one of them was riding a bicycle and took a corner about 5 meters before the actual location of said corner. The wheel bumped against the pavement, the kiddo went flying, and he learned a lesson about gravity, extreme deceleration and the properties of thick concrete. :biggrin:
I sure hope that this isn't one of the signs... :wtf:
 
It's funny what the imagination can conceive in an attempt to explain the unknown, without any solid scientific foundation. There is an unfortunate aspect to the human condition whereby, under the right circumstances, people will value what their emotions tell them over facts.
 
The biggest problem is all the "near" stuff; if it were near, we should at least see signs of it.
Kids, teenagers and now 20-somethings are growing more and more dependent on smartphone technology and social media for every aspect of their day-to-day lives. Combine that with pseudo-AIs like Siri and Alexa, and the basic infrastructure for a singularity-like revolution is already in place.

For the ACTUAL definition of what the singularity is supposed to be (and not this utopianist Kurzweilian bullshit), all it would really take is for Apple or Microsoft to develop an AI expert system that is better, faster and more efficient at coding software than a human being. That is, a computer program capable of writing and/or debugging computer programs with minimal human intervention. Once computers learn how to write code from scratch, the next step is learning how to DESIGN software and then implement the design in whatever programming language they're working with. Then you're primed for the final step, where computers can improve their original designs faster and more efficiently than humans can.
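As a toy illustration of just the first rung of that ladder, here's a minimal Python sketch of a program that writes, loads and tests another program; the spec format and the add function are invented placeholders, nothing like a real AI:

# Toy illustration: a program that writes, runs and tests another program.
# The spec and the generated function are made-up placeholders.
spec = {"name": "add", "args": ["a", "b"], "body": "return a + b"}

# "Write" the code from the spec...
source = "def {name}({args}):\n    {body}\n".format(
    name=spec["name"], args=", ".join(spec["args"]), body=spec["body"])

# ...then load and test it with no human in the loop.
namespace = {}
exec(source, namespace)
assert namespace["add"](2, 3) == 5
print(source)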

When human decision-making is removed from the design process of computers, engineering and software, you have a post-singularity society. That is, it is no longer possible to predict how that software is going to be developed or how it will function, because it will be based on logic and design choices made by machines, not by humans.

It probably won't be all damaging to human beings, but it probably won't be all beneficent either. It will be good for us in a lot of ways, bad for us in at least as many ways, and utterly confusing for literally everyone.
 
Evolutionary algorithms could probably increase the utility and innate intelligence of machine-generated code, although the resulting code might be almost impossible to understand (as happens when such algorithms are used to design electronic circuits).
 
But imagine how the development times will be shortened.

You tell your expert AI to design an OS, what kind of hardware it's to run on, and all the parameters it needs to function with humans; the AI designs and builds the OS, and I assume also tests it, and BAM! An OS made in much less time than it would take if humans were involved in the process, and already tested.

They then ship it out to users in a lot less time than would normally be possible.

Of course, what they don't tell you is that the OSes are all learning AIs, da da dum... :)

But seriously, if machines can do that, the build times would be so much shorter.
 
Evolutionary algorithms could probably increase the utility and innate intelligence of machine-generated code, although the resulting code might be almost impossible to understand (as happens when such algorithms are used to design electronic circuits).
I forgot that AIs are already basically doing this on the hardware side, so on a microscale we're already a step in that direction. I think the next step will actually be an AI-based debugger that can use predictive algorithms and read context well enough to find missing close-parentheses and logical errors in code and then correct them without human intervention. Think of it as a programmer's spellcheck. That evolves into more subtle forms of debugging: detecting and correcting causes of memory leaks, incorrect data types, etc. That isn't that big of a leap, actually, and it would save software developers a shitload of time and money.
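The missing-parenthesis case is almost purely mechanical; here's a crude Python sketch of that "programmer's spellcheck" idea (no AI required for this much, just bookkeeping):

# Crude "programmer's spellcheck": spot an unbalanced parenthesis and
# suggest a fix. A real tool would parse properly; this is the toy version.
def check_parens(line):
    depth = 0
    for i, ch in enumerate(line):
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
        if depth < 0:
            return "extra ')' at column %d" % i
    if depth > 0:
        return "missing %d close-paren(s): try appending %s" % (depth, ")" * depth)
    return "balanced"

print(check_parens("print(math.sqrt(2)"))  # -> missing 1 close-paren(s): try appending )

The subtler cases (memory leaks, wrong data types) are where the actual intelligence would have to come in.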

And this is the Dunning-Kruger effect working in reverse: if you have an AI capable of identifying and fixing errors, you have an AI that isn't going to make those kinds of errors in the first place. The next logical step in that progression is an AI that can write blocks of code to do what you tell it to do, without your having to do the coding yourself. Once you have that, all you need is a designer's algorithm.
 
I was writing code-generating algorithms more than 30 years ago. Surprising that it seems to be a fairly uncommon feature. Add a genetic algorithm and put the two in a feedback loop together with a code search/learning algorithm and set a goal defining the abilities of the code that you want to evolve perhaps?
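Something like this minimal Python sketch of that feedback loop, maybe; the goal here (evolve an arithmetic expression that equals 42) is deliberately trivial, and everything in it is made up for illustration:

import random

# Minimal generate/score/mutate feedback loop over code strings.
# Goal: evolve an arithmetic expression that evaluates to 42.
DIGITS, OPS = "123456789", "+-*"

def random_expr():
    return random.choice(DIGITS) + random.choice(OPS) + random.choice(DIGITS)

def mutate(expr):
    i = random.randrange(len(expr))
    pool = OPS if expr[i] in OPS else DIGITS
    return expr[:i] + random.choice(pool) + expr[i + 1:]

def fitness(expr):
    return abs(eval(expr) - 42)  # distance from the goal

population = [random_expr() for _ in range(20)]
for generation in range(200):
    population.sort(key=fitness)
    if fitness(population[0]) == 0:
        break
    survivors = population[:10]  # keep the fittest half...
    population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]  # ...refill with mutants

print(population[0], "=", eval(population[0]))  # e.g. 6*7 = 42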
 
Could that evolve a self-learning AI?
 
Perhaps - brains are evolved self-learning matter constructs after all. We already have manually programmed self-learning AIs. Where they fall short, in my opinion, is in having no wide sensory involvement with the real world, no instincts inherited from Darwinian selection in hostile environments, and no self-assigned goals. However, those factors might prove dangerous if implemented without safeguards. "Ooh, I can solve all the world's problems by uploading all human minds and converting all available matter to computronium." Heaven or hell, anyone?

ETA: Google have created an automated system for generating ML code to analyse images, based on neural networks (so probably largely impenetrable to human examination or alteration, and static once generated rather than plastically adaptable).
https://www.theregister.co.uk/2018/01/18/google_automl/
Long way to go...
 
I was writing code-generating algorithms more than 30 years ago. Surprising that it seems to be a fairly uncommon feature.
It's more common than you think. The TrekBBS message board uses some of those very same algorithms to convert your posts into properly formatted HTML. The problem is, machine-generated code has the same features as machine-manufactured products: it's great when you want to automate a simple repetitive task, like converting a string of text into a document based on some pre-set parameters, but if I took something complex like, say, a Word document and told a machine to convert it into HTML, the result would be hilariously messy.
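For the curious, that kind of conversion really is just substitution driven by pre-set parameters; a stripped-down Python sketch of turning BBCode-style markup into HTML (the rule list is invented for the example):

import re

# Stripped-down forum-markup-to-HTML conversion: simple, repetitive
# substitutions, which is exactly what machine-generated code handles well.
RULES = [
    (r"\[b\](.*?)\[/b\]", r"<b>\1</b>"),
    (r"\[i\](.*?)\[/i\]", r"<i>\1</i>"),
    (r"\[url=(.*?)\](.*?)\[/url\]", r'<a href="\1">\2</a>'),
]

def bbcode_to_html(text):
    for pattern, replacement in RULES:
        text = re.sub(pattern, replacement, text)
    return text

print(bbcode_to_html("[b]The Singularity[/b] is [i]near[/i]"))
# -> <b>The Singularity</b> is <i>near</i>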

You might remember that we USED to be able to do this with older word processors. Word, WordPerfect and even Pages could convert .doc to HTML pretty easily. But the formatting codes of word processor documents and their markup languages keep getting more and more complicated, and conversion to HTML just isn't that simple anymore (the apps that do a not-completely-terrible job of it mainly accomplish this by cranking out an HTML document crammed with superfluous <span> tags nested three or four deep).

It's not a problem with algorithms, it's a problem of heuristics. Because as soon as we come up with an algorithm that works for code generation, new forms of code get released that make the algorithm no longer useful. What we need is a process where a computer can analyze the structure of the programming language, choose which solutions are appropriate for what it wants to do, and then implement those solutions in a way that yields the desired result. So not just a code library, but the ability to operate on elements of the code library in a way that is consistent with the internal logic of the programming language itself.
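Python already exposes a sliver of that, for what it's worth: a program can read another program as a structured tree rather than as raw text, which is at least the raw material for choosing and applying solutions. A small sketch (the example source is invented):

import ast

# A program reading another program's structure, not its text:
# walk the syntax tree and inventory what the code is built from.
source = """
def do_stuff_to_things(stuff, things):
    while stuff == things:
        stuff = update(stuff)
    return stuff
"""

tree = ast.parse(source)
for node in ast.walk(tree):
    if isinstance(node, ast.FunctionDef):
        print("subroutine found:", node.name)
    elif isinstance(node, ast.While):
        print("loop found on line", node.lineno)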

In essence, the computer needs to "understand" the programming language and how to use it. That's why I pointed out that the real stepping stone for machines is not their ability to generate code, but their ability to find ERRORS in code and correct them. Call it the Dunning-Kruger-Turing test: the skill set you need to come up with the right answer is the same skill set you need to recognize a wrong one.
 
My code generation was for control system identification, design and simulation, but it was pretty static and dumb, as you describe, and couldn't adapt and learn. Given that the skill set required to create even simple code translation and generation programs is probably limited to a small subset of the human population at most, I think that the AI level you describe is some way off, never mind bright or superbright AIs.

The subset of that subset that might be able to develop the AI level you describe probably numbers in the low tens at most, even assuming they're interested in the problem.

Is understanding required, and what do we mean by the term? Is it anything more than adaptive pattern matching with randomly inserted speculative cognitive leaps that might or might not lead anywhere?
 
My code generation was for control system identification, design and simulation, but it was pretty static and dumb, as you describe, and couldn't adapt and learn. Given that the skill set required to create even simple code translation and generation programs is probably limited to a small subset of the human population at most, I think that the AI level you describe is some way off, never mind bright or superbright AIs.

The subset of that subset that might be able to develop the AI level you describe probably numbers in the low tens at most, even assuming they're interested in the problem.
And that right there is the bottleneck. The number of people who know how to create systems that would lead to a singularity-like machine intelligence is shockingly low. The other part of the problem is that the number of people who WANT to do that is a completely different group from the people who COULD; they have nothing in common socially or economically, and they're rarely even in the same room together.

Is understanding required, and what do we mean by the term? Is it anything more than adaptive pattern matching with randomly inserted speculative cognitive leaps that might or might not lead anywhere?
It's pattern-matching, for sure, and the ability to assign meaning to groups of symbols and then operate on that group as if it were a discrete object.

For example, if you teach a computer to treat:
do
    {stuff}
while (stuff == things);
as a discrete task, the computer might all on its own create something like this:

function do_stuff_to_things()
{
    do
        {stuff}
    while (stuff == things);
}

If it has a way of recognizing a discrete block of code and identifying it, saying "This is a subroutine," then we have a situation where the computer understands, in practical terms, what the code actually does and how changes in one part of the code will affect other parts. This is because the computer doesn't treat each line or argument as its own distinct logical unit, but as an element within other logical units that are related in ways it can track and categorize.

And that's just a basic first step. The next step after that is a situation where you can ask the computer "Why isn't my code doing stuff to the things?" and the computer can look at the code and answer, "Because in line 34, the function has set stuff equal to not-things."
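A toy version of that "because in line 34..." diagnosis is actually within reach once code is read structurally; here's a hedged Python sketch using the standard ast module (the buggy function and the heuristic are invented for the example):

import ast

# Toy "why isn't it working?" diagnosis: report every line where the
# variable 'stuff' gets reassigned - a likely hiding place for the bug.
buggy_source = """
def do_stuff_to_things(stuff, things):
    stuff = not things          # line 3: the culprit
    while stuff == things:
        stuff = do_stuff(stuff)
    return stuff
"""

tree = ast.parse(buggy_source)
for node in ast.walk(tree):
    if isinstance(node, ast.Assign):
        for target in node.targets:
            if isinstance(target, ast.Name) and target.id == "stuff":
                print("line %d: 'stuff' is reassigned here" % node.lineno)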
 
I expect the meta-level ability to describe why a decision was taken en route to a goal probably falls in the realm of expert systems rather than categorization techniques such as neural networks, although our brains appear to consist of plastic neural networks with essential motor, language and social skills that are seemingly emergent and hardwired by evolution. That's two spheres of ML expertise required, probably with the addition of Bayesian statistical methods and genetic algorithms, both in development and in deployment (to implement a form of neural plasticity).

Alternatively, the effort to simulate a human or "more advanced" brain as purely neural networks might get there eventually although I suspect hardwired neural networks and biochemistry (neurotransmitters, hormones, etc) cannot be overlooked.
 
And to think that washing kids' mouths out with soap was considered a perfectly viable punishment for decades... :lol:
 