
Google Computer to program itself. AI breakthrough?

DarthTom

I saw this story this a.m. I'm curious: does anyone in computer programming posting here consider this to be a major AI breakthrough?

In college, it wasn’t rare to hear a verbal battle regarding artificial intelligence erupt between my friends studying neuroscience and my friends studying computer science.
One rather outrageous fellow would mention the possibility of a computer takeover, and off they went. The neuroscience-savvy would marvel at the potential of such hybrid technology while the CS majors argued we have nothing to fear, as computers will always need a programmer to tell them what to do.
Today’s news brings us to the Neural Turing Machine, a computer that will combine the way ordinary computers work with the way the human brain learns, enabling it to actually program itself. Perhaps my CS friends should reevaluate their position?
The computer is currently being developed by the London-based DeepMind Technologies, an artificial intelligence firm that was acquired by Google earlier this year. Neural networks — which will enable the computer to invent programs for situations it has not seen before — will make up half of the computer’s architecture. Experts at the firm hope this will equip the machine with the means to create like a human, but still with the number-crunching power of a computer, New Scientist reports.
In two different tests, the NTM was asked to 1) learn to copy blocks of binary data and 2) learn to remember and sort lists of data. The results were compared with a more basic neural network, and it was found that the computer learned faster and produced longer blocks of data with fewer errors. Additionally, the computer’s methods were found to be very similar to the code a human programmer would’ve written to make the computer complete such a task.
These are extremely simple tasks for a computer to accomplish when being told to do so, but computers’ abilities to learn them on their own could mean a lot for the future of AI.
Elon Musk is not going to be happy about this.
 
Damn junk science reporting.

So, it looks like these folks took a neural network and backed it with a large memory store and a couple different ways of accessing it. This results in a neural network that learns faster than traditional ones. This is neat. An AI breakthrough? Not really. It will probably be useful in compiler design--it could identify when a programmer has made an inefficient version of some algorithm it knows, and replace it with a more efficient one. Things like that. Most program generation research ends up being integrated into compilers like that. This is definitely not something that'll put programmers out of work anytime soon. I dunno what that dig at Elon Musk is about--he probably hates writing really simple stuff over and over, too, like any programmer. :p
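For anyone curious what "backing a neural network with a memory store" roughly means, here's a toy sketch of content-based memory addressing in Python, which is one of the access methods involved. This is purely my own illustration, not DeepMind's code; every name and size here is made up.

```python
# Toy sketch of content-based memory access: the network emits a "key" vector,
# and a read is a softmax-weighted blend of the memory rows most similar to it.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def read_memory(memory, key, sharpness=5.0):
    # memory: (slots, width) array; key: (width,) query vector from the network
    scores = np.array([cosine(row, key) for row in memory])
    weights = np.exp(sharpness * scores)
    weights /= weights.sum()              # soft attention over memory slots
    return weights @ memory               # blended read vector

memory = np.random.randn(8, 16)
key = memory[3] + 0.1 * np.random.randn(16)   # noisy query resembling slot 3
print(read_memory(memory, key).round(2))
```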
 
Additionally, the computer’s methods were found to be very similar to the code a human programmer would’ve written to make the computer complete such a task.

I'm not a computer scientist, but doesn't this mean that the machine did what it was designed and expected to do? If it did something totally unexpected and radically more efficient than a human coder, then it would be time to be surprised.

Also, since the computer hardware was designed by humans to function in a certain way, it would seem that radical departures from the expected might produce less efficient output. Now imagine a computer with access to manipulators and "rapid prototyping" hardware designing and building—unbidden—a more efficient logic circuit.
 
Additionally, the computer’s methods were found to be very similar to the code a human programmer would’ve written to make the computer complete such a task.

I'm not a computer scientist, but doesn't this mean that the machine did what it was designed and expected to do? If it did something totally unexpected and radically more efficient than a human coder, then it would be time to be surprised.

Also, since the computer hardware was designed by humans to function in a certain way, it would seem that radical departures from the expected might produce less efficient output. Now imagine a computer with access to manipulators and "rapid prototyping" hardware designing and building—unbidden—a more efficient logic circuit.

We already use genetic algorithms to design circuits, although it is an evolving field, if you'll pardon the pun. The interesting thing about genetic algorithms is that we don't always understand why they work so well. They find solutions after many, many attempts, in much the same way biological evolution stumbles across useful traits, but we often have no idea why a given design works as well as it does (or even why it works at all.) Some would argue that we shouldn't use something we don't understand, but when you're working at a low level like simple circuits, there's not really much to worry about as long as you test its inputs and outputs thoroughly to make sure it doesn't exhibit any strange behavior.
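To give a flavour of how a genetic algorithm searches for a solution, here's a bare-bones sketch in Python. It evolves an 8-row truth table toward a target (3-input majority vote). Real circuit evolution works on gate netlists rather than raw truth tables, so treat this only as an illustration of the evaluate/select/crossover/mutate loop; all names are my own.

```python
# Minimal genetic algorithm: score candidates, keep the best, recombine, mutate.
import random

TARGET = [int(a + b + c >= 2) for a in (0, 1) for b in (0, 1) for c in (0, 1)]

def fitness(genome):
    # genome is an 8-row truth table encoded as bits
    return sum(g == t for g, t in zip(genome, TARGET))

def evolve(pop_size=30, generations=50, mutation_rate=0.05):
    pop = [[random.randint(0, 1) for _ in range(8)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, 8)
            child = a[:cut] + b[cut:]                              # crossover
            child = [bit ^ (random.random() < mutation_rate) for bit in child]
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

print(evolve(), "target:", TARGET)
```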

I dunno what that dig at Elon Musk is about--he probably hates writing really simple stuff over and over, too, like any programmer. :p

The reference to Musk is that earlier this week he put out a warning about AI. He likened it to summoning the devil and called it one of the biggest threats to humankind.

Presumably he was speaking of strong AI, which is not what this topic is about.
 
I'm a computer scientist who specialized in machine learning and neural networks. It's all marketing speak that's trying to fool laypeople into believing in their technology. I can say with absolutely no doubt that we're nowhere close to anything that even resembles "strong" AI. That is, an AI that can actually think and reason all by itself.

Current AI is merely good at memorization and pattern recognition, which despite those limitations already enables a lot of amazing stuff. Thinking and reasoning, however, are a whole different ballpark. They require some sort of symbolic processing capability, and as far as I'm aware, not a single scientist, computer or neuro, has any idea how to get a machine to assign meaning and semantics, or to manipulate them in a meaningful fashion.
 
The first type of AI would be designed to be very rudimentary, based on following simple instructions.

Base of AI thought and programming: "I like to be turned on, therefore I need to have energy." That is the positive base of programming. The negative base of programming would be energy sensors tied into the positive base, where any drop in energy below a certain level would trigger a response that the AI is losing energy, which it does not like, and would therefore need to engage in some activity to increase its energy capacity.

Example: The AI has a full capacity of energy and encounters an event where it stubs its toe on a nail sticking out from a board on a concrete floor. The event causes the AI to lose more than half of its energy, which causes it to form a logic pattern of association in which it remembers the key objects that caused it to lose energy past a certain level. Those key objects would be the concrete, the wooden board, and the nail. In the future, when the AI came across these key objects, it would access its memory of what happened so that it would avoid stubbing its toe on a nail and losing energy again.

A change in how those key objects are arranged in the environment would also create a secondary path of logic. This time around the AI sees a piece of steel the same size as the board, with some nails lying around it, on a Gorilla Glass floor. The AI's memory path would be triggered to think the steel beam was the wooden board; it would cautiously approach the steel and nails and realize they were the same key objects as before, but not in the configuration that previously caused it to lose energy, though still enough of a stimulus to make the AI cautious. The Gorilla Glass flooring would introduce a path of logic of security, as the AI would not readily recognize the other two key objects based on a triangulated processing approach. If we were to remove the steel beam and replace it with a wooden post of similar shape and length, the AI would have a tertiary evocation of thought based on the first event: once it scanned the area for any nails, it would quickly move on to its next event.
 
The second part of the logic-path programming would be to create a base program the AI would run once it has built a library of events that caused it to lose energy, where the actions performed would increase the energy within the AI's capacitor, basically making the AI feel better.

Let's take the toe-stubbing again. The AI is losing energy, which it doesn't like, and searches through its library for a remedy. It comes across ice cream and frantically tries to find ice cream at the end of the hall where the first event took place. The AI moves to the end of the hall, eats the ice cream, and begins to 'feel better' as its energy is replaced, the way the cold of the ice cream stuns the nerves somewhat and lessens the pain of an injury, which in this case is losing energy rather than having a nail run through its big toe... which does not feel good at all.

AI programming is rather easy on a simple level. All that is needed are human experiences that cause us pain, converted into a loss of energy for the AI, while experiences that soothe such pain are converted into an increase of energy.
 
The reasoning aspect of AI would come at a faster rate once its logic programming reached an adult level: instead of being 'scared' all the time of a room full of wooden boards with nails sticking out of them, it would know that as long as it stays a certain measurable distance from the boards there is no possible way it could lose energy... but... this is where the analytical aspect of spatial reasoning comes into play. The AI would have to determine whether walking through a hall full of wooden boards with nails in them could cause a severe loss of energy, meaning it might die, if the boards all fell on the AI as it passed through the center of them, even though there is more than enough distance between the AI and the event or events to keep it safe.

Should the AI run to get to the ice cream at the other end of the hall, possibly triggering the events through its own actions, or should it walk very softly past them, hoping they don't shift due to an outside factor, crash onto the AI, and cause it to lose an extreme amount of energy?

The AI then says "Hmmm...", reconfigures its finger into a flamethrower, and burns the wooden boards to the ground... but now what can the AI do, as there are thousands of nails pointing in all directions between it and the ice cream at the end of the hall?

How many different negative values can you find that would cause the AI to lose energy, and how many positive values that would sustain its energy level so it can continue its mission of getting to the ice cream at the end of the hall?
 
Those 3 posts may be the most relentlessly incoherent rantings on AI I've ever read. Not one sentence of that represented anything resembling reality or the current state of research. Your obsession with "losing energy" is also incomprehensible in this context.
 
AI programming is rather easy on a simple level. All that is needed are human experiences that cause us pain, converted into a loss of energy for the AI, while experiences that soothe such pain are converted into an increase of energy.

You do realize this makes no sense whatsoever?

I mean... just like the rest of what you wrote.
 
I don't think those posts would have passed muster in the trek tech forum, nevermind here in sci/tech.
 
The base of the AI's sentient programming is the same as a real human being's.

Growing up, humans learn through interaction. A baby crawling across the floor will realize that it has freedom of movement when it crawls across an open space where there are no obstacles to impede its movement.

The baby will continue to travel along this path of least resistance until it encounters an obstacle, such as a box, that will cause it to do several things. First, if the baby reaches for the box and realizes that it can move the box, a memory is created: the baby will remember the box the next time and will approach it expecting it to be movable. This time the box cannot be moved, no matter how hard the baby tries. Either the baby will give up and find something else to do, or it will sit there and cry, because it is confused as to why it cannot move the box and because it knows that when it cries the parental figure will come and remedy the problem.

The difference between AI - artificial intelligence and human intelligence is that AI doesn't have the ability to grow new memory cells like the human brain does. Therefore the necessary logic paths have to be programmed into the AI so that relations are formed based upon the baby and box scenario.

The AI has to have some form of threat associated with its ability to learn. In this case, stubbing a toe on the nail sends a signal to the AI that something has happened: its energy levels have been lowered to a point where it will not be able to function, so it has to do something to maintain a higher level of energy in order to keep functioning.

If the AI stubbed its toe and did not realize that it had damaged some of its operational components, it would simply continue to do what it was programmed to do, unable to perform a task the way a human would, where environmental factors such as fear and pain are present and cause it to respond to a situation differently each time.

The difference between a robot and AI is that a robot is programmed to perform functions that are routine and normal without the ability to interact with the environment. AI has the ability to adjust to its environment based on experiences stored in its memory just like a human does.

A robotic welder set to complete a 360-degree weld will not stop its weld function; if a person steps in the way of the welding boom, the boom will knock the person down and continue on its programmed path of welding.

An AI welder on the other hand would be able to recognize that a person was standing in the way of its welding arc and would either tell the person to get out of the way or stop and then tell the person to get out of the way because the AI Welder has the ability to use reason and logic. The reason I stopped was because a person was standing in the path of my weld. The logic for my reason is because I do not want to hit the person with my welding boom because it will cause them an injury.
 
My father's Roomba vacuum cleaner would change course every time it bumped into something, but I doubt it qualified as having AI.
 
The base of the AI's sentient programming is the same as a real human being's.

No. Humans are electrochemical machines with brains based around the use of designated chemicals applied to particular neuron clusters to activate particular portions of memory/generate new memories. The ability to associate memory and stimulus is a key feature. It is essential to our learning.

Computers are binary by nature. They don't deal well with fuzzy inputs. It requires a lot of extra programming to accommodate them, and even then they will miss things because they are best suited to digital/binary data and not analog/sensory data. Not coincidentally, this is why neural network algorithms were developed: so we could "train" a program similarly to the way an animal brain is trained. It's just that the training process is actually very difficult. Humans (and other animals) get literally millions of different kinds of possible input stimuli all the time. Computers, by comparison, can handle massive volumes of stimuli but don't handle variety very well. A neural network designed to recognize the center stripe of a road, for instance, would do a poor job of recognizing doorknobs. This means you'd need a different neural network (or at least a series of interlocking neural networks) for just about every kind of stimulus/input you intend to handle, assuming the way each one is handled is different.
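For a sense of what "training" means at the smallest possible scale, here's a toy perceptron learning the AND function in Python. It's a deliberately trivial sketch of the weight-adjustment idea, nothing like the scale or variety of stimuli a real network has to handle.

```python
# A single perceptron trained on the AND truth table: predict, compare to the
# target, nudge the weights in the direction that reduces the error, repeat.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])                      # AND truth table

w = np.zeros(2)
b = 0.0
for epoch in range(20):
    for xi, target in zip(X, y):
        pred = int(w @ xi + b > 0)
        error = target - pred
        w += 0.1 * error * xi                   # nudge weights toward fewer errors
        b += 0.1 * error

print([int(w @ xi + b > 0) for xi in X])        # [0, 0, 0, 1]
```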

You don't seem to be grasping how complex these problems are.

Growing up, humans learn through interaction. A baby crawling across the floor will realize that it has freedom of movement when it crawls across an open space where there are no obstacles to impede its movement.

The baby will continue to travel along this path of least resistance until it encounters an obstacle, such as a box, that will cause it to do several things. First, if the baby reaches for the box and realizes that it can move the box, a memory is created: the baby will remember the box the next time and will approach it expecting it to be movable. This time the box cannot be moved, no matter how hard the baby tries. Either the baby will give up and find something else to do, or it will sit there and cry, because it is confused as to why it cannot move the box and because it knows that when it cries the parental figure will come and remedy the problem.

Babies actually don't learn this way. They don't have object permanence. It's like you don't know anything about early childhood development, either.

The difference between AI - artificial intelligence and human intelligence is that AI doesn't have the ability to grow new memory cells like the human brain does. Therefore the necessary logic paths have to be programmed into the AI so that relations are formed based upon the baby and box scenario.

Well, you get back to us when you've figured out how to program the "necessary logic paths." Hint: what you've handwaved here is a Hard Problem in AI research.

The AI has to have some form of threat associated with its ability to learn. In this case, stubbing a toe on the nail sends a signal to the AI that something has happened: its energy levels have been lowered to a point where it will not be able to function, so it has to do something to maintain a higher level of energy in order to keep functioning.

Computers don't understand fear so why in the world would a "threat" component be necessary? You're attributing emotions and instincts to something that has none and has no need of them. What is done in AI is to evaluate the "fitness" of a given output or algorithm. Did it come up with a good solution? If not, how close did it get? Feed that result back into the computer and let it try again. Depending on how well its learning/training system is programmed, it may be able to come up with a good solution quickly... or never.
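A toy version of that feedback loop, just to make it concrete: a hill climber that scores a candidate bit pattern against a fitness function, tries a small random change, and keeps it only if the score doesn't get worse. Entirely illustrative; the target and names are invented.

```python
# Score it, feed the score back, try again -- no fear or threat required.
import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0]

def fitness(candidate):
    return sum(c == t for c, t in zip(candidate, TARGET))

candidate = [random.randint(0, 1) for _ in TARGET]
while fitness(candidate) < len(TARGET):
    trial = list(candidate)
    trial[random.randrange(len(trial))] ^= 1      # try one small random change
    if fitness(trial) >= fitness(candidate):      # keep it only if it's no worse
        candidate = trial

print(candidate)   # matches TARGET
```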

If the AI stubbed its toe and did not realize that it had damaged some of its operational components, it would simply continue to do what it was programmed to do, unable to perform a task the way a human would, where environmental factors such as fear and pain are present and cause it to respond to a situation differently each time.

Detecting a stopped component is incredibly easy. When a sensor stops sending data this is extremely obvious to any monitoring hardware. Once again, you are focused on the wrong things.

The difference between a robot and AI is that a robot is programmed to perform functions that are routine and normal without the ability to interact with the environment. AI has the ability to adjust to its environment based on experiences stored in its memory just like a human does.

Yes, ideally, AI is able to adjust to new situations. We're just not very good at programming adaptive AI at this point. But it's clear that you don't have any insights into solving these problems. (I don't, either, but then I don't go around speculating as if I do.)

A robotic welder set to complete a 360-degree weld will not stop its weld function; if a person steps in the way of the welding boom, the boom will knock the person down and continue on its programmed path of welding.

Um. Yes it will. Do you not understand the concept of "sensors"?

Do you have an automatic garage door opener or have you ever seen one? Those come with extremely dumb infrared sensors that send beams to each other. Anything breaks the beam while the door is coming down, the door stops and goes back up. Great for stopping your toddler from getting crushed by the torque of the door motor.

Notably, there is no AI required for this to work. Dumb sensors are perfect for that sort of application and they would be used in automated robots, as well: a foreign object gets in the way of a deadly/dangerous tool, tool stops working.

An AI welder on the other hand would be able to recognize that a person was standing in the way of its welding arc and would either tell the person to get out of the way or stop and then tell the person to get out of the way because the AI Welder has the ability to use reason and logic. The reason I stopped was because a person was standing in the path of my weld. The logic for my reason is because I do not want to hit the person with my welding boom because it will cause them an injury.

You have seriously provided an overly complex use case where dumb sensors will do a lot more good (for a lot less money and effort) than programming an AI not to hurt people.

The computer doesn't need a rationale, it just needs to be programmed to stop working if anything gets in the way of its tools. That's about as simple as it gets: sensor A gets tripped, tool B shuts off. Boom. Done.
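That whole safety interlock fits in a few lines. A sketch, with made-up sensor names, just to show there's no reasoning involved:

```python
# If either safety sensor trips, the tool stops. No AI, no rationale.
def tool_may_run(beam_unbroken: bool, guard_closed: bool) -> bool:
    return beam_unbroken and guard_closed

def control_step(sensors: dict) -> str:
    if tool_may_run(sensors["beam_unbroken"], sensors["guard_closed"]):
        return "WELD"
    return "STOP"

print(control_step({"beam_unbroken": True,  "guard_closed": True}))   # WELD
print(control_step({"beam_unbroken": False, "guard_closed": True}))   # STOP
```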
 
I just, as in two hours ago, developed an AI that can tell real pumpkins from fake pumpkins. It gets them right 9 times out of 10, but since fake pumpkins practically do not exist, I just ended up throwing away a lot of pretty good pumpkins. It is a waste indeed.

The description is only metaphorical, but I prefer not to disclose the nature of my business, to avoid the suspicion that I am summoning the devil, charming fellow that he is.

I also failed to mention that I didn't develop anything, I just installed it all using Synaptic. But hey, the devil is at your fingertips these days.
 
I believe it's possible to write a computer program that's capable of reasoning the same way humans do. A reckless programmer may even design a program that forms destructive goals.

However, humanity will always have a secret weapon at its disposal: two hands and a power outlet. Ever notice that in sci-fi, robots never appear to have a power source?
 
My father's Roomba vacuum cleaner would change course every time it bumped into something, but I doubt it qualified as having AI.

Actually that is considered AI because the cleaner has the ability to discern that it has come across an object that it cannot navigate through or around or over.

A robot programmed with a basic input/output logic source would have bumped into something and would have continued to move forward until the battery ran down.

I have seen the AI Cleaners that you speak of. They even try to find their base once they have used so much energy. That is basic AI sentient thought. " I am low on energy so I need to return to the base so I can recharge." I'm not certain if the AI Cleaner then begins to clean again but the logic statement involved would be " I am low on energy so I need to return to the base so I can recharge so that I can continue to vacuum."

The logic programming wouldn't be written with letters; it would be written in 1's and 0's that define the opening and closing of many logic gates.

Check out the Wiki Article relating to Logic Gates - http://en.wikipedia.org/wiki/Logic_gate

Which gate do you think would create the function whereby the cleaner moves around and then, upon coming into contact with an object, backs up and chooses another direction of travel until it comes into contact with another object?
 
Dryson, did you not read RobMax's post?

I mean, it was well-worded, full of content and obviously included a lot of knowledge that you lack. Did you not read this and think: "Wow, RobMax! That was interesting. Thanks so much. I learned so much from this and I realize my own posts didn't make much sense. Do you have any suggestions for what to read to educate myself further?"?

I'm honestly a bit confused. Your posts earlier were full of bad logic and misconceptions about how programming and AI work. Then RobMax, who clearly knows more about programming than you or me, puts in a lot of effort to explain things to you... and you just ignore it? :(

I haz a sad... because this makes meaningful constructive discussion impossible.
 
My father's Roomba vacuum cleaner would change course every time it bumped into something, but I doubt it qualified as having AI.

Actually that is considered AI because the cleaner has the ability to discern that it has come across an object that it cannot navigate through or around or over.

Roombas are pretty clever but you are overselling it a bit.

Read this if you want to know how Roombas navigate. Spoiler: it's just a few different types of sensors.

A robot programmed with a basic input/output logic source would have bumped into something and would have continued to move forward until the battery ran down.

No.

I have seen the AI Cleaners that you speak of. They even try to find their base once they have used so much energy. That is basic AI sentient thought. " I am low on energy so I need to return to the base so I can recharge." I'm not certain if the AI Cleaner then begins to clean again but the logic statement involved would be " I am low on energy so I need to return to the base so I can recharge so that I can continue to vacuum."

Once again, this is very simple: the robotic vacuum scans the room for the infrared signal of its base station, heads back to it, and charges. Some models will then go out and clean again. While the technology is reasonably impressive for a consumer product, we're not talking cutting-edge stuff here. It's just a simple collection of sensors and programmed rote behaviors. You keep making it out to be far more complex and intelligent than it is.
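In code, that kind of rote behavior is little more than a handful of if-statements run in a loop. A sketch, with every name invented purely for illustration:

```python
# Rote rules checked in priority order -- no memory, no model of the room.
def vacuum_step(battery_level, bumper_hit, sees_base_beacon):
    if battery_level < 0.2:
        return "follow_beacon" if sees_base_beacon else "search_for_beacon"
    if bumper_hit:
        return "back_up_and_turn"
    return "drive_forward"

print(vacuum_step(0.9, False, False))  # drive_forward
print(vacuum_step(0.9, True, False))   # back_up_and_turn
print(vacuum_step(0.1, False, True))   # follow_beacon
```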

The logic programming wouldn't be written with letters; it would be written in 1's and 0's that define the opening and closing of many logic gates.

What in the world are you babbling about? All programming code boils down to ones and zeroes eventually. It doesn't matter whether it's written directly in binary or in a higher-level language, except the latter is far more flexible and permits more complexity than is practical with programming on the bare metal (human brains can only do so much.)

Check out the Wiki Article relating to Logic Gates - http://en.wikipedia.org/wiki/Logic_gate

Which gate do you think would create the function whereby the cleaner moves around and then, upon coming into contact with an object, backs up and chooses another direction of travel until it comes into contact with another object?

You don't need more than an AND gate and a NOT gate, assuming you're using a dual-sensor edge avoidance system. I will leave it as an exercise for you to figure out why.
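(If you want to play with it, here's one way to wire that up in Python, assuming two edge sensors that each report 1 while they still see floor. The sensor conventions are made up; working out why these two gates suffice is still the interesting part.)

```python
# Drive forward only while BOTH sensors see floor (AND); otherwise turn (NOT).
def AND(a, b):
    return a & b

def NOT(a):
    return 1 - a

def drive_signals(left_sees_floor, right_sees_floor):
    forward = AND(left_sees_floor, right_sees_floor)
    turn = NOT(forward)
    return forward, turn

for left in (0, 1):
    for right in (0, 1):
        print(left, right, drive_signals(left, right))
```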
 