Google Computer to program itself. AI breakthrough?

Dryson, did you not read RobMax's post?

I mean, it was well-worded, full of content and obviously included a lot of knowledge that you lack. Did you not read this and think: "Wow, RobMax! That was interesting. Thanks so much. I learned so much from this and I realize my own posts didn't make much sense. Do you have any suggestions for what to read to educate myself further?"?

I'm honestly a bit confused. Your posts earlier were full of bad logic and misconceptions about how programming and AI work. Then RobMax, who clearly knows more about programming than you or me, puts in a lot of effort to explain things to you... and you just ignore it? :(

I haz a sad... because this makes meaningful constructive discussion impossible.

If you haven't noticed, 9 times out of 10 Dryson doesn't really respond to anything. He just quotes whatever he can use to continue making his ill-informed posts.

It'd be nice if he actually engaged in conversation.
 
I'm trying to avoid my usual sarcasm, but this is likely to sound that way anyway:

Right out of the starting gate, Robert and others shot down the OP assertion of a breakthrough in AI.
So what is the topic now? Arguing with Dryson about the nature of AI seems as on-topic as anything else.
 
So what is the topic now? Arguing with Dryson about the nature of AI seems as on-topic as anything else.

It sure is.
I'm just trying to make sure we're not making Dryson the topic of this thread. :)
Sorry about the confusion!
 
I'm trying to avoid my usual sarcasm, but this is likely to sound that way anyway:

Right out of the starting gate, Robert and others shot down the OP assertion of a breakthrough in AI.
So what is the topic now? Arguing with Dryson about the nature of AI seems as on-topic as anything else.

Well, we could talk about what the AI innovation in the OP actually does. Or AI in general.

What if some of you have questions about AI? There are some reasonably knowledgeable people around here (not just me.) ;)
 
I've had a quick 5-minute scan through the paper mentioned in the article, http://arxiv.org/abs/1410.5401

From just that quick glance, I get the feeling that this paper has not been peer reviewed. I highly suspect it's just a student paper: there's no detail on the actual algorithm they're using, they provide no theorems or proofs, and worse, they make a few claims that will seem correct to a casual reader but are unsubstantiated and in one or two cases actually incorrect.

For example, the paper mentions that Recurrent Neural Networks are Turing-complete, i.e. that in theory an RNN can perform any kind of computation a modern computer can. What they neglect to mention is that certain types of computation (such as a simple adder circuit) would require the neural network to have an infinite number of neurons, which of course is impossible in practice. And if you can't even do simple addition, then more complicated computing tasks such as copying bits or sorting numbers should be beyond the reach of RNNs. Yet the paper claims that their Neural Turing Machine, which is a variation of a Recurrent Neural Network, is capable of copying bits and sorting numbers.
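
For reference, "copying bits" and "sorting numbers" are trivial for a conventional program; the question the paper raises is whether a trained network can reproduce that behaviour. Here's a minimal sketch of what the two tasks ask for (illustrative only; the sequence lengths and function names are mine, not the paper's):

```python
import random

def copy_task(bits):
    """The 'copy' task: return exactly the input bit sequence."""
    return list(bits)       # a conventional program does this perfectly

def sort_task(numbers):
    """The 'sort' task: return the input numbers in ascending order."""
    return sorted(numbers)  # exact by construction

# Inputs of the general kind such experiments use (the lengths here are made up).
bits = [random.randint(0, 1) for _ in range(20)]
numbers = [random.randint(0, 99) for _ in range(10)]

assert copy_task(bits) == bits
assert sort_task(numbers) == sorted(numbers)
```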

One of the laughable claims made in the paper is as follows (I'm paraphrasing): we trained our NTM to copy bits; ignoring a small percentage of errors, the output is largely correct; so clearly the NTM has learned to copy bits. They then go on to state that they believe the NTM has learnt the standard programming algorithm for copying bits.

Right, that's like saying I believe that cloud in the sky looks like a bird, therefore the cloud is a bird. First of all, the standard programming algorithm for copying bits does the copying perfectly, with no errors. So if the NTM had truly learnt the algorithm, surely there would be no errors in the copied bits. Yet the NTM's output contains errors and fails to copy the bits perfectly, so clearly it did not learn the algorithm for copying bits. And that's just the most obvious mistake I saw in the paper.
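
To make that error argument concrete, here's a small sketch (the flipped bit below is a made-up stand-in for the paper's "small percentage of error", not data from it):

```python
def bit_errors(expected, actual):
    """Count the positions where two equal-length bit sequences differ."""
    return sum(1 for e, a in zip(expected, actual) if e != a)

source = [1, 0, 1, 1, 0, 0, 1, 0]

# The standard algorithm: an exact element-by-element copy -- zero errors by definition.
exact_copy = [b for b in source]
print(bit_errors(source, exact_copy))   # 0

# A hypothetical learned model's output with one flipped bit.
approx_copy = [1, 0, 1, 1, 1, 0, 1, 0]
print(bit_errors(source, approx_copy))  # 1 -- not the behaviour of the exact algorithm
```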
 
:lol:

Thanks for your analysis! I had a feeling they might be blowing a lot of smoke; clearly the article cited in the OP was. I'm not an AI researcher, so I have to take a lot of stuff at face value; it's just that the face value of the paper wasn't all that impressive. It's even less so now that you're saying they basically handwaved away obvious mistakes.
 
You are correct that the sensor is a lot cheaper, and it does shut the robotic welder off, but not based on a set of logic and rationality that would be part of an AI's decision, for example whether to allow the robotic welder arm to continue striking the human just to get a kick out of seeing how far the human is thrown or knocked down. Instead, the sensor is a static input function of the process: if an object comes into contact with the sensing area within a certain zone of influence, the welder shuts off.

I come into contact with a lot of objects, as well as people, on a daily basis, and I do not simply complete an action without first thinking about the thought, which is what separates Sentient AI from Robotic AI.
 
You are correct that the sensor is a lot cheaper, and it does shut the robotic welder off, but not based on a set of logic and rationality

Robots don't operate according to logic and rationality? :vulcan:


There is a really obvious comeback here which I am going to resist making.
 
You are correct that the sensor is a lot cheaper, and it does shut the robotic welder off, but not based on a set of logic and rationality that would be part of an AI's decision, for example whether to allow the robotic welder arm to continue striking the human just to get a kick out of seeing how far the human is thrown or knocked down. Instead, the sensor is a static input function of the process: if an object comes into contact with the sensing area within a certain zone of influence, the welder shuts off.

You have yet to explain why we'd use AI to accomplish something that's very easily done with cheap sensors. What is the purpose?
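
For what it's worth, the kind of safety interlock being described really is just a fixed rule, no AI required. A minimal sketch (the distance threshold and names are hypothetical):

```python
SAFE_DISTANCE_MM = 300  # cut power if anything enters this zone (hypothetical value)

def interlock(distance_mm, welder_on):
    """Static rule: return whether the welder should remain powered."""
    if distance_mm < SAFE_DISTANCE_MM:
        return False        # object or person detected in the zone: shut off
    return welder_on        # otherwise leave the welder's state alone

print(interlock(distance_mm=250, welder_on=True))  # False -- the sensor trips
print(interlock(distance_mm=800, welder_on=True))  # True  -- keep welding
```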

I come into contact with a lot of objects, as well as people, on a daily basis, and I do not simply complete an action without first thinking about the thought, which is what separates Sentient AI from Robotic AI.

Actually, much more of human behavior is automatic than people tend to think it is. Avoiding running into objects, for instance, is not something one has to consciously think about (most of the time.)
 
Would anyone like to learn the basics of artificial intelligence, or more specifically a branch of AI called Machine Learning? I've been a teaching assistant for Coursera's Stanford Machine Learning class for the past two years. Through that experience and the interaction with the students in the course, I've kinda developed my own heavily simplified "Dummy's Guide". Hopefully, unlike the Stanford course, which requires anywhere from middle-school to undergraduate-level mathematics, my version, which is more of a 101 guide, shouldn't require any prior mathematical or computer knowledge.
 