Welcome! The TrekBBS is the number one place to chat about Star Trek with like-minded fans. If you are not already a member then please register an account and join in the discussion!

Computer passes Turing test

While it may not exactly meet scientific criteria, it seems to be a major step toward creating androids who can interact reasonably well with humanoids.

That is a positive. But knowing the Russians have done it gives me a bit of pause.
 
It doesn't matter who authored the test; what matters is that the test was rigged from the start. It's a shitty chatbot, that's all. Seriously, follow the link Maurice posted above. The responses made by the software are awful. I got results like those using Dr. Sbaitso way back in the early '90s. It's not A.I. software, it's a chatbot script.
 
I've built a perpetual motion device. If you ignore the fact that you need to wind it every 24 hours, it's practically an eternal machine.
 
I don't know enough about programming, but the news media is claiming this is another major milestone on the road to AI.

One computer teaches another how to play Ms. Pac Man
Researchers at Washington State University have programmed one computer to teach another how to play Ms. Pac-Man.
Getting a high score, or any score, in Ms. Pac-Man or any other video game isn't the goal of the project, just a means to the end of teaching computers, and ultimately robots, how to teach themselves. Right now, robots are "very dumb," said WSU's Matthew E. Taylor, a professor of artificial intelligence. The most advanced ones are easily confused, and when that happens, they stop working.
The reasoning is that, as robots become more common, it will be easier if they are capable of learning how to perform tasks from other robots. "We don't want this information to be lost," Taylor says. "Once your home robot knows how you like your bath, how you like your house cleaned, you don't want to lose that information."
Why not just take one robot's memory and pass it along to another, though? It's not always so simple, Taylor said. A later model may have different hardware and software from the robot whose memory it would inherit.
The real trick is knowing how much teaching the computer should offer. Just as with human teaching, too little advice is no teaching at all, and too much doesn't leave the robot room to learn anything for itself.
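That "how much advice" balance is the core idea of teacher-student reinforcement learning. Here's a minimal toy sketch in Python of what it might look like; the tiny chain-world, the importance threshold, and all the names are invented for illustration, and the actual WSU work used Ms. Pac-Man and far more sophisticated methods. A trained teacher spends a limited advice budget only on states where its own values say the action choice really matters, and the student still runs its own Q-learning update on every step, so it learns for itself:

```python
import random

def importance(q_row):
    """How much the choice of action matters in this state
    (spread between the teacher's best and worst Q-values)."""
    return max(q_row) - min(q_row)

def teach_and_learn(teacher_q, episodes=200, budget=50,
                    n_states=10, n_actions=2, goal=9):
    random.seed(0)  # deterministic toy run
    student_q = [[0.0] * n_actions for _ in range(n_states)]
    alpha, gamma, eps = 0.5, 0.9, 0.2
    for _ in range(episodes):
        s = 0
        while s != goal:
            if budget > 0 and importance(teacher_q[s]) > 0.1:
                # Teacher advises only on states where the action
                # choice matters, and only while the budget lasts.
                a = teacher_q[s].index(max(teacher_q[s]))
                budget -= 1
            elif random.random() < eps:
                a = random.randrange(n_actions)            # explore
            else:
                a = student_q[s].index(max(student_q[s]))  # exploit own policy
            # Toy chain world: action 1 moves right toward the goal,
            # action 0 moves left; reward only on reaching the goal.
            s2 = min(s + 1, goal) if a == 1 else max(s - 1, 0)
            r = 1.0 if s2 == goal else 0.0
            # Ordinary Q-learning update: the student learns for itself
            # whether or not the action came from the teacher.
            student_q[s][a] += alpha * (
                r + gamma * max(student_q[s2]) - student_q[s][a])
            s = s2
    return student_q
```

With `budget=0` the student has to stumble onto the goal by random exploration; with a modest budget the teacher walks it down the right path a few times early on, and the student's own updates take it from there.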
 
See, that is at least a believable and interesting development, and also more indicative of where AI research really is right now.

Obviously it makes a lot of sense that a self-learning computer should be able to teach other computers what it knows, although I wonder why you wouldn't simply copy the information. Perhaps you could only copy the entire data set, which would overwrite anything unique the target machine had already learned, so instead you'd have to add to what it knows rather than overwrite everything. In this case, though, the issue is incompatible hardware/software, which makes a good deal of sense.
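The overwrite-vs-add distinction here can be sketched in a few lines of Python; the two robots and their "knowledge" entries are entirely invented for illustration:

```python
def copy_knowledge(source, target):
    """Blunt copy: the target ends up with exactly the source's data,
    losing anything unique it had already learned."""
    return dict(source)

def merge_knowledge(source, target):
    """Additive transfer: start from the source's knowledge, then let
    the target's own entries win on any conflict."""
    merged = dict(source)
    merged.update(target)  # target's own learning takes precedence
    return merged

factory_robot = {"stairs": "avoid", "rug": "slow down"}
home_robot = {"cat": "wait", "rug": "lift brushes"}  # its own quirks

overwritten = copy_knowledge(factory_robot, home_robot)  # "cat" is lost
merged = merge_knowledge(factory_robot, home_robot)      # "cat" survives
```

In the merged result the home robot keeps its own handling of the cat and the rug while still gaining the factory robot's knowledge about stairs.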

Coincidentally, just last night I was wondering about the state of computers playing video games: actually playing them the way humans do, with a physical interface, looking at the screen and listening to the sounds. It doesn't sound like that's what this does, though.
 
But machine teaching machine means that eventually you have a machine that can teach another, correct?

The simplest example I can think of right now is, say, you put this type of learning program into a Roomba vacuum. One vacuum has already learned how to sweep the floor and avoid obstacles. That machine would then teach another machine how to do the same and avoid the obstacles it's already learned about, so that you'd not only have a vacuum programmed to avoid existing obstacles but one that could also avoid future ones.
 
With machines, it is almost always more efficient to develop a pre-programmed algorithm to acquire that kind of information rather than take the time to EVOLVE one in a prototype machine and then "teach" it to other machines.

The "teaching" aspect in AI science is experimental only and has implications for machines that are either 1) placed in a new environment they are not programmed for or 2) given the job of teaching humans.
 