While it may not exactly meet scientific criteria, it seems to be a major step toward creating androids that can interact reasonably well with humans.
That is a positive. But knowing the Russians have done it gives me a bit of pause.
Researchers at Washington State University have programmed one computer to teach another how to play Ms. Pac-Man.
Getting a high score, or any score, in Ms. Pac-Man or any other video game isn't the goal of the project, just a means to the end of teaching computers, and ultimately robots, how to teach themselves. Right now, robots are "very dumb," said WSU's Matthew E. Taylor, a professor of artificial intelligence. The most advanced ones are easily confused, and when that happens, they stop working.
The reasoning is that, as robots become more common, it will be easier if they are capable of learning how to perform tasks from other robots. "We don't want this information to be lost," Taylor says. "Once your home robot knows how you like your bath, how you like your house cleaned, you don't want to lose that information."
Why not just take one robot's memory and pass it along to another, though? It's not always so simple, Taylor said. Later models may have different hardware and software from the machine whose memory they would inherit.
The real trick is knowing how much advice the teaching computer should offer. Just as with human teaching, too little advice isn't really teaching at all, and too much prevents the student from learning anything for itself.
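That balancing act can be sketched as a simple advice budget: the teacher only speaks up when the student seems unsure, and only a limited number of times. This is a minimal illustration, not the actual WSU implementation; the budget, the confidence threshold, and the class itself are assumptions made up for the example.

```python
class Teacher:
    """Experienced agent that gives action advice under a limited budget.

    The budget and the uncertainty threshold are illustrative parameters,
    not values from the research described above.
    """
    def __init__(self, policy, budget=100, threshold=0.5):
        self.policy = policy          # maps a state to the teacher's best known action
        self.budget = budget          # total pieces of advice the teacher may give
        self.threshold = threshold    # advise only when the student seems unsure

    def advise(self, state, student_confidence):
        # Spend advice only on states where the student is uncertain,
        # so the student still has to learn most situations on its own.
        if self.budget > 0 and student_confidence < self.threshold:
            self.budget -= 1
            return self.policy(state)
        return None  # no advice given: the student acts for itself
```

The student would follow the teacher's action whenever one is returned and fall back on its own (initially poor) choice otherwise; once the budget runs out, it is entirely on its own.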
See, that is at least a believable and interesting development, and also more indicative of where AI research really is right now.
Obviously it makes sense that a self-learning computer should be able to teach other computers what it knows, though I wondered why you wouldn't simply copy the information. Perhaps you could only copy the entire data set, which would overwrite anything unique the target machine had already learned, so instead you have to add to what it knows rather than replacing everything. In this case, though, the stated issue is incompatible hardware and software, which makes a good deal of sense.
With machines, it is almost always more efficient to develop a pre-programmed algorithm to acquire that kind of information rather than take the time to EVOLVE one in a prototype machine and then "teach" it to other machines.
But a machine teaching a machine means that eventually you have a machine that can teach another, correct?
The simplest example I can think of right now: say you put this type of learning program into a Roomba vacuum. One vacuum has already learned how to sweep the floor and avoid obstacles. It would then teach another machine how to do the same, passing along the obstacles it has already learned about, so you'd have not only a vacuum programmed to avoid existing obstacles but one that could keep learning to avoid future ones.
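The vacuum example also shows the "add to what it knows, don't overwrite" point from earlier in the thread. Here is a toy sketch under an assumed data format (a dict of grid cells to obstacle-sighting counts; the function name and format are invented for illustration): the teacher's map is merged into the student's, so anything unique the student already learned is preserved.

```python
def merge_obstacle_maps(teacher_map, student_map):
    """Merge one robot's learned obstacle map into another's.

    Hypothetical format: each map is a dict of grid cell -> number of
    times an obstacle was observed there. The teacher's knowledge is
    added to the student's rather than overwriting it.
    """
    merged = dict(student_map)  # start from what the student already knows
    for cell, count in teacher_map.items():
        merged[cell] = merged.get(cell, 0) + count
    return merged

# The student keeps its own discovery of cell (5, 5) and gains the
# teacher's knowledge of cells (0, 1) and (3, 2).
teacher = {(0, 1): 4, (3, 2): 1}
student = {(3, 2): 2, (5, 5): 3}
combined = merge_obstacle_maps(teacher, student)
```

A straight copy of the teacher's map would have discarded the student's sighting at (5, 5); merging keeps both robots' experience.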