A trio of research projects out of Cornell, MIT and the University of Washington highlights the promise of building robots that can learn to do the things we want them to do, but also suggests that patience on the part of programmers will be a real virtue. As with any application of machine learning, robots will need a whole lot of data and possibly a whole lot of training.

Here is what each project is up to:

  • The University of Washington research involves teaching robots to build things (shapes out of Legos, in this case) based on examples provided by humans. To improve accuracy, the researchers also let the robots analyze crowdsourced examples of certain objects (e.g., a person or a turtle), and the robots tended to choose the versions that offered the best balance of simplicity and similarity to the original design (a rough sketch of that selection idea appears after this list).

  • The MIT research focused on a method for pooling the learned facts of several robots -- or any nodes in a distributed system -- in order to achieve a collective intelligence among them. An example of the method in action would be assigning multiple robots to investigate the same building and classify each room based on what's in it. Each robot might have learned different things about the same room, but together they can build accurate models by repeatedly comparing notes until they've settled on a shared ground truth (see the second sketch below).
  • The Cornell research, which includes an interactive online demonstration called Tell Me Dave, taught robots how to perform specific actions and even fill in details not explicitly given by their instructors. The system has been trained on specific commands (e.g., take a pot to the stove) as well as on what objects look like and what they're used for. If someone tells it to "go heat water," for example, the robot knows it can use a microwave or a stove, and that it must turn either one on in order to heat the water (the third sketch below illustrates that kind of gap-filling).
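
To make the University of Washington approach a little more concrete, here is a minimal, hypothetical sketch of that selection step: ranking crowdsourced designs by a score that trades off similarity to the user's original example against simplicity. The scoring function, the weight and the brick-ID representation are all invented for illustration; they are not taken from the actual research.

```python
# Hypothetical sketch: ranking crowdsourced Lego designs by trading off
# simplicity (fewer bricks) against similarity to the user's original design.
# The weight and the similarity measure are invented for illustration.

def score(candidate, original, simplicity_weight=0.5):
    """Higher is better: prefer designs that share bricks with the
    original but use fewer parts overall."""
    shared = len(set(candidate) & set(original))
    similarity = shared / max(len(original), 1)   # overlap with the user's example
    simplicity = 1.0 / max(len(candidate), 1)     # fewer bricks -> simpler
    return similarity + simplicity_weight * simplicity

def pick_best(crowd_designs, original):
    return max(crowd_designs, key=lambda c: score(c, original))

# Example: three crowdsourced "turtle" designs, each a list of brick IDs.
original = ["b1", "b2", "b3", "b4", "b5", "b6"]
crowd = [
    ["b1", "b2", "b3", "b4"],                           # similar to the original and simple
    ["b1", "b2", "b3", "b4", "b7", "b8", "b9", "b10"],  # just as similar but busier
    ["b9", "b10"],                                      # very simple but unlike the original
]
print(pick_best(crowd, original))   # -> the first design wins on the combined score
```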
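
The MIT idea of robots "comparing notes" can be sketched in a similarly loose way. In the toy version below, each robot holds label probabilities for a room and repeatedly nudges them toward the group average until everyone agrees; the actual research uses a much more sophisticated statistical method, so treat this purely as an illustration of iterative reconciliation, with every name and number made up.

```python
# Hypothetical sketch: robots reconciling per-room label beliefs by repeatedly
# averaging with one another until the beliefs stop changing. The real method
# is far more sophisticated; this structure is invented for illustration.

def reconcile(beliefs, rounds=50, tol=1e-6):
    """beliefs: one dict per robot mapping room label -> probability.
    Returns the shared belief after iterative averaging."""
    labels = sorted({label for b in beliefs for label in b})
    # Each robot starts from its own observation of the room.
    current = [[b.get(label, 0.0) for label in labels] for b in beliefs]
    for _ in range(rounds):
        avg = [sum(robot[i] for robot in current) / len(current)
               for i in range(len(labels))]
        # Every robot moves its belief halfway toward the group average.
        updated = [[(robot[i] + avg[i]) / 2 for i in range(len(labels))]
                   for robot in current]
        done = max(abs(u - c) for ru, rc in zip(updated, current)
                   for u, c in zip(ru, rc)) < tol
        current = updated
        if done:
            break
    return dict(zip(labels, current[0]))

# Three robots saw the same room but learned slightly different things about it.
robot_beliefs = [
    {"kitchen": 0.7, "dining room": 0.3},
    {"kitchen": 0.5, "dining room": 0.5},
    {"kitchen": 0.9, "dining room": 0.1},
]
print(reconcile(robot_beliefs))   # converges to roughly 0.7 kitchen, 0.3 dining room
```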
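
Finally, here is a rough, invented sketch of the kind of gap-filling the Cornell bullet describes for "heat water": pick any object in the scene with the right affordance and add the turn-it-on step the command never mentioned. The affordance table and plan format are hypothetical, not Tell Me Dave's actual representation.

```python
# Hypothetical sketch: grounding "heat water" to concrete steps, including
# an implied step the command never stated. The affordance table and plan
# format are invented for illustration.

AFFORDANCES = {
    "stove":     {"heats": True,  "needs_power": True},
    "microwave": {"heats": True,  "needs_power": True},
    "sink":      {"heats": False, "needs_power": False},
}

def plan_heat_water(scene):
    """Return a concrete action sequence for 'heat water', using any
    object in the scene that can heat things."""
    for obj in scene:
        if AFFORDANCES.get(obj, {}).get("heats"):
            steps = ["fill a container with water",
                     f"put the container in/on the {obj}"]
            if AFFORDANCES[obj]["needs_power"]:
                steps.append(f"turn the {obj} on")  # the step the command left out
            steps.append("wait for the water to heat")
            return steps
    raise ValueError("nothing in the scene can heat water")

print(plan_heat_water(["sink", "microwave"]))
# -> fill a container, put it in the microwave, turn the microwave on, wait
```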

Who wouldn't want a robot assistant to perform certain tasks for them? But it's easy to overlook how annoying or underwhelming the experience might be if the robots could only perform specific rote tasks (think manufacturing robots) or had to be spoken to in a specific way.

The experience might actually be more like raising a child than one might expect. These are learning systems, after all, and learning requires teaching -- in this case using lots and lots of data.