Know what freaks out a robot? Stuff like cluttered rooms and messy desks.
Even the most advanced bots are confounded by the diverse environments where humans live and work. But if a robot can navigate only a well-ordered lab, what practical use is it?
Not much, says Ashutosh Saxena, an assistant professor of computer science at Cornell University. Saxena is part of an up-and-coming group of roboticists trying to create autonomous robots that can interact smoothly with humans in everyday life. His goal is to give robots the ability to anticipate our actions and, in a sense, our wants and needs.
You have a video on YouTube showing a robot slowly opening a fridge for a guy and later pouring him a beer. My first reaction is: At last, a useful robot! But what's actually so impressive about a robot pouring a beer?
The physical acts of opening the door and pouring are not the challenging part for the robot. The challenging part is observing the environment and the person's actions, concluding that the door needs to be opened, and knowing when to do it.
That's the kind of thing we humans generally do without much thought. How does a robot anticipate human actions?
First, it identifies the people and the main objects in the environment using a Microsoft Kinect camera [a 3-D camera for use with the Xbox gaming console]. The camera gives you something like a skeleton view of the humans, and our algorithm figures out the main objects in the environment. Then, drawing on its previous training, the robot considers both how the people have been moving in the last few seconds and what the objects are used for. It reasons that because the people have been moving in a certain way and certain objects are present, the person is very likely about to take a certain action.
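The reasoning Saxena describes can be illustrated with a toy sketch. This is a hypothetical simplification, not his actual algorithm: the object labels, the `AFFORDANCES` table, and the `motion_feature` scores are all invented for illustration. The idea is just that an action becomes likely when a visible object affords it and the person's recent motion points toward that object.

```python
# Hypothetical affordance table: what each detected object is typically
# used for. (Assumed labels, for illustration only.)
AFFORDANCES = {
    "fridge": {"open_door"},
    "cup": {"pour", "drink"},
    "bottle": {"pour"},
}

def predict_action(motion_feature, detected_objects):
    """Rank candidate actions: an action is plausible when some visible
    object affords it, and skeleton motion toward that object over the
    last few seconds raises its score."""
    scores = {}
    for obj in detected_objects:
        for action in AFFORDANCES.get(obj, ()):
            # motion_feature[obj] ~ how strongly the tracked skeleton
            # has been moving toward this object recently
            scores[action] = scores.get(action, 0.0) + motion_feature.get(obj, 0.0)
    return max(scores, key=scores.get) if scores else None

# A person reaching toward the fridge while a cup sits on the counter:
print(predict_action({"fridge": 0.9, "cup": 0.1}, ["fridge", "cup"]))  # open_door
```

A real system would learn these scores from training data rather than hand-code them, but the structure — objects constrain the set of plausible actions, motion disambiguates among them — follows the description above.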
How often does the robot get it right?
If it were not using our algorithm while serving water or beer, for instance, it would pour at the wrong time and cause spills about half the time. When it uses our algorithm, it causes a spill once in every 10 or 20 tries. Part of the reason it is not perfect is that the Kinect camera is very noisy, so the robot has a hard time seeing. We are hoping the new sensor with the Xbox One, coming out in a few months, will help with that. The second reason is more difficult: people are sometimes creative and unpredictable, and that makes it hard for a robot.
With all the difficulties, how long before we see personal robot assistants that are actually useful?
I don't think we should expect Rosie, the robot maid from The Jetsons—something that can do everything that's needed in the house—to be available in the next few years. But what is possible are robots costing perhaps $5,000 that can do specific tasks: tidying the house by putting things in their right places, for example, or loading dishes into the dishwasher. I think that kind of limited but useful robot may be available within the next five years.