This week’s BEACON Researchers at Work blog post is by University of Idaho graduate student Travis DeVault.
I imagine it would be difficult to find someone working in the field of computer science who did not start with a love of computers. Likewise, I doubt many people choose to work with robots unless they love robots and the future that robots hold for us. We live in a world where personal, mobile computers are more limited by fashion trends than by hardware requirements, but it was only a few decades ago that personal computers were just starting to enter the average home. And so, it is the same for robots today as it was for computers decades ago.
The promise that robots offer us for tomorrow is that of cheap, reliable machines that can perform any number of complex or simple tasks that are currently performed by people. We have robots working on other planets, robots that explore our oceans, robots that perform surgery, and robots that build cars; in the near future though, robots will be common in every home and business. Robot surgeons and explorers will need less human supervision, and the cars will be robots. I’m personally most looking forward to a robot maid that can do a good job cleaning dishes.
But for now, I think we’ve got to admit robots are pretty stupid. All the cool robots are either teleoperated by people, or at least heavily monitored and given instructions. Sure, I’ve got a robot vacuum that can do a better job than I can, but according to my wife, I’ve always found a way to make the house more of a mess when I try to clean. The robot vacuum never learns a better way to clean, it misses spots, it never knows where the dirty areas are, it scares my dog, and it still can’t figure out how to empty its own dirt bin. It’s really just an RC car with a vacuum and some infrared sensors to make sure it doesn’t bump into walls (I still bump into walls when I vacuum).
The research I do at the University of Idaho Laboratory for Artificial Intelligence and Robotics (LAIR) uses the principles of evolution in many different ways to enhance robotic learning. Our goal is to make robots that can learn over time, either through observing people or by receiving instruction from a human trainer or from other robots. One aspect that is unique about the LAIR is that we use real robots for all of our work. Most groups doing robotics research do most of the work in simulation, and then maybe transfer a finished control structure to a physical robot in order to create a YouTube video. At the LAIR, the entire experiment is conducted on the robot.
Because the work is done with a physical robot, one of the challenges of the work is creating a robot that is able to sense its environment. Although many sensors have been created for robots, such as infrared and ultrasonic rangefinders, we've chosen to rely more on the built-in cameras of a smartphone. Image processing is a slow job even on a beefy PC; on a smartphone it becomes a very slow process. One of the ways that we use evolution is in an evolved vision algorithm: a genetic algorithm decides what parts of an image the robot should process in order to make decisions.
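To give a flavor of the idea, here is a minimal sketch of a genetic algorithm that evolves a bit mask over a grid of image cells, processing only the cells the mask selects. This is an illustration, not the LAIR's actual algorithm: the grid size, the stand-in set of "informative" cells, and the fitness function (reward coverage, penalize processing cost) are all invented for the example.

```python
import random

GRID = 16  # image divided into a 4x4 grid; genome = one bit per cell

def fitness(genome, informative):
    # Reward covering the cells that carry useful information,
    # penalize every cell processed (processing is expensive on a phone).
    covered = sum(1 for i in informative if genome[i])
    cost = sum(genome)
    return covered * 2 - cost

def mutate(genome, rate=0.1):
    # Flip each bit with a small probability.
    return [b ^ 1 if random.random() < rate else b for b in genome]

def crossover(a, b):
    # Single-point crossover between two parent masks.
    cut = random.randrange(1, GRID)
    return a[:cut] + b[cut:]

def evolve(informative, pop_size=30, generations=60):
    pop = [[random.randint(0, 1) for _ in range(GRID)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda g: fitness(g, informative), reverse=True)
        elite = pop[: pop_size // 2]  # keep the better half
        pop = elite + [
            mutate(crossover(random.choice(elite), random.choice(elite)))
            for _ in range(pop_size - len(elite))
        ]
    return max(pop, key=lambda g: fitness(g, informative))

random.seed(0)
# Toy stand-in: suppose the road shows up in the diagonal cells of the grid.
best = evolve(informative={0, 5, 10, 15})
```

After a few dozen generations the surviving masks tend to select the informative cells and drop the rest, so the robot spends its limited processing budget only where it pays off.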
Our goal is to create robots capable of learning in a large variety of environments, which includes taking the robots outside as part of our experiments. We create robotic brains which can evolve different behaviors based on the situations presented to the robots by a human trainer. Our robots have used an evolved brain to travel on indoor and outdoor paths. The learning is done at run time, while the trainer drives the robot on the road. Using this type of evolved learning, the robots have achieved a 95% success rate at navigating roads they had never been trained on.
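A rough sketch of what run-time learning from a trainer can look like: while the trainer drives, the robot records (sensor reading, trainer command) pairs, and an evolutionary search tunes a controller to reproduce the trainer's commands. Everything here is hypothetical, not the LAIR's actual controller: the demonstrations, the two-parameter linear steering rule, and the mutation settings are invented for the sketch.

```python
import random

# Toy demonstrations recorded while the trainer drives:
# (road offset seen by the camera, steering command the trainer gave).
demos = [(-0.8, 0.9), (-0.3, 0.35), (0.0, 0.0), (0.4, -0.45), (0.9, -1.0)]

def steer(weights, offset):
    # Hypothetical controller: a single weight and bias mapping offset to steering.
    w, b = weights
    return w * offset + b

def error(weights):
    # Fitness = how closely the controller imitates the trainer.
    return sum((steer(weights, x) - y) ** 2 for x, y in demos)

def evolve(pop_size=40, generations=100):
    pop = [(random.uniform(-2, 2), random.uniform(-1, 1)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=error)
        parents = pop[: pop_size // 4]  # keep the best quarter
        pop = parents + [
            (p[0] + random.gauss(0, 0.1), p[1] + random.gauss(0, 0.1))
            for p in random.choices(parents, k=pop_size - len(parents))
        ]
    return min(pop, key=error)

random.seed(1)
best = evolve()
```

The appeal of this style of learning is that nothing is trained offline: every demonstration the trainer provides immediately feeds the evolving population, and the best controller so far can be tested on roads the robot has never seen.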
Continuing this work, we have decided to focus on distributing the evolutionary learning over a network of several robots. Some of the questions we've asked leading into the work are: Does distribution increase the learning rate? Does a robot perform better with distribution? Do multiple trainers matter? Can we make the robots train other robots to perform better on a more difficult problem? Currently, the road-following results are so good without distribution that we are creating a more difficult experiment for the robot, so that we can effectively test all of these questions.
Future plans for the LAIR include working with the agriculture department at the University of Idaho to evolve robots capable of weeding potato and wheat fields. We intend to use an evolved vision algorithm to identify invasive species and plant illnesses using smartphone cameras and sensors. The smartphones could then create a GPS map of areas that farmers would need to investigate. We will eventually have robots with sophisticated enough behaviors that we can rely on them to kill the unwanted plants.
For more information about Travis’ work, you can contact him at zerill at gmail dot com.