
Learning methods and new sensors for autonomous robots

Subject Area: Theoretical Computer Science
Funding: Funded from 2002 to 2003
Project Identifier: Deutsche Forschungsgemeinschaft (DFG) - Project number 5402308
 
a) Investigate the possibility of using some of the new biologically inspired sensors developed at Penn, such as the tracking and orientation sensors. At present, our larger robots perform the full analysis of the video images on a laptop connected to a digital camera. Most of the processing time is spent finding field features in the image (lines, corners, color transitions) in order to determine the position of the robot. Using Penn's tracking sensor, it would be possible to start from a known position on the field and track the movement of the robot, obtaining a position update independent of the robot's odometry. Until now, Penn's sensor has been used to track objects moving in front of a fixed system; in our case the system would be moving and the environment would be fixed. Penn's orientation sensor also detects some of the low-level features we are interested in (lines, edges, line stops), so it would be interesting to explore how to integrate it into our current system.

b) Investigate automatic color and light calibration for autonomous robots. Our robots are currently calibrated by hand, i.e. by an operator who adjusts the camera and a color table according to what he sees on a computer screen. This is the approach followed by all other RoboCup teams. I would like to be able to put a "naive" robot on the field that looks around and, from its knowledge of the geometry and colors of the field, calibrates itself automatically. The robot would learn the characteristics of the geometric projection made by the camera and would generate a color table according to its position on the field. The main objective would be to take the robot out of a box and, after a few seconds, have it explore its surroundings and play. This learning approach could be extended to other robots, such as the Sony legged robots. There are also some new sensors I would like to test for this task, such as the retina-inspired camera of the Institute for Microelectronics Stuttgart (IMS).
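The color-table idea in point b) can be sketched as follows. This is an illustrative toy, not the project's actual code: all function names and the quantization scheme are assumptions. The robot samples pixels from image regions whose field class (green carpet, white line, and so on) is known from the field geometry, and builds a coarse RGB lookup table for fast per-pixel classification.

```python
# Sketch of automatic color-table calibration (hypothetical helper).
from collections import Counter, defaultdict

BITS = 3  # quantize each 8-bit channel down to 2**BITS levels

def quantize(rgb):
    """Map an (r, g, b) pixel to a coarse color-cube index."""
    r, g, b = (c >> (8 - BITS) for c in rgb)
    return (r << (2 * BITS)) | (g << BITS) | b

def build_color_table(samples):
    """samples: iterable of ((r, g, b), class_label) pairs taken at
    image positions whose field class is known from the geometry.
    Returns a dict: color-cube index -> majority class label."""
    votes = defaultdict(Counter)
    for rgb, label in samples:
        votes[quantize(rgb)][label] += 1
    return {idx: counter.most_common(1)[0][0] for idx, counter in votes.items()}

def classify(table, rgb, default="unknown"):
    """Classify a pixel via the learned table; unseen colors stay unknown."""
    return table.get(quantize(rgb), default)
```

Because the table is keyed on a quantized color cube, nearby shades of a sampled color fall into the same bin, which is what lets a few seconds of looking around generalize to the rest of the image.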
c) Investigate control of the robot using reinforcement learning. Until now, we have written all control routines by hand. Every time we change the motors or some other part of the hardware, we have to adjust the robot's parameters or write new control routines. I would like to let the robot learn how to move automatically, initially by moving around on the field and tumbling about. If point (b) succeeds, the robot should be able to learn how to move, how to supply power to the motors, and how to brake.
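The trial-and-error learning in point c) is typically done with tabular Q-learning. The following is a minimal sketch on an assumed toy task (a robot on a discrete line learning to drive forward to a goal); the state space, actions, and rewards are illustrative assumptions, not the project's actual controller.

```python
# Minimal tabular Q-learning sketch (toy "drive to the goal" task, assumed).
import random

def q_learning(n_states=6, episodes=2000, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    actions = (-1, +1)  # crude "reverse" / "forward" motor commands
    q = {(s, a): 0.0 for s in range(n_states) for a in actions}
    goal = n_states - 1
    for _ in range(episodes):
        s = 0
        while s != goal:
            # epsilon-greedy action selection: mostly exploit, sometimes explore
            if rng.random() < eps:
                a = rng.choice(actions)
            else:
                a = max(actions, key=lambda act: q[(s, act)])
            s2 = min(max(s + a, 0), goal)          # deterministic transition
            r = 1.0 if s2 == goal else -0.1        # small step cost, goal reward
            # standard Q-learning update
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, b)] for b in actions) - q[(s, a)])
            s = s2
    return q

q = q_learning()
# Greedy policy per state (+1 = forward); on this toy chain the robot
# should learn to always drive forward.
policy = [max((-1, +1), key=lambda act: q[(s, act)]) for s in range(5)]
```

On real hardware the table would be replaced by a function approximator over continuous sensor readings, but the update rule is the same.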
DFG Programme: Research Grants
 
 
