Project Details

Learning and Control of Dynamic Manipulation Tasks from Human Demonstrations

Subject Area Image and Language Processing, Computer Graphics and Visualisation, Human Computer Interaction, Ubiquitous and Wearable Computing
Term from 2012 to 2015
Project identifier Deutsche Forschungsgemeinschaft (DFG) - Project number 216483539
 
Final Report Year 2017

Final Report Abstract

Robot learning and control are essential for robots to perform a variety of tasks that are difficult to implement explicitly in advance. The field of robot learning is still at a relatively early stage: most learning algorithms remain restricted to learning and adapting movement skills in well-modelled environments. This project aimed to advance the state of the art in imitation learning for manipulation skills through the following approaches.

At the beginning of the project we focused on common problems encountered when fitting a Gaussian Mixture Model (GMM) with Expectation Maximization (EM). A novel nature-inspired approach was developed to solve the initialization problem of EM. In addition, a simulated annealing EM algorithm was developed to resolve the singularity issues that arise when fitting a GMM.

Building on this GMM learning scheme, we used GMMs to develop a programming-by-demonstration algorithm. We proposed an approach called task-parameterized dynamic movement primitives (TP-DMP), which formulates learning as a density estimation problem using a mixture of GMMs. Data sparsity along the task parameters is handled by introducing artificially generated incomplete data, which EM can treat elegantly. The resulting TP-DMP model requires very few demonstrations for learning and can reproduce motions beyond the demonstrated regions.

Lastly, we studied how policies encoded via GMMs can be further optimized with reinforcement learning (RL). As a benchmark we considered the skill of generating brush strokes for calligraphy. A GMM-based stroke extraction algorithm was developed to provide initial trajectories for drawing calligraphic characters with a KUKA Lightweight Robot.
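The EM procedure for GMM fitting that the project builds on can be sketched as follows. This is a generic, minimal 1-D illustration (not the project's implementation); the naive random initialization in the sketch is exactly the weakness that the project's nature-inspired initialization scheme and simulated annealing EM address.

```python
import numpy as np

def fit_gmm_em(x, k, n_iter=50, seed=0):
    """Fit a 1-D Gaussian mixture with plain EM (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    n = len(x)
    # Naive initialization: random data points as means -- the very issue
    # the project's initialization approach is designed to avoid.
    mu = rng.choice(x, size=k, replace=False)
    var = np.full(k, np.var(x))
    pi = np.full(k, 1.0 / k)
    log_liks = []
    for _ in range(n_iter):
        # E-step: responsibilities of each component for each point
        dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) \
               / np.sqrt(2 * np.pi * var)
        total = dens.sum(axis=1, keepdims=True)
        resp = dens / total
        log_liks.append(float(np.log(total).sum()))
        # M-step: re-estimate weights, means, and variances
        nk = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        # Crude variance floor: plain EM can collapse a component onto a
        # single point (the singularity the simulated annealing EM resolves).
        var = np.maximum(var, 1e-6)
        pi = nk / n
    return mu, var, pi, log_liks
```

Plain EM monotonically increases the data log-likelihood, but its result depends heavily on the initialization and the variance floor is only a stopgap against singular components.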
The reward was formulated as the correlation between the original image and the image reproduced by the robot. As demonstrated in the experiments, the robot successfully acquired the skill directly from the character image and was also able to improve the acquired skill using RL. The work done in this project has great potential in industrial applications and is economically valuable. For instance, a factory worker can quickly set up an assembly line by applying programming by demonstration. This in turn can reduce the cost of hiring experienced programmers, increase productivity, and save a significant amount of programming effort when small batches of items need to be produced.
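An image-correlation reward of the kind described above can be sketched as a normalized cross-correlation between the target character image and the robot's reproduction. This is an assumption-laden illustration; the exact reward formulation used in the project may differ.

```python
import numpy as np

def correlation_reward(target, reproduced):
    """Normalized cross-correlation between a target character image and a
    reproduced image, in [-1, 1]. A sketch of a correlation-based reward,
    not the project's exact formulation."""
    a = target.astype(float).ravel()
    b = reproduced.astype(float).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    if denom == 0.0:
        # A constant image carries no correlation signal.
        return 0.0
    return float(a @ b / denom)
```

A perfect reproduction scores 1.0, an inverted image scores -1.0, and unrelated images score near 0, giving the RL stage a smooth scalar signal to maximize.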

Publications

  • Pervez, Affan; Lee, Dongheui (2018): Learning task-parameterized dynamic movement primitives using mixture of GMMs. Intelligent Service Robotics 11 (1), 61–78
    (See online at https://doi.org/10.1007/s11370-017-0235-8)
  • Affan Pervez and Dongheui Lee (2015): "A Componentwise Simulated Annealing EM Algorithm for Mixtures." Joint German/Austrian Conference on Artificial Intelligence (Künstliche Intelligenz). Springer International Publishing
    (See online at https://doi.org/10.1007/978-3-319-24489-1_25)
  • Omair Ali, Affan Pervez, and Dongheui Lee (2015): "Robotic Calligraphy: Learning From Character Images." Human Friendly Robotics workshop, TU München
  • Omair Ali (2015): "Robotic Calligraphy: Learning From Character Images." TU München
  • Affan Pervez, Arslan Ali, Jee-Hwan Ryu, and Dongheui Lee (2017): Novel learning from demonstration approach for repetitive teleoperation tasks. 2017 IEEE World Haptics Conference (WHC)
    (See online at https://doi.org/10.1109/WHC.2017.7989877)
  • Sylvain Calinon and Dongheui Lee (2019): Learning Control. In: Ambarish Goswami and Prahlad Vadakkepat (Eds.): Humanoid Robotics: A Reference. Dordrecht: Springer Netherlands, pp. 1261–1312
    (See online at https://doi.org/10.1007/978-94-007-6046-2_68)
 
 
