Control-Induced Learning for Autonomous Robots
Thesis posted on 23.07.2021, 15:52, authored by Wanxin Jin
The recent progress of machine learning, driven by pervasive data and increasing computational power, has shown its potential to enable higher robot autonomy. Yet, by focusing on generic models and data-driven paradigms while ignoring the inherent structures of control systems and tasks, existing machine learning methods typically suffer from data and computation inefficiency, hindering their deployment on general real-world robots. In this thesis, we argue that the efficiency of autonomous robot learning can be boosted by two strategies. The first is to incorporate the structures of optimal control theory into control-objective learning; this leads to a series of control-induced learning methods that enjoy the complementary benefits of machine learning, for higher algorithm autonomy, and of control theory, for higher algorithm efficiency. The second is to integrate necessary human guidance into task and control-objective learning, leading to a series of paradigms for robot learning with minimal human guidance in the loop.
The first part of this thesis focuses on control-induced learning, where we make two contributions. The first is a set of new methods for inverse optimal control, which address three existing challenges in control-objective learning: learning from minimal data, learning time-varying objective functions, and learning in distributed settings. The second is the Pontryagin Differentiable Programming methodology, which bridges the concepts of optimal control theory, deep learning, and backpropagation, and provides a unified, end-to-end learning framework for a broad range of learning and control tasks, including inverse reinforcement learning, neural ODEs, system identification, model-based reinforcement learning, and motion planning, with data- and computation-efficient performance.
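As background for the Pontryagin Differentiable Programming idea, the discrete-time Pontryagin conditions that such a framework differentiates through can be sketched as follows; the notation here (state x_t, control u_t, costate λ_t, parameter θ, stage cost c, terminal cost h) is generic textbook notation chosen for illustration, not taken verbatim from the thesis.

```latex
\begin{aligned}
\text{dynamics:}\quad & x_{t+1} = f(x_t, u_t, \theta), \\
\text{Hamiltonian:}\quad & H_t = c(x_t, u_t, \theta) + \lambda_{t+1}^{\top} f(x_t, u_t, \theta), \\
\text{costate recursion:}\quad & \lambda_t = \frac{\partial H_t}{\partial x_t}, \qquad
  \lambda_T = \frac{\partial h(x_T, \theta)}{\partial x_T}, \\
\text{stationarity:}\quad & 0 = \frac{\partial H_t}{\partial u_t}.
\end{aligned}
```

Because an optimal trajectory satisfies these conditions, differentiating them with respect to θ yields a linear system in the trajectory sensitivities ∂x_t/∂θ and ∂u_t/∂θ, which is what makes end-to-end gradient-based learning through an optimal control solver possible.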
The second part of this thesis focuses on paradigms for robot learning with necessary human guidance in the loop, where we make two contributions. The first is an approach for learning from sparse demonstrations, which allows a robot to learn its control objective function from only human-specified sparse waypoints given in the observation (task) space. The second is an approach for learning from human directional corrections, which enables a robot to incrementally learn its control objective, with guaranteed learning convergence, from a human's directional correction feedback while it is acting.
Degree Type: Doctor of Philosophy
Department: Aeronautics and Astronautics
Campus location: West Lafayette
Advisor/Supervisor/Committee Chair: Shaoshuai Mou
Additional Committee Member 2: Dana Kulic
Additional Committee Member 3: Dengfeng Sun
Additional Committee Member 4: Martin J. Corless
Keywords: Robot Learning; Machine Learning for Robot Control; control-induced learning; Autonomous robots; human-robot systems; control and learning; control objective learning; inverse optimal control; Inverse Reinforcement Learning; Learning from Demonstration (LfD); Optimal Control; Motion planning; differentiable control; Reinforcement Learning; Control-based Reinforcement Learning; control for learning; robot autonomy; Control Systems, Robotics and Automation; Automation and Control Engineering