Safe Model-based Reinforcement Learning with Stability Guarantees (NIPS 2017 Spotlight)

Poster session at NIPS 2017 (Tuesday, Dec 5th, 2017, 6-9:30pm):
https://nips.cc/Conferences/2017/Sche...

Extended version of the paper:
https://arxiv.org/abs/1705.08551

Code:
https://github.com/befelix/safe_learning

Talk at CoRL 2017:
The Conference on Robot Learning 2017

Authors:
Felix Berkenkamp, Matteo Turchetta, Angela P. Schoellig, Andreas Krause

Abstract:
Reinforcement learning is a powerful paradigm for learning optimal policies from experimental data. However, to find optimal policies, most reinforcement learning algorithms explore all possible actions, which may be harmful for real-world systems. As a consequence, learning algorithms are rarely applied on safety-critical systems in the real world. In this paper, we present a learning algorithm that explicitly considers safety, defined in terms of stability guarantees. Specifically, we extend control-theoretic results on Lyapunov stability verification and show how to use statistical models of the dynamics to obtain high-performance control policies with provable stability certificates. Moreover, under additional regularity assumptions in terms of a Gaussian process prior, we prove that one can effectively and safely collect data in order to learn about the dynamics and thus both improve control performance and expand the safe region of the state space. In our experiments, we show how the resulting algorithm can safely optimize a neural network policy on a simulated inverted pendulum, without the pendulum ever falling down.
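The safety check at the heart of the method can be illustrated with a minimal sketch (not the authors' implementation from the repository above): assume a 1-D system, a quadratic Lyapunov candidate v(x) = x^2, and a simple stand-in confidence model in place of the Gaussian process posterior. A state is certified safe only if v decreases for every dynamics consistent with the model's confidence interval.

import numpy as np

def v(x):
    # Quadratic Lyapunov candidate v(x) = x^2 (an assumption made for this sketch).
    return x ** 2

def model_confidence(x):
    # Stand-in for a GP posterior over the closed-loop dynamics: returns a mean
    # prediction and the half-width of a high-probability interval for f(x).
    # Here it is faked around a stable linear system f(x) = 0.8 x, with
    # uncertainty that grows away from the origin (i.e. away from "data").
    mean = 0.8 * x
    half_width = 0.05 * (1.0 + np.abs(x))
    return mean, half_width

def is_certified_safe(x, margin=1e-6):
    # Certify x only if v decreases for *every* dynamics inside the interval;
    # since v is convex, the worst case lies at one of the interval endpoints.
    mean, hw = model_confidence(x)
    worst_next = max(v(mean - hw), v(mean + hw))
    return worst_next < v(x) - margin

# Estimate which grid states carry a stability certificate.
states = np.linspace(-2.0, 2.0, 401)
safe = np.array([is_certified_safe(x) for x in states])
print(f"{safe.sum()} of {safe.size} grid states certified safe")

In the paper this worst-case reasoning is done with high-probability GP confidence bounds and a learned neural network policy, and the certified region grows as safely collected data shrinks the model's uncertainty.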

More information:
https://las.ethz.ch
http://berkenkamp.me
