Robust Reinforcement Learning against Adversarial Perturbations on State Observations


Papers covered in this video:
"Robust Deep Reinforcement Learning against Adversarial Perturbations on State Observations" https://arxiv.org/abs/2003.08938
"Robust Reinforcement Learning on State Observations with Learned Optimal Adversary" https://openreview.net/pdf?id=sCZbhBv...

Code and pretrained agents:
http://papercode.cc/RobustRL

Abstract:

A reinforcement learning (RL) agent observes its states through observations, which may contain natural measurement errors or adversarial noise that can mislead the agent into taking suboptimal actions. Several works have demonstrated this vulnerability via adversarial attacks, but existing approaches to improving robustness under adversarial perturbations on state observations have had limited success and lack theoretical principles. We propose the state-adversarial Markov decision process (SA-MDP) to study the fundamental properties of this problem, and develop a theoretically principled robust policy regularization that can be applied to a large family of deep RL algorithms, including proximal policy optimization (PPO), deep deterministic policy gradient (DDPG), and deep Q-networks (DQN). Additionally, we show that under the SA-MDP framework we can solve for an optimal adversary that is significantly stronger than existing adversarial attacks, and we can alternately train the agent with this learned optimal adversary to improve its robustness under strong attacks. We significantly improve the robustness of PPO, DDPG, and DQN agents under a suite of strong white-box adversarial attacks, including new attacks of our own (the Robust Sarsa attack and the Maximal Action Difference attack).
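
To make the robust policy regularization idea concrete, below is a minimal sketch (not the authors' released implementation) of its core ingredient: penalizing how much the policy's action distribution changes when the observed state is perturbed within a small l_inf ball. The `DiscretePolicy` network, the epsilon and step-count values, and the function names are all illustrative assumptions; the papers solve the inner maximization with SGLD or convex relaxation bounds, while this sketch uses a few projected-gradient-ascent steps for simplicity.

```python
# Hedged sketch of a state-adversarial KL regularizer for a categorical policy.
# Assumed/illustrative: the toy policy network, eps=0.05, steps=5, kappa=0.1.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DiscretePolicy(nn.Module):
    """Toy categorical policy over discrete actions (stand-in for a DQN/PPO head)."""
    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)  # action logits

def state_adversarial_kl(policy: DiscretePolicy, obs: torch.Tensor,
                         eps: float = 0.05, steps: int = 5) -> torch.Tensor:
    """Approximate max over ||d||_inf <= eps of KL(pi(.|obs) || pi(.|obs + d))
    with a few projected-gradient-ascent steps on the perturbation d."""
    with torch.no_grad():
        clean_logp = F.log_softmax(policy(obs), dim=-1)

    delta = torch.zeros_like(obs).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        pert_logp = F.log_softmax(policy(obs + delta), dim=-1)
        kl = F.kl_div(pert_logp, clean_logp, log_target=True, reduction="batchmean")
        grad, = torch.autograd.grad(kl, delta)
        with torch.no_grad():
            delta += eps * grad.sign()   # ascend on the KL divergence
            delta.clamp_(-eps, eps)      # project back into the l_inf ball

    pert_logp = F.log_softmax(policy(obs + delta.detach()), dim=-1)
    return F.kl_div(pert_logp, clean_logp, log_target=True, reduction="batchmean")

# Usage: add the regularizer, scaled by a coefficient kappa, to the usual RL loss.
if __name__ == "__main__":
    policy = DiscretePolicy(obs_dim=8, n_actions=4)
    obs = torch.randn(32, 8)
    reg = state_adversarial_kl(policy, obs)
    loss = 0.1 * reg  # in practice: loss = rl_loss + kappa * reg
    loss.backward()
    print(float(reg))
```

The alternating-training idea from the second paper follows the same interface: instead of a gradient-based perturbation, a separate adversary policy proposes the perturbed observation, and agent and adversary are trained in alternation; the sketch above only covers the regularization side.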
