auto_LiRPA: An Automatic Library for Neural Network Verification and Scalable Certified Defense


Papers covered in this video:
"Automatic Perturbation Analysis for Scalable Certified Robustness and Beyond" (NeurIPS 2020), https://arxiv.org/pdf/2002.12920 and "Efficient Neural Network Robustness Certification with General Activation Functions" (NeurIPS 2018), https://arxiv.org/pdf/1811.00866

The open-source auto_LiRPA library: https://github.com/KaidiXu/auto_LiRPA

Abstract:

We develop an automatic framework to enable neural network verification on general network structures using linear relaxation based perturbation analysis (LiRPA). Our framework generalizes existing LiRPA algorithms such as CROWN and DeepPoly to operate on general computational graphs. The flexibility, differentiability and ease of use of our framework allow us to obtain state-of-the-art certified defense on fairly complicated networks like DenseNet, ResNeXt, LSTM and Transformer that are not supported by prior works.
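To make this concrete, below is a minimal sketch of how the library is typically used (the toy two-layer model, the input shape, and the epsilon value are arbitrary placeholders chosen for illustration): wrap an ordinary PyTorch model, attach an L-infinity perturbation to the input, and compute certified lower and upper bounds on the outputs with backward (CROWN-style) bound propagation.

import torch
import torch.nn as nn
from auto_LiRPA import BoundedModule, BoundedTensor
from auto_LiRPA.perturbations import PerturbationLpNorm

# A toy feed-forward classifier; the architecture is arbitrary and only for illustration.
model = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 10))
dummy_input = torch.zeros(1, 784)

# Wrap the model so auto_LiRPA can trace it into a computational graph of bounded operators.
bounded_model = BoundedModule(model, dummy_input)

# Define an L-infinity perturbation of radius eps around the concrete input.
ptb = PerturbationLpNorm(norm=float("inf"), eps=0.03)
x = BoundedTensor(torch.rand(1, 784), ptb)

# Certified lower/upper bounds on every output logit via backward (CROWN-style) propagation.
lb, ub = bounded_model.compute_bounds(x=(x,), method="backward")
print(lb, ub)

Because these bounds are differentiable, the same call can be placed inside a training loop to obtain a certified-defense objective.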

In this presentation, I discussed basic concepts of robustness verification for neural networks and gave a brief overview of CROWN, the efficient neural network verification algorithm that auto_LiRPA is based on. Then, I discussed how to extend CROWN into a graph algorithm that operates on general computational graphs. Lastly, I discussed a few applications of auto_LiRPA, including certified defense on Downscaled ImageNet, to which previous approaches (e.g., CROWN-IBP) cannot scale, as well as training robust natural language classifiers and deep reinforcement learning agents. To show auto_LiRPA's capability beyond common robustness verification tasks, we created a neural network with a provably flat optimization landscape by applying LiRPA to the network parameters and considering perturbations on the model weights.
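As a quick reminder of the idea behind CROWN: for an unstable ReLU neuron whose pre-activation x has intermediate bounds l <= x <= u with l < 0 < u, CROWN replaces the ReLU by a pair of linear functions and propagates such linear bounds backward through the network. A sketch of this standard relaxation in LaTeX:

\[
\alpha x \;\le\; \mathrm{ReLU}(x) \;\le\; \frac{u}{u-l}\,(x - l), \qquad \alpha \in [0, 1],
\]

where the slope \alpha of the lower bound can be chosen adaptively (e.g., depending on whether |l| or |u| is larger).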
