WFVML 2022 Invited Talk: Efficient Neural Network Verification using Branch and Bound (Suman Jana)

Pre-recorded invited talk at the 1st Workshop on Formal Verification of Machine Learning (WFVML 2022), co-located with ICML 2022. More details about this workshop can be found at https://www.ml-verification.com/

Title: Efficient Neural Network Verification using Branch and Bound
Speaker: Prof. Suman Jana (Columbia University)

Bio: Suman Jana is an associate professor in the Department of Computer Science at Columbia University. His primary research interests lie at the intersection of computer security and machine learning. More specifically, he is interested both in using machine learning to improve software security and in improving the security and reliability of machine learning models themselves. Suman's work is well recognized in computer security, and he has received the 2021 OSDI Best Paper Award, the 2019 NSF CAREER Award, the 2018 ARO Young Investigator Award, the 2017 SOSP Best Paper Award, a 2017 Google Faculty Research Award, the IEEE S&P 2014 Best Practical Paper Award, and the IEEE S&P 2012 Best Student Paper Award.

Abstract: In this talk, I will describe two recent Branch and Bound (BaB) verifiers developed by our group to ensure different safety properties of neural networks. The BaB verifiers involve two main steps: (1) recursively splitting the original verification problem into easier, independent subproblems by splitting input or hidden neurons; and (2) for each subproblem, using fast but incomplete bound propagation techniques to compute sound estimated bounds on the outputs of the target neural network. A key challenge for existing BaB verifiers is computing tight relaxations of activation-function nonlinearities (e.g., ReLU). Our recent works (α-CROWN and β-CROWN) introduce a primal-dual approach that jointly optimizes the corresponding Lagrangian multipliers for each ReLU with gradient ascent. This approach is highly parallelizable and avoids calls to expensive LP solvers. Our verifiers not only provide tighter output estimates than existing bound propagation methods but can also fully leverage GPUs through massive parallelization. Our verifier, α,β-CROWN (alpha-beta-CROWN), won the second International Verification of Neural Networks Competition (VNN-COMP 2021) with the highest total score.
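The two BaB steps in the abstract can be illustrated with a minimal, self-contained sketch. This is not the speaker's method: it uses plain interval bound propagation (rather than α/β-CROWN's optimized linear relaxations) as the incomplete bounding step, splits the input box (rather than hidden neurons), and the toy network, threshold, and function names are all illustrative assumptions.

```python
# Minimal branch-and-bound sketch: (1) recursively split the problem into
# subproblems; (2) bound each subproblem with a fast but incomplete method.
# Here the bounding step is interval bound propagation (IBP), a simple
# stand-in for the CROWN-style relaxations described in the talk.

def interval_affine(lo, hi, W, b):
    """Soundly propagate the box [lo, hi] through y = W x + b."""
    new_lo, new_hi = [], []
    for row, bias in zip(W, b):
        new_lo.append(bias + sum(w * (l if w >= 0 else h)
                                 for w, l, h in zip(row, lo, hi)))
        new_hi.append(bias + sum(w * (h if w >= 0 else l)
                                 for w, l, h in zip(row, lo, hi)))
    return new_lo, new_hi

def output_upper_bound(lo, hi, layers):
    """Step (2): fast, sound, but incomplete upper bound on the scalar output."""
    for W, b in layers[:-1]:
        lo, hi = interval_affine(lo, hi, W, b)
        lo = [max(0.0, v) for v in lo]  # ReLU is monotone, so apply elementwise
        hi = [max(0.0, v) for v in hi]
    lo, hi = interval_affine(lo, hi, *layers[-1])
    return hi[0]

def bab_verify(lo, hi, layers, threshold, depth=0, max_depth=12):
    """Step (1): split the input box until every subproblem is verified
    (upper bound <= threshold) or the depth budget runs out."""
    if output_upper_bound(lo, hi, layers) <= threshold:
        return True   # subproblem verified; bound is tight enough here
    if depth >= max_depth:
        return False  # inconclusive: possibly unsafe
    # Bisect the widest input dimension into two independent subproblems.
    i = max(range(len(lo)), key=lambda j: hi[j] - lo[j])
    mid = 0.5 * (lo[i] + hi[i])
    left_hi = hi[:]; left_hi[i] = mid
    right_lo = lo[:]; right_lo[i] = mid
    return (bab_verify(lo, left_hi, layers, threshold, depth + 1, max_depth)
            and bab_verify(right_lo, hi, layers, threshold, depth + 1, max_depth))

# Toy network (illustrative): 2 inputs -> 2 ReLU hidden units -> 1 output.
# It computes |x1 - x2|, whose true maximum over [-1, 1]^2 is 2.
layers = [([[1.0, -1.0], [-1.0, 1.0]], [0.0, 0.0]),  # hidden affine layer
          ([[1.0, 1.0]], [0.0])]                     # output affine layer

# IBP alone gives a loose bound of 4 on the whole box, so it cannot prove
# output <= 2.1 directly; after a few splits, BaB verifies the property.
print(bab_verify([-1.0, -1.0], [1.0, 1.0], layers, 2.1))  # → True
```

Note how branching does the work that a tighter relaxation would otherwise do: each split shrinks the box, the incomplete bound becomes tighter on each subproblem, and the subproblems are fully independent, which is what makes the approach parallelizable on GPUs.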
