PhD Dissertation Defense – Aniruddha Saha (BACKDOOR ATTACKS IN COMPUTER VISION)

BACKDOOR ATTACKS IN COMPUTER VISION: TOWARDS ADVERSARIALLY ROBUST MACHINE LEARNING MODELS

Deep Neural Networks (DNNs) have become the standard building block in numerous machine learning applications, including computer vision, speech recognition, machine translation, and robotic manipulation, achieving state-of-the-art performance on complex tasks. The widespread success of these networks has driven their deployment in sensitive domains like health care, finance, autonomous driving, and defense-related applications.

However, DNNs are vulnerable to adversarial attacks. An adversary is an agent with malicious intent whose goal is to disrupt the normal functioning of a machine learning pipeline. Research has shown that an adversary can tamper with a model's training process by injecting manipulated data (poisons) into the training set. The manipulation is crafted so that the victim's model malfunctions only when a trigger is applied to a test input. These are called backdoor attacks. For instance, a backdoored model in a self-driving car might work accurately for days before it suddenly fails to detect a pedestrian when the adversary decides to exploit the backdoor. This vulnerability is dangerous when deep learning models are deployed in safety-critical applications.
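To make the mechanism concrete, the sketch below shows a generic patch-style trigger being pasted onto a test image. This is a minimal illustration, not the specific attack studied in the dissertation; the trigger pattern, its location, and the function name `apply_trigger` are hypothetical choices for the example.

```python
# Minimal sketch of a patch-style backdoor trigger (illustrative only).
import torch

def apply_trigger(image: torch.Tensor, trigger: torch.Tensor,
                  top: int = 0, left: int = 0) -> torch.Tensor:
    """Paste a small trigger patch onto a CHW image tensor."""
    patched = image.clone()
    _, th, tw = trigger.shape
    patched[:, top:top + th, left:left + tw] = trigger
    return patched

# Example: a clean 3x224x224 image and a 3x8x8 solid-white trigger patch.
image = torch.rand(3, 224, 224)
trigger = torch.ones(3, 8, 8)
poisoned_input = apply_trigger(image, trigger, top=200, left=200)

# A backdoored model would classify `image` correctly but map
# `poisoned_input` to the attacker's chosen target label.
```

At training time, the attacker injects poisoned examples so the model learns this trigger-to-target association while behaving normally on clean inputs.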

This dissertation studies ways in which state-of-the-art deep learning methods for computer vision are vulnerable to backdoor attacks and proposes defense methods to remedy the vulnerabilities. We push the limits of our current understanding of backdoors and address pertinent research questions.

Advisory Committee:
Dr. Hamed Pirsiavash, Advisor/Co-Chair
Dr. Anupam Joshi, Chair
Dr. Tim Oates
Dr. Tom Goldstein
Dr. Pin-Yu Chen
