Polish NLProc #15 - On the Adversarial Vulnerabilities of Large Language Models

  • Polish NLP Meetup Group
  • 2022-10-29


Video description

Abstract: Large-scale pre-trained language models have achieved tremendous success across a wide range of natural language understanding (NLU) tasks, even surpassing human performance. However, the robustness of these models can be challenged by carefully crafted textual adversarial examples. We first propose SemAttack, an efficient and effective framework that generates natural adversarial text by constructing different semantic perturbation functions; it optimizes the generated perturbations constrained to generic semantic spaces, including the typo space, the knowledge space (e.g., WordNet), the contextualized semantic space (e.g., the embedding space of BERT clusterings), or a combination of these spaces. We further present Adversarial GLUE (AdvGLUE), a new multi-task benchmark to quantitatively and thoroughly explore and evaluate the vulnerabilities of modern large-scale language models under various types of adversarial attacks. We hope our work will motivate the development of new adversarial attacks that are more stealthy and semantics-preserving, as well as new language models that are robust against sophisticated adversarial attacks.
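The kind of attack the abstract describes can be illustrated with a minimal, self-contained sketch: a greedy word-substitution attack that draws candidates from a "knowledge space" of semantics-preserving synonyms. Everything here is a toy stand-in chosen for illustration, not the actual SemAttack implementation: the bag-of-words sentiment classifier replaces a large pre-trained model, and the hand-coded synonym table replaces WordNet or BERT-embedding neighborhoods; the names `greedy_attack`, `WEIGHTS`, and `SYNONYMS` are hypothetical.

```python
# Toy bag-of-words sentiment classifier: score > 0 means "positive".
WEIGHTS = {"great": 2.0, "good": 1.0, "fine": 0.2, "bad": -1.0, "awful": -2.0}

def score(tokens):
    return sum(WEIGHTS.get(t, 0.0) for t in tokens)

# Hand-coded "knowledge space": semantics-preserving substitution candidates
# (a stand-in for WordNet synsets or nearest neighbors in embedding space).
SYNONYMS = {"great": ["fine", "good"], "good": ["fine"]}

def greedy_attack(tokens):
    """Greedily substitute words to push the classifier's score toward the
    opposite class, stopping as soon as the predicted label flips."""
    original_label = score(tokens) > 0
    adv = list(tokens)
    for i, tok in enumerate(adv):
        candidates = [tok] + SYNONYMS.get(tok, [])
        # Keep the candidate that moves the score furthest toward the flip.
        key = lambda c: score(adv[:i] + [c] + adv[i + 1:])
        adv[i] = min(candidates, key=key) if original_label else max(candidates, key=key)
        if (score(adv) > 0) != original_label:
            break  # prediction flipped: adversarial example found
    return adv

sentence = "the plot was bad but the acting was great".split()
print(" ".join(greedy_attack(sentence)))
# Swapping "great" for the weaker synonym "fine" flips the toy classifier
# from positive to negative while preserving the sentence's meaning.
```

Real attacks in this family differ mainly in how the perturbation space is built (typos, synonyms, contextual embeddings) and in replacing the greedy word-by-word search with gradient-guided or combinatorial optimization against the target model.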

Bio: Boxin Wang is a computer science PhD candidate at the University of Illinois at Urbana-Champaign (UIUC). He is a research assistant in the Secure Learning Lab led by Prof. Bo Li. He received a NeurIPS 2022 Scholar Award and the Yunni & Maxine Pao Memorial Fellowship, and was selected as a Norton Labs Graduate Fellowship finalist. He has held research internships at Google, Microsoft, and NVIDIA. His research interests are in trustworthy natural language processing (NLP), including exploring the vulnerabilities of existing state-of-the-art ML models and designing robust, private, and generalizable models for social good. Additional information is available at https://wbx.life.

The talk will be held in English.
#nlp #adversarial_examples #large_language_models
