
Polish NLProc #17 - Machine Reading, Fast and Slow: When Do Models "Understand" Language?

  • Polish NLP Meetup Group
  • 2022-12-17


Video description

Abstract:
Two of the most fundamental challenges in Natural Language Understanding (NLU) at present are: (a) how to establish whether deep learning-based models score highly on NLU benchmarks for the 'right' reasons; and (b) to understand what those reasons would even be. We investigate the behaviour of reading comprehension models with respect to two linguistic 'skills': coreference resolution and comparison. We propose a definition for the reasoning steps expected from a system that would be 'reading slowly' and compare that with the behaviour of five models of the BERT family of various sizes, observed through saliency scores and counterfactual explanations. We find that for comparison (but not coreference), the systems based on larger encoders are more likely to rely on the 'right' information, but even they struggle with generalization, suggesting that they still learn specific lexical patterns rather than the general principles of comparison.
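The saliency scores mentioned in the abstract are per-token attributions of a model's prediction. As an illustrative sketch only (not the talk's actual setup, which probes BERT-family encoders), here is one common scheme, input × gradient, worked out on a toy linear scorer where the gradient has a closed form; all names and the model are hypothetical:

```python
import numpy as np

def input_x_gradient_saliency(embeddings, weights):
    """Input-x-gradient saliency for a toy linear scorer.

    The scorer is score = sigmoid(sum_t e_t . w). The gradient of the
    score w.r.t. each token embedding e_t is sigmoid'(logit) * w, so the
    per-token attribution is e_t . (sigmoid'(logit) * w).
    """
    z = embeddings @ weights                 # (T,) per-token contributions
    logit = z.sum()                          # scalar model output (pre-sigmoid)
    sig = 1.0 / (1.0 + np.exp(-logit))       # model "prediction"
    grad = sig * (1.0 - sig) * weights       # d(score)/d(e_t), same for every t
    saliency = embeddings @ grad             # input . gradient, one value per token
    return saliency

# Toy example: 4 "tokens" with 3-dimensional embeddings.
rng = np.random.default_rng(0)
E = rng.normal(size=(4, 3))
w = rng.normal(size=3)
scores = input_x_gradient_saliency(E, w)
print(scores)  # one attribution per token
```

In a real BERT-scale model the gradient is obtained by backpropagation rather than in closed form, but the resulting per-token scores are read the same way: large magnitudes mark tokens the prediction is most sensitive to, which is how one can ask whether a model relies on the 'right' information.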

Bio:
Sagnik Ray Choudhury is a research fellow at the University of Michigan working on explainable information extraction. Previously, as a postdoctoral researcher at the University of Copenhagen, he worked on the explainability of DNN models used in multi-hop reasoning systems, such as question-answering, fact-checking, and natural language inference. During his PhD at Penn State, he worked on information extraction from scholarly figures and tables, information retrieval, and crawling.

Sagnik has also worked in industry as an NLP/ML engineer at Interactions LLC, a leading AI-based customer service automation company, where he developed DNN models for large-scale entity extraction and linking, dialogue systems, and sentiment classification, and contributed to open-source DNN libraries.

#nlp #xai #polishnlpgroup

