
Beyond Sentences & Paragraphs: Towards Document-level & Multi-doc Understanding/Arman Cohan (UW&AI2)

  • Channel: Web IR / NLP Group at NUS
  • Date: 2022-01-01
  • Views: 325


Video description for Beyond Sentences & Paragraphs: Towards Document-level & Multi-doc Understanding/Arman Cohan (UW&AI2)

ABSTRACT:
In this talk, I will describe a few of our recent works on developing Transformer-based models that target document-level and multi-document natural language tasks. I will first introduce Specter, a method for producing document representations using a Transformer model that incorporates document-level relatedness signals. I will then discuss Longformer, an efficient Transformer model that can process and contextualize information across inputs of several thousand tokens. This is achieved by replacing the full self-attention mechanism in Transformers with sparse local and global attention patterns. I will then discuss two of our efforts in developing general language models for multi-document tasks. CDLM is an encoder-only model for multi-document tasks that uses multiple related documents during pretraining and pretrains a dynamic global attention mechanism for multi-document tasks. I will then briefly discuss our recent work on PRIMER, a general pre-trained model for multi-document summarization tasks. Finally, I will discuss some of our other efforts on creating challenging document-level benchmarks.
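
The two ideas the abstract names first, Specter's document embeddings and Longformer's sparse local attention with a per-token global attention mask, both have public checkpoints in the Hugging Face transformers library. The sketch below is a minimal illustration under that assumption, not material from the talk itself: the checkpoint names ("allenai/specter", "allenai/longformer-base-4096") are the public releases, while the example texts and the choice of giving only the leading token global attention are assumptions for demonstration.

```python
# Minimal sketch (not from the talk) of Specter and Longformer usage,
# assuming the Hugging Face `transformers` library and public checkpoints.
import torch
from transformers import AutoTokenizer, AutoModel

# --- Specter: a document embedding from title + abstract -------------------
spec_tok = AutoTokenizer.from_pretrained("allenai/specter")
spec_model = AutoModel.from_pretrained("allenai/specter")

paper = {
    "title": "Longformer: The Long-Document Transformer",          # example input
    "abstract": "Transformer-based models struggle with long sequences ...",
}
# Specter concatenates title and abstract with the tokenizer's separator token.
text = paper["title"] + spec_tok.sep_token + paper["abstract"]
inputs = spec_tok(text, return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    # The [CLS] vector is used as the document representation.
    doc_embedding = spec_model(**inputs).last_hidden_state[:, 0, :]

# --- Longformer: sparse local + global attention over a long input ---------
lf_tok = AutoTokenizer.from_pretrained("allenai/longformer-base-4096")
lf_model = AutoModel.from_pretrained("allenai/longformer-base-4096")

long_text = " ".join(["A very long document."] * 800)               # example input
inputs = lf_tok(long_text, return_tensors="pt", truncation=True, max_length=4096)
# Every token gets sliding-window (local) attention; a 1 in
# global_attention_mask additionally gives that position global attention.
# Here only the leading <s> token attends globally, a common setup for
# classification-style tasks.
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1
with torch.no_grad():
    outputs = lf_model(**inputs, global_attention_mask=global_attention_mask)
```

The global attention mask is what makes the cost linear in input length: local windows cover most positions, and only the handful of globally attending tokens see the whole sequence.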

BIO-DATA:
Arman Cohan is a Research Scientist at the Allen Institute for AI (AI2) and an Affiliate Assistant Professor at the University of Washington. His research focuses on developing natural language processing (NLP) models for document-level and multi-document understanding, natural language generation and summarization, as well as information discovery and filtering. He is also interested in applications of NLP in the science and health domains. His research has been recognized with multiple awards, including a best paper award at EMNLP 2017, an honorable mention at COLING 2018, and the Harold N. Glassman Distinguished Doctoral Dissertation Award in 2019.

Slides link (via Speakerdeck): https://speakerdeck.com/wingnus/beyon...
Related Link: https://wing-nus.github.io/ir-seminar...

