dScience Lunch Seminar: Large language models under the hood

  • UiO Realfagsbiblioteket
  • 2023-09-07
  • 266


Description of the video dScience Lunch Seminar: Large language models under the hood

Welcome to our dScience lunch seminar in the Science Library, where Andrey Kutuzov will talk about ChatGPT.

In the last few years, a radical increase in the scale of deep neural language models (both in the size of the training data and in the size of the models themselves) has led to impressive achievements in various natural language processing tasks. "Celebrity" models such as ChatGPT, LLaMA, BLOOM, and PaLM are already sometimes described as "approaching artificial intelligence", although the reality can differ from the over-hyped media coverage.

In this talk, Kutuzov will describe the foundations of the technology behind large-scale language models. The two most important components behind their success are 1) state-of-the-art deep learning architectures (in particular, the Transformer) and 2) the availability of tremendous amounts of textual data used to train such models. The interaction of these two poses intricate theoretical and practical questions, also linked to the unequal distribution of computing resources. Do we have enough good-quality training data for languages other than English? Is "data poisoning" with automatically generated texts a real danger? Why is it important to open-source both training data and model weights?
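The talk itself is not transcribed here, but the Transformer architecture mentioned above is built around one central operation, scaled dot-product self-attention. A minimal NumPy sketch of that operation (an illustration only, not material from the seminar):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d)) V -- the core Transformer operation."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)          # pairwise token similarities
    # numerically stable softmax over each row
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                     # weighted mix of value vectors

# Toy example: 3 tokens with embedding dimension 4.
rng = np.random.default_rng(0)
X = rng.standard_normal((3, 4))
out = scaled_dot_product_attention(X, X, X)  # self-attention: Q = K = V = X
print(out.shape)  # (3, 4)
```

In a real model, Q, K, and V are learned linear projections of the token embeddings, and many such attention heads and layers are stacked; the enormous parameter counts and training corpora discussed in the talk come from scaling exactly this kind of building block.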

Speaker
Andrey Kutuzov (PhD, UiO, 2020) is an Associate Professor in the Language Technology Group at the University of Oslo. He currently serves as the Norwegian on-site manager of the High-Performance Language Technology (HPLT) project. His academic interests include computational linguistics and natural language processing, semantic change detection and diachronically aware language models, distributional semantics, machine learning, and large-scale language models. In 2022, Kutuzov received the Norwegian Artificial Intelligence Research Consortium (NORA) award as a Distinguished Early Career Researcher.

https://www.uio.no/dscience/english/n...
