Active Preference-Based Gaussian Process Regression for Reward Learning: Supplemental Video

  • Stanford ILIAD
  • 2020-05-05
  • 483

Tags: robotics, active learning, stanford, polytechnique, Gaussian processes, preference learning, preference elicitation, preference-based, machine learning, artificial intelligence, comparison-based learning, comparative learning, reward learning, human preferences, hri, human-robot interaction, rss, robotics science and systems, nonlinearity, nonlinear reward functions, autonomous driving, self-driving cars, autonomy, information gain, optimization, uncertainty, research, science, technology

Description of "Active Preference-Based Gaussian Process Regression for Reward Learning: Supplemental Video"

Paper: https://arxiv.org/abs/2005.02575
Code: https://github.com/Stanford-ILIAD/act...
Talk at RSS 2020: Erdem Bıyık's Talk on "Active Preference-B...

Companion video for RSS 2020 paper:
E Bıyık*, N Huynh*, MJ Kochenderfer, D Sadigh, "Active Preference-Based Gaussian Process Regression for Reward Learning", Proceedings of Robotics: Science and Systems (RSS), Corvallis, Oregon, USA, Jul. 2020.


Designing reward functions is a challenging problem in AI and robotics. Humans usually have a difficult time directly specifying all the desirable behaviors that a robot needs to optimize. One common approach is to learn reward functions from collected expert demonstrations. However, learning reward functions from demonstrations introduces many challenges, ranging from methods that require highly structured models, e.g., reward functions that are linear in a predefined set of features, to methods with less structured reward functions that in turn require a tremendous amount of data. In addition, humans tend to have a difficult time providing demonstrations on robots with high degrees of freedom, or even quantifying reward values for given demonstrations. To address these challenges, we present a preference-based learning approach in which, as an alternative, the human feedback is only in the form of comparisons between trajectories. Furthermore, we do not assume a highly constrained structure on the reward function. Instead, we model the reward function using a Gaussian Process (GP) and propose a mathematical formulation to actively find a GP using only human preferences. Our approach enables us to tackle both the inflexibility and the data-inefficiency problems within a preference-based learning framework. Our results in simulations and a user study suggest that our approach can efficiently learn expressive reward functions for robotics tasks.
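As a rough illustration of the idea described above, the sketch below models a reward over trajectory features with a GP prior and actively selects pairwise comparison queries. Every design choice here (the RBF kernel, the weighted-prior-samples posterior, and the maximum-entropy query heuristic standing in for the paper's information-gain objective) is an illustrative assumption, not the authors' implementation; see the linked GitHub repository for their code.

# A minimal sketch of preference-based reward learning with a GP prior.
# All choices below (RBF kernel, weighted prior samples as a crude posterior,
# max-entropy query selection in place of the paper's information-gain
# objective) are illustrative assumptions, not the authors' implementation.
import numpy as np

rng = np.random.default_rng(0)

# Candidate trajectories, each summarized by a 2-D feature vector.
X = rng.uniform(-1.0, 1.0, size=(30, 2))

def rbf_kernel(A, B, lengthscale=0.5, variance=1.0):
    # Squared-exponential kernel between the rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

# Draw reward functions from the GP prior over the candidate set; the
# "posterior" is these samples re-weighted by the preference likelihood.
K = rbf_kernel(X, X) + 1e-6 * np.eye(len(X))
samples = (np.linalg.cholesky(K) @ rng.standard_normal((len(X), 2000))).T
weights = np.ones(len(samples)) / len(samples)

def pref_prob(f, i, j):
    # Logistic likelihood of "trajectory i preferred over j" per sample.
    return 1.0 / (1.0 + np.exp(-(f[:, i] - f[:, j])))

def update(i, j):
    # Condition on the observed answer "i preferred over j".
    global weights
    weights = weights * pref_prob(samples, i, j)
    weights = weights / weights.sum()

def best_query():
    # Ask about the pair whose answer is most uncertain under the current
    # posterior (a cheap stand-in for the information-gain objective).
    best_score, best_pair = -1.0, None
    for i in range(len(X)):
        for j in range(i + 1, len(X)):
            p = float(np.sum(weights * pref_prob(samples, i, j)))
            entropy = -(p * np.log(p + 1e-12) + (1 - p) * np.log(1 - p + 1e-12))
            if entropy > best_score:
                best_score, best_pair = entropy, (i, j)
    return best_pair

# Simulated human whose true reward prefers trajectories near the origin.
true_reward = -np.linalg.norm(X, axis=1)

for _ in range(10):
    i, j = best_query()
    update(i, j) if true_reward[i] > true_reward[j] else update(j, i)

posterior_mean = samples.T @ weights  # expected reward at each candidate
print("Best trajectory features:", X[np.argmax(posterior_mean)])

In the paper itself, queries are chosen by maximizing the information gain of the comparison and the GP posterior under the preference likelihood is computed properly rather than by re-weighting prior samples; the sketch only conveys the shape of the active preference-elicitation loop.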
