Download or watch Gillian Hadfield | The Normative Infrastructure of Cooperation, NeurIPS 2020

  • Schwartz Reisman Institute
  • 2021-09-28
  • 233
Gillian Hadfield | The Normative Infrastructure of Cooperation, NeurIPS 2020
Tags: schwartz reisman institute, sri, schwartz reisman, technology, society, gillian hadfield, gillian k hadfield, university of toronto, u of t, neurips, normativity, ai, artificial intelligence, machine learning, cooperation, alignment problem, ai alignment problem, rules, silly rules, computer science, cooperative ai, deepmind, open ai, game theory, human, intelligence

Download Gillian Hadfield | The Normative Infrastructure of Cooperation, NeurIPS 2020 for free in up to 4K (2K / 1080p)

Here you can download Gillian Hadfield | The Normative Infrastructure of Cooperation, NeurIPS 2020 for free or watch the video from YouTube in the highest available quality.

To download, choose an option from the form below:

  • Download information:

Download the audio of Gillian Hadfield | The Normative Infrastructure of Cooperation, NeurIPS 2020 for free in MP3 format:

If you have any trouble downloading, please contact us using the contact details at the bottom of the page.
Thank you for using video2dn.com

Video description for Gillian Hadfield | The Normative Infrastructure of Cooperation, NeurIPS 2020

Schwartz Reisman Institute Director and Chair Gillian Hadfield presents a keynote talk at NeurIPS 2020, "The normative infrastructure of cooperation."

Details here: https://nips.cc/Conferences/2020/Sche...

Abstract:
In this talk, I will present the case for the critical role played by third-party enforced rules in the extensive forms of cooperation we see in humans. Cooperation, I’ll argue, cannot be adequately accounted for—or modeled for AI—within the framework of human preferences, coordination incentives or bilateral commitments and reciprocity alone. Cooperation is a group phenomenon and requires group infrastructure to maintain. This insight is critical for training AI agents that can cooperate with humans and, likely, other AI agents. Training environments need to be built with normative infrastructure that enables AI agents to learn and participate in cooperative activities—including the cooperative activity that undergirds all others: collective punishment of agents that violate community norms.
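The abstract's core claim is game-theoretic: bilateral reciprocity alone does not stabilize cooperation in a group, whereas third-party (collective) punishment of norm violators can. The short Python sketch below is purely illustrative and not from the talk; the payoff numbers, the fine, and the share of punishers are assumptions chosen to make the point visible in a toy public-goods game.

# Illustrative sketch only (not from the talk): third-party punishment
# in a one-shot public goods game. Each agent contributes ('C') or
# defects ('D'); contributions are multiplied and shared equally.
# A fraction of the group acts as third-party punishers who pay a small
# cost to fine every norm violator.

def payoffs(actions, multiplier=1.6, endowment=10.0,
            punisher_share=0.5, fine=6.0, punish_cost=1.0):
    """actions: list of 'C' (contribute) or 'D' (defect)."""
    n = len(actions)
    pot = multiplier * endowment * actions.count('C')
    share = pot / n
    num_punishers = round(punisher_share * n)   # agents enforcing the norm
    defectors = actions.count('D')
    result = []
    for i, action in enumerate(actions):
        kept = 0.0 if action == 'C' else endowment
        p = kept + share
        if action == 'D':                 # collective punishment of violators
            p -= fine * num_punishers
        if i < num_punishers:             # enforcement is itself costly
            p -= punish_cost * defectors
        result.append(round(p, 2))
    return result

if __name__ == "__main__":
    # Without punishers the lone defector out-earns every contributor;
    # with half the group enforcing the norm, defection pays least.
    print(payoffs(['C', 'C', 'C', 'D'], punisher_share=0.0))  # [12.0, 12.0, 12.0, 22.0]
    print(payoffs(['C', 'C', 'C', 'D'], punisher_share=0.5))  # [11.0, 11.0, 12.0, 10.0]

With no punishers the lone defector earns more than any contributor; once enough of the group enforces the norm, defecting pays less than contributing. That reversal only happens at the group level, which is the sense in which cooperation needs shared, third-party-enforced infrastructure rather than pairwise reciprocity alone.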

Event information:
Problems of cooperation—in which agents seek ways to jointly improve their welfare—are ubiquitous and important. They can be found at all scales ranging from our daily routines—such as highway driving, communication via shared language, division of labor, and work collaborations—to our global challenges—such as disarmament, climate change, global commerce, and pandemic preparedness. Arguably, the success of the human species is rooted in our ability to cooperate, in our social intelligence and skills. Since machines powered by artificial intelligence and machine learning are playing an ever greater role in our lives, it will be important to equip them with the skills necessary to cooperate and to foster cooperation.

We see an opportunity for the field of AI, and particularly machine learning, to explicitly focus effort on this class of problems which we term Cooperative AI. The goal of this research would be to study the many aspects of the problem of cooperation, and innovate in AI to contribute to solving these problems. Central questions include how to build machine agents with the capabilities needed for cooperation, and how advances in machine learning can help foster cooperation in populations of agents (of machines and/or humans), such as through improved mechanism design and mediation.
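As one concrete, purely illustrative reading of "improved mechanism design and mediation" (the scenario and payoffs below are assumptions, not from the source): a trusted mediator that randomizes recommendations can turn a risky coordination problem, such as two drivers meeting at a narrow intersection, into a correlated equilibrium in which each driver's best response is simply to follow the recommendation.

import random

# Toy "intersection" game: both going is a crash, both yielding wastes
# time, and one going while the other yields is best overall.
PAYOFF = {('go', 'go'): (-10, -10), ('go', 'yield'): (3, 1),
          ('yield', 'go'): (1, 3),  ('yield', 'yield'): (0, 0)}

def mediator():
    """Flip a fair coin and recommend which driver goes and which yields."""
    return ('go', 'yield') if random.random() < 0.5 else ('yield', 'go')

def expected_payoffs(trials=100_000):
    totals = [0.0, 0.0]
    for _ in range(trials):
        joint_action = mediator()        # both drivers follow the recommendation
        p = PAYOFF[joint_action]
        totals[0] += p[0]
        totals[1] += p[1]
    return [t / trials for t in totals]

if __name__ == "__main__":
    print(expected_payoffs())   # roughly [2.0, 2.0]: no crashes, fair on average

Following the recommendation is individually rational: a driver told to go knows the other was told to yield, so going (3) beats yielding (0); a driver told to yield knows the other is going, so yielding (1) beats crashing (-10). A learned mediator or mechanism would play the same structural role for populations of artificial and human agents.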

Research could be organized around key capabilities necessary for cooperation, including: understanding other agents, communicating with other agents, constructing cooperative commitments, and devising and negotiating suitable bargains and institutions. Since artificial agents will often act on behalf of particular humans and in ways that are consequential for humans, this research will need to consider how machines can adequately learn human preferences, and how best to integrate human norms and ethics into cooperative arrangements.

We are planning to bring together scholars from diverse backgrounds to discuss how AI research can contribute to the field of cooperation.

https://www.CooperativeAI.com/



video2dn Copyright © 2023 - 2025

Contact for copyright holders: [email protected]