Eliezer Yudkowsky – AI Alignment: Why It's Hard, and Where to Start

On May 5, 2016, Eliezer Yudkowsky gave a talk at Stanford University for the 26th Annual Symbolic Systems Distinguished Speaker series (https://symsys.stanford.edu/viewing/e...).

Eliezer is a senior research fellow at the Machine Intelligence Research Institute, a research nonprofit studying the mathematical underpinnings of intelligent behavior.

Talk details—including slides, notes, and additional resources—are available at https://intelligence.org/stanford-talk/.

UPDATES/CORRECTIONS:

1:05:53 - Correction Dec. 2016: FairBot cooperates iff it proves that you cooperate with it. (A toy code sketch of this rule follows the corrections below.)

1:08:19 - Update Dec. 2016: Stuart Russell is now the head of a new alignment research institute, the Center for Human-Compatible AI (http://humancompatible.ai/).

1:08:38 - Correction Dec. 2016: Leverhulme CFI is a joint venture between Cambridge, Oxford, Imperial College London, and UC Berkeley. The Leverhulme Trust provided CFI's initial funding, in response to a proposal developed by CSER staff.

1:09:04 - Update Dec. 2016: Paul Christiano now works at OpenAI (as does Dario Amodei). Chris Olah is based at Google Brain.
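For readers who want a concrete handle on the 1:05:53 correction, here is a minimal Python sketch of FairBot's decision rule. It is an illustration under stated assumptions, not the construction from the talk: the real FairBot searches for a proof in a formal system (and cooperates with itself via Löb's theorem), whereas this sketch substitutes bounded mutual simulation for proof search; the agent names and the depth budget are hypothetical.

# Toy FairBot (illustrative sketch only). The real FairBot cooperates iff it
# PROVES its opponent cooperates with it; here, bounded mutual simulation
# stands in for proof search.

C, D = "C", "D"

def fairbot(opponent, depth=3):
    if depth == 0:
        return D  # "proof search" budget exhausted: no proof found, so defect
    # Cooperate iff the (simulated) opponent cooperates with us.
    return C if opponent(fairbot, depth - 1) == C else D

def cooperate_bot(opponent, depth=0):
    return C  # cooperates unconditionally

def defect_bot(opponent, depth=0):
    return D  # defects unconditionally

if __name__ == "__main__":
    print(fairbot(cooperate_bot))  # C: easy to show CooperateBot cooperates
    print(fairbot(defect_bot))     # D: no proof of cooperation exists
    print(fairbot(fairbot))        # D here, though the real (Lobian) FairBot
                                   # cooperates with itself, a gap that
                                   # bounded simulation cannot capture

The last case is exactly where the formal, proof-based version earns its keep: two proof-searching FairBots achieve mutual cooperation, while naive mutual simulation bottoms out and defects.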
