Andrew Critch on what AGI might look like in practice

  • Foresight Institute
  • 2025-12-11
  • 300 views

Video description: Andrew Critch on what AGI might look like in practice

When people think about AGI, most ask "When will it arrive?" or "What kind of AGI will we get?" Andrew Critch, an AI safety researcher and mathematician, argues that the most important question is actually "What will we do with it?"

In our conversation, we explore the importance of our choices in the quest to make AGI a force for good. Andrew explains what AGI might look like in practical terms, and the consequences of it being trained on our culture. He also claims that finding the “best” values AI should have is a philosophical trap, and that we should instead focus on finding a basic agreement about “good” vs. “bad” behaviors.

The episode also covers concrete takes on the transition to AGI, including:
  • Why an advanced intelligence would likely find killing humans "mean."
  • How automated computer security checks could be one of the best uses of powerful AI.
  • Why the best preparation for AGI is simply to build helpful products today.


00:00 Intro
00:54 Andrew Critch’s journey: from math to AI safety
04:49 What everyone gets wrong about extinction risk
06:30 Successionism: would humans care if we went extinct?
08:22 What will AGI actually look like?
11:55 The most important question: what will we do with AGI?
13:12 Visions of AGI going well: tool vs. collaborator
18:58 The "best" trap: why "good" alignment is better than "optimal"
25:17 Making AI a force for good via cultural training
26:43 Building helpful AI: NotADoctor and Multiplicity
29:56 Concrete ideas for high-impact AI work
31:54 Why products change culture faster than narratives
35:39 How to identify and build "good" products
48:54 Principles for the future: transparency & de-escalatory self-defense
57:48 Why we shouldn't just accept AI succession
59:28 Final advice: just build stuff


The full transcript and a complete list of resources can be found here: https://www.existentialhope.com/podca...

