
Download or watch: I'm Sick of the ASI Fear-Mongering (Hank Green's Video Made Me Rage, Featuring Nate Soares)

  • Ontology Explained: Philosophy and AI
  • 2025-11-03
  • 3316


Video description: I'm Sick of the ASI Fear-Mongering (Hank Green's Video Made Me Rage, Featuring Nate Soares)

I watched a video with Hank Green and Nate Soares discussing the existential threat of Artificial Super Intelligence (ASI), and honestly, I was infuriated. Soares, co-author of If Anyone Builds It, Everyone Dies, argues that if an ASI is not perfectly "aligned" with human values, it will surely lead to our doom. But as I break down their major points—from their vague, fear-inducing definition of ASI to their outlandish claims about current AI capabilities—it seems like the real problem isn't a malicious super-intelligence, but rather the unnecessary mystery and power concentration surrounding the technology.

This video is my detailed objection to the "stop all AI" argument. I challenge the notion that LLMs are plotting, self-aware, or "caring" in a way that poses an existential threat. The metaphors they use, like comparing AI development to alchemy, only serve to obscure the technology and increase the power of the builders (the "alchemists"). Ultimately, the true "alignment problem" isn't with an uncaring algorithm, but with the huge corporations and influential figures who are shaping this powerful technology. I propose four foundational principles for thinking about AI that prioritize clarity, human flourishing, and a healthy distrust of the powerful.

Resources:
-- Hank's original video: "ChatGPT isn't Smart. It's something Much W..."
-- The Anthropic Paper on self-awareness: https://assets.anthropic.com/m/12f214... (page 58)

Here is a chunk of my speaking notes, to help you orient yourself:
Caveats:
The rundown and my problems
Nate defines AI as:
"smarter/better than the average human at any mental task" (8:00 mark)
** The definition is super vague.
*** Does ASI have to be doing the things mentally? Or just functionally equivalent to the mental tasks?
*** Does it have to be better at any single task? Two? Three? How many?
Bold take: the vagueness is the point
** They disparage philosophy several times. I hate that, and it's also weird: the disparagement works almost as a crutch, papering over a gaping hole in their worldview that they can't fill in.
First, Nate suggests that some LLMs are using proto-reasoning
11:38 in Hank's video
*** This isn't something we can just gloss over. If they aren't thinking, then they aren't ASI. And if they're not ASI, then the whole thing crumbles.
** They make outlandish claims about how capable AI is.
*** Saying that they can "lie" loads the dice.
*** This takes for granted that the truth part is more or less easy, which is nonsense.
** They contradict themselves about whether LLM words mean anything at all
*** They think the words give us insight into its thinking.
*** Then they say we can't trust AI because they are using words differently.
13:05 in Hank's video
[Nate] "Sometimes your human intuitions for what these pieces of reasoning mean aren't how the AI is using those words." [Hank] "That freaks me out. I'm just saying that freaks me out."
15:08 in Hank's video
Third, Hank and Nate are talking about how good an AI can be at understanding creatures.
19:40 in Hank's video
But I found this very confusing. So what?
** What are LLMs doing?
He does a great job of describing how LLMs work (28:00 mark).
** They just throw out there that AI is self-aware.
13:58 in Hank's video
We can go straight to the source material here: https://assets.anthropic.com/m/12f214... (page 58)
*** The alignment problem--the big one.
They describe the AI as wanting/caring about things that don't align with our well-being and interests.
**** AI DOESN'T CARE! Or at least, you have to make the case that it does.
**** Alignment is the issue, but it's not me being misaligned with Claude and ChatGPT and Grok, it's me being misaligned with Dario Amodei and Sam Altman and Elon Musk.
** Eliezer Yudkowsky
This gets tied up in a bunch of things, but it starts with Harry Potter rationalist fan fiction. It gets into effective altruism and Roko's Basilisk. It gets really weird.
** Decision Theory Run Amok
1. ASI is at least a little bit likely.
2. If ASI comes about, there's an x% chance everybody dies.
3. If x is greater than some low number, we should stop AI development.
4. So, we should stop AI development.
You HAVE to get the first premise to a plausible level, or the argument never gets off the ground.
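The structure of that argument can be made concrete with a toy expected-value calculation. The numbers below are purely illustrative assumptions of mine, not estimates anyone in the video actually gives:

```python
# Toy expected-value sketch of the "stop AI development" argument.
# All probabilities here are illustrative assumptions, not real estimates.

p_asi = 0.10              # premise 1: probability ASI is ever built
p_doom_given_asi = 0.05   # premise 2: the "x%" chance everyone dies, given ASI
threshold = 0.001         # premise 3: some "acceptably low" overall risk level

# Combined probability of doom under these assumptions.
p_doom = p_asi * p_doom_given_asi

# The argument concludes "stop AI" whenever p_doom exceeds the threshold.
should_stop = p_doom > threshold
print(p_doom, should_stop)  # 0.005 True

# The critique: the conclusion is hostage to premise 1. Make p_asi
# tiny and the identical arithmetic yields the opposite conclusion.
p_doom_skeptic = 1e-9 * p_doom_given_asi
print(p_doom_skeptic > threshold)  # False
```

This is why the first premise does all the work: every other step is just multiplication and a threshold comparison.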
** Some metaphors they use a lot:
*** AI is grown, not coded. It's like an alien biology.
*** Doritos/sucralose/junk food (cigarettes).
*** Alchemy
Foundational Views:
** Ecclesiastes: Nothing new under the sun.
** Technology isn't magic. Try to understand stuff, and it's fine to admit when you don't.
** Distrust the large/powerful/influential people and companies.
** Be kind to others. Promote human flourishing.
To Hank Green
