  • Techsavy
  • 2025-11-28
  • 6
The Illusion of Thinking: Why Advanced AI Reasoning Models Collapse Under Complexity

Video description

Frontier Large Reasoning Models (LRMs), such as Claude 3.7 Sonnet Thinking and DeepSeek-R1, are designed to generate detailed "thinking processes" (like long Chain-of-Thought) before providing answers. But how robust is this reasoning capability?

This video dives into a systematic investigation of LRMs using *controllable puzzle environments* like the Tower of Hanoi, Checker Jumping, River Crossing, and Blocks World. These puzzles allow for precise manipulation of problem complexity while maintaining consistent logical structures, offering insights into how LRMs "think".
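The appeal of these puzzles is that complexity becomes a single tunable knob while the logical structure stays fixed. A minimal sketch of this idea for the Tower of Hanoi (illustrative only; the paper's actual evaluation harness is not shown here) is that the disk count `n` controls difficulty, with the optimal solution length growing as 2^n − 1:

```python
def hanoi_moves(n, src="A", aux="B", dst="C"):
    """Return the optimal move sequence for n disks as (disk, from, to) tuples."""
    if n == 0:
        return []
    # Move n-1 disks out of the way, move the largest disk, then re-stack.
    return (hanoi_moves(n - 1, src, dst, aux)
            + [(n, src, dst)]
            + hanoi_moves(n - 1, aux, src, dst))

# Difficulty scales exponentially with a single parameter:
for n in (3, 5, 10):
    moves = hanoi_moves(n)
    assert len(moves) == 2**n - 1
    print(n, len(moves))
```

Because the ground-truth solution is computable for any `n`, a model's output can be checked move by move at every complexity level, which is what makes these environments "controllable" in a way open-ended benchmarks are not.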

*Key Findings:*

  • *Complete collapse:* State-of-the-art LRMs suffer a complete accuracy collapse beyond certain complexity thresholds, failing to develop generalizable problem-solving capabilities for planning tasks.
  • *Three performance regimes:* Compared against standard Large Language Models (LLMs), three distinct regimes emerge: standard LLMs surprisingly outperform LRMs on low-complexity tasks, LRMs hold an advantage at medium complexity, and both fail completely at high complexity.
  • *Counter-intuitive scaling limit:* As problems approach the collapse point, LRMs' reasoning effort (measured in thinking tokens) *declines*, despite the increased difficulty and an adequate token budget.
  • *Internal inefficiencies:* Analysis of reasoning traces reveals complexity-dependent patterns, including "overthinking" on simpler problems (continuing to explore incorrect alternatives after finding the correct solution). LRMs also show surprising limitations in exact computation, failing to benefit even when given an explicit algorithm for the solution.

These results raise crucial questions about the nature of reasoning in current LLM systems and highlight fundamental barriers to achieving robust, generalizable AI reasoning capabilities.

***

*Source of Study:*

The information presented is drawn from the research paper: *"The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity"* by Parshin Shojaee, Iman Mirzadeh, Keivan Alizadeh, Maxwell Horton, Samy Bengio, and Mehrdad Farajtabar.
