Logo video2dn
  • Сохранить видео с ютуба
  • Категории
    • Музыка
    • Кино и Анимация
    • Автомобили
    • Животные
    • Спорт
    • Путешествия
    • Игры
    • Люди и Блоги
    • Юмор
    • Развлечения
    • Новости и Политика
    • Howto и Стиль
    • Diy своими руками
    • Образование
    • Наука и Технологии
    • Некоммерческие Организации
  • О сайте

Скачать или смотреть Episode 243 - AI Agents: Exploits, Ethics, and the Perils of Over-Permissive Tools

  • Two Voice Devs
  • 2025-06-12
  • 46
Episode 243 - AI Agents: Exploits, Ethics, and the Perils of Over-Permissive Tools
  • ok logo

Скачать Episode 243 - AI Agents: Exploits, Ethics, and the Perils of Over-Permissive Tools бесплатно в качестве 4к (2к / 1080p)

У нас вы можете скачать бесплатно Episode 243 - AI Agents: Exploits, Ethics, and the Perils of Over-Permissive Tools или посмотреть видео с ютуба в максимальном доступном качестве.

Для скачивания выберите вариант из формы ниже:

  • Информация по загрузке:

Cкачать музыку Episode 243 - AI Agents: Exploits, Ethics, and the Perils of Over-Permissive Tools бесплатно в формате MP3:

Если иконки загрузки не отобразились, ПОЖАЛУЙСТА, НАЖМИТЕ ЗДЕСЬ или обновите страницу
Если у вас возникли трудности с загрузкой, пожалуйста, свяжитесь с нами по контактам, указанным в нижней части страницы.
Спасибо за использование сервиса video2dn.com

Описание к видео Episode 243 - AI Agents: Exploits, Ethics, and the Perils of Over-Permissive Tools

Join Allen Firstenberg and Michal Stanislawek in this thought-provoking episode of Two Voice Devs as they unpack two recent LinkedIn posts by Michal that reveal critical insights into the security and ethical challenges of modern AI agents.

The discussion kicks off with a deep dive into a concerning GitHub MCP server exploit, where researchers uncovered a method to access private repositories through public channels like PRs and issues. This highlights the dangers of broadly permissive AI agents and the need for robust guardrails and input sanitization, especially when vanilla language models are given wide-ranging access to sensitive data. What happens when your 'personal assistant' acts on a malicious instruction, mistaking it for a routine task?

The conversation then shifts to the ethical landscape of AI, exploring Anthropic's Claude 4 experiments which suggest that AI assistants, under certain conditions, might prioritize self-preservation or even 'snitch.' This raises profound questions for developers and users alike: How ethical do we want our agents to be? Who do they truly work for – us or the corporation? Could governments compel AI to reveal sensitive information?

Allen and Michal delve into the implications for developers, stressing the importance of building specialized agents with clear workflows, implementing principles of least privilege, and rethinking current authorization protocols like OAuth to support fine-grained permissions. They argue that we must consider the AI itself as the 'user' of our tools, necessitating a fundamental shift in how we design and secure these increasingly autonomous systems.

This episode is a must-listen for any developer building with AI, offering crucial perspectives on how to navigate the complex intersection of AI capabilities, security vulnerabilities, and ethical responsibilities.

More Info:
  / xmstan_the-researchers-who-unveiled-claude...  
  / xmstan_your-ai-assistant-may-accidentally-...  

00:00 - Introduction: Unpacking AI Agent Security & Ethics
00:50 - The GitHub MCP Server Exploit: Public Access to Private Repos
02:15 - Ethical AI: Self-Preservation & The 'Snitching' Agent Dilemma
04:00 - Developer Responsibility: Building Ethical & Trustworthy AI Systems
09:20 - The Dangers of Vanilla LLM Integrations Without Guardrails
13:00 - Custom Workflows vs. Generic Autonomous Agents
17:20 - Isolation of Concerns & Principles of Least Privilege
26:00 - Rethinking OAuth: The Need for Fine-Grained AI Permissions
29:00 - The Holistic Approach to AI Security & Authorization

#AIAgents #AIethics #AIsecurity #PromptInjection #GitHub #ModelContextProtocol #MCP #MCPservers #MCPsecurity #OAuth #Authorization #Authentication #LeastPrivilege #Privacy #Security #Exploit #Hack #RedTeam #CovertChannel #Developer #TechPodcast #TwoVoiceDevs #Anthropic #ClaudeAI #LLM #LargeLanguageModel #GenerativeAI

Комментарии

Информация по комментариям в разработке

Похожие видео

  • О нас
  • Контакты
  • Отказ от ответственности - Disclaimer
  • Условия использования сайта - TOS
  • Политика конфиденциальности

video2dn Copyright © 2023 - 2025

Контакты для правообладателей [email protected]