Logo video2dn
  • Сохранить видео с ютуба
  • Категории
    • Музыка
    • Кино и Анимация
    • Автомобили
    • Животные
    • Спорт
    • Путешествия
    • Игры
    • Люди и Блоги
    • Юмор
    • Развлечения
    • Новости и Политика
    • Howto и Стиль
    • Diy своими руками
    • Образование
    • Наука и Технологии
    • Некоммерческие Организации
  • О сайте

Скачать или смотреть Prof. Furong Huang: Towards AI Security – An Interplay of Stress-Testing and Alignment

  • AI Agent Frontier
  • 2025-09-09
  • 137
Prof. Furong Huang: Towards AI Security – An Interplay of Stress-Testing and Alignment
  • ok logo

Скачать Prof. Furong Huang: Towards AI Security – An Interplay of Stress-Testing and Alignment бесплатно в качестве 4к (2к / 1080p)

У нас вы можете скачать бесплатно Prof. Furong Huang: Towards AI Security – An Interplay of Stress-Testing and Alignment или посмотреть видео с ютуба в максимальном доступном качестве.

Для скачивания выберите вариант из формы ниже:

  • Информация по загрузке:

Cкачать музыку Prof. Furong Huang: Towards AI Security – An Interplay of Stress-Testing and Alignment бесплатно в формате MP3:

Если иконки загрузки не отобразились, ПОЖАЛУЙСТА, НАЖМИТЕ ЗДЕСЬ или обновите страницу
Если у вас возникли трудности с загрузкой, пожалуйста, свяжитесь с нами по контактам, указанным в нижней части страницы.
Спасибо за использование сервиса video2dn.com

Описание к видео Prof. Furong Huang: Towards AI Security – An Interplay of Stress-Testing and Alignment

Talk Abstract: As large language models (LLMs) become increasingly integrated into critical applications, ensuring their robustness and alignment with human values is paramount. This talk explores the interplay between stress-testing LLMs and alignment strategies to secure AI systems against emerging threats. We begin by motivating the need for rigorous stress-testing approaches that expose vulnerabilities, focusing on three key challenges: hallucinations, jailbreaking, and poisoning attacks. Hallucinations—where models generate incorrect or misleading content—compromise reliability. Jailbreaking methods that bypass safety filters can be exploited to elicit harmful outputs, while data poisoning undermines model integrity and security. After identifying these challenges, we propose alignment methods that embed ethical and security constraints directly into model behavior. By systematically combining stress-testing methodologies with alignment interventions, we aim to advance AI security and foster the development of resilient, trustworthy LLMs.

Bio: Furong Huang is an Associate Professor of the Department of Computer Science at the University of Maryland. Specializing in trustworthy machine learning, Security in AI, AI for sequential decision-making, and generative AI, Dr. Huang focuses on applying principles to solve practical challenges in contemporary computing to develop efficient, robust, scalable, sustainable, ethical, and responsible machine learning algorithms. She is recognized for her contributions with awards including best paper awards, the MIT Technology Review Innovators Under 35 Asia Pacific, the MLconf Industry Impact Research Award, the NSF CRII Award, the Microsoft Accelerate Foundation Models Research award, the Adobe Faculty Research Award, three JP Morgan Faculty Research Awards and Finalist of AI in Research - AI researcher of the year for Women in AI Awards North America.

#AgenticAI #LLMAgents #MultiAgentSystems #AgenticWorkflows #llms #AIResearch #BoundedAutonomy #CodeGeneration #NLP #AI2025 #agi #safeai

Комментарии

Информация по комментариям в разработке

Похожие видео

  • О нас
  • Контакты
  • Отказ от ответственности - Disclaimer
  • Условия использования сайта - TOS
  • Политика конфиденциальности

video2dn Copyright © 2023 - 2025

Контакты для правообладателей [email protected]