Can We Make Machine Learning Safe for Safety-Critical Systems?

  • Software Engineering Institute | Carnegie Mellon University
  • 2025-05-15
  • 209
  • Tags: AI, ML, Machine Learning, Artificial Intelligence Engineering

Video description: Can We Make Machine Learning Safe for Safety-Critical Systems?

AI Engineering https://insights.sei.cmu.edu/artifici...

This talk was given as part of the National AI Engineering Study speaker series.

The impressive new capabilities of systems created using deep learning are encouraging engineers to apply these techniques in safety-critical applications such as medicine, aeronautics, and self-driving cars. This talk will discuss the ways that machine learning methodologies are changing to operate in safety-critical systems. These changes include (a) building high-fidelity simulators for the domain, (b) adversarial collection of training data to ensure coverage of the so-called Operational Design Domain (ODD) and, specifically, the hazardous regions within the ODD, (c) methods for verifying that the fitted models generalize well, and (d) methods for estimating the probability of harms in normal operation. There are many research challenges to achieving these goals.
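
As an illustration of item (d), and not of any specific method from the talk, the sketch below estimates the probability of harm by running Monte Carlo rollouts in a simulator and attaching a one-sided Hoeffding upper confidence bound. The names `estimate_harm_probability`, `simulate_episode`, and `toy_simulator` are hypothetical placeholders, not artifacts of the talk.

```python
import math
import random

def estimate_harm_probability(simulate_episode, n_episodes=10_000, confidence=0.95, seed=0):
    """Monte Carlo estimate of the probability of harm in normal operation.

    `simulate_episode(rng)` stands in for a high-fidelity simulator: it runs one
    episode sampled from the Operational Design Domain and returns True if a
    harmful outcome occurred.
    """
    rng = random.Random(seed)
    harms = sum(1 for _ in range(n_episodes) if simulate_episode(rng))
    p_hat = harms / n_episodes
    # One-sided Hoeffding bound: P(harm) <= p_hat + sqrt(ln(1/alpha) / (2n))
    # with probability at least `confidence` over the sampled episodes.
    alpha = 1.0 - confidence
    upper = p_hat + math.sqrt(math.log(1.0 / alpha) / (2.0 * n_episodes))
    return p_hat, min(upper, 1.0)

# Toy simulator in which harm occurs in roughly 0.1% of episodes.
def toy_simulator(rng):
    return rng.random() < 0.001

p_hat, upper = estimate_harm_probability(toy_simulator)
print(f"estimated harm rate: {p_hat:.4f}, 95% upper bound: {upper:.4f}")
```

In practice the bound would be driven by how faithfully the simulator covers the ODD, including its hazardous regions, which is exactly why items (a) and (b) above precede it.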

But we must do more, because traditional safety engineering only addresses the known hazards. We must design our systems to detect novel hazards as well. We adopt Leveson’s view of safety as an ongoing hierarchical control problem in which controls are put in place to stabilize the system against disturbances. Disturbances include novel hazards but also management changes such as budget cuts, staff turnover, novel regulations, and so on. Traditionally, it has been the human operators and managers who have provided these stabilizing controls. Are there ways that AI methods, such as novelty detection, near-miss detection, diagnosis and repair, can be applied to help the human organization manage these disturbances and maintain system safety?
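
To make the novelty-detection idea concrete, here is a minimal, hypothetical sketch (not a method endorsed in the talk): a distance-based detector that flags inputs far from anything seen during training, with an alarm threshold calibrated on held-out in-distribution data. The class name and parameters are illustrative assumptions.

```python
import numpy as np

class KNNNoveltyDetector:
    """Flag inputs that lie far from the training data (possible novel hazards)."""

    def __init__(self, k=10):
        self.k = k
        self.train = None
        self.threshold = None

    def fit(self, train_features, calib_features, quantile=0.99):
        self.train = np.asarray(train_features, dtype=float)
        # Alarm threshold: a high quantile of scores on normal calibration data.
        self.threshold = np.quantile(self.score(calib_features), quantile)
        return self

    def score(self, features):
        x = np.asarray(features, dtype=float)
        # Pairwise Euclidean distances to the training set, then mean of k nearest.
        dists = np.linalg.norm(x[:, None, :] - self.train[None, :, :], axis=-1)
        return np.sort(dists, axis=1)[:, : self.k].mean(axis=1)

    def is_novel(self, features):
        return self.score(features) > self.threshold

rng = np.random.default_rng(0)
detector = KNNNoveltyDetector(k=5).fit(rng.normal(size=(500, 8)), rng.normal(size=(200, 8)))
print(detector.is_novel(rng.normal(loc=6.0, size=(3, 8))))  # far from training data -> flagged
```

In a deployed system, such alarms would feed the hierarchical control loop described above, prompting human operators or automated controls to respond before a novel hazard leads to harm.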

Dr. Dietterich is University Distinguished Professor Emeritus in the School of Electrical Engineering and Computer Science at Oregon State University. Dietterich is one of the pioneers of the field of Machine Learning and has authored more than 220 refereed publications and two books. His current research topics include robust artificial intelligence, robust human-AI systems, and applications in sustainability.

Dietterich is the 2025 recipient of the Feigenbaum Prize for applied AI and the 2024 recipient of the IJCAI Award for Research Excellence. Dietterich is also the recipient of the 2022 AAAI Distinguished Service Award and the 2020 ACML Distinguished Contribution Award, both recognizing his many years of service to the research community. He is a former President of the Association for the Advancement of Artificial Intelligence and the founding president of the International Machine Learning Society. Other major roles include Executive Editor of the journal Machine Learning, co-founder of the Journal of Machine Learning Research, and program chair of AAAI 1990 and NIPS 2000. He currently chairs the Computer Science Section of arXiv.org.

#aiengineering
