Vinay Sankarapu: ML Observability for High-Risk AI Governance Framework

  • USFDataInstitute
  • 2023-05-16
  • 135

Video description

Introduction of the speaker:
Vinay Kumar Sankarapu is the Co-Founder and CEO of Arya.ai. He did his Bachelor's and Master's in Mechanical Engineering at IIT Bombay. He started Arya.ai in 2013, along with Deekshith, while still in college. He has written many guest articles on 'Responsible AI', 'AI usage risks in BFSIs' and 'AI Governance frameworks'. He has given technical and industry presentations at multiple conferences globally, including Nvidia GTC, ReWork, Cypher, Nasscom and TEDx. He was the youngest member of the 'AI Task Force' set up by the Indian Ministry of Commerce and Industry in 2017 to provide input on policy and to support AI adoption as part of Industry 4.0. He was listed in Forbes Asia 30 Under 30 in the technology section. He represented India in the World Cup Technology Challenge in San Francisco in 2015, among 54 other countries in the finals.

Abstract:

Topic: How to use ML Observability to design a governance framework for high-risk AI use cases.

Building AI solutions for high-risk use cases requires multiple layers on top of the models, such as explainability, auditability, and safety, to make them acceptable and usable. For example, without strong supporting evidence, a doctor may not trust a model's prediction enough to treat a patient with chemotherapy. If the models fail in such use cases, there could be financial or reputational loss.
But can these models fail? How do we build trust and accountability?
ML models carry intrinsic baggage: any AI/ML model can fail, they are not explainable by design, they are always risky to use in production, and auditing them is very complex. These are the problems ML Observability tools are designed to solve. In this discussion, the speaker will present the issues with ML models and how to design AI governance using ML observability to provide trust, confidence, and accountability.
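As a rough illustration of the monitoring side of ML observability, here is a minimal Python sketch of production drift detection using the Population Stability Index (PSI). The feature, the data, and the alert threshold are hypothetical assumptions for illustration, not taken from the talk.

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index of one feature between a reference
    (training) sample and a production sample. Higher = more drift."""
    # Bin edges come from the reference distribution's quantiles.
    edges = np.quantile(expected, np.linspace(0.0, 1.0, bins + 1))
    # Clip production values into the reference range so each lands in a bin.
    actual = np.clip(actual, edges[0], edges[-1])
    e_frac = np.histogram(expected, edges)[0] / len(expected)
    a_frac = np.histogram(actual, edges)[0] / len(actual)
    # Floor the proportions to avoid log(0) on empty bins.
    e_frac = np.clip(e_frac, 1e-6, None)
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

# Hypothetical usage: compare a monitored feature's training distribution
# against recent production traffic and alert on drift.
rng = np.random.default_rng(0)
train_income = rng.normal(50_000, 10_000, 10_000)  # reference sample
prod_income = rng.normal(55_000, 12_000, 2_000)    # shifted production sample

score = psi(train_income, prod_income)
if score > 0.2:  # a common rule of thumb; real thresholds are use-case specific
    print(f"ALERT: feature 'income' drifted (PSI={score:.3f})")
```

A monitoring layer like this gives the early-warning signal the abstract calls for: it detects that a model's inputs no longer look like its training data before the failure shows up in business outcomes.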

Agenda:

  • Why do models fail?
  • Introducing ML Observability
  • Using ML Observability for model monitoring, model explainability, and auditing (see the sketch after this list)
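To make the third agenda item concrete, here is a minimal Python sketch of an observability layer that records every prediction, together with a feature-attribution explanation, in an append-only audit log. The toy linear model, the `audited_predict` helper, and the log schema are illustrative assumptions, not the speaker's implementation.

```python
import hashlib
import json
import time

MODEL_VERSION = "credit-risk-v1.3"                   # hypothetical model id
WEIGHTS = {"income": 0.4, "debt": -0.7, "age": 0.1}  # toy linear model

def predict(features: dict) -> float:
    return sum(WEIGHTS[k] * v for k, v in features.items())

def explain(features: dict) -> dict:
    # For a linear model, each feature's contribution is weight * value.
    # (A real system might use SHAP or a similar attribution method.)
    return {k: WEIGHTS[k] * v for k, v in features.items()}

def audited_predict(features: dict, log_path: str = "audit.jsonl") -> float:
    score = predict(features)
    record = {
        "ts": time.time(),
        "model": MODEL_VERSION,
        "input_hash": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()).hexdigest(),
        "features": features,
        "prediction": score,
        "attributions": explain(features),
    }
    # Append-only JSON-lines log: one record per decision.
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return score

print(audited_predict({"income": 1.2, "debt": 0.5, "age": 0.3}))
```

Because each record carries the model version, input hash, and per-feature attributions, an auditor can later reconstruct which model made a given decision and why, which is the kind of trust and accountability the abstract argues high-risk use cases need.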
