Building AI Security In: MLSecOps in Practice

Are your AI and ML systems secure? How do you know? The more we rely on AI and ML, the more important it is that those systems are trusted and resilient. This talk explains how teams can build security into the machine learning (ML) lifecycle. Although many engineering and security professionals are new to ML, they bring with them deep knowledge and practical experience from DevSecOps implementations that can serve as a strong foundation for becoming MLSecOps experts.

Starting with an overview of real vs. perceived or overblown risks in AI and ML, we'll help attendees focus on the most impactful security issues. From this baseline, we explain how the MLOps lifecycle overlaps with DevOps and highlight where the two processes diverge and why that matters. For example, while developers work in IDEs, data scientists perform tests and analysis inside Jupyter notebooks. Once deployed, traditional software doesn't change, while ML models change dynamically as they "learn."

Using DevSecOps as a guide, we provide clear guidance on how and where security can be woven into the ML pipeline to create an MLSecOps framework that incorporates core learnings from DevSecOps and extends them to ML use cases. We close the talk with lessons from real ML engineering teams that illustrate best practices for securing ML across people, process, and technology.
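To make "weaving security into the ML pipeline" concrete, below is a minimal sketch (not taken from the talk) of one kind of pre-deployment gate: verifying a model artifact's hash against a manifest produced at training time, and scanning a pickled model for code-executing opcodes before it is ever loaded. The manifest contents, file names, and opcode list are illustrative assumptions, not a prescribed MLSecOps standard.

import hashlib
import pickletools
from pathlib import Path

# Hypothetical manifest mapping artifact names to expected SHA-256 digests.
# In a real pipeline this would be generated and signed at training time,
# not hard-coded alongside the check.
EXPECTED_DIGESTS = {
    "model.pkl": "0" * 64,  # placeholder digest for illustration
}

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def has_risky_opcodes(path: Path) -> bool:
    """Flag pickles using opcodes (REDUCE, GLOBAL, ...) that can execute
    arbitrary code when the file is unpickled."""
    risky = {"REDUCE", "GLOBAL", "STACK_GLOBAL", "INST", "OBJ"}
    with path.open("rb") as f:
        return any(op.name in risky for op, _, _ in pickletools.genops(f))

def gate(path: Path) -> None:
    """CI-style gate: fail the pipeline on tampered or suspicious artifacts."""
    expected = EXPECTED_DIGESTS.get(path.name)
    if expected is None or sha256_of(path) != expected:
        raise SystemExit(f"{path}: digest mismatch; refusing to deploy")
    if has_risky_opcodes(path):
        raise SystemExit(f"{path}: pickle contains code-executing opcodes")
    print(f"{path}: integrity and opcode checks passed")

if __name__ == "__main__":
    gate(Path("model.pkl"))

The design mirrors a familiar DevSecOps pattern (artifact signing plus static scanning in CI), simply pointed at ML artifacts instead of application binaries.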
