Securing LLM Applications: the devil is in the detail

Securing a modern LLM system (even when what’s under scrutiny is only an application built on LLM technology) requires digging into the engineering and design of that specific system. During this webinar, we will discuss an approach intended to make this kind of detailed work easier and more consistent by providing a baseline and a set of risks to consider. Fortunately, modern threat modeling tools already have these risks built in.

In our view, LLM systems engineers can (and should) build and field more secure LLM applications by carefully considering ML-related risks while designing, implementing, and deploying their specific systems. In security, the devil is in the details, and we aim to provide as much detail as possible about LLM security risks and some basic controls.

Are you a user of Large Language Models (LLMs)?
Are you a CISO or an application security leader confronted with ML security and Adversarial AI?
Do you or your teams use output from Machine Learning (ML)/Artificial Intelligence (AI) applications and systems?
Are you looking for risk management and threat modeling guidance for AI/ML?
Do you wonder how NIST, OWASP, and BIML stack up when it comes to ML risks?
If you said yes to any of the above, then this webinar is for you. Listen in as we discuss the 81 specific risks identified in the 2024 LLM risk analysis paper from the Berryville Institute of Machine Learning (BIML). Gary McGraw, the “father of software security” and Co-Founder and CEO of BIML, will go into detail about what these risks mean for you and your organization, why you need to take notice, and why the time to act is now.

Jacob Teale, Head of Security Research at IriusRisk, will discuss why security teams need to ensure they are not just considering risks from LLMs, but incorporating them into their wider cybersecurity strategies for 2024 and beyond.

Sections:
00:00 Welcome and introduction of hosts and panelists
01:04 Webinar mechanics and setting the scene for LLMs
01:43 Basics of Machine Learning and explicit instructions
02:51 Example of learning with shapes and security implications
05:48 Introduction to BIML's work and identified risks in Machine Learning
08:03 Regulation and risk management in LLMs
09:47 Threat modeling with AI and implementation examples
22:02 Importance of data ownership and regulatory challenges
35:33 Challenges and effectiveness of large LLM models
37:00 Example of prompt injection and its impact on security
41:07 Monitoring and managing risks in LLM applications
45:00 Risk assessment and management decisions in AI implementation
50:00 Collaboration between security and data science in AI projects
52:00 Conclusions and next steps in LLM security
