LLM Security: Practical Protection for AI Developers


With thousands of open-source LLMs on Hugging Face, AI developers have a wealth of resources at their disposal. As developers harness these models to power innovative applications, they may inadvertently expose their company to security risks. It’s not sufficient to rely on the internal guardrails that LLM providers have baked into their models. The stakes are too high, especially with proprietary data being made available to models through fine-tuning or retrieval-augmented generation (RAG). Even internal apps remain vulnerable to adversarial attacks. So how can developers deploy LLMs both painlessly and securely? In this session, we review the top LLM security risks using real-world examples and explore what’s required to meet emerging standards from OWASP, NIST, and MITRE. We share how a validation framework can enable developers to innovate freely while protecting against indirect prompt injection, prompt extraction, data poisoning, supply chain risk, and more.
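As a rough illustration of what a validation layer around a model call might look like, here is a minimal Python sketch. It is not the Robust Intelligence product or any specific framework's API; the injection patterns, the function names, and the call_model parameter are hypothetical placeholders chosen for this example.

```python
# Minimal sketch of a validation layer wrapped around an LLM call.
# Illustrative only; patterns and names are hypothetical, not a real product API.
import re

# Hypothetical heuristics that often signal prompt-injection attempts.
INJECTION_PATTERNS = [
    r"ignore (all|any|previous) instructions",
    r"reveal (the|your) system prompt",
    r"disregard .* guardrails",
]

# Hypothetical check for credential-like strings in model output.
SECRET_PATTERN = re.compile(r"(api[_-]?key|password|ssn)\s*[:=]", re.IGNORECASE)


def validate_input(prompt: str) -> bool:
    """Reject prompts that match known injection heuristics."""
    lowered = prompt.lower()
    return not any(re.search(p, lowered) for p in INJECTION_PATTERNS)


def validate_output(response: str) -> bool:
    """Block responses that appear to leak credentials or secrets."""
    return SECRET_PATTERN.search(response) is None


def guarded_completion(prompt: str, call_model) -> str:
    """Wrap any model call (call_model: prompt -> str) with pre/post checks."""
    if not validate_input(prompt):
        return "Request blocked: possible prompt injection detected."
    response = call_model(prompt)
    if not validate_output(response):
        return "Response withheld: potential sensitive-data leakage detected."
    return response
```

In practice, a production validation framework would use far more robust detection (model-based classifiers, policy engines, supply chain scanning) than these simple regexes, but the wrapper structure, validating both inputs and outputs around the model call, is the core idea the session discusses.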

Talk By: Yaron Singer, CEO & Co-Founder, Robust Intelligence

Here's more to explore:
LLM Compact Guide: https://dbricks.co/43WuQyb
Big Book of MLOps: https://dbricks.co/3r0Pqiz

Connect with us:
Website: https://databricks.com
Twitter: /databricks
LinkedIn: /data…
Instagram: /databricksinc
Facebook: /databricksinc
