Jailbreaking LLMs - Prompt Injection and LLM Security

Описание к видео Jailbreaking LLMs - Prompt Injection and LLM Security

Building applications on top of Large Language Models brings unique security challenges, some of which we still don't have great solutions for. Simon will be diving deep into prompt injection and jailbreaking, how they work, why they're so hard to fix and their implications for the things we are building on top of LLMs.Simon Willison is the creator of Datasette, an open source tool for exploring and publishing data. He currently works full-time building open source tools for data journalism, built around Datasette and SQLite.

Комментарии

Информация по комментариям в разработке