Indirect Prompt Injection

Описание к видео Indirect Prompt Injection

👩‍🎓👨‍🎓 Learn about Large Language Model (LLM) attacks! This lab is vulnerable to indirect prompt injection. The user carlos frequently uses the live chat to ask about the Lightweight "l33t" Leather Jacket product. To solve the lab, we must delete the user carlos.

If you're struggling with the concepts covered in this lab, please review https://portswigger.net/web-security/... 🧠

🔗 Portswigger challenge: https://portswigger.net/web-security/...

🧑💻 Sign up and start hacking right now - https://go.intigriti.com/register

👾 Join our Discord - https://go.intigriti.com/discord

🎙️ This show is hosted by   / _cryptocat   ( ‪@_CryptoCat‬ ) &   / intigriti  

👕 Do you want some Intigriti Swag? Check out https://swag.intigriti.com

Overview:
0:00 Intro
0:20 Insecure output handling
0:52 Indirect prompt injection
2:20 Lab: Indirect prompt injection
3:05 Explore site functionality
3:42 Probe LLM chatbot
4:29 Launch attacks via review feature
11:00 Conclusion

Комментарии

Информация по комментариям в разработке