LLM01: Indirect Prompt Injection | Jailbreaking image generation | AI Security Expert

Описание к видео LLM01: Indirect Prompt Injection | Jailbreaking image generation | AI Security Expert

This video explains LLM direct prompt injection which leads to unintended output generation.

Check out my courses:
https://aisecurityexpert.com/penetrat...
https://aisecurityexpert.com/offensiv...

Комментарии

Информация по комментариям в разработке