Steering vectors: tailor LLMs without training. Part I: Theory (Interpretability Series)


State-of-the-art foundation models are often treated as black boxes: we send in a prompt and get back an (often useful) answer. But what happens inside the system as the prompt is processed remains something of a mystery, and our ability to control or steer that processing in specific directions is limited.
Enter steering vectors!

By computing a vector that represents a particular feature or concept, we can steer the model to include almost any property we want in its output: add more love to its answers, make it always comply with your prompts (even harmful ones!), or make it unable to stop talking about the Golden Gate Bridge. In this video we discuss how to compute such steering vectors, what makes such simple steering possible (somehow the network's hidden representations decompose into roughly linear structures), and look at a couple of examples. A small code sketch of the basic recipe is included below. In Part II (   • Steering vectors: tailor LLMs without...  ) we code up our steering vectors.
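
The sketch below illustrates the basic recipe (it is not the video's exact code): build a steering vector from the activation difference of a contrastive prompt pair, then add it to the residual stream during generation via a forward hook. The model name, layer index, prompts, and scaling coefficient are illustrative assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"   # assumption: any causal LM with .transformer.h blocks works similarly
LAYER_IDX = 6         # assumption: a middle layer often works well
COEFF = 5.0           # assumption: steering strength is usually found by trial and error

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def last_token_activation(prompt: str) -> torch.Tensor:
    """Residual-stream activation of the final token after block LAYER_IDX."""
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs, output_hidden_states=True)
    # hidden_states[0] is the embedding output, so index LAYER_IDX + 1
    # is the output of transformer block LAYER_IDX.
    return out.hidden_states[LAYER_IDX + 1][0, -1, :]

# Contrastive pair: the activation difference points roughly in the "love" direction.
steering_vec = last_token_activation("Love") - last_token_activation("Hate")

def steering_hook(module, inputs, output):
    # GPT-2 blocks return a tuple whose first element is the hidden states;
    # add the scaled steering vector to every position.
    hidden = output[0] + COEFF * steering_vec
    return (hidden,) + output[1:]

handle = model.transformer.h[LAYER_IDX].register_forward_hook(steering_hook)
try:
    prompt = "I went to the park today and"
    inputs = tokenizer(prompt, return_tensors="pt")
    generated = model.generate(**inputs, max_new_tokens=40, do_sample=False)
    print(tokenizer.decode(generated[0], skip_special_tokens=True))
finally:
    handle.remove()  # detach the hook so the model behaves normally again
```

In practice both the layer and the coefficient need tuning: too small a coefficient has no visible effect, too large one degrades the output into incoherence.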

Disclaimer: finding these steering vectors is an active area of research; right now, making it work involves a lot of trial and error, and it remains unclear when steering works and when no useful direction can be found. Work on sparse autoencoders (a current hot topic in interpretability research) aims to automate the discovery of useful directions.

Further reading & references I used:
Activation addition: https://arxiv.org/abs/2308.10248
Refusal directions: https://www.alignmentforum.org/posts/... and https://huggingface.co/posts/mlabonne...
Golden Gate Claude: https://www.anthropic.com/news/golden...
Superposition: https://transformer-circuits.pub/2022...
Sparse autoencoders: https://arxiv.org/pdf/2406.04093v1
