Low-rank Adaptation of Large Language Models: Explaining the Key Concepts Behind LoRA

In this video, I go over how LoRA works and why it's crucial for affordable Transformer fine-tuning.

LoRA freezes the pretrained weights and learns low-rank update matrices in their place, so only a small fraction of the parameters is trained. This slashes the memory and compute cost of fine-tuning huge language models while matching the quality of full fine-tuning.
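
To make the idea concrete, here is a minimal sketch (not the paper's or any library's exact implementation) of a LoRA-style linear layer in PyTorch: the base weights are frozen, and only the two small factors A and B are trained. The rank r and scaling alpha values are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update: W x + (alpha/r) * B (A x)."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)          # freeze pretrained weights
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        # Low-rank factors: A maps to rank r, B maps back to the output dimension.
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no change at start
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus the scaled low-rank adaptation path.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)
```

In practice the attention projection matrices are typically wrapped this way, and only A and B receive gradients, which is where the memory savings come from.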

🔗 LoRA Paper: https://arxiv.org/pdf/2106.09685.pdf
🔗 Intrinsic Dimensionality Paper: https://arxiv.org/abs/2012.13255

About me:
Follow me on LinkedIn: /csalexiuk
Check out what I'm working on: https://getox.ai/