Low-Rank Adaptation - LoRA explained

RELATED LINKS
Paper Title: LoRA: Low-Rank Adaptation of Large Language Models
LoRA Paper: https://arxiv.org/abs/2106.09685
QLoRA Paper: https://arxiv.org/abs/2305.14314
LoRA official code: https://github.com/microsoft/LoRA
Parameter-Efficient Fine-Tuning (PEFT) Adapters paper: https://arxiv.org/abs/1902.00751
Parameter-Efficient Fine-Tuning (PEFT) library: https://github.com/huggingface/peft
HuggingFace LoRA training: https://huggingface.co/docs/diffusers...
HuggingFace LoRA notes: https://huggingface.co/docs/peft/conc...
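
For anyone who wants to try LoRA right away with the HuggingFace PEFT library linked above, here is a minimal sketch. The base model name and hyperparameters (r, lora_alpha, target_modules) are illustrative assumptions, not values taken from the video.

# Minimal LoRA fine-tuning setup with the HuggingFace PEFT library.
# NOTE: model name and hyperparameters are illustrative assumptions.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("gpt2")   # example base model

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                          # rank of the low-rank update matrices
    lora_alpha=16,                # scaling factor; the update is scaled by alpha/r
    lora_dropout=0.05,
    target_modules=["c_attn"],    # attention projection to adapt (GPT-2 naming)
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()   # only the small LoRA A/B matrices are trainable

Only the injected low-rank matrices are updated during fine-tuning; the pretrained weights stay frozen, which is what makes LoRA parameter-efficient.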

⌚️ ⌚️ ⌚️ TIMESTAMPS ⌚️ ⌚️ ⌚️
0:00 - Intro
0:58 - Adapters
1:48 - Twitter (@ai_bites)
2:13 - What is LoRA
3:17 - Rank Decomposition
4:28 - Motivation Paper
5:02 - LoRA Training
6:53 - LoRA Inference
8:24 - LoRA in Transformers
9:20 - Choosing the rank
9:50 - Implementations
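
The chapters on rank decomposition, training, and inference all come down to the update W' = W + (alpha/r) * B*A, where B is d x r and A is r x k with r much smaller than d and k. A rough sketch of a single LoRA-wrapped linear layer, assuming PyTorch and the initialization described in the paper (A Gaussian, B zero):

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    # Wraps a frozen linear layer with a trainable low-rank update B @ A.
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False   # pretrained weights stay frozen
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)  # Gaussian init
        self.B = nn.Parameter(torch.zeros(base.out_features, r))        # zero init: no change at the start
        self.scale = alpha / r

    def forward(self, x):
        # Original output plus the scaled low-rank correction.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

At inference time the update can be merged into the base weight (W + (alpha/r) * B @ A), so LoRA adds no extra latency once training is done.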

MY KEY LINKS
YouTube: @aibites
Twitter: @ai_bites
Patreon: ai_bites
Github: https://github.com/ai-bites
