Fine-tuning LLMs with PEFT and LoRA

LoRA Colab: https://colab.research.google.com/dri...
Blog Post: https://huggingface.co/blog/peft
LoRA Paper: https://arxiv.org/abs/2106.09685

In this video I look at how to use PEFT to fine-tune any decoder-style GPT model. It covers the basics of LoRA fine-tuning and how to upload the trained adapter to the Hugging Face Hub.
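To give a sense of the idea behind LoRA before watching: instead of updating a full weight matrix, you train a small low-rank pair of matrices alongside the frozen weights. Here is a minimal NumPy sketch of that reparameterization (illustrative only, not the PEFT library's API; the layer sizes and hyperparameters below are made-up examples):

```python
import numpy as np

# LoRA idea: for a frozen weight W (d x k), learn a low-rank update
# delta_W = B @ A with B (d x r) and A (r x k), where r << min(d, k).
d, k, r = 64, 64, 8          # hypothetical layer size and LoRA rank
alpha = 16                   # LoRA scaling hyperparameter

rng = np.random.default_rng(0)
W = rng.normal(size=(d, k))          # frozen pretrained weights
A = rng.normal(size=(r, k)) * 0.01   # trainable, small random init
B = np.zeros((d, r))                 # trainable, zero init: update starts at 0

def lora_forward(x):
    # Effective weight is W + (alpha / r) * B @ A, but only A and B
    # receive gradients during fine-tuning; W stays frozen.
    return x @ W.T + (x @ A.T) @ B.T * (alpha / r)

x = rng.normal(size=(4, k))
# At initialization the adapter contributes nothing to the output:
assert np.allclose(lora_forward(x), x @ W.T)

# Trainable parameter count: d*r + r*k for LoRA vs d*k for full fine-tuning.
print(d * r + r * k, "vs", d * k)   # 1024 vs 4096
```

With the PEFT library the same idea is applied per attention layer via a `LoraConfig`, and only the small adapter weights are saved and pushed to the Hub.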

For more tutorials on using LLMs and building Agents, check out my Patreon:
Patreon:   / samwitteveen  
Twitter:   / sam_witteveen  

My Links:
Linkedin:   / samwitteveen  

Github:
https://github.com/samwit/langchain-t...
https://github.com/samwit/llm-tutorials

00:00 - Intro
00:04 - Problems with fine-tuning
00:48 - Introducing PEFT
01:11 - PEFT other cool techniques
01:51 - LoRA Diagram
03:25 - Hugging Face PEFT Library
04:06 - Code Walkthrough
