Fine-tuning Large Language Models (LLMs) | w/ Example Code

This is the 5th video in a series on using large language models (LLMs) in practice. Here, I discuss how to fine-tune an existing LLM for a particular use case and walk through a concrete example with Python code.
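
The concrete example fine-tunes a small pre-trained model for sentiment classification using LoRA with the 🤗 peft library (see the LoRA and PEFT references below; the linked final model is a DistilBERT variant). Here is a minimal sketch of the core step; the checkpoint and hyperparameters are assumptions for illustration, not a verbatim copy of the linked example code:

```python
# Minimal sketch: freeze the base model and attach trainable low-rank
# adapters with peft. Checkpoint and hyperparameters are assumptions
# for illustration, not a verbatim copy of the linked example code.
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, TaskType, get_peft_model

model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)  # binary sentiment labels

peft_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,   # sequence classification task
    r=4,                          # rank of the low-rank update matrices
    lora_alpha=32,                # scaling factor applied to the updates
    lora_dropout=0.01,
    target_modules=["q_lin"],     # DistilBERT's attention query projection
)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()  # only a small fraction of weights train
```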

Resources:
▶️ Series Playlist: Large Language Models (LLMs)
📰 Read more: https://towardsdatascience.com/fine-t...
💻 Example code: https://github.com/ShawhinT/YouTube-B...
Final Model: https://huggingface.co/shawhin/distil...
🔢 Dataset: https://huggingface.co/datasets/shawh...

References:
[1] DeepLearning.AI Finetuning Large Language Models Short Course: https://www.deeplearning.ai/short-cou...
[2] arXiv:2005.14165 [cs.CL] (GPT-3 Paper)
[3] arXiv:2303.18223 [cs.CL] (Survey of LLMs)
[4] arXiv:2203.02155 [cs.CL] (InstructGPT paper)
[5] 🤗 PEFT: Parameter-Efficient Fine-Tuning of Billion-Scale Models on Low-Resource Hardware: https://huggingface.co/blog/peft
[6] arXiv:2106.09685 [cs.CL] (LoRA paper)
[7] Original dataset source — Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning Word Vectors for Sentiment Analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 142–150, Portland, Oregon, USA. Association for Computational Linguistics.

--
Homepage: https://shawhintalebi.com/

Intro - 0:00
What is Fine-tuning? - 0:32
Why Fine-tune - 3:29
3 Ways to Fine-tune - 4:25
Supervised Fine-tuning in 5 Steps - 9:04
3 Options for Parameter Tuning - 10:00
Low-Rank Adaptation (LoRA) - 11:37
Example code: Fine-tuning an LLM with LoRA - 15:40
Load Base Model - 16:02
Data Prep - 17:44
Model Evaluation - 21:49
Fine-tuning with LoRA - 24:10
Fine-tuned Model - 26:50
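
As a rough companion to the chapters above (Load Base Model through Fine-tuned Model), the sketch below strings the steps together with the Hugging Face Trainer API. The dataset is the original IMDB set from reference [7], used as a stand-in for the linked (truncated) dataset, and the hyperparameters are illustrative assumptions:

```python
# Hedged sketch of the full flow: load base model, data prep,
# model evaluation, and fine-tuning with LoRA. Dataset and
# hyperparameters are assumptions, not the video's exact code.
import numpy as np
import evaluate
from datasets import load_dataset
from peft import LoraConfig, TaskType, get_peft_model
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          DataCollatorWithPadding, Trainer, TrainingArguments)

# Load base model: small encoder with a 2-label classification head
checkpoint = "distilbert-base-uncased"  # assumed base checkpoint
model = AutoModelForSequenceClassification.from_pretrained(checkpoint,
                                                           num_labels=2)
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

# Data prep: tokenize the review text
dataset = load_dataset("imdb")  # stand-in for the video's truncated dataset
def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)
tokenized = dataset.map(tokenize, batched=True)

# Model evaluation: accuracy computed from the raw logits
accuracy = evaluate.load("accuracy")
def compute_metrics(eval_pred):
    logits, labels = eval_pred
    return accuracy.compute(predictions=np.argmax(logits, axis=-1),
                            references=labels)

# Fine-tuning with LoRA: wrap the base model (as in the sketch near the
# top of this description), then train only the small adapter weights
peft_config = LoraConfig(task_type=TaskType.SEQ_CLS, r=4, lora_alpha=32,
                         lora_dropout=0.01, target_modules=["q_lin"])
model = get_peft_model(model, peft_config)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lora-sentiment", learning_rate=1e-3,
                           per_device_train_batch_size=4, num_train_epochs=1,
                           evaluation_strategy="epoch"),
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
    data_collator=DataCollatorWithPadding(tokenizer=tokenizer),
    compute_metrics=compute_metrics,
)
trainer.train()  # the fine-tuned adapter can then be pushed to the Hub
```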
