✅ Easiest Way to Fine-tune LLMs Locally + Code - NO GPU Needed!

#ai #llm #finetuning #nlp
This is the easiest way to fine-tune a large language model 100% free and locally, without using any API or third-party platform in just 20 minutes! You don't need GPU, we will run this model training on CPU only! The code and process for fine-tuning are in my GitHub repository; the link is below :)

Transformer Language Models Simplified in JUST 3 MINUTES!
   • Transformer Language Models Simplifie...  

GitHub code for fine-tuning LLMs:
https://github.com/Maryam-Nasseri/Fin...

   / @analyticscamp  

Chapters and Key Moments:
00:00 Intro
00:37 Virtual Environment in Jupyter Notebook
01:10 What is Fine-tuning a language model?
01:42 Sentiment Analysis with LLM
01:52 Google Bert (Bidirectional Encoder Representations from Transformers): Tokenization, Embedding, Encoding, Task Head
02:52 Installing dependencies
03:06 Model specs: sequence classification, tokenizer, optimizer
03:33 Model parameters: torch tensors, padding, truncation, sentiment classification labels
05:22 Visual guide to fine-tuning LLMs
05:54 Working with external datasets for model training: tweet classification dataset
06:33 Tokenization, lemmatization, stemming
07:01 Batch processing with map
07:51 Attention Mask and Positional Encoding
09:07 Training the model: fine-tuning LLM: learning rate, weight decay and L2 Regularization, AdamW optimizer, CUDA
10:50 Model training output: training loss, validation loss
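For a rough idea of what the padding and truncation options (03:33) and the attention mask (07:51) do, here is a pure-Python sketch. The token IDs are made up for illustration; in the video, BERT's tokenizer produces them:

```python
# Sketch of what a tokenizer does with padding=True and truncation=True:
# every sequence is cut or padded to max_len, and an attention mask marks
# which positions hold real tokens (1) versus padding (0).

def pad_and_truncate(token_ids, max_len, pad_id=0):
    """Return (input_ids, attention_mask), each exactly max_len long."""
    ids = token_ids[:max_len]           # truncation
    mask = [1] * len(ids)               # real tokens get attention
    padding = max_len - len(ids)
    ids = ids + [pad_id] * padding      # padding to a fixed length
    mask = mask + [0] * padding         # padded positions are masked out
    return ids, mask

batch = [[101, 7592, 2088, 102], [101, 2307, 102]]
for ids, mask in (pad_and_truncate(seq, max_len=5) for seq in batch):
    print(ids, mask)
```

Padding to a common length is what lets the batch be stacked into one torch tensor.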
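The batch processing with map step (07:01) refers to mapping a function over the dataset a batch at a time, as Hugging Face `datasets` does with `map(..., batched=True)`. A minimal sketch of the idea, with a stand-in for the real tokenizer:

```python
# Sketch of batched mapping: the function receives a whole batch (a list)
# instead of one example, so work like tokenization runs once per batch
# rather than once per row.

def batched_map(fn, examples, batch_size=2):
    out = []
    for start in range(0, len(examples), batch_size):
        out.extend(fn(examples[start:start + batch_size]))  # fn handles a list
    return out

def fake_tokenize(batch_of_texts):
    # Hypothetical stand-in for a real tokenizer call on a list of texts.
    return [text.lower().split() for text in batch_of_texts]

tweets = ["Great movie!", "Terrible plot", "LOVED it"]
print(batched_map(fake_tokenize, tweets))
```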
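On the training chapter (09:07): AdamW differs from plain Adam-with-L2-regularization in that weight decay is applied directly to the weight ("decoupled") rather than folded into the gradient. A pure-Python sketch of one update for a single scalar parameter; the hyperparameter values are illustrative, not the ones used in the video:

```python
import math

# One AdamW update step for a single scalar parameter.
def adamw_step(w, grad, m, v, t, lr=5e-5, beta1=0.9, beta2=0.999,
               eps=1e-8, weight_decay=0.01):
    m = beta1 * m + (1 - beta1) * grad             # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2        # second-moment estimate
    m_hat = m / (1 - beta1 ** t)                   # bias correction
    v_hat = v / (1 - beta2 ** t)
    w = w - lr * m_hat / (math.sqrt(v_hat) + eps)  # Adam step
    w = w - lr * weight_decay * w                  # decoupled weight decay
    return w, m, v

w, m, v = 0.5, 0.0, 0.0
for t in range(1, 4):
    w, m, v = adamw_step(w, grad=0.2, m=m, v=v, t=t)
print(w)
```

This is exactly the update `torch.optim.AdamW` performs elementwise over every parameter tensor, with CUDA only changing where the arithmetic runs, not the math.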
