The EASIEST way to finetune LLAMA-v2 on local machine!


In this video, I'll show you the easiest, simplest, and fastest way to fine-tune Llama-2 on your local machine on a custom dataset! You can also use this tutorial to train/fine-tune any other Large Language Model (LLM). In this tutorial, we will be using autotrain-advanced.

AutoTrain Advanced github repo: https://github.com/huggingface/autotr...

Steps:
Install autotrain-advanced using pip:
pip install autotrain-advanced

Setup (optional on a local machine, required on Google Colab):
autotrain setup --update-torch

Train:
autotrain llm --train \
  --project_name my-llm \
  --model meta-llama/Llama-2-7b-hf \
  --data_path . \
  --use_peft \
  --use_int4 \
  --learning_rate 2e-4 \
  --train_batch_size 12 \
  --num_train_epochs 3 \
  --trainer sft
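Here, --use_peft and --use_int4 enable parameter-efficient fine-tuning (LoRA) with 4-bit quantization, which is what makes training fit on a single consumer GPU, and --data_path . points autotrain at the current directory. As a rough sketch of the dataset prep (assuming autotrain-advanced's common default of a train.csv file with a "text" column; the filename and column name here are assumptions, not guaranteed for every version):

# prepare_data.py -- a minimal sketch, not the only valid format
import pandas as pd

# Each row is one training example, already formatted as a single string.
samples = [
    "### Instruction: Summarize this article. ### Response: ...",
    "### Instruction: Translate this sentence to French. ### Response: ...",
]

pd.DataFrame({"text": samples}).to_csv("train.csv", index=False)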

If you are on the free version of Colab, use this model instead: https://huggingface.co/abhishek/llama.... This is a smaller, sharded version of Meta's llama-2-7b-hf.
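Once training finishes, the adapter weights are saved under the project directory (my-llm in the command above). A minimal inference sketch, assuming a LoRA adapter was written there and that you have access to the gated base model:

# inference.py -- a minimal sketch; the adapter path "my-llm" is an assumption
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model, then attach the fine-tuned LoRA adapter on top of it.
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "my-llm")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

prompt = "### Instruction: Summarize this article. ### Response:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))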

Please subscribe and like the video to help keep me motivated to make awesome videos like this one. :)

My book, Approaching (Almost) Any Machine Learning problem, is available for free here: https://bit.ly/approachingml

Follow me on:
Twitter: @abhi1thakur
LinkedIn: abhi1thakur
Kaggle: https://kaggle.com/abhishek
