EASIEST Way to Fine-Tune a LLM and Use It With Ollama

In this video, we go over how you can fine-tune Llama 3.1 and run it locally on your machine using Ollama! We use the open-source Unsloth library to do all of the fine-tuning on an SQL dataset!
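
For anyone following along, here's a minimal sketch of the model-loading and LoRA setup with Unsloth. The model ID and LoRA hyperparameters below are illustrative assumptions, not necessarily the exact values used in the video:

from unsloth import FastLanguageModel

# Load a 4-bit quantized Llama 3.1 base model through Unsloth
# (the model name here is an assumed example)
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Meta-Llama-3.1-8B-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small fraction of the weights is trained
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    lora_dropout=0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)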

Throughout this video, we discuss the ins and outs of fine-tuning an LLM: what it is, how to prepare your data so the LLM can process it, and how to import the fine-tuned model into Ollama so you can run it locally on your machine!
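
Picking up from the model and tokenizer in the sketch above, the rest of the workflow looks roughly like this: format the SQL dataset into prompt/response text, train with TRL's SFTTrainer, and export to GGUF so Ollama can load it. The dataset ID, column names, and prompt template below are placeholders/assumptions, so adjust them to the actual dataset:

from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

# Assumed prompt template and column names ("context", "question", "answer")
prompt = """Below is a database schema and a question about it.
Write the SQL query that answers the question.

### Schema:
{}

### Question:
{}

### SQL:
{}"""

def format_rows(examples):
    texts = [
        prompt.format(c, q, a) + tokenizer.eos_token
        for c, q, a in zip(examples["context"], examples["question"], examples["answer"])
    ]
    return {"text": texts}

dataset = load_dataset("username/sql-dataset", split="train")  # placeholder dataset id
dataset = dataset.map(format_rows, batched=True)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=60,          # short demo-sized run
        learning_rate=2e-4,
        logging_steps=10,
        output_dir="outputs",
    ),
)
trainer.train()

# Export to GGUF so the fine-tuned model can be imported into Ollama
model.save_pretrained_gguf("model", tokenizer, quantization_method="q4_k_m")

From there it's a short Modelfile whose FROM line points at the exported .gguf file, then "ollama create <model-name> -f Modelfile" and "ollama run <model-name>" to use the fine-tuned model locally.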

Ollama is available on all platforms!

Dataset used: https://huggingface.co/datasets/grete...
Ollama: https://github.com/ollama/ollama
Unsloth: https://github.com/unslothai/unsloth

Make sure you follow for more content!
___________________________

Try Warp now for FREE 👉 bit.ly/warpdotdev

Twitter 🐦
@warpdotdev

TikTok 📱
@warp.dev

TIMESTAMPS
0:00 Intro
0:10 Getting the dataset
0:45 The Tech Stack
1:18 Installing Dependencies
1:48 Fast Language Model Explained
2:35 LoRA Adapters Explained
3:02 Converting your data to fine-tune
3:36 Training the Model
4:01 Converting to Ollama compatibility
4:11 Creating a Modelfile for Ollama
4:50 Final Output!
5:01 Check out Ollama in 2 minutes!
