EASIEST Way to Fine-Tune LLAMA-3.2 and Run it in Ollama


Meta recently released Llama 3.2, and this video demonstrates how to fine-tune the 3-billion-parameter instruct model with Unsloth and run it locally in Ollama. The tutorial covers preparing the FineTome-100k dataset, adapting the prompt template, and attaching LoRA adapters for efficient fine-tuning, then converting the resulting model to GGUF format for local deployment. This lets you run a custom fine-tuned Llama 3.2 model on your own device, leveraging powerful AI capabilities without relying on cloud resources.
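The "adjusting prompt templates" step above amounts to rendering each dataset example in Llama 3's chat format before training. A minimal sketch (the function name and example strings are illustrative, not from the video; the special tokens are Llama 3's documented chat-format tokens):

```python
# Sketch: render one user/assistant exchange in the Llama 3 chat template,
# the shape fine-tuning frameworks like Unsloth expect training text in.

def to_llama3_prompt(user_msg: str, assistant_msg: str) -> str:
    """Format a single exchange with Llama 3 header and end-of-turn tokens."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_msg}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
        f"{assistant_msg}<|eot_id|>"
    )

print(to_llama3_prompt("What is 2 + 2?", "2 + 2 equals 4."))
```

In practice you would map a function like this over every conversation in the dataset before passing it to the trainer.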

LINKS:
Colab: https://colab.research.google.com/dri...
https://www.llama.com/
Dataset: https://huggingface.co/datasets/mlabo...
Ollama Modelfile: https://github.com/ollama/ollama/blob...
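For the local-deployment step, the GGUF export is wired into Ollama through a Modelfile. A minimal sketch (the GGUF filename and model name below are placeholders, and the template is one plausible Llama 3-style layout, not the exact one from the video):

```
# Minimal Ollama Modelfile sketch for a fine-tuned GGUF export
FROM ./llama-3.2-3b-finetuned.gguf
PARAMETER temperature 0.7
TEMPLATE """<|start_header_id|>user<|end_header_id|>

{{ .Prompt }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{{ .Response }}<|eot_id|>"""
```

Register and run it with `ollama create my-llama32 -f Modelfile`, then `ollama run my-llama32`.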


💻 RAG Beyond Basics Course:
https://prompt-s-site.thinkific.com/c...

Let's Connect:
🦾 Discord:   / discord  
☕ Buy me a Coffee: https://ko-fi.com/promptengineering
🔴 Patreon:   / promptengineering  
💼Consulting: https://calendly.com/engineerprompt/c...
📧 Business Contact: [email protected]
Become Member: http://tinyurl.com/y5h28s6h

💻 Pre-configured localGPT VM: https://bit.ly/localGPT (use Code: PromptEngineering for 50% off).

Sign up for the localGPT newsletter:
https://tally.so/r/3y9bb0


00:00 Introduction to Llama 3.2 Release
00:40 Overview of Llama 3.2 Models
01:42 Fine-Tuning Llama 3.2 with Unsloth
01:58 Preparing the Dataset for Fine-Tuning
02:34 Setting Up the Fine-Tuning Environment
03:32 Configuring the Fine-Tuning Parameters
07:59 Training the Model
12:31 Running the Fine-Tuned Model Locally
16:39 Conclusion and Future Videos

All Interesting Videos:
Everything LangChain:    • LangChain  

Everything LLM:    • Large Language Models  

Everything Midjourney:    • MidJourney Tutorials  

AI Image Generation:    • AI Image Generation Tutorials  
