L-10 RAG Vs Fine-tuning


In this video, we'll dive into the differences between Retrieval-Augmented Generation (RAG) and fine-tuning.
We'll explore how each approach enhances language models and when to use each one.

A fine-tuned model starts from a pre-trained language model, such as LLaMA, Mistral, Gemini, GPT-3.5, or GPT-4, and is trained further on a domain- or task-specific dataset. Its responses draw on knowledge baked into the model's weights during that training phase, so anything it learns is fixed at training time.
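As a rough illustration of what "training further on a specific dataset" looks like in practice, here is a minimal fine-tuning sketch using the Hugging Face `transformers` and `datasets` libraries. The checkpoint name, the dataset path `my_domain_corpus.jsonl`, and all hyperparameters are placeholder assumptions, not values from the video.

```python
# Minimal causal-LM fine-tuning sketch (assumes transformers + datasets installed;
# model name, dataset path, and hyperparameters are illustrative placeholders).
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "mistralai/Mistral-7B-v0.1"        # example base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token        # causal LMs often lack a pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Load a domain-specific text dataset and tokenize it.
raw = load_dataset("json", data_files="my_domain_corpus.jsonl", split="train")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = raw.map(tokenize, batched=True, remove_columns=raw.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ft-out", num_train_epochs=1,
                           per_device_train_batch_size=1, learning_rate=2e-5),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # after training, the new knowledge lives in the model's weights
```

The key point for the comparison: once `trainer.train()` finishes, the knowledge is frozen into the weights, and updating it means training again.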

Retrieval-Augmented Generation (RAG), by contrast, keeps the base model unchanged and supplies external, up-to-date knowledge at query time: relevant documents are retrieved from a knowledge source and added to the prompt, so the model can ground its answer in information it was never trained on.
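To make the retrieval-then-augment flow concrete, here is a minimal RAG sketch using `sentence-transformers` for embeddings and cosine similarity for retrieval. The example documents, the embedding model choice, and the final LLM call (left as a comment) are assumptions for illustration, not specifics from the video.

```python
# Minimal RAG sketch (assumes sentence-transformers + numpy installed;
# documents and embedding model are illustrative placeholders).
import numpy as np
from sentence_transformers import SentenceTransformer

documents = [
    "Our return policy changed in 2024: refunds are issued within 14 days.",
    "RAG retrieves relevant text and adds it to the model's prompt at query time.",
    "Fine-tuning updates a model's weights on a task-specific dataset.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vectors = embedder.encode(documents, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query (cosine similarity)."""
    q = embedder.encode([query], normalize_embeddings=True)[0]
    scores = doc_vectors @ q
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

def build_prompt(query: str) -> str:
    """Augment the user question with retrieved, up-to-date context."""
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("What is the current refund window?"))
# The assembled prompt is then sent to the (unchanged) base language model.
```

Because the knowledge lives in the document store rather than the weights, updating what the model "knows" only requires updating the documents, with no retraining.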
