Complete Guide to Build RAG App using Ollama Python Lib | Local LLM RAG


Hi, my name is Sunny Solanki, and in this video, I provide a step-by-step guide to building a RAG LLM app using the Python library "ollama". It is a wrapper around the "Ollama" tool, which lets us run open-source LLMs on a local machine for free. The app uses the open-source LLM "llama-2" with 7B parameters, and the FAISS library for storing and searching embeddings of external documents. The tutorial is a good starting point for anyone who wants to learn how to create RAG LLM apps, and a good introduction to the RAG workflow as well. You can easily extend the app to use other LLMs such as Mistral, Falcon, Vicuna, or Gemma.
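The RAG flow described above (embed external docs, search for relevant ones, feed them to the LLM) can be sketched in a few lines. This is a toy illustration, not the video's code: a bag-of-words `embed()` stands in for `ollama.embeddings`, and a brute-force cosine-similarity search stands in for FAISS, so the sketch runs without an Ollama server or FAISS installed.

```python
import math

# External documents we want the LLM to answer from.
DOCS = [
    "Ollama runs open-source LLMs such as llama-2 locally.",
    "FAISS stores embeddings and searches them by similarity.",
    "RAG retrieves relevant documents and adds them to the prompt.",
]

# Toy vocabulary + bag-of-words embedder. In the real app this would be
# ollama.embeddings(model="llama2", prompt=text)["embedding"].
VOCAB = sorted({w.lower().strip(".,?") for d in DOCS for w in d.split()})

def embed(text):
    words = [w.lower().strip(".,?") for w in text.split()]
    return [float(words.count(v)) for v in VOCAB]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

# "Vector store": each doc stored alongside its embedding.
INDEX = [(d, embed(d)) for d in DOCS]

def retrieve(query, k=1):
    # Brute-force nearest-neighbour search, the job FAISS does at scale.
    qv = embed(query)
    ranked = sorted(INDEX, key=lambda de: cosine(qv, de[1]), reverse=True)
    return [d for d, _ in ranked[:k]]

context = retrieve("How does FAISS search embeddings?")[0]
prompt = f"Answer using this context:\n{context}\n\nQuestion: ..."
# With a local Ollama server running, the final step would be roughly:
# ollama.chat(model="llama2", messages=[{"role": "user", "content": prompt}])
print(context)
```

The retrieved context is prepended to the user's question, which is what makes the generation "retrieval-augmented".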

============================================
CODE - https://github.com/sunny2309/ollama_rag
============================================

============================================
Learn Ollama -    • Ollama: Run LLMs on your Local Machin...
Learn Ollama Python Library -    • Ollama Python Library: Use LLMs on yo...
RAG App using LangChain -    • Step-by-Step Guide to Build RAG App u...
RAG App using LlamaIndex -    • Step-by-Step Guide to Build RAG App u...
What is RAG -    • Let's Talk about AI Buzzword: RAG |  ...
============================================

============================================
SUPPORT US - Buy Me Coffee - https://buymeacoffee.com/coderzcolumn
============================================

============================================
NEWSLETTER - http://eepurl.com/gRW2u9
============================================

============================================
WEBSITE - https://coderzcolumn.com
============================================

Important Chapters:

0:00 - Build RAG LLM App using Ollama Python Library
1:14 - RAG App Workflow
2:14 - Install Ollama
2:44 - Download "llama-2 (7B)" Model
3:08 - Bring up Ollama Server
4:00 - Code Explanation
6:02 - Load External Docs Data
7:32 - Generate Embeddings
8:26 - Create Vector Store Index
10:12 - Create a Retriever to Retrieve Relevant Docs
13:10 - Complete Ollama Retrieval Chain (RAG Pipeline)

#python #datascience #datasciencetutorial #pythonprogramming #pythoncode #pythontutorial #llama2 #ollama #rag-llm-tutorial #building-a-rag-application #rag-llm-explained #retrieval-augmented-generation-rag #rag-implementation #rag-ollama #ollama-rag #how-does-rag-work #llm-rag-example #ollama-rag-tutorial #ollama-chains #ollama-llama-tutorial #ollama-llm-app #ollama-llama-2 #rag-application-using-ollama
