Local RAG with Llama 3.1 for PDFs | Private Chat with Your Documents using LangChain & Streamlit

Learn how to build a completely local RAG system for efficient and accurate document processing with Large Language Models (LLMs). In this video, you'll learn how to:

Extract high-quality text from PDFs using pypdfium2
Split and format documents for optimal LLM performance
Create and configure vector stores with Qdrant
Implement advanced retrieval techniques with FlashrankRerank and LLMChainFilter
Seamlessly integrate local and remote LLMs for the best performance

Demo: https://ragbase.streamlit.app/

Follow me on X: / venelin_valkov
AI Bootcamp: https://www.mlexpert.io/bootcamp
Discord: / discord
Subscribe: http://bit.ly/venelin-subscribe
GitHub repository: https://github.com/curiousily/AI-Boot...

👍 Don't Forget to Like, Comment, and Subscribe for More Tutorials!

00:00 - What is RagBase?
01:44 - Text tutorial on MLExpert.io
02:21 - How RagBase works
06:30 - Project Structure
09:48 - UI with Streamlit
16:00 - Config
17:42 - File Upload
18:40 - Document Processing (Ingestion)
22:58 - Retrieval (Reranker & LLMChainFilter)
29:01 - QA Chain
33:25 - Chat Memory/History
33:52 - Create Models
35:36 - Start RagBase Locally
39:16 - Deploy to Streamlit Cloud
41:10 - Conclusion

Join this channel to get access to the perks and support my work:
/ @venelin_valkov

#rag #langchain #chatbot #llama #chatgpt #llm #artificialintelligence
