Local RAG with llama.cpp

In this video, we learn how to do naive/basic RAG (Retrieval-Augmented Generation) with llama.cpp on our own machine.

Mixed Bread AI - https://huggingface.co/mixedbread-ai/...
Llama3 - https://huggingface.co/bartowski/Llam...
llama.cpp - https://llama-cpp-python.readthedocs....
Qdrant - https://github.com/qdrant/qdrant-client
langchain-text-splitters - https://pypi.org/project/langchain-te...
LangChain Q&A with RAG - https://python.langchain.com/v0.1/doc...
This Day in AI Podcast - https://podcast.thisdayinai.com/episo...


Ingestion code - https://github.com/mneedham/LearnData...
Querying code - https://github.com/mneedham/LearnData...
