Invoice Data Processing with Llama2 13B LLM RAG on Local CPU [Weaviate, Llama.cpp, Haystack]

I explain how to set up a local LLM RAG pipeline to process invoice data with Llama2 13B. Based on my experiments, Llama2 13B handles tabular data better than the Mistral 7B model. This example presents a production-style LLM RAG setup with a Weaviate database for vector embeddings, Haystack for the LLM API, and Llama.cpp to run Llama2 13B on a local CPU.
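The core of such a setup is assembling retrieved invoice chunks and the user question into a single prompt for the local model. A minimal sketch of that prompt-assembly step, assuming top-k chunks come back from Weaviate as plain text (the function name, template wording, and sample data are illustrative assumptions, not taken from the repo):

```python
def build_rag_prompt(question: str, retrieved_chunks: list[str]) -> str:
    """Combine retrieved invoice text with the user question into one
    prompt for the local Llama2 13B model."""
    context = "\n\n".join(retrieved_chunks)
    return (
        "Answer the question using only the invoice context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\n"
        "Answer:"
    )

# Illustrative retrieved chunks, as they might come back from Weaviate
chunks = [
    "Invoice No: INV-001\nTotal: 1,250.00 EUR",
    "Supplier: Acme GmbH\nDue date: 2023-11-30",
]
prompt = build_rag_prompt("What is the invoice total?", chunks)
print(prompt)
```

The prompt string would then be passed to the Llama.cpp-backed model for inference.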

GitHub repo:
https://github.com/katanaml/llm-rag-i...

0:00 Intro
1:10 Examples
6:35 Setup
8:20 Config
9:45 Weaviate Docker
10:05 Data Ingest Code
10:30 Inference with Llama2 13B Code
13:45 Summary
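Before inference, the data-ingest step (10:05) splits invoice text into pieces that can be embedded into Weaviate. A minimal sketch, assuming simple overlapping character windows (the helper name, chunk size, and overlap are illustrative assumptions, not from the repo):

```python
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping character windows for embedding."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break  # last window already covers the tail of the text
    return chunks

# Illustrative invoice text (~600 characters)
invoice_text = "Invoice No: INV-001 " * 30
chunks = chunk_text(invoice_text)
print(len(chunks), len(chunks[0]))
```

Overlap keeps fields that straddle a chunk boundary (e.g. a label on one side, its value on the other) retrievable from at least one chunk.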

CONNECT:
Subscribe to this YouTube channel
Twitter: / andrejusb
LinkedIn: / andrej-baranovskij
Medium: / andrejusb

#llm #rag #python
