Hands-on RAG Tutorial using LlamaIndex, Gemini, and Pinecone Vector DB

Let's talk about building a simple RAG app using LlamaIndex (v0.10+), Pinecone, and Google's Gemini Pro model. A step-by-step tutorial if you're just getting started! Rough code sketches of the main steps are included after the timeline below.

--

Useful links:
Google AI Studio: https://ai.google.dev
Pinecone: https://www.pinecone.io
LlamaIndex: https://www.llamaindex.ai
👉 Source code: https://www.gettingstarted.ai/how-to-... (Updated for LlamaIndex v0.10+)

--

Timeline:

00:00 Introduction
00:43 Basic definitions
02:18 How Retrieval Augmented Generation (RAG) works
03:55 Creating a Pinecone Index and getting an API Key
05:25 Getting a Google Gemini API Key
06:25 Creating a virtual environment
06:48 Installing LlamaIndex (and core packages)
07:41 Installing other dependencies
08:03 General application setup
10:42 Setting up environment variables
12:45 Validating configuration
14:11 Retrieving content from the Web
15:38 Explaining IngestionPipeline
16:49 Creating a LlamaIndex IngestionPipeline
17:16 Defining a Pinecone vector store
18:29 Running the IngestionPipeline (with Transformations)
19:37 Performing a similarity search
20:13 Creating a VectorStoreIndex
20:32 Creating a VectorIndexRetriever
21:04 Creating a RetrieverQueryEngine
22:05 Querying Google Gemini (Running the Pipeline)
22:47 Where to find the complete source code
23:15 Conclusion
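
--

Code sketches (approximate, based on the LlamaIndex v0.10+ split packages — see the source code link above for the exact code used in the video):

Virtual environment, packages, and API keys (06:25 - 12:45). The package list and the PINECONE_API_KEY / GOOGLE_API_KEY variable names are assumptions:

# Assumed install, inside a fresh virtual environment:
#   pip install llama-index llama-index-llms-gemini llama-index-embeddings-gemini
#   pip install llama-index-vector-stores-pinecone llama-index-readers-web python-dotenv

import os
from dotenv import load_dotenv

# Load the API keys from a local .env file and fail early if either is missing
load_dotenv()
assert os.environ.get("PINECONE_API_KEY"), "PINECONE_API_KEY is not set"
assert os.environ.get("GOOGLE_API_KEY"), "GOOGLE_API_KEY is not set"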
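
Pulling page content from the web (14:11) with SimpleWebPageReader from the llama-index-readers-web package; the URL here is just a placeholder:

from llama_index.readers.web import SimpleWebPageReader

# Download a page and convert it into LlamaIndex Document objects
documents = SimpleWebPageReader(html_to_text=True).load_data(
    ["https://www.example.com/some-article"]
)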
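
The IngestionPipeline backed by a Pinecone vector store (15:38 - 18:29). The index name "demo", the chunk size, and the embedding model name are assumptions; the Pinecone index dimension has to match the embedding model (768 for Gemini's embedding-001):

from pinecone import Pinecone
from llama_index.core.ingestion import IngestionPipeline
from llama_index.core.node_parser import SentenceSplitter
from llama_index.embeddings.gemini import GeminiEmbedding
from llama_index.vector_stores.pinecone import PineconeVectorStore

# Connect to an existing Pinecone index ("demo" is a placeholder name)
pc = Pinecone(api_key=os.environ["PINECONE_API_KEY"])
vector_store = PineconeVectorStore(pinecone_index=pc.Index("demo"))

# Split the documents into chunks, embed each chunk with Gemini, and upsert into Pinecone
pipeline = IngestionPipeline(
    transformations=[
        SentenceSplitter(chunk_size=512, chunk_overlap=20),
        GeminiEmbedding(model_name="models/embedding-001"),
    ],
    vector_store=vector_store,
)
pipeline.run(documents=documents)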
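
Similarity search and querying Gemini (19:37 - 22:05): wrap the populated vector store in a VectorStoreIndex, build a VectorIndexRetriever and a RetrieverQueryEngine, and ask Gemini Pro a question. The top-k value and the question are placeholders:

from llama_index.core import VectorStoreIndex
from llama_index.core.retrievers import VectorIndexRetriever
from llama_index.core.query_engine import RetrieverQueryEngine
from llama_index.llms.gemini import Gemini

# Build an index over the vectors already stored in Pinecone
index = VectorStoreIndex.from_vector_store(
    vector_store=vector_store,
    embed_model=GeminiEmbedding(model_name="models/embedding-001"),
)

# Retrieve the top 5 most similar chunks and let Gemini Pro answer from them
retriever = VectorIndexRetriever(index=index, similarity_top_k=5)
query_engine = RetrieverQueryEngine.from_args(
    retriever=retriever,
    llm=Gemini(model="models/gemini-pro"),
)

response = query_engine.query("What is this article about?")
print(response)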


@LlamaIndex @pinecone-io @GoogleDevelopers @Google
