Gen AI Petabyte scale vector store
Session: A petabyte-scale vector store for generative AI
Patrick McFadin, VP Developer Relations @ DataStax

This talk will focus on the work in the Apache Cassandra® project to develop a vector store capable of handling petabytes of data, and why this capacity is critical for future AI applications. I will also explain how this relates to exciting new generative AI techniques such as Large Language Models (LLMs), Retrieval-Augmented Generation (RAG), and Forward-Looking Active Retrieval Augmented Generation (FLARE), all of which contribute to the growing need for such scalable solutions. The needs of autonomous agents will drive the next wave of data infrastructure. Are you ready?

Key Takeaways:

The future of generative AI and why current laptop-scale models will soon be obsolete
Apache Cassandra® and its role in creating a petabyte-scale vector store for AI applications
Vector-powered AI technologies such as LLMs, RAG, and FLARE
How AI agents can leverage such scalable solutions for better decision-making
The importance of planning for future growth in AI applications, and how to avoid painful migrations later
Use cases with frameworks like LangChain, LlamaIndex, and CassIO
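To make the retrieval idea behind RAG concrete, here is a minimal sketch of the step a vector store performs: embeddings are stored alongside documents, and a query vector pulls back the most similar ones. The names (`TinyVectorStore`, the example texts and vectors) are illustrative only; a real deployment would delegate this brute-force search to a scalable store such as Cassandra's vector search rather than a Python list.

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

class TinyVectorStore:
    """Illustrative in-memory stand-in for a petabyte-scale vector store."""

    def __init__(self):
        self.rows = []  # list of (text, embedding) pairs

    def add(self, text, embedding):
        self.rows.append((text, embedding))

    def search(self, query_embedding, limit=2):
        # Brute-force ranking by similarity; at scale this is what an
        # approximate-nearest-neighbor index replaces.
        ranked = sorted(
            self.rows,
            key=lambda row: cosine_similarity(row[1], query_embedding),
            reverse=True,
        )
        return [text for text, _ in ranked[:limit]]

store = TinyVectorStore()
store.add("Cassandra scales linearly across nodes", [0.9, 0.1, 0.0])
store.add("LLMs hallucinate without grounding", [0.1, 0.9, 0.0])
store.add("RAG retrieves context before generation", [0.2, 0.8, 0.1])

# In RAG, these hits would be injected into the LLM prompt as context.
hits = store.search([0.15, 0.85, 0.05], limit=2)
print(hits)
```

Frameworks like LangChain and LlamaIndex wrap exactly this pattern (embed, store, retrieve, then prompt), swapping the toy store above for a production backend.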
