Serving 100s of LLMs on 1 GPU with LoRAX - Travis Addair | Stanford MLSys #84

Episode 84 of the Stanford MLSys Seminar Series!

Serving 100s of Fine-Tuned LLMs on 1 GPU with LoRAX
Speaker: Travis Addair

Abstract:
Smaller, specialized language models such as LLaMA-2-7b can outperform larger general-purpose models like GPT-4 when fine-tuned on proprietary data to perform a single task. But serving many fine-tuned LLMs in production can quickly add up to tens of thousands of dollars per month in cloud costs when each model requires its own dedicated GPU resources. LoRA Exchange (LoRAX) is an LLM inference system built for serving numerous fine-tuned LLMs using a shared set of GPU resources. With LoRAX, users can pack over 100 task-specific models onto a single GPU, reducing the cost of serving fine-tuned models by orders of magnitude compared with dedicated deployments. In this seminar, we'll explore the challenges of serving fine-tuned LLMs in production and the motivation behind building a system like LoRAX. We'll introduce parameter-efficient fine-tuning adapters like Low Rank Adaptation (LoRA), and show how LoRAX dynamically loads and exchanges different adapters at runtime, leveraging a tiered weight cache to speed up this exchange process. Additionally, we'll show how LoRAX achieves high throughput with continuous multi-adapter batching, allowing requests for different fine-tuned adapters to be batched together within a single decoding step.
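To make the two core ideas in the abstract concrete, below is a minimal NumPy sketch (not the LoRAX implementation): a LoRA adapter adds a small low-rank update B @ A on top of a frozen base weight W, so many adapters can share one base model, and requests that use different adapters can still share a single base-model matmul per decoding step, with only the tiny per-request adapter matmuls differing. All names, shapes, and the scaling factor here are illustrative assumptions.

    # Illustrative sketch of LoRA adapters and multi-adapter batching (not LoRAX code).
    import numpy as np

    d_model, rank = 16, 4                # hidden size and LoRA rank (assumed values)
    W = np.random.randn(d_model, d_model)  # frozen base weight, shared by all adapters

    # Two task-specific adapters: each is just a pair of small matrices (A, B).
    adapters = {
        "task_a": (np.random.randn(rank, d_model), np.random.randn(d_model, rank)),
        "task_b": (np.random.randn(rank, d_model), np.random.randn(d_model, rank)),
    }
    alpha = 8.0                          # LoRA scaling factor (assumed)

    def lora_forward(x_batch, adapter_ids):
        """x_batch: (batch, d_model); adapter_ids: one adapter name per request."""
        base_out = x_batch @ W.T         # one shared matmul for the whole batch
        out = base_out.copy()
        for i, name in enumerate(adapter_ids):
            A, B = adapters[name]        # apply each request's low-rank update
            out[i] += (alpha / rank) * (x_batch[i] @ A.T @ B.T)
        return out

    # Requests for different fine-tuned models batched into one decoding step.
    x = np.random.randn(3, d_model)
    y = lora_forward(x, ["task_a", "task_b", "task_a"])
    print(y.shape)  # (3, 16)

Because each adapter is only the small (A, B) pair rather than a full copy of the model, swapping adapters in and out of GPU memory (the role of LoRAX's tiered weight cache) is far cheaper than loading a separate fine-tuned model per task.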

Bio:
Travis Addair is co-founder and CTO of Predibase, the AI platform for engineers. Within the Linux Foundation, he serves as lead maintainer for the Horovod distributed deep learning framework and is a co-maintainer of the Ludwig automated deep learning framework. In the past, he led Uber's deep learning training team as part of the Michelangelo machine learning platform.

--

Stanford MLSys Seminar hosts: Simran Arora, Dan Fu

Twitter:
  @simran_s_arora
  @realdanfu

--

Check out our website for the schedule: http://mlsys.stanford.edu
Join our mailing list to get weekly updates: https://groups.google.com/forum/#!for...

#machinelearning #ai #artificialintelligence #systems #mlsys #computerscience #stanford
