Understanding LLM Inference | NVIDIA Experts Deconstruct How AI Works


In the last eighteen months, large language models (LLMs) have become commonplace. For many people, simply being able to use AI chat tools is enough, but for data and AI practitioners, it is helpful to understand how they work.

In this session, you'll learn how large language models generate words. Two experts from NVIDIA will present the core concepts of how LLMs work, and then you'll see how large-scale LLMs are developed. You'll also see how changes in model parameters and settings affect the output.
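To make the second point concrete, here is a minimal sketch (not NVIDIA's implementation, and not code from the session) of how a sampling setting such as temperature reshapes the model's next-token scores before one token is drawn. The vocabulary and logit values are made up purely for illustration.

```python
# Minimal sketch of next-token sampling with a temperature setting.
# The vocabulary and logits are hypothetical; a real LLM produces logits
# over tens of thousands of tokens at every generation step.
import math
import random

def sample_next_token(logits, temperature=1.0, seed=None):
    """Turn raw model scores (logits) into probabilities and draw one token index."""
    rng = random.Random(seed)
    # Lower temperature sharpens the distribution (more deterministic output);
    # higher temperature flattens it (more varied, sometimes less coherent output).
    scaled = [l / temperature for l in logits]
    max_l = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(l - max_l) for l in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    idx = rng.choices(range(len(probs)), weights=probs, k=1)[0]
    return idx, probs

# Hypothetical next-token candidates for the prompt "The cat sat on the":
vocab = ["mat", "roof", "keyboard", "moon"]
logits = [4.0, 2.5, 1.5, 0.1]

for t in (0.2, 1.0, 2.0):
    idx, probs = sample_next_token(logits, temperature=t, seed=0)
    print(f"temperature={t}: probs={[round(p, 2) for p in probs]} -> '{vocab[idx]}'")
```

In actual LLM inference this sampling step repeats in a loop: each chosen token is appended to the context and the model scores the vocabulary again, which is what autoregressive generation means.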

Key Takeaways:
- Learn how large language models generate text.
- Understand how changing model settings affects output.
- Learn how to choose the right LLM for your use cases.

Resources: https://bit.ly/3UrPMea
