Learn how to use OpenAI's Chat Completion API | Temperature, Tokens, Top P, Presence Penalty

In this video I'll teach you how to use OpenAI's Chat Completion API. We'll explore parameters like temperature, top_p, presence_penalty, stop, and max_completion_tokens. We'll also look at how tokens work in LLMs.
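As a quick sketch of what a request with these parameters looks like, here are the call arguments built as a plain dict (the model name and prompt are placeholders; the parameter names match the Chat Completions API, and the commented-out call assumes an `OPENAI_API_KEY` in your environment):

```python
# Request parameters for a Chat Completion call, built as a dict so the
# sketch runs without an API key. Model and messages are illustrative.
params = dict(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain tokens in one sentence."},
    ],
    temperature=0.7,          # randomness of sampling
    top_p=0.9,                # nucleus sampling cutoff
    presence_penalty=0.5,     # discourage repeating topics already mentioned
    stop=["\n\n"],            # stop generating at a blank line
    max_completion_tokens=100,  # cap on generated tokens
)

# With the OpenAI Python SDK installed and OPENAI_API_KEY set:
# from openai import OpenAI
# client = OpenAI()
# response = client.chat.completions.create(**params)
# print(response.choices[0].message.content)
```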

Timestamps
00:00 - Introduction
00:18 - Create OpenAI's API Key
00:42 - Setup OpenAI Python SDK
02:19 - Chat Completion Request format
03:42 - Chat Completion Response format
06:42 - How LLMs work
08:36 - logprobs parameter
10:11 - temperature parameter
12:45 - top_logprobs parameter
16:30 - top_p parameter
19:30 - Message Roles
27:02 - presence_penalty parameter
29:45 - stop parameter
31:15 - Context window and Max output tokens
32:53 - max_completion_tokens parameter
34:28 - Outro
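The temperature and top_p sections above can be illustrated with a small pure-Python sketch over toy logits (the four logit values are made up for illustration; real models produce one logit per vocabulary token):

```python
import math

def softmax_with_temperature(logits, temperature):
    """Scale logits by 1/temperature, then softmax. Lower temperature
    sharpens the distribution; higher temperature flattens it."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def top_p_filter(probs, top_p):
    """Keep the smallest set of tokens whose cumulative probability
    reaches top_p, then renormalize the kept probabilities."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cumulative = [], 0.0
    for i in order:
        kept.append(i)
        cumulative += probs[i]
        if cumulative >= top_p:
            break
    mass = sum(probs[i] for i in kept)
    return {i: probs[i] / mass for i in kept}

# Toy logits for four candidate next tokens
logits = [2.0, 1.0, 0.5, 0.1]
sharp = softmax_with_temperature(logits, 0.5)  # low T: peakier
flat = softmax_with_temperature(logits, 2.0)   # high T: flatter
nucleus = top_p_filter(softmax_with_temperature(logits, 1.0), 0.9)
```

With these logits, the top token's probability is higher at temperature 0.5 than at 2.0, and the top_p=0.9 nucleus keeps only the three most likely tokens.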
