PostHog-LLM: Finding toxic users | Cohorts & Actions - Part 2

In this video we create a simple cohort of toxic users from the WildChat-1M conversational dataset, then build insights on the newly created cohort: which dialogue domains are these toxic users most associated with? This is the second and final video of the series.

Project: https://minuva.com/
PostHog-LLM GitHub Page: https://github.com/postlang/posthog-llm
WildChat-1M: https://huggingface.co/datasets/allen...
Domain classifier: https://huggingface.co/nvidia/domain-...
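The cohort-and-insight workflow shown in the video can be sketched in plain Python. This is only an illustration, not PostHog-LLM's actual implementation: the record fields (`user_id`, `domain`, `is_toxic`) are assumptions standing in for LLM events enriched by the toxicity and domain classifiers linked above. The "toxic users" cohort is the set of users with at least one toxic message, and the domain insight is a frequency count over that cohort's messages.

```python
from collections import Counter

# Hypothetical per-message records: (user_id, domain, is_toxic).
# In PostHog-LLM these would come from captured LLM events enriched
# by toxicity and domain classifiers; field names here are assumptions.
messages = [
    ("u1", "Games", True),
    ("u1", "Games", False),
    ("u2", "Finance", False),
    ("u3", "Games", True),
    ("u3", "Health", True),
]

# Cohort: users who sent at least one toxic message.
toxic_cohort = {user for user, _, toxic in messages if toxic}

# Insight: which dialogue domains the cohort's messages fall into.
domain_counts = Counter(
    domain for user, domain, _ in messages if user in toxic_cohort
)

print(sorted(toxic_cohort))         # users in the toxic cohort
print(domain_counts.most_common())  # domains ranked by frequency
```

In PostHog-LLM itself, the cohort is defined through the UI (Cohorts) and the domain breakdown through an insight filtered to that cohort; the snippet only mirrors the underlying set-membership and counting logic.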

#llm #analytics #chatgpt #llama3 #monitoring #mistral #ollama #conversationalai #coding #datainsights #llmops #trends #timeseriesanalysis #classification #domain #toxic #toxicity #cohort #insights

Chatbots chatbot analytics llm analysis llama3 llm monitoring llm observability Conversational Analytics user analytics llm domain conversations toxicity toxic users cohort posthog
