Massive Scale Training and Inference: AT&T, RelationalAI & ScalarLM Break #1 on Spider with AMD GPUs

  • TensorWave
  • 2025-11-03
  • 232 views

Video description

Watch the complete expert panel featuring Greg Diamos (ScalarLM Architect), Molham Aref (CEO, RelationalAI), Farbod Tavakkoli (Data Scientist, AT&T), and Ilya Tabakh (VP Innovations, TensorWave) as they reveal how open-source AI is transforming enterprise decision intelligence.

📌 Chapters & Timestamps

00:00 – Welcome & Event Intro (TensorWave)
00:47 – Panel Format & Speaker Overview
01:00 – Intro: Greg Diamos, ScalarLM
02:00 – Intro: Molham Aref, RelationalAI
03:00 – Intro: Farbod Tavakkoli, AT&T

04:00 – What is ScalarLM? Origins & Open Source
05:02 – AMD MI300/MI325 Cluster & Kubernetes Deployment
06:10 – Distributed Training Challenges

07:00 – BI vs. Decision Intelligence (Enterprise Reality)
09:00 – Openness & Avoiding Vendor Lock-In
10:15 – Private Enterprise Data: The Next LLM Frontier

11:20 – Super Alignment: Structured Data → LLM Knowledge
14:00 – Scaling Laws in Enterprise Datasets
15:10 – #1 Result on Spider SQL Benchmark
16:20 – Benchmark Complexity Explained

17:00 – BIRD Benchmark: Better Than Human Performance
18:30 – Why Innovation Moves Above the Stack
19:40 – Open Source vs. Proprietary Frameworks

20:00 – Inside “Ask AT&T” GenAI Platform
21:00 – 9B Tokens/Day: Massive Internal Usage
22:10 – Fine-Tuning Saves Big: Cost & Performance Gains

23:00 – Network Event Classification via Logs → LLM
24:45 – 156 Fine-Tuning Experiments → Breakthrough Result
25:30 – AMD GPU Efficiency at Scale
26:30 – Optimizing Compute Pipeline Efficiency

27:00 – GSMA Global Telecom Model Collaboration
28:00 – Multilingual & Multimodal Roadmap (EN + Arabic)
29:00 – Call Analytics & Competitive Signal Detection

31:00 – Reflections: Impact of Open Ecosystems
33:00 – Openness Builds Trust & Adoption
34:00 – GPU Agnostic Deployment Momentum

35:00 – Q&A: Small Models vs. Large Models in Production
37:00 – Q&A: Closed-Loop Operations & Automation
42:00 – Why GSMA Work Remains Open Source

43:20 – Closing Remarks & Networking

🚀 Key Highlights:

• #1 on Spider SQL Benchmark – Super Alignment model beats GPT-5, Claude, and Grok using private enterprise data
• AT&T’s Ask AT&T Platform: 100K+ employees, 9B tokens/day, 910M API calls, 20% coding efficiency gain
• Fine-tuned 4B model beats 100B+ LLMs on telecom log classification – 90% cost savings
• GSMA Global Telecom AI Initiative – Multi-company effort to build open-source telecom foundation models (text → multilingual + vision by 2026)
• GPU-Agnostic Training on AMD MI300/MI325 via ScalarLM + Kubernetes + Helm – unified training & inference at scale
• Super Alignment Explained: Convert Snowflake relational data → LLM tokens while preserving privacy and semantics
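The "Super Alignment" bullet above describes converting relational data into LLM tokens. As a rough illustration only, the following minimal Python sketch serializes relational rows into text samples that could feed supervised fine-tuning; the table name, columns, and wording are hypothetical placeholders, not taken from the ScalarLM/RelationalAI pipeline shown in the talk.

# Hypothetical sketch: turning relational rows into text an LLM can train on.
# Table name, columns, and template are illustrative placeholders only.
rows = [
    {"customer_id": 101, "plan": "unlimited", "monthly_spend": 75.0},
    {"customer_id": 102, "plan": "prepaid", "monthly_spend": 30.0},
]

def row_to_text(table_name, row):
    """Render one row as a natural-language fact the model can ingest."""
    facts = ", ".join(f"{col} = {val}" for col, val in row.items())
    return f"In table {table_name}: {facts}."

# Each rendered string becomes one fine-tuning sample.
samples = [row_to_text("subscriptions", r) for r in rows]
for s in samples:
    print(s)

A real pipeline would also need de-identification and schema-aware templates to preserve the privacy and semantics mentioned above; this sketch shows only the basic row-to-text step.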

🛠 Tech Stack in Action:

• ScalarLM (open-source): Megatron-Core + Hugging Face + vLLM (see the inference sketch after this list)
• AMD GPU Clusters (TensorWave)
• RelationalAI + Snowflake for decision intelligence
• Open standards: Iceberg, Delta, and other open table formats – no vendor lock-in
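The stack above pairs ScalarLM with vLLM for inference. Below is a minimal, hypothetical vLLM serving sketch; the model checkpoint and prompt are placeholders, not the fine-tuned models discussed in the panel.

# Hypothetical sketch: offline inference with vLLM against a placeholder checkpoint.
from vllm import LLM, SamplingParams

llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")  # placeholder model name
params = SamplingParams(temperature=0.2, max_tokens=128)

prompts = ["Classify this network log line: 'eth0 link down, carrier lost'."]
outputs = llm.generate(prompts, params)

for out in outputs:
    print(out.outputs[0].text)

The same API runs on the ROCm build of vLLM for AMD GPUs; the Kubernetes + Helm deployment mentioned in the highlights is not shown here.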

🎯 Who Should Watch:

• AI Engineers training on private enterprise data
• Data scientists using Snowflake / BigQuery
• CTOs building GenAI platforms at scale
• Open-source AI advocates
• Telecom & enterprise AI leaders

🌊 About TensorWave

TensorWave is the AI neocloud purpose-built for performance. Powered exclusively by AMD Instinct™ Series GPUs, we deliver high-bandwidth, memory-optimized infrastructure that scales with your most demanding models—training or inference.

Ready to get started? Connect with a Sales Engineer @ tensorwave.com/connect
