Apple strikes massive deal with OpenAI (& much more) | Trends in AI - June 2024

In this month’s edition, we discuss the impact of GPT-4o while we patiently wait for GPT-5. OpenAI is on a roll with the new Apple deal but remains embroiled in drama. We report on the latest AI essentials from Google I/O and Microsoft Build. A brand-new version of the Sentence Transformers library is out, and a new embedding model from NVIDIA is topping the leaderboards. Mega funding rounds for xAI, Wayve, Suno.ai, and the new European champion H. Plus a roundup of the hottest R&D papers, including xLSTM, AlphaFold 3, Chameleon, Zipper, Contextual Position Encoding, Symbolic Chain-of-Thought, and much, much more.

Dissecting the current Trends in AI: News, R&D breakthroughs, trending papers and code, and the latest gossip. Live talk show from LAB42 with the Zeta Alpha crew, and online on Zoom.

Dive deeper into the papers we covered in this episode: https://search.zeta-alpha.com/tags/81190

Sign up for the series and catch us live at the next edition! https://us06web.zoom.us/webinar/regis...

Timestamps:
0:00 Intro by Jakub Zavrel and Dinos Papakostas
1:15 NVIDIA hits a $3tn valuation
1:58 Kling, China's answer to Sora
2:46 Copilot+ PCs, the end of the Wintel era
4:32 AI startup funding news
8:12 Apple's deal with OpenAI, and Ilya's departure
10:53 Google I/O highlights
12:52 GPT-4o: OpenAI goes all in with multimodality
14:37 AI model releases
16:41 LLM arenas and leaderboards
18:30 Embedding model releases
19:45 AI projects shoutouts
21:53 Progress in robotics and self-driving cars
23:05 Zeta Alpha: Neural Discovery Platform
24:04 Top-10 trending papers of the month
25:39 Accurate structure prediction of biomolecular interactions with AlphaFold 3
26:57 NV-Embed: Improved Techniques for Training LLMs as Generalist Embedding Models
29:34 The Platonic Representation Hypothesis
31:42 gzip Predicts Data-dependent Scaling Laws
34:11 Does Fine-Tuning LLMs on New Knowledge Encourage Hallucinations?
36:22 Faithful Logical Reasoning via Symbolic Chain-of-Thought
37:57 Chameleon: Mixed-Modal Early-Fusion Foundation Models
39:17 Contextual Position Encoding: Learning to Count What's Important
39:45 xLSTM: Extended Long Short-Term Memory
41:19 What's next & outro
