Transformers Meet Directed Graphs | Simon Geisler


Valence Labs is a research engine within Recursion committed to advancing the frontier of AI in drug discovery. Learn more about our open roles: https://www.valencelabs.com/careers

Join the Learning on Graphs and Geometry Reading Group on Slack: https://join.slack.com/t/logag/shared...

Abstract: Transformers were originally proposed as a sequence-to-sequence model for text but have become vital for a wide range of modalities, including images, audio, video, and undirected graphs. However, transformers for directed graphs are a surprisingly underexplored topic, despite their applicability to ubiquitous domains, including source code and logic circuits. In this work, we propose two direction- and structure-aware positional encodings for directed graphs: (1) the eigenvectors of the Magnetic Laplacian - a direction-aware generalization of the combinatorial Laplacian; (2) directional random walk encodings. Empirically, we show that the extra directionality information is useful in various downstream tasks, including correctness testing of sorting networks and source code understanding. Together with a data-flow-centric graph construction, our model outperforms the prior state of the art on the Open Graph Benchmark Code2 by a relative 14.7%.
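For context on the first encoding the abstract mentions: the Magnetic Laplacian replaces the combinatorial Laplacian's symmetric adjacency with a complex-phased one, so edge direction survives symmetrization as a phase. A minimal NumPy sketch (the function name and the example graph are illustrative, not from the talk):

```python
import numpy as np

def magnetic_laplacian(A, q=0.25):
    """Magnetic Laplacian of a directed graph.

    A: dense 0/1 adjacency matrix, A[u, v] = 1 for edge u -> v.
    q: potential; q = 0 recovers the combinatorial Laplacian
       of the symmetrized graph.
    """
    A_sym = np.clip(A + A.T, 0, 1)              # symmetrized adjacency
    D_sym = np.diag(A_sym.sum(axis=1))          # symmetrized degree matrix
    theta = 2 * np.pi * q * (A - A.T)           # antisymmetric phase encodes direction
    # Hermitian and positive semi-definite by construction
    return D_sym - A_sym * np.exp(1j * theta)

# Eigenvectors of this Hermitian matrix can serve as
# direction-aware positional encodings for a transformer.
A = np.array([[0, 1, 0],
              [0, 0, 1],
              [0, 0, 0]], dtype=float)          # path graph 0 -> 1 -> 2
L = magnetic_laplacian(A, q=0.25)
eigvals, eigvecs = np.linalg.eigh(L)            # real eigenvalues, complex eigenvectors
```

Because the phase matrix is antisymmetric, the result is Hermitian, so its eigenvalues are real and its eigenvectors form an orthonormal basis despite the directed input.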

Speaker: Simon Geisler -   / simon-geisler-ai  

Twitter Hannes:   / hannesstaerk  
Twitter Dominique:   / dom_beaini  

~

Chapters

00:00 - Intro
03:18 - How do Language Models Encode Code
05:56 - Sinusoidal Encodings
08:58 - Signal Processing: DFT
13:41 - Graph Fourier Basis
22:04 - Magnetic Laplacian
28:58 - Harmonics for Directed Graphs
31:13 - Ambiguity of Eigenvectors
40:59 - Architecture
45:03 - Distance Prediction
53:11 - Correctness Prediction of Sorting Networks
57:05 - Open Graph Benchmark Code2
01:01:01 - Summary
01:02:31 - Q+A
