Pure Transformers are Powerful Graph Learners | Jinwoo Kim

Join the Learning on Graphs and Geometry Reading Group: https://hannes-stark.com/logag-readin...

Paper “Pure Transformers are Powerful Graph Learners”: https://arxiv.org/abs/2207.02505

Abstract: We show that standard Transformers without graph-specific modifications can lead to promising results in graph learning both in theory and practice. Given a graph, we simply treat all nodes and edges as independent tokens, augment them with token embeddings, and feed them to a Transformer. With an appropriate choice of token embeddings, we prove that this approach is theoretically at least as expressive as an invariant graph network (2-IGN) composed of equivariant linear layers, which is already more expressive than all message-passing Graph Neural Networks (GNN). When trained on a large-scale graph dataset (PCQM4Mv2), our method coined Tokenized Graph Transformer (TokenGT) achieves significantly better results compared to GNN baselines and competitive results compared to Transformer variants with sophisticated graph-specific inductive bias.
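To make the tokenization concrete, below is a minimal PyTorch sketch of the idea described in the abstract. This is not the authors' implementation; the class name, dimensions, and the use of an off-the-shelf nn.TransformerEncoder are illustrative assumptions. The token construction itself (node and edge features concatenated with a pair of node identifiers, plus a learned type embedding) follows the abstract: node tokens repeat their own identifier, edge tokens carry the identifiers of their two endpoints.

import torch
import torch.nn as nn

class TokenGTSketch(nn.Module):
    """Hypothetical sketch of TokenGT-style graph tokenization.
    Each node and each edge becomes one token; a standard Transformer
    encoder processes the resulting sequence with no graph-specific
    attention modifications."""

    def __init__(self, feat_dim, id_dim, d_model=128, nhead=8, num_layers=4):
        super().__init__()
        # project [features ; P_u ; P_v] into the model dimension
        self.proj = nn.Linear(feat_dim + 2 * id_dim, d_model)
        # learned type embeddings: 0 = node token, 1 = edge token
        self.type_emb = nn.Embedding(2, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)

    def forward(self, node_feat, edge_feat, edge_index, node_ids):
        # node_feat: (n, feat_dim), edge_feat: (m, feat_dim)
        # edge_index: (2, m) endpoint indices for each edge
        # node_ids: (n, id_dim) node identifiers, e.g. orthonormal random
        # features or Laplacian eigenvectors as described in the paper
        node_tok = torch.cat([node_feat, node_ids, node_ids], dim=-1)
        src, dst = edge_index
        edge_tok = torch.cat([edge_feat, node_ids[src], node_ids[dst]], dim=-1)
        tokens = self.proj(torch.cat([node_tok, edge_tok], dim=0))
        types = torch.cat([torch.zeros(len(node_feat), dtype=torch.long),
                           torch.ones(len(edge_feat), dtype=torch.long)])
        tokens = tokens + self.type_emb(types)
        return self.encoder(tokens.unsqueeze(0))  # (1, n + m, d_model)

The paper additionally prepends a dedicated [graph] token for graph-level prediction; that detail is omitted here for brevity.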

Authors: Jinwoo Kim, Tien Dat Nguyen, Seonwoo Min, Sungjun Cho, Moontae Lee, Honglak Lee, Seunghoon Hong

Twitter Hannes: https://twitter.com/hannesstaerk
Twitter Dominique: https://twitter.com/dom_beaini
Twitter Valence Discovery: https://twitter.com/valence_ai

Reading Group Slack: https://join.slack.com/t/logag/shared...

~

Chapters

00:00 - Intro
01:15 - Key Takeaway: Tokenized Graph Transformers (TokenGT)
11:44 - Transformers for Graphs
18:07 - Method: Tokenizing a Graph
25:52 - How Does TokenGT Work?
33:05 - Theory Overview + Discussion
50:01 - Background Info: k-IGN
01:12:52 - Approximating k-IGN
01:18:16 - Experimental Results
01:30:09 - Self-Attention Distance Visualization
01:31:09 - Conclusion and Future Work
01:35:57 - Q+A
