Type-checking Graph Neural Networks (Dr. Petar Veličković)
LOGML Summer School 2022
Talk Title: Type-checking Graph Neural Networks
Abstract: Recent advances in neural algorithmic reasoning with graph neural networks (GNNs) are propped up by the notion of algorithmic alignment. Broadly, a neural network will be better at learning to execute a reasoning task (in terms of sample complexity) if its individual components align well with the target algorithm. Specifically, GNNs are claimed to align with dynamic programming (DP), a general problem-solving strategy which expresses many polynomial-time algorithms. However, has this alignment truly been demonstrated and theoretically quantified? Here we show, using methods from category theory and abstract algebra, that there exists an intricate connection between GNNs and DP, going well beyond the initial observations over individual algorithms such as Bellman-Ford. Exposing this connection, we easily verify several prior findings in the literature. But more generally, our approach allowed us to immediately detect a misalignment of previous proposals for using GNN architectures for edge-centric tasks, and propose a fixed implementation, in a manner not at all unlike type checking. Our proposal demonstrates strong empirical results on the CLRS algorithmic reasoning benchmark, and we hope our exposition will serve as a foundation for building stronger algorithmically aligned GNNs in the future.
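The abstract's central example, the alignment between GNN message passing and the Bellman-Ford algorithm, can be made concrete in a few lines. The sketch below is illustrative only (it is not the talk's implementation): one Bellman-Ford relaxation round has exactly the shape of a GNN layer in which each edge (u, v, w) sends the message dist[u] + w and each node aggregates incoming messages with min.

```python
# Illustrative sketch: Bellman-Ford as synchronous "message passing".
# One relaxation round = one GNN layer with a min aggregator.

INF = float("inf")

def bellman_ford_step(dist, edges):
    """One message-passing round: for each directed edge (u, v, w), node v
    receives the message dist[u] + w and aggregates with min. Reading from
    the old dist (synchronous update) mirrors a single GNN layer."""
    new_dist = dict(dist)
    for u, v, w in edges:
        if dist[u] + w < new_dist[v]:
            new_dist[v] = dist[u] + w
    return new_dist

def bellman_ford(source, nodes, edges):
    """Single-source shortest paths via |V| - 1 message-passing rounds."""
    dist = {v: INF for v in nodes}
    dist[source] = 0.0
    for _ in range(len(nodes) - 1):  # |V| - 1 rounds suffice (no neg. cycles)
        dist = bellman_ford_step(dist, edges)
    return dist

# Usage: shortest paths from node 0 on a small directed graph.
edges = [(0, 1, 4.0), (0, 2, 1.0), (2, 1, 2.0), (1, 3, 1.0)]
print(bellman_ford(0, [0, 1, 2, 3], edges))  # {0: 0.0, 1: 3.0, 2: 1.0, 3: 4.0}
```

The (min, +) pair here is what the abstract's algebraic view generalises: swapping in a different aggregator and message function yields other dynamic-programming updates of the same shape.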
Speaker Bio: Petar Veličković is a Senior Research Scientist at DeepMind. He holds a PhD in Computer Science from the University of Cambridge (Trinity College), obtained under the supervision of Pietro Liò. His research interests involve devising neural network architectures that operate on nontrivially structured data (such as graphs), and their applications in algorithmic reasoning and computational biology. He has published relevant research in these areas at both machine learning venues (NeurIPS, ICLR, ICML-W) and biomedical venues and journals (Bioinformatics, PLOS One, JCB, PervasiveHealth). In particular, he is the first author of Graph Attention Networks—a popular convolutional layer for graphs—and Deep Graph Infomax—a scalable local/global unsupervised learning pipeline for graphs (featured in ZDNet). Further, his research has been used to substantially improve travel-time predictions in Google Maps (covered by outlets including CNBC, Engadget, VentureBeat, CNET, The Verge and ZDNet).
Video: Type-checking Graph Neural Networks (Dr. Petar Veličković), from the LOGML Summer School channel
Other videos from the channel:
- LOGML 2021: Intro session + Michael Bronstein: Geometric Deep Learning: from Euclid to Drug Design
- LOGML - Niloy Mitra: Deep 3D Generative Modeling
- LOGML - Maks Ovsjanikov: Robust learning-based methods for shape correspondence
- Keynote: Computational Topology and Applications to Biological Data (Prof. Heather Harrington)
- LOGML + WiC Tutorial: Getting started on graph ML
- LOGML - Marinka Zitnik: Graph Representation Learning for Biomedical Discovery
- LOGML - Martin Rumpf
- LOGML - Sanja Fidler
- LOGML 2022 IS COMING!
- LOGML - Gabriele Steidl
- Tutorial: PyTorch Geometric (Jianxuan You, Rex Ying)
- Tutorial: Topological Data Analysis (Katherine Benjamin)
- LOGML - Smita Krishnaswamy
- Fisher Information Geometry of Beta and Dirichlet Distributions (Dr. Alice Le Brigant)
- From Equivariance to Naturality (Dr. Taco Cohen)
- LOGML - Michael Betancourt: Unravelling A Geometric Conspiracy
- LOGML - Thomas Kipf: Relational Structure Discovery
- Universes as Big-Data: Physics, Geometry & AI (Prof. Yang-Hui He)