【ERC Coffee House Tech Talk Series】Parallel and Distributed Graph Neural Networks: An In-Depth Concurrency Analysis [Maciej Besta]

Date

Thursday 9 June @ 09:30 – 10:30 (UK time)

Presenter

Maciej Besta

Affiliation

ETH Zurich

Location

[Online]
Meeting link: https://welink.zhumu.com/j/158022218


Abstract

Graph neural networks (GNNs) are among the most powerful tools in deep learning. They routinely solve complex problems on unstructured networks, such as node classification, graph classification, or link prediction, with high accuracy. However, both inference and training of GNNs are complex, and they uniquely combine the features of irregular graph processing with dense and regular computations. This complexity makes it very challenging to execute GNNs efficiently on modern massively parallel architectures. To alleviate this, we first design a taxonomy of parallelism in GNNs, considering data and model parallelism as well as different forms of pipelining. We then use this taxonomy to investigate the amount of parallelism in numerous GNN models, GNN-driven machine learning tasks, software frameworks, and hardware accelerators. We use the work-depth model, and we also assess communication volume and synchronization. We specifically focus on the sparsity/density of the associated tensors in order to understand how to effectively apply techniques such as vectorization. We also formally analyze GNN pipelining, and we generalize the established Message-Passing class of GNN models to cover arbitrary pipeline depths, facilitating future optimizations. Finally, we investigate different forms of asynchronicity, charting the path toward future asynchronous parallel GNN pipelines. The outcomes of our analysis are synthesized in a set of insights that help to maximize GNN performance, together with a comprehensive list of challenges and opportunities for further research into efficient GNN computations. Our work will help to advance the design of future GNNs.
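For context, the message-passing formulation referenced above can be illustrated with a minimal GCN-style layer, H' = ReLU(Â H W), where the sparse product Â H is the irregular graph aggregation and the subsequent dense product with W is the regular, vectorizable feature transformation. The sketch below is an illustrative Python/NumPy example prepared for this post, not code from the speaker or the paper; the toy graph, dimensions, and function names are assumptions.

# Minimal sketch of one message-passing (GCN-style) layer: H' = ReLU(A_hat @ H @ W).
# Illustrative only; not the authors' implementation.
import numpy as np
import scipy.sparse as sp

def message_passing_layer(A_hat, H, W):
    # Sparse-dense product: aggregate neighbor features (irregular, graph-dependent).
    M = A_hat @ H
    # Dense-dense product: transform aggregated features (regular, vectorizable).
    Z = M @ W
    # Pointwise nonlinearity.
    return np.maximum(Z, 0.0)

# Tiny undirected example graph: 4 nodes, edges 0-1, 1-2, 2-3.
edges = np.array([[0, 1], [1, 2], [2, 3]])
rows = np.concatenate([edges[:, 0], edges[:, 1]])
cols = np.concatenate([edges[:, 1], edges[:, 0]])
A = sp.coo_matrix((np.ones(len(rows)), (rows, cols)), shape=(4, 4)).tocsr()

# Symmetric normalization A_hat = D^{-1/2} (A + I) D^{-1/2}, as in GCN-style layers.
A_loop = A + sp.eye(4)
deg = np.asarray(A_loop.sum(axis=1)).ravel()
D_inv_sqrt = sp.diags(1.0 / np.sqrt(deg))
A_hat = D_inv_sqrt @ A_loop @ D_inv_sqrt

rng = np.random.default_rng(0)
H = rng.standard_normal((4, 8))   # node features
W = rng.standard_normal((8, 16))  # layer weights
print(message_passing_layer(A_hat, H, W).shape)  # (4, 16)

The split visible here, between a sparse aggregation over the graph and a dense transformation of features, is exactly where the different parallelization and vectorization strategies discussed in the talk apply.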

Short Bio

Maciej works on high-performance irregular computations at SPCL at ETH Zurich. He received his PhD from ETH Zurich in 2021 and has published over 40 peer-reviewed conference and journal articles. His awards include the competition for the Best Student of Poland (2012), the first Google Fellowship in Parallel Computing (2013), the ACM/IEEE-CS High-Performance Computing Fellowship (2015), the ETH Medal for an outstanding doctoral thesis (2021), the IEEE TCSC Outstanding PhD Dissertation Award (2021), and the SPEC Kaivalya Dixit Distinguished Dissertation Award (2022). He received Best Paper and Best Student Paper awards at ACM/IEEE Supercomputing 2013, 2014, and 2019 and at ACM HPDC 2015 and 2016, was selected for ACM Research Highlights 2018, and has earned several further best paper nominations and top paper picks. He is also a Fellow of The Explorers Club (2022).


Everyone is welcome!
