04-03, 11:10–12:30 (CET), Rotterdam hall 1A
Session Chair: Gennady Pekhimenko (Toronto/CentML)
GraphPipe: Improving Performance and Scalability of DNN Training with Graph Pipeline Parallelism
Byungsoo Jeon (NVIDIA), Mengdi Wu (Carnegie Mellon University), Shiyi Cao (UC Berkeley), Sunghyun Kim (Massachusetts Institute of Technology), Sunghyun Park (NVIDIA), Neeraj Aggarwal (Carnegie Mellon University), Colin Unger (Stanford University), Daiyaan Arfeen (Carnegie Mellon University), Peiyuan Liao (Carnegie Mellon University), Xupeng Miao (Carnegie Mellon University), Mohammad Alizadeh (Massachusetts Institute of Technology), Gregory R. Ganger (Carnegie Mellon University), Tianqi Chen (Carnegie Mellon University), Zhihao Jia (Carnegie Mellon University)
Paper
Cascade: A Dependency-aware Efficient Training Framework for Temporal Graph Neural Network
Yue Dai (Department of Computer Science, University of Pittsburgh), Xulong Tang (Department of Computer Science, University of Pittsburgh), Youtao Zhang (Department of Computer Science, University of Pittsburgh)
Paper
Frugal: Efficient and Economic Embedding Model Training with Commodity GPUs
Minhui Xie (Tsinghua University, Renmin University of China), Shaoxun Zeng (Tsinghua University), Hao Guo (Tsinghua University), Shiwei Gao (Tsinghua University), Youyou Lu (Tsinghua University)
Paper
FastGL: A GPU-Efficient Framework for Accelerating Sampling-Based GNN Training at Large Scale
Zeyu Zhu (Institute of Automation, Chinese Academy of Sciences, School of Future Technology, University of Chinese Academy of Sciences), Peisong Wang (Institute of Automation, Chinese Academy of Sciences), Qinghao Hu (Institute of Automation, Chinese Academy of Sciences, AiRiA), Gang Li (Shanghai Jiao Tong University), Xiaoyao Liang (Shanghai Jiao Tong University), Jian Cheng (Institute of Automation, Chinese Academy of Sciences, AiRiA)
Paper