Recent Advancements in High-Performance Graph Computing: From Parallel Processing to Neural Networks
Abstract
Graph computing has emerged as a fundamental component of high-performance computing and data science, enabling efficient analysis of complex relationships across domains such as social networks, bioinformatics, and cybersecurity. This survey presents a comprehensive review of recent advancements in parallel and distributed graph processing, GPU-accelerated techniques, and dynamic graph maintenance. Additionally, we conduct an empirical performance evaluation based on reported benchmarks, comparing execution times and scalability trends across GPU, shared-memory, and distributed frameworks.
Our findings demonstrate that GPU-based frameworks such as Gunrock and cuGraph achieve significant speedups for traversal-based algorithms, while distributed systems like PowerGraph scale better to large graphs but incur higher communication overhead. We analyze hybrid partitioning strategies, which outperform traditional edge-cut and vertex-cut approaches by reducing inter-node communication by up to 40%. Furthermore, we provide an in-depth examination of Graph Neural Networks (GNNs), covering parallel training strategies, model scalability, and optimizations for irregular data structures. Our comparison of distributed GNN frameworks reveals that asynchronous training methods achieve up to a 3.5x speedup over synchronous approaches on large-scale graphs.
Despite these advancements, several challenges remain, including efficient handling of streaming graph updates, minimizing communication bottlenecks in large-scale systems, and developing privacy-preserving techniques for GNNs. By synthesizing state-of-the-art methodologies, empirical performance insights, and open research directions, this survey aims to guide future innovations in high-performance graph analytics and scalable machine learning on graphs.