M.Sc Student: Samuel Asaf
A Scalable Application Aware Routing Algorithm
Department: Electrical Engineering
Supervisor: Professor Isaac Keslassy
High Performance Computing (HPC) systems are built from thousands, and up to hundreds of thousands, of physical hosts connected in a network topology to deliver extreme computing performance.
The rapid expansion of the cloud and of machine-learning-based parallel programs has led to wider use of HPC systems. Moreover, as Moore's Law keeps slowing down, HPC applications naturally become increasingly parallel and involve an ever larger number of hosts. As a result, the network plays a key role in HPC system efficiency. Unfortunately, traditional oblivious and congestion-aware HPC routing solutions are not aware of the applications' demands, and therefore cannot deal with sudden HPC traffic bursts and their resulting congestion peaks. The problem gets even worse when applications use barriers to synchronize.
In this thesis, we address the problem of barrier-based applications. We investigate intra-application and inter-application contention, and find that decoupling them and treating each differently yields a significant performance improvement.
We further introduce Routing Keys, a scalable routing paradigm for HPC networks that decouples intra- and inter-application flow contention. Our Application Routing Key (ARK) algorithm proactively allows each self-aware application instance to mark its flows according to a predetermined routing key, i.e., to obtain contention-free routing for its own intra-application flows. In addition, in our Network Routing Key (NRK) algorithm, a centralized scheduler chooses at run-time among several routing solutions for each application instance, reducing inter-application contention while maintaining contention-free intra-application routing and avoiding scalability issues. Using extensive evaluations, we show that both ARK and NRK significantly improve communication runtime, by up to 2.7x.
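The abstract does not give implementation details, so the following is only a hypothetical Python sketch of the two-level idea it describes. It assumes a network with a fixed set of equal-cost paths, where a flow's path is chosen by a keyed hash of (source, destination, routing key); the function names (`ark_keys`, `nrk_schedule`) and all mechanics are illustrative assumptions, not the thesis algorithms.

```python
# Illustrative sketch of the Routing Keys idea; all names and mechanics
# are assumptions for illustration, not the thesis implementation.
import itertools

NUM_PATHS = 8  # assumed number of equal-cost upstream paths


def path_of(flow, key):
    """Map a flow to one equal-cost path as a function of the
    application's routing key (assumption: a simple keyed hash)."""
    src, dst = flow
    return hash((src, dst, key)) % NUM_PATHS


def is_intra_contention_free(flows, key):
    """ARK-style check: do all of this application's flows land on
    distinct paths under this key?"""
    paths = [path_of(f, key) for f in flows]
    return len(set(paths)) == len(paths)


def ark_keys(flows, num_candidates=3, search_space=1000):
    """Proactive per-application step: collect a few routing keys that
    are contention-free for the application's own flows."""
    keys = []
    for key in range(search_space):
        if is_intra_contention_free(flows, key):
            keys.append(key)
            if len(keys) == num_candidates:
                break
    return keys


def nrk_schedule(apps_candidate_keys, apps_flows):
    """NRK-style centralized step: pick one candidate key per application
    so that the worst per-path load (inter-application contention) is
    minimized, while each application keeps a contention-free key."""
    best_choice, best_load = None, None
    for choice in itertools.product(*apps_candidate_keys):
        load = [0] * NUM_PATHS
        for key, flows in zip(choice, apps_flows):
            for f in flows:
                load[path_of(f, key)] += 1
        worst = max(load)
        if best_load is None or worst < best_load:
            best_choice, best_load = choice, worst
    return best_choice, best_load
```

In this toy model, ARK runs independently per application (scalable, no global state), while NRK only searches over a handful of precomputed candidate keys per application rather than over raw per-flow routes, which is one plausible reading of how the paradigm avoids scalability issues.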