|Ph.D. Student||Mendelson Gal|
|Subject||Load Balancing in Communication Networks: Asymptotic Performance and Theoretical Guarantees|
|Department||Department of Electrical Engineering|
|Supervisors||Professor Rami Atar, Professor Isaac Keslassy|
One of the main models we consider in this thesis is the basic load balancing model, in which there are K servers, each with its own buffer, and a single load balancer. Jobs arrive at the load balancer and must immediately be routed to the servers for processing. We study this model under different modelling assumptions and desired performance-cost trade-offs. We analyze existing load balancing algorithms and design new ones, and we provide probabilistic analysis and simulation results to shed light on their performance. Our work can be divided into three areas:
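As a concrete illustration (not code from the thesis), the basic model can be sketched as a small discrete-event simulation. The exponential interarrival/service times and the join-the-shortest-queue routing rule below are our assumptions, chosen only to make the sketch runnable:

```python
import heapq
import random

def simulate(K, arrival_rate, service_rate, num_jobs, route, seed=0):
    """Discrete-event sketch of K parallel FCFS single-server queues
    fed by one load balancer. `route(queues)` returns the index of the
    server an arriving job joins. Returns the time-averaged total
    number of jobs in the system."""
    rng = random.Random(seed)
    queues = [0] * K                  # jobs at each server (in service + buffered)
    t = last = area = 0.0
    seq = 0                           # tie-breaker for simultaneous events
    events = [(rng.expovariate(arrival_rate), seq, "arrival", None)]
    arrivals_left = num_jobs - 1
    while events:
        t, _, kind, k = heapq.heappop(events)
        area += sum(queues) * (t - last)
        last = t
        if kind == "arrival":
            k = route(queues)         # the load balancer routes immediately
            queues[k] += 1
            if queues[k] == 1:        # server was idle: job enters service now
                seq += 1
                heapq.heappush(events, (t + rng.expovariate(service_rate), seq, "departure", k))
            if arrivals_left > 0:
                arrivals_left -= 1
                seq += 1
                heapq.heappush(events, (t + rng.expovariate(arrival_rate), seq, "arrival", None))
        else:                         # departure
            queues[k] -= 1
            if queues[k] > 0:         # next buffered job enters service
                seq += 1
                heapq.heappush(events, (t + rng.expovariate(service_rate), seq, "departure", k))
    return area / t

# Example: join-the-shortest-queue routing on a 4-server system.
jsq = lambda queues: min(range(len(queues)), key=queues.__getitem__)
mean_jobs = simulate(K=4, arrival_rate=3.0, service_rate=1.0, num_jobs=20000, route=jsq)
```

Swapping the `route` callback is enough to compare different balancing policies in this framework.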
Redundancy and cancellation: Under the Redundancy-d policy, each job is replicated into d tasks, which are sent to d different servers. When the first task enters service at a server or, alternatively, completes service, the remaining d-1 tasks are cancelled. We study the Redundancy-d policy at diffusion scale and also study its stability region. We prove that a generalization of the state space collapse phenomenon in heavy traffic, which we term Sub-Diffusivity of the Deviation Process, holds for time-varying queueing models, and we prove asymptotic optimality. We establish similar results for two other load balancing models: 'join the shortest d queues' and 'longest queue first'. We also propose a new algorithm, Replicate to the Shortest Queues, and analyze it at diffusion scale in heavy traffic.
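A minimal sketch of one Redundancy-d arrival under cancel-on-start: with FCFS servers and identical replica workloads, cancelling all replicas except the first to enter service is equivalent to adding the job to the sampled server with the least unfinished work. The workload representation and function names below are our assumptions, not the thesis's notation:

```python
import random

def redundancy_d(workloads, job_size, d, rng=random):
    """One arrival under Redundancy-d with cancel-on-start (a sketch).

    `workloads[k]` is the remaining work at server k. The job is
    replicated to d distinct servers; the replica whose server empties
    first begins service, and the other d-1 replicas are cancelled,
    so the job's work lands only on that server."""
    chosen = rng.sample(range(len(workloads)), d)          # d distinct servers
    winner = min(chosen, key=workloads.__getitem__)        # first to reach the job
    workloads[winner] += job_size
    return winner
```

With d equal to the number of servers this reduces to joining the least-loaded server; smaller d trades load information for less replication overhead.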
Low communication: We design a new load balancing algorithm called Persistent Idle. The policy relies on server idleness signals and therefore requires only a small amount of communication between the load balancer and the servers. Using non-standard Lyapunov stability techniques, we prove that it achieves the stability region in heterogeneous systems, and we provide simulation results indicating that it performs well.
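A rough sketch of an idleness-signal-driven dispatcher in the spirit described above: the load balancer persists on one target server and changes targets only when an idleness signal arrives, so per-job communication is minimal. The exact persistence and tie-breaking rules of Persistent Idle may differ from this illustrative assumption:

```python
from collections import deque

class IdleSignalDispatcher:
    """Sketch of a low-communication dispatcher (illustrative, not the
    thesis's exact Persistent Idle rule). Servers push idleness signals;
    arrivals carry no queue-length queries at all."""

    def __init__(self, K):
        self.idle = deque(range(K))   # servers that have reported idle, FIFO
        self.target = 0               # server currently receiving jobs

    def on_idle_signal(self, k):
        """A server announces it has just become idle."""
        if k not in self.idle:
            self.idle.append(k)

    def dispatch(self):
        """Route one arrival: switch to the longest-waiting idle server
        if any signal is pending; otherwise persist on the current target."""
        if self.idle:
            self.target = self.idle.popleft()
        return self.target
```

The design choice is the key point: routing decisions consume stored idleness signals instead of polling all K servers on every arrival.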
Scalable consistent hashing: Here we are interested in recurring jobs. A consistent hash deterministically maps such jobs to servers so that (1) the jobs are assigned in a balanced way regardless of the number of servers, and (2) when servers are added or removed, only the jobs that must be remapped are remapped. We design a new, scalable consistent hash called AnchorHash and prove that it has low computational complexity, a small memory footprint, and fast update times. We provide simulation results comparing AnchorHash to other hashing algorithms.
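To make properties (1) and (2) concrete, here is a well-known consistent hash, rendezvous (highest-random-weight) hashing, that satisfies both. This is deliberately NOT AnchorHash, whose construction we do not reproduce here; it only illustrates what any consistent hash, AnchorHash included, must guarantee:

```python
import hashlib

def rendezvous_hash(key, servers):
    """Map `key` to one of `servers` via rendezvous (HRW) hashing:
    each (key, server) pair gets a deterministic pseudo-random score,
    and the key goes to the highest-scoring server."""
    def score(s):
        digest = hashlib.sha256(f"{key}:{s}".encode()).hexdigest()
        return int(digest, 16)
    return max(servers, key=score)

# Property (2), minimal disruption: remove server "s2" and check that
# the only keys whose assignment changes are those that lived on "s2".
servers = ["s0", "s1", "s2", "s3"]
before = {k: rendezvous_hash(k, servers) for k in map(str, range(1000))}
after = {k: rendezvous_hash(k, [s for s in servers if s != "s2"]) for k in before}
moved = [k for k in before if before[k] != after[k]]
assert all(before[k] == "s2" for k in moved)
```

Rendezvous hashing pays O(number of servers) per lookup; the point of a scalable construction such as AnchorHash is to keep both guarantees while driving down the lookup cost, memory footprint, and update time.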