Technion - Israel Institute of Technology
Graduate School
Ph.D. Thesis
Ph.D. Student: Zahavi Eitan
Subject: Forwarding in Computer Cluster Networks
Department: Electrical Engineering
Supervisors: Professor Emeritus Avinoam Kolodny
Professor Isaac Keslassy
Professor Emeritus Israel Cidon


Abstract

Large data centers housing thousands of computers have become a common resource in science, commerce, and day-to-day life. They consist of computers, data-storage systems, and the network connecting them. The computers are similar to the off-the-shelf models used at home; however, a data center may today host up to 50,000 of them. The Big Data stored in the cloud resides in distributed storage systems that serve it to many computers at the same time.


The network connecting the data center's computers and disks plays a significant role in enabling data centers to scale. This network, which is the target of our research, differs from the Internet in many ways. First, as of 2015, a single data center may carry bandwidth comparable to that of the entire Internet. Second, while the Internet carries traffic originating from millions of sources in an uncoordinated manner, data center traffic tends to be synchronized. The typical network topology used in data centers is the Fat-Tree.
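To make the Fat-Tree structure concrete, below is a minimal sketch in Python of the common 3-level k-ary construction, in which k pods of edge and aggregation switches connect to a core layer. It is only an illustration of the generic topology: the function name fat_tree_links and the parameter k are ours, and the thesis studies Fat-Trees and related variants rather than this particular construction.

```python
# A minimal sketch of a generic 3-level k-ary Fat-Tree (Clos-based) layout,
# given only to illustrate the topology named above; the names used here are
# illustrative and are not taken from the thesis.

def fat_tree_links(k):
    """Return the switch-to-switch links of a k-ary Fat-Tree (k even).

    Per pod: k/2 edge switches and k/2 aggregation switches;
    (k/2)**2 core switches overall; up to k**3/4 hosts attach to the edges.
    """
    assert k % 2 == 0, "k must be even"
    half = k // 2
    links = []
    for pod in range(k):
        for agg in range(half):
            # Every aggregation switch connects to every edge switch in its pod.
            for edge in range(half):
                links.append((("agg", pod, agg), ("edge", pod, edge)))
            # Aggregation switch `agg` connects to the switches of core group `agg`.
            for c in range(half):
                links.append((("core", agg, c), ("agg", pod, agg)))
    return links

if __name__ == "__main__":
    links = fat_tree_links(4)                     # k = 4: 16 hosts, 20 switches
    print(len(links), "switch-to-switch links")   # prints 32 for k = 4
```

The key property is the abundance of equal-cost paths: hosts in different pods can communicate through any of the (k/2)**2 core switches, which is one reason the forwarding decisions studied in this thesis matter.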


We focus on how to avoid congestion on Fat-Trees. We propose to adapt the forwarding to the topology and the traffic so that congestion and delays are avoided, even for the most synchronized traffic.

We first inspect the traffic demands of scientific programs and show that they mostly use traffic patterns known as shift patterns, in which node i sends to node (i + s) mod N for some fixed shift s. We devise an algorithm that optimally forwards shifts on Fat-Trees with no contention. However, the topology of many existing data centers is not exactly a Fat-Tree. We introduce two new formal models of such topologies, Parallel Ports and Quasi Fat-Trees, and based on these definitions we provide optimal forwarding of shift patterns on them as well.
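As a concrete illustration (and not the algorithm developed in the thesis), the sketch below shows why a destination-based spine choice, often called d-mod-k routing, delivers any shift pattern with no link contention on a simple 2-level Fat-Tree with m leaf switches, r hosts per leaf, and r spine switches; the function shift_link_loads and all parameter names are ours.

```python
# A simplified check (not the thesis's algorithm) that destination-based
# "d mod k" spine selection forwards a shift pattern with no link contention
# on a 2-level Fat-Tree: m leaf switches, r hosts per leaf, r spine switches.
from collections import Counter

def shift_link_loads(m, r, shift):
    """Count flows per inter-switch link when host i sends to (i + shift) mod N."""
    n = m * r
    loads = Counter()
    for src in range(n):
        dst = (src + shift) % n
        src_leaf, dst_leaf = src // r, dst // r
        if src_leaf == dst_leaf:
            continue                      # traffic stays inside one leaf switch
        spine = dst % r                   # destination-based spine choice
        loads[("up", src_leaf, spine)] += 1
        loads[("down", spine, dst_leaf)] += 1
    return loads

if __name__ == "__main__":
    for shift in range(1, 12):
        worst = max(shift_link_loads(m=4, r=4, shift=shift).values())
        assert worst == 1                 # one flow per link: contention free
    print("every shift is routed with at most one flow per link")
```

The reason this works is that each leaf uplink carries only the flows whose destinations share one residue modulo r, and a shift maps the r hosts of a leaf onto r consecutive destinations, so every residue, and hence every uplink, is used exactly once.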


Our second thrust addresses arbitrary and unknown traffic patterns. We investigate two opposite approaches. The first uses central and exact control over the injection time and route of each piece of data: imagine a road-control system that requires everybody to call a central office to ask when to leave home and which route to take, with the promise that no car will have to stop at any traffic light. This approach does not scale to large networks, as the central office becomes overloaded. The other approach, known as adaptive routing, lets the switches decide when and where to forward the data. We ask how long it takes adaptive routing, if at all, to reach a state with no traffic congestion. Our investigation shows that this time is extremely long when traffic flows consume more than half of the network link bandwidth, but is very short otherwise.
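The following toy model is only a sketch of this threshold behavior, not the analysis carried out in the thesis: n flows of equal rate each pick one of n parallel paths, and in every round each flow that sits on an overloaded path re-picks a path uniformly at random. The function rounds_to_converge and its parameters are illustrative.

```python
# A toy model (not the thesis's analysis) of greedy adaptive re-routing:
# n flows of equal rate choose among n parallel paths; a path is congested
# when the sum of its flow rates exceeds the link capacity of 1.0, and every
# flow on a congested path re-picks a path uniformly at random each round.
import random
from collections import Counter

def rounds_to_converge(n, rate, max_rounds=50000, rng=random):
    paths = [rng.randrange(n) for _ in range(n)]        # initial random choice
    for rnd in range(max_rounds):
        load = Counter(paths)
        congested = {p for p, c in load.items() if c * rate > 1.0}
        if not congested:
            return rnd                                  # no overloaded link left
        paths = [rng.randrange(n) if p in congested else p for p in paths]
    return max_rounds

if __name__ == "__main__":
    random.seed(1)
    # rate 0.4: two flows fit on one link, so congestion clears almost at once.
    # rate 0.6: only one flow fits per link, so the process must stumble on a
    #           one-flow-per-path assignment, which takes far longer.
    for rate in (0.4, 0.6):
        avg = sum(rounds_to_converge(8, rate) for _ in range(20)) / 20
        print(f"rate={rate}: ~{avg:.1f} rounds on average")
```

In this toy model the gap between the two regimes widens rapidly as the number of flows grows, echoing, in a much simplified form, the result stated above.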


We further try to reduce the congestion observed when multiple applications share the network, and to let each application use the packet-forwarding method most appropriate for it without compromising the performance of the other applications. On existing data centers, we are able to provide each application with a dedicated, Fat-Tree-like private topology. We prove that the tenants' servers must meet specific criteria for such an allocation to be possible. Providing each application with an isolated, dedicated sub-network allows it to use the forwarding best suited to its traffic.