| Ph.D. Thesis | |
|---|---|
| Ph.D. Student | Kanizo Josef Hai |
| Subject | Fast Decisions in High-Speed Networking Devices |
| Department | Department of Computer Science |
| Supervisors | Prof. Isaac Keslassy |
| | Assoc. Prof. David Hay |
| Full Thesis text | |

Tables that match a given key to its value (e.g., hash tables) have become crucial algorithmic building blocks for contemporary networking devices, which typically handle large amounts of data at high speed. For example, these tables are used for heavy-hitter flow identification, per-flow state keeping, virus signature scanning, flow counter management, IP address lookup, and access control. Unlike traditional table-based data structures, where amortized operation time is typically the main figure of merit, tables in networking devices require a strict worst-case operation time along with a limited table size.

In this dissertation, we first consider multiple-choice hashing schemes, which are particularly suited to such a worst-case operation-time bound. Assuming a uniform memory model and an overflow list, often implemented using a Content-Addressable Memory (CAM), we start by considering multiple-choice hashing schemes without deletion operations, i.e., with only insertion and query operations. In this case, we find *optimal* online hashing schemes with a guaranteed worst-case operation time.
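To make the setting concrete, the following sketch (not the thesis's actual scheme; the class name, parameters, and salt-based hash functions are all illustrative) implements a *d*-choice hash table with bounded buckets, where an element that cannot fit in any of its *d* candidate buckets goes to a small overflow structure standing in for the CAM:

```python
import random

class MultiChoiceHashTable:
    """Illustrative d-choice hash table with bounded buckets and a
    dictionary standing in for the CAM overflow list."""

    def __init__(self, num_buckets, bucket_size, d=2, seed=0):
        self.num_buckets = num_buckets
        self.bucket_size = bucket_size
        self.buckets = [[] for _ in range(num_buckets)]
        self.cam = {}  # overflow list (CAM stand-in)
        rng = random.Random(seed)
        # per-choice random salts emulate d independent hash functions
        self.salts = [rng.getrandbits(32) for _ in range(d)]

    def _choices(self, key):
        return [hash((salt, key)) % self.num_buckets for salt in self.salts]

    def insert(self, key, value):
        # examine exactly d buckets (bounded worst-case work) and
        # place the element in the least-loaded one that has room
        best = min(self._choices(key), key=lambda b: len(self.buckets[b]))
        if len(self.buckets[best]) < self.bucket_size:
            self.buckets[best].append((key, value))
        else:
            self.cam[key] = value  # all d candidate buckets are full

    def query(self, key):
        # d bucket probes plus one CAM lookup: constant worst-case time
        for b in self._choices(key):
            for k, v in self.buckets[b]:
                if k == key:
                    return v
        return self.cam.get(key)
```

The point of the overflow list is the worst-case guarantee: every insertion and query touches exactly *d* buckets plus the CAM, regardless of the input distribution.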

Then, we extend the model to cases where deleting elements is also allowed. While previous results show the same asymptotic behavior in the two models, we show that when bucket sizes are bounded, performance may dramatically decrease. In particular, with deletions, optimal online schemes can decrease the expected overflow fraction only as slowly as *Ω(1/a)*, compared to *Ω(e^{-a})* when deletions are not allowed.
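The qualitative gap can be observed in a toy simulation (a sketch only; the parameter values are illustrative and it makes no claim to reproduce the stated bounds): under insert/delete churn, the least-loaded-of-*d* placement decided at insertion time can no longer keep the buckets balanced later, so more elements end up in the overflow list.

```python
import random

def churn_overflow(num_buckets=100, bucket_size=4, d=2, n=350,
                   churn_ops=20000, seed=1):
    """Toy d-choice table under insert/delete churn.
    Returns (overflow fraction after pure insertion,
             overflow fraction after churn)."""
    rng = random.Random(seed)
    salts = [rng.getrandbits(32) for _ in range(d)]
    buckets = [[] for _ in range(num_buckets)]
    cam = set()  # overflow list (CAM stand-in)

    def choices(key):
        return [hash((s, key)) % num_buckets for s in salts]

    def insert(key):
        b = min(choices(key), key=lambda i: len(buckets[i]))
        if len(buckets[b]) < bucket_size:
            buckets[b].append(key)
        else:
            cam.add(key)

    def delete(key):
        for b in choices(key):
            if key in buckets[b]:
                buckets[b].remove(key)
                return
        cam.discard(key)

    live = list(range(n))
    for k in live:
        insert(k)
    static_fraction = len(cam) / n

    # churn: repeatedly delete a random live element and insert a fresh one
    next_key = n
    for _ in range(churn_ops):
        i = rng.randrange(n)
        delete(live[i])
        live[i] = next_key
        next_key += 1
        insert(live[i])
    return static_fraction, len(cam) / n
```

In typical runs the post-churn overflow fraction exceeds the pure-insertion one, illustrating (not proving) the degradation that the dissertation quantifies.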

In addition, we consider a combined memory model (e.g., both SRAM and DRAM), in which some of the elements are stored in a fast memory while the others are stored in a significantly slower memory. In this case, we provide a tight lower bound on the number of elements that can be stored using a multiple-choice hashing scheme with *d=2* choices.
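The offline benchmark for such bounds can be phrased as a matching problem: given each element's *d* candidate buckets in the fast memory, the maximum number of elements that fit there is a maximum bipartite *b*-matching. A minimal sketch (the function name and interface are illustrative, not from the thesis) computes it by augmenting paths, relocating occupants to their alternate buckets when needed:

```python
def max_in_fast_memory(keys, num_buckets, bucket_size, choices):
    """Maximum number of keys placeable in the fast memory, where
    choices(key) returns that key's candidate bucket indices.
    Computed as a bipartite b-matching via augmenting paths."""
    load = [0] * num_buckets
    members = [[] for _ in range(num_buckets)]

    def augment(key, visited):
        for b in choices(key):
            if b in visited:
                continue
            visited.add(b)
            if load[b] < bucket_size:
                members[b].append(key)
                load[b] += 1
                return True
            # bucket full: try to relocate one occupant to free a slot
            for other in list(members[b]):
                if augment(other, visited):
                    members[b].remove(other)
                    members[b].append(key)
                    return True
        return False

    return sum(1 for k in keys if augment(k, set()))
```

For example, with two buckets of size one, a key that can only use bucket 0 still fits even if bucket 0 is occupied, as long as the occupant has an alternate bucket to move to.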

Finally, we show how network rules, usually implemented using Ternary Content-Addressable Memories (TCAMs) at the ingress points of the network, can be split among all routers in the network, such that each switch maintains a smaller TCAM while preserving the same overall classification functionality.
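A minimal model of this idea (a sketch under simplifying assumptions, not the thesis's algorithm: rules are ordered predicates with first-match semantics, and every packet traverses switches holding all the subtables) splits an ordered classifier into consecutive subtables and recovers the original first-match result by taking the highest-priority match seen along the path:

```python
def classify(rules, packet):
    """Single-table first-match classification. rules is an ordered
    list of (predicate, action); earlier rules have higher priority.
    Returns (priority, action) or None."""
    for prio, (pred, action) in enumerate(rules):
        if pred(packet):
            return prio, action
    return None

def split(rules, k):
    """Cut the ordered rule list into k consecutive subtables,
    one per switch along the path."""
    size = -(-len(rules) // k)  # ceiling division
    return [rules[i:i + size] for i in range(0, len(rules), size)]

def offsets_of(subtables):
    """Global priority offset of each subtable's first rule."""
    offs, total = [], 0
    for t in subtables:
        offs.append(total)
        total += len(t)
    return offs

def classify_along_path(subtables, offsets, packet):
    """Each switch matches only its own subtable; the match with the
    highest global priority (lowest global index) along the path wins,
    which equals the single-table first-match result."""
    best = None
    for off, table in zip(offsets, subtables):
        hit = classify(table, packet)
        if hit is not None:
            gp = off + hit[0]
            if best is None or gp < best[0]:
                best = (gp, hit[1])
    return best
```

Because the split preserves the global rule order, distributing the subtables over the switches leaves the classification outcome unchanged while each switch stores only a fraction of the rules.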