|Ph.D. Student||Eran Haggai|
|Subject||SmartNIC Inline Processing|
|Department||Department of Electrical and Computer Engineering||Supervisor||Associate Prof. Mark Silberstein|
Inline processing transforms data as a system transfers it to or from a processing node. With rising network rates, cloud vendors increasingly deploy SmartNICs, which incorporate programmable FPGA logic or ARM cores within the NIC and are capable of inline processing. To date, their primary use has been offloading hypervisor networking infrastructure and enabling efficient software-defined networking by offloading computations and accelerating data-intensive communication tasks. However, SmartNICs are also capable of accelerating applications themselves.
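A minimal conceptual sketch (all names hypothetical, not from this work) of what "inline" means here: the transformation is applied to the data as it streams through the NIC datapath, so the host CPU only ever sees the post-transform result.

```python
def nic_rx_datapath(packets, transform):
    """Toy model of a SmartNIC receive path with an inline transform stage.

    Each packet is transformed while it streams through the NIC,
    before delivery to host memory -- the host CPU never touches
    the raw bytes.
    """
    for pkt in packets:
        yield transform(pkt)

# Example: offloading per-packet work (here, just upper-casing as a
# stand-in for, e.g., decompression) from the host CPU to the NIC.
raw = [b"hello", b"smartnic"]
delivered = list(nic_rx_datapath(raw, bytes.upper))
# delivered == [b"HELLO", b"SMARTNIC"]
```

The contrast is with look-aside offload, where the host first receives the raw data and then round-trips it to an accelerator.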
Server applications such as key-value stores, IoT cloud services, and disaggregated accelerators can use inline processing to improve CPU utilization, message latency, and throughput, but doing so poses several challenges. First, inline processing breaks existing operating system and network stack layers, requiring new APIs. Second, current design methodologies and SmartNIC designs make it difficult to reuse existing software and hardware for new accelerators. Finally, pooling SmartNIC resources in a cloud environment requires performance isolation mechanisms for multiple tenants.
This work presents new operating system abstractions that enable application-layer inline processing on SmartNICs, along with new SmartNIC designs that support fine-grain hardware virtualization and reuse of existing ASIC NIC functionality in FPGA-based SmartNICs.
In addition, we present a complementary line of research that uses SmartNICs to accelerate datacenter networks transparently. By implementing a new tunnel layer that handles congestion control and moves queuing out of the core network, we allow legacy network stacks to take advantage of modern network features and to share the network fairly among competing RDMA and TCP flows.