M.Sc Student: Milman Gal
Subject: BQ: A Lock-Free Queue with Batching
Department: Department of Computer Science
Supervisor: Professor Erez Petrank
Full Thesis text
Concurrent data structures provide fundamental building blocks for concurrent programming. Standard concurrent data structures may be extended by allowing a sequence of operations to be submitted as a batch for later execution; such a sequence can then be executed more efficiently than by the standard approach of executing one operation at a time. We develop a novel algorithmic extension to the prevalent FIFO queue data structure that exploits such batching scenarios. Our extension allows a sequence of enqueue and dequeue operations to be applied to the queue all at once, as a batch. We present a novel combinatorial observation on the effect of a mixed sequence of enqueues and dequeues on the queue. Relying on it, a thread that initiates a batch operation performs more of the operation's processing locally, both before accessing the shared queue to apply its batch and afterwards. This results in less processing in the shared queue. Moreover, applying several operations as a unified batch means fewer accesses to the shared queue. The shortened processing in the shared queue, together with the reduced number of accesses to it, yields improved scalability. An implementation in C on a multicore machine demonstrates a significant performance improvement of up to 16x (depending on batch lengths and the number of threads) compared to previous queue implementations.