|M.Sc Student||Erez Michalak|
|Subject||An Adaptive Technique for Constructing Robust and High-Throughput Shared Objects|
|Department||Department of Industrial Engineering and Management|
|Supervisors||Prof. Shay Kutten|
|Dr. Danny Hendler|
Shared counters are key to solving a variety of coordination problems on multiprocessor machines, such as barrier synchronization and index distribution. Like shared objects in general, they should be robust, linearizable and scalable. We present the first linearizable and wait-free shared-counter algorithm that achieves high throughput without any a priori knowledge of the execution. In particular, in an N-process execution E, our algorithm achieves high throughput of Ω(N / ((K_E log K_E)^2 log N)), where K_E is E's unknown level of asynchrony. Our algorithm adapts dynamically to the unknown K_E, but only up to some threshold; above this threshold, the algorithm shifts to an asynchronous mode of operation, in which better throughput is achieved for that high level of asynchrony. Moreover, let K'_E be a fixed upper bound on the relative speeds of any two processes during the time in which both participated in E and neither failed. Then the guarantee of our algorithm generalizes (for any constant number of failures) to a high throughput of Ω(N / ((K'_E log K'_E)^2 log N)).
Our algorithm is an adaptive version of a prior linearizable and lock-free algorithm. The original, non-adaptive version guarantees high throughput only when some fixed K is given to it in advance as a (supposed) upper bound on K_E; if the given K happens to be lower than K_E, or much greater than K_E, the throughput of that algorithm degrades significantly. In addition, our algorithm is more robust: it satisfies the wait-freedom progress guarantee, whereas the non-adaptive version is only lock-free.
Our algorithm can easily be adapted to obtain a linearizable, wait-free and high-throughput implementation of any shared object that supports a combinable operation (such as queues and stacks).
To ensure high throughput and wait-freedom, we devised a method that guarantees, for a common kind of concurrent procedure, the completion of the procedure within a finite and bounded time, regardless of the contention on the shared memory. This method may prove useful in its own right for solving other problems.