M.Sc Student: Danial Lilia
Subject: Preserving Locality Across Virtual and Physical Address Spaces To Accelerate Address Translation
Department: Department of Electrical Engineering
Supervisors: Professor Yoav Etsion, Dr. Lluis Vilanova
Applications' memory consumption is growing rapidly, which increases their dependence on the memory system. Since each virtual-to-physical address translation in modern systems requires cache lookups or multiple memory accesses, the runtime overhead of virtual memory translation is on the rise. Meanwhile, the growing prevalence of the cloud ecosystem and server consolidation moves more applications into virtual machines. This further aggravates the overhead of memory translation: in a virtual machine, the processor must translate a virtual address in the guest to a physical address in the host, which quadruples the number of memory accesses required per translation. To alleviate this overhead, modern processors include additional levels of translation caches, such as second-level TLBs and partial page-walk caches.
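To make the per-translation cost concrete, the sketch below (an illustration, not code from the thesis) models a 4-level radix page table of the kind used by modern processors and counts the memory reads a single walk performs on a TLB miss; all names and the sparse-dictionary representation are assumptions made for the example.

```python
# Toy model of a 4-level radix page walk (x86-64-style: 9 index bits per
# level, 4 KiB pages). Illustrative sketch only; not the thesis's code.

LEVELS = 4
BITS_PER_LEVEL = 9
PAGE_SHIFT = 12
INDEX_MASK = (1 << BITS_PER_LEVEL) - 1

def map_page(root, vpn, frame):
    """Install a VPN -> physical-frame mapping, creating intermediate nodes."""
    node = root
    for level in range(LEVELS - 1, 0, -1):
        idx = (vpn >> (level * BITS_PER_LEVEL)) & INDEX_MASK
        node = node.setdefault(idx, {})          # descend, allocating nodes
    node[vpn & INDEX_MASK] = frame               # leaf entry holds the frame

def walk(root, vaddr):
    """Translate vaddr, returning (paddr, number_of_memory_reads)."""
    vpn = vaddr >> PAGE_SHIFT
    node, reads = root, 0
    for level in range(LEVELS - 1, -1, -1):
        idx = (vpn >> (level * BITS_PER_LEVEL)) & INDEX_MASK
        reads += 1          # each level costs one memory access on a miss
        node = node[idx]    # last iteration yields the physical frame number
    return (node << PAGE_SHIFT) | (vaddr & ((1 << PAGE_SHIFT) - 1)), reads

root = {}
map_page(root, 0x12345, 0x777)
paddr, reads = walk(root, (0x12345 << PAGE_SHIFT) | 0xABC)
```

Even this native walk costs one memory read per level; under virtualization every one of those guest page-table reads itself requires a full host walk, which is what inflates the cost so sharply.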
Our observations show that careful construction of the page table can make better use of these added translation caches. We introduce a smart physical frame allocation algorithm that complements current mapping schemes in the virtual-machine memory subsystem. The main idea behind the algorithm is to map each application's memory onto a contiguous physical region, so that spatial locality in the virtual address space is preserved in the physical address space. We show that careful allocation can improve performance by up to 50% for some applications.
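The idea of per-application contiguous allocation can be sketched as follows; this is a minimal illustration of the concept under simplifying assumptions (a single free cursor, no fragmentation handling), and the class and method names are hypothetical, not the thesis's implementation.

```python
# Sketch of locality-preserving frame allocation: reserve one contiguous
# run of physical frames per application, so virtual page i maps to
# region_base + i and spatial locality carries over to physical addresses.
# Illustrative only; real allocators must also handle fragmentation.

class ContiguousAllocator:
    def __init__(self, total_frames):
        self.next_free = 0                 # simplifying assumption: bump pointer
        self.total_frames = total_frames

    def reserve(self, num_pages):
        """Reserve a contiguous run of physical frames for one application."""
        if self.next_free + num_pages > self.total_frames:
            raise MemoryError("no contiguous region available")
        base = self.next_free
        self.next_free += num_pages
        return base

    @staticmethod
    def translate(base, vpn):
        # With a contiguous region, translation degenerates to base + offset:
        # consecutive virtual pages occupy consecutive physical frames.
        return base + vpn

alloc = ContiguousAllocator(total_frames=1 << 20)
base = alloc.reserve(num_pages=256)
frames = [alloc.translate(base, vpn) for vpn in range(256)]
```

Because consecutive virtual pages land in consecutive physical frames, neighboring translations share page-table nodes and cache lines, which is what lets the second-level TLB and partial walk caches serve more lookups.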