Benchmarking of Memory Fragmentation Metrics on Real Allocation Traces
Faculty Supervisor:
Dr. Gokhan Kul, Computer & Information Science
Committee Members:
Dr. Adnan El-Nasan, Computer & Information Science
Dr. John Rahme, Computer & Information Science
Abstract: Memory allocators perform a fundamental task: ensuring dynamic memory is allocated to processes efficiently in both time and space. They do this by reducing the number of kernel calls performed, maintaining high spatial and cache locality, and keeping the number of pages in use to a minimum. However, there is a fundamental constraint on the efficiency of any allocator. External fragmentation is an unavoidable consequence of dynamic allocation, yet there is little consensus on how precisely to measure it, and vanishingly little analysis of how different fragmentation metrics perform or when each should be used. A better understanding of fragmentation and how to measure it is vital for advancing allocator design and for designing programs that respect allocators’ assumptions. A myriad of prior work introduces metrics for measuring fragmentation or describes allocators that reduce fragmentation in some novel way under particular workloads. However, there is a gap in modern work determining the efficacy of the numerous proposed fragmentation metrics, when each should be used, and how to compare them against one another. Many papers discussing the fragmentation produced under some workload or allocator do not state which metric is being used. This disjointed approach to measuring fragmentation leads to confusion and imprecision. Much of the work on fragmentation is old, dating back 50 years. Newer work tends to focus on allocation in large clusters or databases, or on optical network allocation rather than memory. Additionally, research has focused on analyzing synthetic traces (and on how precisely to generate them), rather than on traces from real workloads on real machines. This work attempts to unify the understanding of fragmentation through comparative analysis on real workloads.
This work introduces a pipeline for benchmarking external fragmentation metrics using real memory traces. The pipeline comprises three stages: a method of collecting memory traces of multi-threaded and multi-process systems that tracks every call to the allocation library; a method of computing the fragmentation of these traces at variable timesteps throughout the parent process’s lifetime; and a method of plotting metrics over time against one another and against the overall memory usage of the process. Six metrics are compared in this work, including two novel metrics, AEFM and NAEFM, and one metric that has so far escaped academic review, ESP. The ESP metric performs extremely well, holding a high average correlation with the other metrics while remaining much more stable on shorter applications. For longer-running applications, many metrics quickly lose specificity, reaching their maximum value early and staying there. NAEFM, SSFM, and EBFM remain useful across the lifetime of longer-running programs. This work has applications in a wide range of areas. It is naturally of interest to kernel and allocator developers. Future work building on the findings of this comparative analysis may produce more efficient allocators for small and personal computers. A better understanding of memory degradation over time will enable more efficient applications, reducing the overall resource burden of everyday computing. There may also be applications in adjacent fields, as seen with optical network allocation.
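The abstract names the compared metrics (AEFM, NAEFM, ESP, SSFM, EBFM) without defining them. Purely as an illustration of what a fragmentation metric computes at one timestep of a trace, the following minimal sketch implements the classic textbook external fragmentation formula (1 minus the largest free block over total free memory); this is not one of the thesis’s metrics, and the function name and snapshot format are hypothetical:

```python
def external_fragmentation(free_block_sizes):
    """Classic external fragmentation at one point in a trace.

    free_block_sizes: sizes (in bytes) of the free regions of the heap
    at a given timestep. Returns a value in [0, 1]: 0 means all free
    memory is one contiguous block; values near 1 mean the free memory
    is scattered into many small, unusable fragments.
    """
    total_free = sum(free_block_sizes)
    if total_free == 0:
        return 0.0  # no free memory, so nothing is fragmented
    return 1.0 - max(free_block_sizes) / total_free

# One contiguous 256-byte free region: no external fragmentation.
print(external_fragmentation([256]))          # 0.0
# The same 256 bytes split into four 64-byte holes: heavily fragmented.
print(external_fragmentation([64, 64, 64, 64]))  # 0.75
```

A benchmarking pipeline like the one described would evaluate such a function (and its competitors) at each sampled timestep of the recorded trace, then plot the resulting time series against one another.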
For further information please contact Dr. Gokhan Kul at gkul@umassd.edu.
Dion 311 and Teams (https://teams.microsoft.com/meet/28305274898079?p=Hq5YC1bJl1SW8ioWEn)