Minimizing interference through application mapping in multi-level buffer caches
Main Authors:
Format: Conference Proceeding
Language: English
Online Access: Request full text
Summary: In this paper, we study the impact of cache sharing on co-mapped applications in multi-level buffer cache hierarchies. When the number of applications exceeds the number of resources, resource sharing is inevitable. However, unless applications are co-mapped carefully, destructive interference may cause applications to thrash and spend most of their time paging data to and from disks. We propose two novel models that predict the performance of an application in the presence of other applications, and an algorithm that uses the output of these models to perform application-to-node mapping in a multi-level buffer cache hierarchy. Our models use the reuse distances of the application reference streams and their respective I/O rates; this information can be obtained either online or offline. Our main advantage is that we do not require profile information of all application pairs to predict their interference. The goal of this mapping is to minimize destructive interference during execution. We validate the effectiveness of our models and mapping scheme using several I/O-intensive applications, and find that the average prediction error of our two models is only 3.9% and 2.7%, respectively. Further, using our approach, we were able to co-map applications so that the performance of the buffer cache hierarchy improves by 43.6% and 56.8% on average over the median and worst mappings, respectively, across the entire I/O stack.
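The models above are described as consuming the reuse distances of each application's block reference stream (along with its I/O rate). As an illustration only, and not code from the paper, the sketch below shows one common way to compute per-access reuse distances, i.e. the number of distinct blocks touched between successive accesses to the same block; the function and variable names are hypothetical.

```python
# Illustrative sketch: naive reuse-distance computation for a block reference
# stream. All names are hypothetical and not taken from the paper.

def reuse_distances(reference_stream):
    """Return, for each access, the number of distinct blocks referenced
    since the previous access to the same block (None for cold accesses)."""
    last_seen = {}   # block -> index of its most recent access
    distances = []
    for i, block in enumerate(reference_stream):
        if block in last_seen:
            # Distinct blocks referenced strictly between the two accesses.
            window = set(reference_stream[last_seen[block] + 1 : i])
            distances.append(len(window))
        else:
            distances.append(None)  # first access to this block
        last_seen[block] = i
    return distances

# Example: a small stream of block IDs observed at one cache level.
stream = ["a", "b", "c", "a", "b", "b", "d", "a"]
print(reuse_distances(stream))  # [None, None, None, 2, 2, 0, None, 2]
```

A histogram of these distances, compared against the capacity of each cache level, is the kind of per-application signature that interference prediction models can build on without profiling every application pair.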
DOI: 10.1109/ISPASS.2011.5762714