
Slow and Steady: Measuring and Tuning Multicore Interference

Bibliographic Details
Main Authors: Iorga, Dan, Sorensen, Tyler, Wickerson, John, Donaldson, Alastair F.
Format: Conference Proceeding
Language: English
Subjects:
Online Access: Request full text
container_end_page 212
container_start_page 200
creator Iorga, Dan
Sorensen, Tyler
Wickerson, John
Donaldson, Alastair F.
description Now ubiquitous, multicore processors provide replicated compute cores that allow independent programs to run in parallel. However, shared resources, such as last-level caches, can cause otherwise-independent programs to interfere with one another, leading to significant and unpredictable effects on their execution time. Indeed, prior work has shown that specially crafted enemy programs can cause software systems of interest to experience orders-of-magnitude slowdowns when both are run in parallel on a multicore processor. This undermines the suitability of these processors for tasks that have real-time constraints. In this work, we explore the design and evaluation of techniques for empirically testing interference using enemy programs, with an eye towards reliability (how reproducible the interference results are) and portability (how interference testing can be effective across chips). We first show that different methods of measurement yield significantly different magnitudes of, and variation in, observed interference effects when applied to an enemy process that was shown to be particularly effective in prior work. We propose a method of measurement based on percentiles and confidence intervals, and show that it provides both competitive and reproducible observations. The reliability of our measurements allows us to explore auto-tuning, where enemy programs are further specialised per architecture. We evaluate three different tuning approaches (random search, simulated annealing, and Bayesian optimisation) on five different multicore chips, spanning x86 and ARM architectures. To show that our tuned enemy programs generalise to applications, we evaluate the slowdowns caused by our approach on the AutoBench and CoreMark benchmark suites. Our method achieves a statistically larger slowdown compared to prior work in 35 out of 105 benchmark/chip combinations, with a maximum difference of 3.8×. We envision that empirical approaches, such as ours, will be valuable for 'first pass' evaluations when investigating which multicore processors are suitable for real-time tasks. (An illustrative sketch of such a measurement harness appears after the record fields below.)
doi_str_mv 10.1109/RTAS48715.2020.000-6
format conference_proceeding
fulltext fulltext_linktorsrc
identifier EISSN: 2642-7346
ispartof 2020 IEEE Real-Time and Embedded Technology and Applications Symposium (RTAS), 2020, p.200-212
issn 2642-7346
language eng
recordid cdi_ieee_primary_9113125
source IEEE Xplore All Conference Series
subjects Benchmark testing
Computer architecture
Interference
Multicore processing
Program processors
Semiconductor device measurement
Software systems
title Slow and Steady: Measuring and Tuning Multicore Interference
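The abstract describes a general recipe: run hostile "enemy" code on sibling cores, time a victim workload on another core, and summarise the victim's latency with a high percentile plus a confidence interval so that the observed interference is reproducible. The C sketch below illustrates that recipe under stated assumptions; it is not the authors' artifact. Here the enemy threads simply walk a buffer assumed to be larger than the shared last-level cache, the victim streams over a smaller buffer, and the harness reports the 90th percentile of victim latency with a bootstrap 95% confidence interval. The thread count, buffer sizes, stride, sample counts, and choice of percentile are all illustrative, and a real harness would additionally pin each thread to its own core.

/*
 * Hedged sketch (not the paper's artifact): time a victim workload while
 * "enemy" threads thrash a large buffer, then report the 90th percentile of
 * victim latency with a bootstrap 95% confidence interval.  All sizes,
 * counts, and the choice of percentile are illustrative assumptions.
 * Build with: cc -O2 -pthread interference_sketch.c
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define ENEMY_BUF  (8u * 1024 * 1024)  /* assumed bigger than the shared LLC */
#define VICTIM_BUF (1u * 1024 * 1024)
#define N_ENEMIES  3                   /* assumed: one enemy per spare core  */
#define SAMPLES    200
#define BOOTS      1000

static volatile int stop_enemies = 0;

/* Enemy: endlessly touch one byte per cache line of a large buffer so the
 * victim's lines keep getting evicted from the shared last-level cache. */
static void *enemy(void *arg) {
    volatile unsigned char *buf = malloc(ENEMY_BUF);
    (void)arg;
    while (!stop_enemies)
        for (size_t i = 0; i < ENEMY_BUF; i += 64)
            buf[i]++;
    free((void *)buf);
    return NULL;
}

/* Victim: a fixed chunk of memory traffic; return its latency in nanoseconds. */
static double victim_once(volatile unsigned char *buf) {
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < VICTIM_BUF; i += 64)
        buf[i]++;
    clock_gettime(CLOCK_MONOTONIC, &t1);
    return (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
}

static int cmp_double(const void *a, const void *b) {
    double x = *(const double *)a, y = *(const double *)b;
    return (x > y) - (x < y);
}

/* Nearest-rank percentile (p in [0,1]) of a copy of the samples. */
static double percentile(const double *xs, int n, double p) {
    double v, *tmp = malloc(n * sizeof *tmp);
    memcpy(tmp, xs, n * sizeof *tmp);
    qsort(tmp, n, sizeof *tmp, cmp_double);
    v = tmp[(int)(p * (n - 1) + 0.5)];
    free(tmp);
    return v;
}

int main(void) {
    pthread_t enemies[N_ENEMIES];
    volatile unsigned char *vbuf = malloc(VICTIM_BUF);
    double samples[SAMPLES], boots[BOOTS];

    for (int i = 0; i < N_ENEMIES; i++)  /* a real harness would also pin threads */
        pthread_create(&enemies[i], NULL, enemy, NULL);

    for (int s = 0; s < SAMPLES; s++)
        samples[s] = victim_once(vbuf);

    stop_enemies = 1;
    for (int i = 0; i < N_ENEMIES; i++)
        pthread_join(enemies[i], NULL);

    /* Bootstrap: resample with replacement, recompute the 90th percentile, and
     * use the 2.5th/97.5th percentiles of those estimates as a 95% interval. */
    for (int b = 0; b < BOOTS; b++) {
        double resample[SAMPLES];
        for (int s = 0; s < SAMPLES; s++)
            resample[s] = samples[rand() % SAMPLES];
        boots[b] = percentile(resample, SAMPLES, 0.90);
    }
    printf("victim p90 latency: %.0f ns  (bootstrap 95%% CI [%.0f, %.0f] ns)\n",
           percentile(samples, SAMPLES, 0.90),
           percentile(boots, BOOTS, 0.025),
           percentile(boots, BOOTS, 0.975));

    free((void *)vbuf);
    return 0;
}

A run with the enemy threads active can be compared against a baseline run with them disabled to estimate the slowdown. The auto-tuning the abstract describes would then search, by random search, simulated annealing, or Bayesian optimisation, over enemy parameters such as buffer size, stride, and access pattern to maximise that slowdown on a given chip.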