
Data access reorganizations in compiling out-of-core data parallel programs on distributed memory machines

This paper describes optimization techniques for translating out-of-core programs written in a data parallel language to message passing node programs with explicit parallel I/O. We demonstrate that straightforward extension of in-core compilation techniques does not work well for out-of-core programs. We then describe how the compiler can optimize the code by (1) determining appropriate file layouts for out-of-core arrays, (2) permuting the loops in the nest(s) to allow efficient file access, and (3) partitioning the available node memory among references based on I/O cost estimation. Our experimental results indicate that these optimizations can reduce the amount of time spent in I/O by as much as an order of magnitude.

Bibliographic Details
Main Authors: Kandemir, M., Bordawekar, R., Choudhary, A.
Format: Conference Proceeding
Language:English
Subjects:
Online Access: Request full text
container_end_page 564
container_start_page 559
creator Kandemir, M. ; Bordawekar, R. ; Choudhary, A.
description This paper describes optimization techniques for translating out-of-core programs written in a data parallel language to message passing node programs with explicit parallel I/O. We demonstrate that straightforward extension of in-core compilation techniques does not work well for out-of-core programs. We then describe how the compiler can optimize the code by (1) determining appropriate file layouts for out-of-core arrays, (2) permuting the loops in the nest(s) to allow efficient file access, and (3) partitioning the available node memory among references based on I/O cost estimation. Our experimental results indicate that these optimizations can reduce the amount of time spent in I/O by as much as an order of magnitude.
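note The description above mentions three compiler-directed reorganizations: choosing file layouts, permuting loops to match those layouts, and dividing node memory among references by estimated I/O cost. The following Python toy model is an illustrative sketch only, not the authors' algorithm; all names and numbers in it (N, TILE, the memory budget, and the per-reference cost weights) are assumptions made for the example. It shows why a loop order that matches the out-of-core file layout drastically cuts the number of read requests, and how a memory budget might be split in proportion to estimated I/O cost.

# Illustrative sketch only: a toy model of loop order vs. file layout, and of
# cost-proportional memory partitioning. Not the paper's compiler algorithm.

N = 4096          # logical array is N x N elements, stored row-major in a file
TILE = 256        # elements the node can buffer per read request

def read_requests(traversal: str) -> int:
    """Count file read requests needed to sweep the whole array once.

    Row-order traversal matches the row-major file layout, so each request
    fetches TILE contiguous elements. Column-order traversal touches one
    element per row before moving on, so every element costs its own request.
    """
    if traversal == "row":
        return (N * N) // TILE      # few large contiguous reads
    return N * N                    # one small read per element

def partition_memory(budget: int, io_cost: dict[str, float]) -> dict[str, int]:
    """Split the available node memory among array references in proportion
    to their estimated I/O cost, so costlier references get larger tiles."""
    total = sum(io_cost.values())
    return {ref: int(budget * cost / total) for ref, cost in io_cost.items()}

if __name__ == "__main__":
    print("column-order requests:", read_requests("col"))
    print("row-order requests:   ", read_requests("row"))
    # Hypothetical cost estimates for three out-of-core references A, B, C.
    print(partition_memory(budget=1 << 20, io_cost={"A": 4.0, "B": 2.0, "C": 1.0}))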
doi_str_mv 10.1109/IPPS.1997.580956
format conference_proceeding
identifier ISSN: 1063-7133
ispartof Proceedings - International Parallel Processing Symposium, 1997, p.559-564
issn 1063-7133
language eng
recordid cdi_proquest_miscellaneous_26556979
source IEEE Xplore All Conference Series
subjects Computational Intelligence Society
Concurrent computing
Cost function
Data flow computing
Large-scale systems
Message passing
Optimizing compilers
Parallel machines
Physics computing
Program processors
title Data access reorganizations in compiling out-of-core data parallel programs on distributed memory machines