Extending DIRAC File Management with Erasure-Coding for efficient storage

The state of the art in Grid-style data management is to achieve increased resilience of data via multiple complete replicas of data files across multiple storage endpoints. While this is effective, it is not the most space-efficient approach to resilience, especially when the reliability of individual storage endpoints is sufficiently high that only a few will be inactive at any point in time. We report on work performed as part of GridPP[1], extending the DIRAC File Catalogue and file management interface to allow the placement of erasure-coded files: each file is distributed as N identically sized chunks of data striped across a vector of storage endpoints, encoded such that any M chunks can be lost and the original file can still be reconstructed. The tools developed are transparent to the user and, as well as allowing uploading and downloading of data to Grid storage, also provide the possibility of parallelising access across all of the distributed chunks at once, improving data transfer and I/O performance. We expect this approach to be of most interest to smaller VOs, who have tighter bounds on the storage available to them, but larger (WLCG) VOs may also be interested as their total data increases during Run 2. We provide an analysis of the costs and benefits of the approach, along with future development and implementation plans in this area. In general, the overhead of multiple file transfers is currently the largest obstacle to the competitiveness of this approach.
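
The scheme described in the abstract is the general (N, M) erasure-coding idea: a file is striped into N equal-sized chunks, encoded so that any M of them can be lost and the file still rebuilt from the remainder. The sketch below is illustrative only and is not taken from the paper's tools: it uses a single XOR parity chunk, so it tolerates the loss of just one chunk, and exists purely to show the split / encode / reconstruct workflow; the DIRAC extension described above uses a proper erasure code (the abstract does not name the algorithm) so that any M chunks can be lost. The function names, chunk count and example data are hypothetical.

# Illustrative sketch only -- not the paper's implementation.  A single XOR
# parity chunk is used, so only ONE lost chunk can be recovered; the tools
# described in the paper use an erasure code tolerating the loss of any M
# of the N chunks.  Names and parameters here are hypothetical.

def split_with_parity(data: bytes, n_data: int):
    """Split data into n_data equal-sized chunks plus one XOR parity chunk."""
    chunk_len = -(-len(data) // n_data)               # ceiling division
    padded = data.ljust(n_data * chunk_len, b"\0")    # pad so all chunks are equal
    chunks = [padded[i * chunk_len:(i + 1) * chunk_len] for i in range(n_data)]
    parity = bytearray(chunk_len)
    for chunk in chunks:                              # parity = XOR of all data chunks
        for i, byte in enumerate(chunk):
            parity[i] ^= byte
    return chunks + [bytes(parity)], len(data)

def reconstruct(chunks, original_len: int) -> bytes:
    """Rebuild the original data when at most one chunk is missing (None)."""
    missing = [i for i, c in enumerate(chunks) if c is None]
    if len(missing) > 1:
        raise ValueError("single-parity sketch cannot recover more than one lost chunk")
    if missing:
        chunk_len = len(next(c for c in chunks if c is not None))
        rebuilt = bytearray(chunk_len)
        for chunk in chunks:                          # XOR of the survivors = lost chunk
            if chunk is not None:
                for i, byte in enumerate(chunk):
                    rebuilt[i] ^= byte
        chunks = list(chunks)
        chunks[missing[0]] = bytes(rebuilt)
    return b"".join(chunks[:-1])[:original_len]       # drop parity, strip padding

if __name__ == "__main__":
    data = b"example file contents striped across several storage endpoints"
    chunks, size = split_with_parity(data, n_data=4)  # 4 data chunks + 1 parity chunk
    chunks[2] = None                                  # simulate one unavailable endpoint
    assert reconstruct(chunks, size) == data

Uploading each chunk to a different storage endpoint, and fetching the chunks concurrently on download, is what gives the parallel-transfer benefit mentioned in the abstract.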

Bibliographic Details
Published in: Journal of physics. Conference series, 2015-12, Vol. 664 (4), p. 42051
Main Authors: Skipsey, Samuel Cadellin; Todev, Paulin; Britton, David; Crooks, David; Roy, Gareth
Format: Article
Language: English
Subjects: Cost benefit analysis; Data management; Data transfer (computers); Downloading; Files management; Physics; Resilience
Publisher: IOP Publishing, Bristol
ISSN: 1742-6588
EISSN: 1742-6596
DOI: 10.1088/1742-6596/664/4/042051