
MulConn: User-Transparent I/O Subsystem for High-Performance Parallel File Systems

Bibliographic Details
Main Authors: Kim, Hwajung, Bang, Jiwoo, Sung, Dong Kyu, Eom, Hyeonsang, Yeom, Heon Y., Sung, Hanul
Format: Conference Proceeding
Language: English
container_start_page 53
container_end_page 62
creator Kim, Hwajung
Bang, Jiwoo
Sung, Dong Kyu
Eom, Hyeonsang
Yeom, Heon Y.
Sung, Hanul
description Parallel file systems (PFS) are used to distribute data processing and to establish shared access to large-scale data. Although each node can deliver high I/O bandwidth, a PFS often fails to utilize that bandwidth because only a single connection links the client and server nodes. To mitigate this performance bottleneck, users increase the number of connections between nodes by modifying the PFS or their applications; however, modifying the PFS itself is difficult because of its complicated internal structure, so users instead raise the connection count manually through several workarounds. In this paper, we propose MulConn, a user-transparent I/O subsystem that lets users exploit the high I/O bandwidth available between nodes. To avoid modifying either the PFS or user applications, we develop a horizontal mount procedure and two I/O scheduling policies, TtoS and TtoM, in the virtual file system (VFS) layer. By restructuring the VFS mount path from a vertical hierarchy into a horizontal one, we expose a single mount point backed by multiple connections, and the two scheduling policies distribute I/O requests evenly across those connections. The experimental results show that MulConn improves write and read performance by up to 2.6x and 2.8x, respectively, compared with the PFS running on an unmodified kernel. In addition, MulConn delivers the best I/O performance that the PFS can provide in the given experimental environments.
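
The abstract's core idea, striping I/O requests evenly over several connections that all expose the same PFS namespace, can be illustrated with a minimal user-space sketch. This is not the paper's in-kernel mechanism (the horizontal mount procedure and the TtoS/TtoM policies are implemented inside the VFS layer and are not specified here); it simply assumes, hypothetically, that the same PFS volume is mounted at /mnt/pfs0 through /mnt/pfs3, each mount backed by its own client-server connection, and distributes writes round-robin across them. The mount paths, NCONN, and CHUNK values are illustrative assumptions, not values from the paper.

/*
 * Minimal sketch of even I/O distribution across multiple connections.
 * Assumption: the same PFS volume is mounted at several mount points,
 * each backed by its own connection, so every path below resolves to
 * the same underlying file on the server.  The real MulConn does this
 * transparently inside the VFS; round-robin here merely stands in for
 * its (unspecified) TtoS/TtoM scheduling policies.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define NCONN 4                 /* assumed number of mount points / connections */
#define CHUNK (1 << 20)         /* 1 MiB per striped request (assumed size)     */

int main(void)
{
    /* Hypothetical mount points of the same PFS volume. */
    const char *paths[NCONN] = {
        "/mnt/pfs0/out.dat", "/mnt/pfs1/out.dat",
        "/mnt/pfs2/out.dat", "/mnt/pfs3/out.dat"
    };
    int fds[NCONN];
    char *buf = malloc(CHUNK);
    if (!buf)
        return 1;
    memset(buf, 'x', CHUNK);

    for (int i = 0; i < NCONN; i++) {
        fds[i] = open(paths[i], O_WRONLY | O_CREAT, 0644);
        if (fds[i] < 0) { perror("open"); return 1; }
    }

    /* Stripe 256 chunks round-robin over the connections.  Because every
     * mount point is assumed to expose the same file, writing each chunk
     * at its global offset reassembles one contiguous file on the server,
     * no matter which connection carried the request. */
    for (int n = 0; n < 256; n++) {
        int c = n % NCONN;                    /* even distribution */
        off_t off = (off_t)n * CHUNK;
        if (pwrite(fds[c], buf, CHUNK, off) != CHUNK) {
            perror("pwrite");
            return 1;
        }
    }

    for (int i = 0; i < NCONN; i++)
        close(fds[i]);
    free(buf);
    return 0;
}

The sketch only shows why multiple connections help: each pwrite can travel over a different client-server link, so aggregate bandwidth is no longer capped by a single connection. MulConn's contribution is doing this beneath a single mount point, without any change to the application.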
doi_str_mv 10.1109/HiPC53243.2021.00019
format conference_proceeding
identifier EISSN: 2640-0316
ispartof 2021 IEEE 28th International Conference on High Performance Computing, Data, and Analytics (HiPC), 2021, p.53-62
issn 2640-0316
language eng
recordid cdi_ieee_primary_9680427
source IEEE Xplore All Conference Series
subjects Bandwidth
Conferences
Data processing
File system mount
File systems
High performance computing
Multiple connections
Network connection
Parallel File System (PFS)
Scalable data transfer
Servers
Throughput
Virtual File System (VFS)
title MulConn: User-Transparent I/O Subsystem for High-Performance Parallel File Systems