
MulConn: User-Transparent I/O Subsystem for High-Performance Parallel File Systems

Bibliographic Details
Main Authors: Kim, Hwajung, Bang, Jiwoo, Sung, Dong Kyu, Eom, Hyeonsang, Yeom, Heon Y., Sung, Hanul
Format: Conference Proceeding
Language: English
Description
Summary: Parallel file systems (PFS) are used to distribute data processing and establish shared access to large-scale data. Although each node can provide high I/O bandwidth, a PFS has difficulty utilizing that bandwidth because of the single connection between the client and server nodes. To mitigate this performance bottleneck, users increase the number of connections between the nodes by modifying the PFS or their applications. However, modifying the PFS itself is difficult because of its complicated internal structure, so PFS users instead increase the number of connections manually through various workarounds. In this paper, we propose a user-transparent I/O subsystem, MulConn, that enables users to exploit high I/O bandwidth between nodes. To avoid modifying the PFS and user applications, we have developed a horizontal mount procedure and two I/O scheduling policies, TtoS and TtoM, in the virtual file system (VFS) layer. We expose a single mount point backed by multiple connections by changing the VFS mount path from a vertical hierarchy to a horizontal hierarchy, and we introduce two I/O scheduling policies to distribute I/O requests evenly across the connections. The experimental results show that MulConn improves write and read performance by up to 2.6x and 2.8x, respectively, compared with the PFS running on the existing kernel. In addition, MulConn delivers the best I/O performance that the PFS can achieve in the given experimental environments.
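The abstract describes distributing I/O requests evenly across several client-server connections behind a single mount point. As a rough illustration of that idea only, the user-space C sketch below assigns requests to connections in round-robin order; the structure and function names are hypothetical and are not taken from the paper, whose TtoS and TtoM policies are implemented inside the kernel VFS layer.

/*
 * Illustrative sketch only: a user-space round-robin dispatcher that spreads
 * I/O requests evenly across several connections. The names below are
 * assumptions for illustration, not the MulConn implementation.
 */
#include <stdio.h>

#define NUM_CONNECTIONS 4   /* assumed number of client-server connections */

struct io_request {
    int id;
    long offset;
    long length;
};

/* Pick a connection for the next request in round-robin order. */
static int pick_connection_round_robin(void)
{
    static int next = 0;
    int conn = next;
    next = (next + 1) % NUM_CONNECTIONS;
    return conn;
}

int main(void)
{
    struct io_request reqs[8];
    for (int i = 0; i < 8; i++) {
        reqs[i].id = i;
        reqs[i].offset = (long)i * 1048576;  /* 1 MiB stripes */
        reqs[i].length = 1048576;
    }

    /* Each request goes to one of the connections so that no single
       connection becomes the bandwidth bottleneck. */
    for (int i = 0; i < 8; i++) {
        int conn = pick_connection_round_robin();
        printf("request %d (offset %ld) -> connection %d\n",
               reqs[i].id, reqs[i].offset, conn);
    }
    return 0;
}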
ISSN: 2640-0316
DOI: 10.1109/HiPC53243.2021.00019