Exploring stream parallel patterns in distributed MPI environments
Published in: Parallel Computing, 2019-05, Vol. 84, p. 24-36
Main Authors:
Format: Article
Language: English
Summary:
- We present GrPPI MPI, an execution policy for distributed-hybrid environments for the Pipeline and Farm parallel patterns.
- We describe the GrPPI interface and the MPI policy that permit the distributed and hybrid execution of DaSP applications.
- We present the two-sided and one-sided distributed multiple-producer/multiple-consumer queues for the GrPPI stream operators.
- We evaluate the usability and the performance of distributed Pipeline and Farm stream patterns using two DaSP applications.
- We analyze the pattern usability and perform a side-by-side comparison of both GrPPI and MPI programming interfaces.
In recent years, the large volumes of stream data and the near real-time requirements of data streaming applications have exacerbated the need for new scalable algorithms and programming interfaces for distributed and shared-memory platforms. To contribute in this direction, this paper presents a new distributed MPI back end for GrPPI, a high-level generic C++ interface for data-intensive and stream processing parallel patterns. This back end, exposed as a new execution policy, supports distributed and hybrid (distributed + shared-memory) parallel execution of the Pipeline and Farm patterns, where the hybrid mode combines the MPI policy with a GrPPI shared-memory one. These patterns internally leverage distributed queues, which can be configured to use two-sided or one-sided MPI primitives to communicate items among nodes. A detailed analysis of the GrPPI MPI execution policy shows considerable benefits in programmability, flexibility and readability. The experimental evaluation of two streaming applications under different distributed and shared-memory scenarios reports considerable performance gains with respect to the sequential versions, at the expense of negligible GrPPI overheads.
ISSN: 0167-8191, 1872-7336
DOI: 10.1016/j.parco.2019.03.004