
A domain-specific high-level programming model



Bibliographic Details
Published in:Concurrency and computation 2016-03, Vol.28 (3), p.750-767
Main Authors: Mansouri, Farouk, Huet, Sylvain, Houzet, Dominique
Format: Article
Language:English
Description
Summary: Nowadays, computing hardware continues to move toward more parallelism and more heterogeneity to obtain more computing power. From personal computers to supercomputers, we can find several levels of parallelism expressed by the interconnections of multi‐core and many‐core accelerators. On the other hand, computing software needs to adapt to this trend, and programmers can use parallel programming models (PPMs) to fulfil this difficult task. Different PPMs are available, based on tasks, directives, or low‐level languages or libraries, each offering a higher or lower level of abstraction from the architecture through its own syntax. However, one way to offer an efficient PPM with a higher abstraction level while preserving performance is to restrict it to a specific domain and adapt it to a family of applications. In the present study, we propose a high‐level PPM specific to digital signal‐processing applications. It is based on data‐flow graph models of computation and a dynamic run‐time model of execution (StarPU). We show how the user can easily express a digital signal‐processing application and take advantage of task, data, and graph parallelism in the implementation, to enhance performance on targeted heterogeneous clusters composed of CPUs and different accelerators (e.g., GPU and Xeon Phi). Copyright © 2015 John Wiley & Sons, Ltd.
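To make the data-flow graph model of computation concrete, the sketch below shows a minimal, hypothetical scheduler in which each node is a task and each edge carries a data dependency; tasks fire in dependency order once all their producers have completed. This is only an illustration of the general model the abstract names, not the paper's actual API: the proposed PPM maps such graphs onto the StarPU run-time, which additionally schedules ready tasks across heterogeneous workers (CPUs, GPUs, Xeon Phi) rather than running them sequentially.

```python
# Illustrative data-flow graph execution (hypothetical, not the paper's API).
# tasks: {name: callable taking a list of input values}
# edges: list of (producer, consumer) pairs
from collections import deque

def run_dataflow(tasks, edges):
    # Build predecessor/successor adjacency from the edge list.
    preds = {t: [] for t in tasks}
    succs = {t: [] for t in tasks}
    for src, dst in edges:
        preds[dst].append(src)
        succs[src].append(dst)

    results, done = {}, set()
    # Source nodes (no predecessors) are ready immediately.
    ready = deque(t for t in tasks if not preds[t])
    while ready:
        t = ready.popleft()
        # Gather the outputs of all producers as this task's inputs.
        results[t] = tasks[t]([results[p] for p in preds[t]])
        done.add(t)
        # A consumer becomes ready once all of its producers are done.
        for s in succs[t]:
            if all(p in done for p in preds[s]):
                ready.append(s)
    return results

# Example: a small DSP-style pipeline  source -> scale -> reduce
pipeline = {
    "source": lambda _: [1, 2, 3],
    "scale":  lambda inp: [2 * x for x in inp[0]],
    "reduce": lambda inp: sum(inp[0]),
}
out = run_dataflow(pipeline, [("source", "scale"), ("scale", "reduce")])
print(out["reduce"])  # -> 12
```

In a real run-time such as StarPU, the `ready` queue would be consumed concurrently by multiple workers, which is where the task, data, and graph parallelism mentioned in the abstract comes from.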
ISSN:1532-0626
1532-0634
DOI:10.1002/cpe.3622