The data-parallel Ada run-time system, simulation and empirical results
Main Authors: | |
---|---|
Format: | Conference Proceeding |
Language: | English |
Subjects: | |
Summary: | The Parallel Ada Run-Time System (PARTS), developed at TUB, is the target of an experimental translator that maps sequential Ada to a shared-memory multi-processor. Other modules of the parallel compiler are not explained. The paper summarizes the multi-processor run-time system: it explains the instructions that activate multiple processors, leading to SPMD execution, and discusses the scheduling policy. Default architectural attributes of PARTS can be custom-tailored for each run without re-compilation. The experiments exposed different machine personalities by measuring execution-time profiles of a vector product run on different architectures. The goal is to determine experimentally how well a shared-memory architecture scales with an increasing problem size, and how well the problem size scales for a fixed multi-processor configuration. The measurements expose the advantages of shared-memory multi-processor architectures in exploiting one dimension of parallelism. However, scalability is limited by the number of memory ports. Therefore another architectural dimension of parallelism, distributed memory, must be combined with shared memory to achieve Tera-FLOP performance. |
DOI: | 10.1109/IPPS.1993.262808 |
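
The summary describes SPMD execution of a vector product across multiple processors of a shared-memory machine. As a rough illustration of that execution model only (not of PARTS itself, whose processor-activation instructions and scheduling policy are not shown in this record), here is a minimal sketch in plain Ada tasking: every worker task runs the same code on a different slice of the vectors and writes a partial sum, which the main program then combines. The procedure name, worker count, and problem size are illustrative assumptions.

```ada
with Ada.Text_IO; use Ada.Text_IO;

procedure Vector_Product is
   --  Problem size and worker count: illustrative values, not PARTS defaults.
   N       : constant := 1_000;
   Workers : constant := 4;

   type Vector is array (1 .. N) of Float;
   A, B : Vector;

   --  One slot per worker; each task writes only its own element.
   Partial : array (1 .. Workers) of Float := (others => 0.0);

   task type Worker (Id : Positive);

   task body Worker is
      Chunk : constant Positive := N / Workers;
      First : constant Positive := (Id - 1) * Chunk + 1;
      --  The last worker also takes any remainder (Ada 2012 if-expression).
      Last  : constant Positive := (if Id = Workers then N else Id * Chunk);
      Sum   : Float := 0.0;
   begin
      for I in First .. Last loop
         Sum := Sum + A (I) * B (I);
      end loop;
      Partial (Id) := Sum;
   end Worker;

   Result : Float := 0.0;
begin
   --  Initialise the operands before any worker starts.
   for I in 1 .. N loop
      A (I) := Float (I);
      B (I) := 2.0;
   end loop;

   declare
      --  The workers activate here and run the same code on different
      --  slices (SPMD style); the block exits only after all terminate.
      W1 : Worker (1);
      W2 : Worker (2);
      W3 : Worker (3);
      W4 : Worker (4);
   begin
      null;
   end;

   --  Reduce the partial sums sequentially.
   for I in 1 .. Workers loop
      Result := Result + Partial (I);
   end loop;

   Put_Line ("Dot product =" & Float'Image (Result));
end Vector_Product;
```

In the paper's setting the fan-out to multiple processors is handled by the run-time system rather than by explicit task objects as above; the sketch only shows the slice-per-worker structure that an SPMD vector product implies.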