Dynamic techniques for mapping general parallel nested loops on multiprocessor systems
Main Authors: , ,
Format: Conference Proceeding
Language: English
Online Access: Request full text
Summary: Scheduling of parallel loops on shared-memory multiprocessor machines is discussed. These schedules are general schemes for assigning nested parallel loops to processors. The schemes used to schedule the tasks of a program on a parallel system fall broadly into two classes: static and dynamic. In static scheduling, processors are assigned tasks before execution starts; when execution begins, each processor knows exactly which tasks to execute. In dynamic scheduling, processor allocation takes place during program execution. The authors present the high-gain two-level guided self-scheduling algorithm, a new approach for scheduling arbitrarily nested parallel programs on shared-memory multiprocessor systems. The proposed algorithm modifies the low-level part of the two-level guided self-scheduling algorithm. After presenting the proposed algorithm, a simulation and a performance-measurement scheme are described to show to what extent such an algorithm is useful.
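The summary builds on guided self-scheduling (GSS), in which each idle processor dynamically claims a chunk of the remaining iterations sized at ceil(remaining / P), so chunks shrink geometrically and the last small chunks balance the load. The sketch below illustrates only this classic GSS chunk rule, not the paper's high-gain two-level variant; the function name `gss_chunks` is illustrative, not from the paper.

```python
import math

def gss_chunks(total_iters: int, num_procs: int) -> list[int]:
    """Classic guided self-scheduling chunk sequence.

    Each time a processor becomes idle it takes ceil(remaining / P)
    iterations, so chunk sizes decrease geometrically toward 1.
    """
    remaining = total_iters
    chunks = []
    while remaining > 0:
        chunk = math.ceil(remaining / num_procs)
        chunks.append(chunk)
        remaining -= chunk
    return chunks
```

For example, with 100 iterations on 4 processors the first chunk is 25 and subsequent chunks shrink monotonically, leaving several single-iteration chunks at the end to smooth out load imbalance.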
DOI: 10.1109/MWSCAS.1992.271060