CONFIGURING LARGE HIGH-PERFORMANCE CLUSTERS AT LIGHTSPEED: A CASE STUDY
Published in: The International Journal of High Performance Computing Applications, 2004-10, Vol. 18 (3), pp. 317-326
Main Authors: , , , ,
Format: Article
Language: English
Summary: Over a decade ago, the TOP500 list was started as a way to measure supercomputers by their sustained performance on a particular linear algebra benchmark. . . . This paper describes a weekend activity where two existing 128-node commodity clusters were fused into a single 256-node cluster for the specific purpose of running the benchmark used to rank the machines in the TOP500 supercomputer list. . . . This paper describes early (pre-weekend) benchmark activities to empirically determine reasonably good parameters for the High Performance Linpack (HPL) code on both Ethernet and Myrinet interconnects. It fully describes the physical layout of the machine and the description-based installation methods used in Rocks to re-deploy two independent clusters as a single cluster, and gives the benchmark results that were gathered over the 40-hour period allotted for the complete experiment. In addition, we describe some of the online monitoring and measurement techniques that were employed during the experiment. Finally, we point out the issues uncovered with a commodity cluster of this size. The techniques presented in this paper truly bring supercomputers into the hands of the masses of computational scientists.
ISSN: 1094-3420
DOI: 10.1177/1094342004046056