Reviews and Benchmarks

Here is where the rubber meets the road, the wheat is separated from the chaff, and the hard questions get answered. Clusters are all about performance and getting the best bang for your buck. Join us as we go under the hood and look at the power plants driving HPC today.

Note: this paper was prepared for a conference that we decided not to attend (okay, it was not accepted). It is written in a more formal style than normal ClusterMonkey articles and is sponsored by the Beowulf Foundation.


Popular homogeneous clustered HPC systems (e.g., commodity x86 servers connected by a high-speed interconnect) have given way to heterogeneous clusters composed of multi-core servers, high-speed interconnects, accelerators (often GPU based), and custom storage arrays. Cluster designers are often faced with finding a balance between purpose-built systems (tailored to specific problem domains) and general-use systems. Traditional cluster-based approaches, however, all share a hard boundary between internal server buses (mainly PCIe) and the rest of the cluster. In heterogeneous environments, this server boundary often creates inefficient resource management, limits solution flexibility, and heavily influences the design of clustered HPC applications. This paper explores the malleability of the GigaIO™ FabreX™ PCIe memory fabric in relation to HPC cluster applications. A discussion of emerging concepts (e.g., a routable PCIe bus) and hands-on benchmarks using shared GPUs are provided. In addition, results of a simple integration with the SLURM resource scheduler are discussed as a way to make composable/malleable computing transparently available to end users.

Keywords: composable computing, malleable computing, PCIe, HPC cluster, SLURM, benchmark, FabreX, GigaIO, resource scheduler
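To give a feel for what "transparent" composability means from the end-user side, here is a minimal sketch of a SLURM batch script requesting a GPU. The partition name `composable` and the idea that a prolog attaches the fabric device are assumptions for illustration, not details taken from the paper; `--gres` itself is standard SLURM syntax.

```shell
#!/bin/bash
# Hypothetical sketch: a user requests one GPU exactly as on a
# conventional cluster. In a FabreX-style setup, the scheduler (e.g.,
# via a prolog script) would compose a fabric-attached GPU onto the
# allocated node before the job starts -- invisible to the user.
#SBATCH --job-name=gpu-bench
#SBATCH --partition=composable   # assumed partition name
#SBATCH --gres=gpu:1             # one GPU, wherever it physically lives
#SBATCH --time=00:10:00

# By job start, the GPU appears as a local PCIe device.
nvidia-smi
```

The point of the integration is that the job script stays unchanged; only the scheduler and fabric configuration know the GPU was borrowed from a shared pool.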


Getting the data where it needs to go is only half the story; getting it there quickly and with minimal latency is the real issue with clusters. Whether it is one byte or a gigabyte, interconnects are how the work gets done.

Choosing cluster hardware can be difficult without some real application data and experience. Our hardware reviews will try to offer some insights into today's hardware choices.

We list all the books on clusters we could find. We even read most of these books. Where we felt qualified, we provide a short review.

Not only do we provide the benchmark numbers, we also provide the benchmark methods and techniques. How is that for service? Now you can run your own benchmarks.





©2005-2023 Copyright Seagrove LLC, Some rights reserved. Except where otherwise noted, this site is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International. The Cluster Monkey Logo and Monkey Character are Trademarks of Seagrove LLC.