
In past columns, we've been talking about PVFS, including how to configure it for performance, flexibility, and fault tolerance. If you are interested in performance, you need some way of measuring how it changes when you make changes. In this column, I'll talk about how one benchmarks parallel file systems. Of course, by benchmarks I don't mean comparing parallel file systems to one another; rather, I mean determining the effects of changes on the performance of a parallel file system. This information gives you the ability to tune applications to maximize performance on a given parallel file system, or to tune a parallel file system for a given set of codes.

Serial Benchmarking

Let's start out the discussion by looking at how plain old serial file systems are benchmarked. The absolute best way to test any file system is to test it with your application. However, this is not always possible. Consequently, people have developed applications that simulate some pattern of file system usage. The most common benchmarks are Bonnie++, IOZone, Postmark, and homegrown tests that use commands such as dd, tar, or cpio.

Bonnie++ is a benchmark suite that performs a number of simple tests that will exercise the storage/file system combination. It primarily tests database type access to a single file or a set of files. It tests the creation, reading, and deleting of small files that can simulate the access pattern of codes such as Squid, INN, or Maildir. Despite the database focus of Bonnie++, people have found it to be a very good benchmarking program that allows various file systems to be compared to one another. It is also used to examine the effects of changing file system parameters.

As with all benchmarking codes, there are a number of caveats to running Bonnie++ that must be observed to produce any sort of meaningful result. The most important one is to use a file size much larger than physical memory to avoid the effects of buffering on the measurements. Recommendations for file sizes range from twice the size of physical memory to 10 times the size of physical memory. I usually recommend five times the size of physical memory.
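As a rough illustration, the "five times physical memory" rule can be computed rather than guessed. The sketch below assumes a Linux system, where /proc/meminfo reports MemTotal in kB; check your Bonnie++ man page to confirm that its -s option takes the test file size in megabytes on your version.

```python
# Sketch: derive a Bonnie++ test-file size of five times physical RAM.
# Assumes Linux, where /proc/meminfo reports MemTotal in kB.

def total_ram_bytes(meminfo_path="/proc/meminfo"):
    """Return total physical memory in bytes from /proc/meminfo."""
    with open(meminfo_path) as f:
        for line in f:
            if line.startswith("MemTotal:"):
                return int(line.split()[1]) * 1024  # value is in kB
    raise RuntimeError("MemTotal not found in /proc/meminfo")

def bonnie_size_mb(multiplier=5):
    """Test-file size in MB: multiplier times physical memory."""
    return (total_ram_bytes() * multiplier) // (1024 * 1024)

if __name__ == "__main__":
    # Suggested invocation (flag semantics may vary by version):
    print(f"bonnie++ -s {bonnie_size_mb()}")
```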

IOZone is another common file system benchmarking tool. It generates and measures a variety of file operations such as read, write, re-read, re-write, read backwards, read strided, fread, fwrite, random read, pread, mmap, aio_read, and aio_write. The goal of IOZone is to test a broad range of functions so that vendors cannot tune their hardware and file systems to enhance the performance of just a few applications.

Another popular benchmark is Postmark. It is used to test the performance of file systems on small files. Examples of applications that are dominated by small-file performance are email applications, news servers, and web-based commerce applications. The developer of Postmark, NetApp, feels that the performance of file systems on small files is often overlooked, even though small files make up a very important part of Internet servers. Consequently, Postmark simulates heavy small-file system loads.

The Postmark benchmark was designed to create a large pool of small files that is continuously changing, to reflect the transient nature of the files on a large Internet email server. It measures the transaction rate, where a transaction is the creation or deletion of a file, or a read from or append to a file. The way the transactions are processed is designed to minimize the effects of buffering and caching, although this depends upon the parameters chosen for the benchmark run.
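The transaction loop described above can be sketched as follows. This is a simplified illustration of the idea, not the actual NetApp Postmark tool; the file counts and sizes are arbitrary placeholders.

```python
# Sketch of a Postmark-style small-file workload: maintain a pool of
# small files and apply random create/delete/read/append transactions.
# Illustration only -- not the real NetApp Postmark benchmark.
import os
import random
import tempfile
import time

def postmark_like(transactions=1000, initial_files=100, max_size=4096):
    """Run a random transaction mix; return transactions per second."""
    pool_dir = tempfile.mkdtemp(prefix="pmark_")
    pool, next_id = [], 0

    def create():
        nonlocal next_id
        path = os.path.join(pool_dir, f"f{next_id}")
        next_id += 1
        with open(path, "wb") as f:
            f.write(os.urandom(random.randint(1, max_size)))
        pool.append(path)

    for _ in range(initial_files):  # seed the pool
        create()

    start = time.perf_counter()
    for _ in range(transactions):
        op = random.choice(("create", "delete", "read", "append"))
        if op == "create" or not pool:
            create()
        elif op == "delete":
            os.remove(pool.pop(random.randrange(len(pool))))
        elif op == "read":
            with open(random.choice(pool), "rb") as f:
                f.read()
        else:  # append
            with open(random.choice(pool), "ab") as f:
                f.write(os.urandom(random.randint(1, max_size)))
    elapsed = time.perf_counter() - start

    for path in pool:  # clean up
        os.remove(path)
    os.rmdir(pool_dir)
    return transactions / elapsed

if __name__ == "__main__":
    print(f"{postmark_like():.0f} transactions/sec")
```

Note how a run with few transactions and a small pool will mostly hit the page cache, which is exactly the parameter sensitivity mentioned above.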

These benchmarks, Bonnie++, IOZone, and Postmark, are good if your applications have similar access patterns. For applications that are not represented well by these benchmarks, people have developed their own. One example is to use the dd command to write a number of blocks to a file system. You can choose the block size and the number of blocks to be written to best represent your applications.
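The same idea as a dd-based test (for example, `dd if=/dev/zero of=testfile bs=1M count=256`) can be sketched directly. This is a minimal illustration, assuming an fsync at the end is acceptable for reducing (but not eliminating) the effect of write buffering; the path and sizes are placeholders.

```python
# Sketch of a homegrown dd-style test: write `count` blocks of
# `block_size` bytes sequentially and report write throughput.
import os
import time

def sequential_write(path, block_size=1 << 20, count=256):
    """Write count blocks of block_size bytes; return MB/s."""
    block = b"\0" * block_size
    start = time.perf_counter()
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
    try:
        for _ in range(count):
            os.write(fd, block)
        os.fsync(fd)  # flush to storage so buffering skews the number less
    finally:
        os.close(fd)
    elapsed = time.perf_counter() - start
    return (block_size * count) / elapsed / (1 << 20)

if __name__ == "__main__":
    mbps = sequential_write("testfile")
    os.remove("testfile")
    print(f"{mbps:.1f} MB/s")
```

As with Bonnie++, the total size written should be several times physical memory before the number means much.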

There have also been a number of file system benchmarks written to stress the metadata operations of a file system. These benchmarks perform operations such as creating directories, retrieving statistics on files, truncating files, deleting files, and creating files.
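A metadata micro-benchmark of this kind is simple to sketch: time each class of operation over a batch of empty files. The file count below is an arbitrary placeholder, and real metadata benchmarks add many more operation types.

```python
# Sketch of a metadata micro-benchmark: time create, stat, truncate,
# and delete operations on a batch of empty files.
import os
import tempfile
import time

def time_metadata_ops(nfiles=500):
    """Return a dict of per-operation elapsed seconds over nfiles files."""
    d = tempfile.mkdtemp(prefix="meta_")
    paths = [os.path.join(d, f"f{i}") for i in range(nfiles)]
    results = {}

    start = time.perf_counter()
    for p in paths:
        open(p, "w").close()      # create
    results["create"] = time.perf_counter() - start

    start = time.perf_counter()
    for p in paths:
        os.stat(p)                # retrieve file statistics
    results["stat"] = time.perf_counter() - start

    start = time.perf_counter()
    for p in paths:
        os.truncate(p, 0)         # truncate
    results["truncate"] = time.perf_counter() - start

    start = time.perf_counter()
    for p in paths:
        os.remove(p)              # delete
    results["delete"] = time.perf_counter() - start

    os.rmdir(d)
    return results

if __name__ == "__main__":
    for op, secs in time_metadata_ops().items():
        print(f"{op}: {secs:.4f} s")
```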

All of these benchmarks are good for testing applications that use small files on serial file systems. They can prove somewhat useful for certain parallel file systems such as PVFS, but may not be useful for others. They can also prove useful if you are going to use NFS for some of your cluster data storage needs – for example, mounting users' home directories on the cluster nodes.

Parallel file systems are used for high-speed I/O on data, primarily larger files. Serial benchmarks take into account neither the memory access patterns nor the file access patterns of parallel codes. Consequently, using serial benchmarks to test parallel file systems is not appropriate.

Existing Parallel File System Benchmarks

In general, how one benchmarks parallel file systems depends upon the I/O requirements of the application. However, the range of I/O requirements is very large. For example, some codes require storing large datasets, some require out-of-core data access, some have database access requirements, and some require the reading and writing of large amounts of data in a serial fashion. Also, some codes push all of the I/O through the rank-0 process, or have each process write a small part of the output that is then assembled into a single file after all of the computation is finished. Consequently, the tools to evaluate a parallel file system across such a range of codes are very disparate.

Writing benchmarks for parallel file systems is much more difficult than writing serial benchmarks. Parallel file systems have the issue of how the data is accessed in memory on the nodes and how this maps to the file(s) on the parallel file system. Moreover, there are issues with how the data is written to the file system. Because parallel file systems are used to store large amounts of data at high speeds, issues such as buffer sizes and caching techniques become much more important and relevant than with serial file systems. Even something as simple as timing operations is difficult, because the clocks on different nodes may not be synchronized.
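To see why timing matters, consider the common convention of computing aggregate bandwidth from the earliest start stamp to the latest finish stamp across all processes. The sketch below uses threads on one machine, where all stamps come from a single clock, so the convention works; on a cluster, each stamp would come from a different node's clock, and any skew between those clocks corrupts the result. The worker count and write sizes are arbitrary placeholders.

```python
# Sketch: each worker records its own start/end stamps around a write.
# Aggregate bandwidth = total bytes / (latest end - earliest start),
# which is only meaningful when all stamps share one clock.
import os
import tempfile
import threading
import time

def worker(path, nbytes, stamps, idx):
    start = time.perf_counter()
    with open(path, "wb") as f:
        f.write(b"\0" * nbytes)
        f.flush()
        os.fsync(f.fileno())
    stamps[idx] = (start, time.perf_counter())

def aggregate_bandwidth(nworkers=4, nbytes=1 << 20):
    """Run nworkers concurrent writers; return aggregate MB/s."""
    d = tempfile.mkdtemp(prefix="pario_")
    stamps = [None] * nworkers
    threads = [
        threading.Thread(target=worker,
                         args=(os.path.join(d, f"w{i}"), nbytes, stamps, i))
        for i in range(nworkers)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    earliest = min(s[0] for s in stamps)  # first writer to begin
    latest = max(s[1] for s in stamps)    # last writer to finish
    for i in range(nworkers):             # clean up
        os.remove(os.path.join(d, f"w{i}"))
    os.rmdir(d)
    return (nworkers * nbytes) / (latest - earliest) / (1 << 20)

if __name__ == "__main__":
    print(f"{aggregate_bandwidth():.1f} MB/s aggregate")
```

Real parallel benchmarks work around clock skew by synchronizing at a barrier and timing on one node, or by taking the slowest process's locally measured elapsed time.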

There have been some efforts to create benchmarks for parallel file systems. Some of them cover a range of access patterns, while others target specific workloads. Let's examine five of the more common benchmarks.
