Linux Software RAID5 Performance

Michael Prinkey mikeprinkey at
Sun Mar 31 14:33:59 EST 2002

Some time ago, a thread discussed the relative performance and stability 
merits of different RAID solutions.  At that time, I gave some results for 
640-GB arrays that I had built using EIDE drives and Software RAID5.  I just 
recently constructed and installed a 1.0-TB array and have some performance 
numbers to share for it as well.  They are interesting for two reasons:  
First, the filesystem in use is ext3, rather than ext2.  Second, the read 
performance is significantly better (almost 2x) than that of the 640-GB 
arrays.

The system uses 11 120-GB Maxtor 5400-RPM drives, two Promise Ultra66 
controllers, a P4 1.6-GHz CPU, an Intel 850 motherboard, and 512 MB ECC 
RDRAM.  Drives are configured in RAID5 (9 data, 1 parity, 1 hot spare).  
Four drives are on each Promise controller.  Three are on the on-board EIDE 
controller (UDMA100).  A small boot drive is also on the on-board 
controller.  I had intended to use Ultra100 TX2 controllers, but the EIDE 
driver updates with TX2 support have not yet made it into the stock kernels 
(I'm using 2.4.18), so I opted for the older, slower controllers rather 
than patching.  So, I am both cautious and lazy.  8)
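
For reference, the /etc/raidtab for this layout looks roughly like the 
following.  This is a sketch reconstructed from the mdstat output below 
rather than copied off the machine, so double-check it before use; the 
"algorithm 2" in mdstat corresponds to left-symmetric parity.

    raiddev /dev/md0
        raid-level              5
        nr-raid-disks           10
        nr-spare-disks          1
        persistent-superblock   1
        parity-algorithm        left-symmetric
        chunk-size              32
        device          /dev/hdb1
        raid-disk       0
        device          /dev/hdc1
        raid-disk       1
        device          /dev/hdd1
        raid-disk       2
        device          /dev/hde1
        raid-disk       3
        device          /dev/hdf1
        raid-disk       4
        device          /dev/hdg1
        raid-disk       5
        device          /dev/hdh1
        raid-disk       6
        device          /dev/hdi1
        raid-disk       7
        device          /dev/hdj1
        raid-disk       8
        device          /dev/hdk1
        raid-disk       9
        device          /dev/hdl1
        spare-disk      0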

Again, performance (see below) is remarkably good, especially considering 
all of the strikes against this configuration:  EIDE instead of SCSI, UDMA66 
instead of 100/133, 5400-RPM instead of 7200-RPM, and master/slave drives on 
each port instead of a single drive per port.  With some hdparm tuning (-c 3 
-u 1), the read performance went from 83 MB/sec to 93 MB/sec.  Write 
performance remained essentially unchanged at 26 MB/sec.  For comparison, 
the 640-GB arrays gave read performance of about 56 MB/sec and write 
performance of 28.5 MB/sec.
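
The tuning itself is just two hdparm flags applied to each drive in the 
array: -c 3 selects 32-bit I/O (with a sync sequence) and -u 1 unmasks 
other interrupts during disk I/O.  Something like this (a sketch; the 
drive list matches the mdstat output below):

    for d in hdb hdc hdd hde hdf hdg hdh hdi hdj hdk hdl; do
        hdparm -c 3 -u 1 /dev/$d
    done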

Had I more time, I would have tested ext2 vs ext3 to ascertain how much that 
change affected performance.  Likewise, I was considering the use of a RAID1 
array as the ext3 journal device to perhaps improve write performance; a 
rough recipe for that is below.  Any suggestions along those lines would be 
welcome.

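In case anyone wants to try that, the setup would go something like this.  
This is untested here, and /dev/md1 (a small RAID1 mirror, which would need 
its own raidtab entry) is hypothetical:

    # format the mirror as an external journal device; the block size
    # must match the filesystem it will serve
    mkraid /dev/md1
    mke2fs -b 4096 -O journal_dev /dev/md1

    # create the ext3 filesystem with its journal on the mirror
    mke2fs -b 4096 -j -J device=/dev/md1 /dev/md0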

Mike Prinkey
Aeolus Research, Inc.


[root at tera /root]# df; mount; cat /proc/mdstat; cat bonnie10.log
Filesystem           1k-blocks      Used Available Use% Mounted on
/dev/hda6             38764268   2601128  34193976   8% /
/dev/hda1               101089      4965     90905   6% /boot
/dev/md0             1063591944  58195936 1005396008   6% /raid
raid640:/raid/home   630296592 284066148 346230444  46% /mnt/tmp
/dev/hda6 on / type ext2 (rw)
none on /proc type proc (rw)
/dev/hda1 on /boot type ext2 (rw)
none on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/md0 on /raid type ext3 (rw)
automount(pid580) on /misc type autofs 
raid640:/raid/home on /mnt/tmp type nfs (rw,addr=
Personalities : [raid5]
read_ahead 1024 sectors
md0 : active raid5 hdl1[10] hdk1[9] hdj1[8] hdi1[7] hdh1[6] hdg1[5] hdf1[4] 
hde1[3] hdd1[2] hdc1[1] hdb1[0]
      1080546624 blocks level 5, 32k chunk, algorithm 2 [10/10] [UUUUUUUUUU]

unused devices: <none>
Bonnie 1.2: File '/raid/Bonnie.1027', size: 1048576000, volumes: 10
Writing with putc()...         done:  14810 kB/s  88.9 %CPU
Rewriting...                   done:  22288 kB/s  13.4 %CPU
Writing intelligently...       done:  26438 kB/s  21.7 %CPU
Reading with getc()...         done:  17112 kB/s  97.9 %CPU
Reading intelligently...       done:  93332 kB/s  32.2 %CPU
Seek numbers calculated on first volume only
Seeker 1...Seeker 2...Seeker 3...start 'em...done...done...done...
              ---Sequential Output (nosync)--- ---Sequential Input-- --Rnd Seek- 
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --04k (03)- 
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU   /sec %CPU 
raid05 10*1000 14810 88.9 26438 21.7 22288 13.4 17112 97.9 93332 32.2  206.3 
