building a RAID system

Michael T. Prinkey mprinkey at
Thu Oct 9 07:59:54 EDT 2003

I would also echo most of Mark's points, aside from the 8 MB cache issue.  
I have seen some noticeable speed improvements using 8 MB drives vs 2 MB drives.

I would also offer one other point.  No matter whether you use SCSI or 
IDE drives, be absolutely certain that you keep the drives cool.  The 
"internal" 3.5" bays in most cases are nearly useless because they place 
several drives in almost direct contact.  The drive(s) sandwiched in the 
middle have only their edges exposed to air and have to dissipate the bulk 
of their heat through the neighboring drives.  I like to mount the drives 
in 5.25" bays instead.  This at least provides an air gap for some cooling.  
For large raid servers, I like to use the cheap fan coolers.  They can be 
had for $5-$8 each and include 2 or 3 small fans that fill in the 5.25" 
opening, plus the 5.25"-to-3.5" mounting brackets.  Of course, that makes 
for a lot of fan noise.

We typically build 2 identical raid servers connected by a dedicated
gigabit link to do nightly backups, both to protect from raid failure and
user error.
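
For what it's worth, the nightly mirror can be as simple as an rsync over
the private link, run from cron.  A minimal sketch -- the hostname
"backup-gig" and both paths are placeholders, not part of our actual setup:

```shell
#!/bin/sh
# Nightly mirror of the raid array to its twin server over the
# dedicated gigabit link.  Hostname and paths are placeholders.
SRC=/raid/
DEST=backup-gig:/raid-mirror/
# Build the command; --delete keeps the mirror exact.  Drop it if you
# also want the twin to retain files deleted (perhaps by accident) on
# the primary.
CMD="rsync -a --delete $SRC $DEST"
echo "$CMD"    # replace echo with the real invocation in your crontab
```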

I would like to ask if anyone has investigated Benjamin LaHaise's netmd
application yet.

I think there was some discussion of it a few months ago, but I haven't 
seen anything lately.


Mike Prinkey
Aeolus Research, Inc.

On Wed, 8 Oct 2003, Mark Hahn wrote:

> > 	- get those drives w/ 8MB buffer disk cache
> what reason do you have to regard 8M as other than a useless
> marketing feature?  I mean, the kernel has a cache that's 100x
> bigger, and a lot faster.
> > 	- slower rpm disks ... usually it tops out at 7200rpm
> unless your workload is dominated by tiny, random seeks,
> the RPM of the disk isn't going to be noticeable.
> > 	- it supposedly can sustain 133MB/sec transfers
> it's not hard to saturate a 133 MBps PCI bus with 2-3 normal IDE
> disks in raid0.  interestingly, the chipset controller is normally
> not competing for the same bandwidth as the PCI, so even with 
> entry-level hardware, it's not hard to break 133.
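
Mark's arithmetic here is easy to check.  Assuming roughly 50 MB/s
sustained per circa-2003 IDE disk (a rough figure, not a measurement):

```shell
# How many ~50 MB/s IDE disks does it take to fill a 133 MB/s PCI bus?
# Both figures are rough assumptions for early-2000s hardware.
PER_DISK=50
BUS=133
DISKS=$(( (BUS + PER_DISK - 1) / PER_DISK ))   # ceiling division
echo "$DISKS disks at ${PER_DISK} MB/s saturate a ${BUS} MB/s PCI bus"
```

which agrees with the 2-3 disks he quotes.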
> > 	- if you use software raid, you can monitor the raid status
> this is the main and VERY GOOD reason to use sw raid.
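
For anyone who hasn't set this up: the usual way to watch sw raid is
/proc/mdstat plus mdadm's monitor mode.  A sketch -- device names like
/dev/md0 are assumptions for your own arrays:

```shell
# Show array health; a healthy two-disk mirror reads [UU], degraded [U_].
cat /proc/mdstat 2>/dev/null || echo "no md arrays on this host"
# One-off detail on a specific array (device name is an assumption):
#   mdadm --detail /dev/md0
# Daemon mode: poll every 300 s and mail root when a disk fails:
#   mdadm --monitor --mail=root --delay=300 --scan
```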
> > 	- some say scsi disks are faster ... 
> usually lower-latency, often not higher bandwidth.  interestingly,
> ide disks usually fall off to about half peak bandwidth on inner 
> tracks.  scsi disks fall off too, but usually less so - they 
> don't push capacity quite as hard.
> > 	- it supposedly can sustain 320MB/sec transfers
> that's silly, of course.  outer tracks of current disks run at 
> between 50 and 100 MB/s, so that's the max sustained.  you can even
> argue that's not really 'sustained', since you'll eventually get
> to slower inner tracks.
> > independent of which raid system is built, you will need 2 or 3
> > more backup systems to back up your terabyte-sized raid systems
> backup is hard.  you can get 160 or 200G tapes, but they're almost 
> as expensive as IDE disks, not to mention the little matter of a 
> tape drive that costs as much as a server.  raid5 makes backup
> less about robustness than about archiving or rogue-rm-protection.
> I think the next step is primarily a software one - 
> some means of managing storage, versioning, archiving, etc...
> _______________________________________________
> Beowulf mailing list, Beowulf at
> To change your subscription (digest mode or unsubscribe) visit
