large filesystem & fileserver architecture issues.

Craig Tierney ctierney at
Wed Aug 6 15:09:59 EDT 2003

On Wed, 2003-08-06 at 06:07, Gerry Creager N5JXS wrote:
> We just implemented an IDE RAID system for some meteorology data/work. 
> We're pretty happy with the results so far.  Our hardware complement is:
> SuperMicro X5DAE Motherboard
> dual Xeon 2.8GHz processors
> 2 GB Kingston Registered ECC RAM
> 2 HighPoint RocketRAID 404 4-channel IDE RAID adapters
> 10 Maxtor 250 GB 7200 RPM disks
> 1 Maxtor 60 GB drive for system work
> 1 long multi-drop disk power cable...
> SuperMicro case (nomenclature escapes me; however, it has 11 disk bays 
> and fits the X5DAE MoBo)
> Cheapest PCI video card I could find (no integrated video on MoBo)
> Add-on Intel GBE SC fiber adapter

Hardware choices look good.  How did you configure it?
Are there 1 or 2 filesystems?  Raid 0, 1, 5?  Do you
have any performance numbers on the setup (preferably 
large-file, dd-type tests)?
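For reference, the kind of large-file dd test I mean looks roughly like the sketch below.  The mount point and file size are placeholders (point TESTFILE at the RAID filesystem and use a size bigger than RAM, or the read pass just measures the page cache):

```shell
#!/bin/sh
# Sequential write/read throughput test with dd.
# TESTFILE and SIZE_MB are placeholders -- point TESTFILE at the
# RAID filesystem under test and use a size larger than RAM.
TESTFILE=${TESTFILE:-/tmp/dd_testfile}
SIZE_MB=${SIZE_MB:-64}

# Sequential write; dd prints its transfer summary on stderr
dd if=/dev/zero of="$TESTFILE" bs=1M count="$SIZE_MB" 2>&1 | tail -1
sync

# Sequential read back
dd if="$TESTFILE" of=/dev/null bs=1M 2>&1 | tail -1

rm -f "$TESTFILE"
```

Dividing bytes moved by elapsed time gives the sustained MB/sec figures worth comparing across RAID levels.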


> Drawbacks:
> 1.  I should have checked for integrated video for simplicity
> 2.  Current HighPoint drivers for RH9 don't support RAID yet; use RH7.3 with 
> ALL the patches
> 3.  Make sure you order the rack mount parts when you order the case; it 
>   only appeared they were included...
> 4.  Questions have been raised about the E-1000 integrated GBE copper 
> NIC on the MoBo.  Doesn't matter: it's gonna be connected to a 100M 
> switch, and GBE will be on fiber like God intended data to be passed (no, 
> I don't trust most terminations for GBE on copper!)
> It's up and working.  It's been burning in for the last 2 weeks with no 
> problems, and it's going to the Texas GigaPoP today where it'll be live 
> on Internet2.
> HTH, Gerry
> Nicholas Henke wrote:
> > On Tue, 2003-08-05 at 11:45, Michael T. Prinkey wrote:
> > 
> >>On 4 Aug 2003, Nicholas Henke wrote:
> >>
> >>We have a lot of experience with IDE RAID arrays at client sites.  The DOE
> >>lab in Morgantown, WV has about 4 TBs of IDE RAID that we built for them.  
> >>The performance is quite good (840 GBs, 140 MB/sec read, 80 MB/sec write)
> >>and the price is hard to beat.  The raid array that serves home
> >>directories to their clusters and workstations is backed up nightly to a
> >>second raid server, similarly to your system.  To speed things along we
> >>installed an extra gigabit card in the primary and backup servers and
> >>connected the two directly.  The nightly backup (cp -auf via NFS) of 410
> >>GBs takes just over an hour using the dedicated gbit link.  Rsync would
> >>probably be faster.  Without the short-circuit gigabit link, it used to run
> >>four or five times longer and seriously impact NFS performance for the
> >>rest of the systems on the LAN.
> >>
> >>Hope this helps.
> >>
> >>Regards,
> >>
> >>Mike Prinkey
> >>Aeolus Research, Inc.
> > 
> > 
> > Definitely does -- can you recommend hardware for the IDE RAID, or list
> > what you guys have used?
> > 
> > Nic
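
The nightly mirror Mike describes can be sketched roughly as below.  SRC and DEST are placeholder directories, not paths from his setup (there, SRC would be the NFS-mounted primary array and DEST the backup server's array); the commented rsync line is the delta-copy alternative he mentions:

```shell
#!/bin/sh
# Hedged sketch of a nightly mirror job.  SRC and DEST are
# placeholders, not paths from the original post.
SRC=${SRC:-/tmp/mirror_src}
DEST=${DEST:-/tmp/mirror_dest}
mkdir -p "$SRC" "$DEST"

# As in the post: archive mode (-a), copy only newer files (-u),
# force overwrite (-f), run against an NFS mount of the primary.
cp -auf "$SRC/." "$DEST/"

# rsync alternative: transfers only changed files, so repeat runs
# touch far less data (--delete keeps the mirror exact):
# rsync -a --delete "$SRC/" "$DEST/"
```

Run from cron over the dedicated gigabit link, rsync's delta transfer is why it would likely beat cp -auf, which re-copies any file whose timestamp is newer.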
Craig Tierney <ctierney at>

Beowulf mailing list, Beowulf at