large filesystem & fileserver architecture issues.

Doug Farley Douglas.L.Farley at nasa.gov
Wed Aug 6 08:35:10 EDT 2003


I noticed that with ACNC's 14-unit RAID they used an IDE-to-SCSI Ultra3 bridge of some
sort; does anyone know what type of hardware they used to convert the drives
for this array? Just direct IDE-SCSI adapters (which I've not seen cheaper
than $80) on each drive, then connecting to something like an Adaptec
RAID card?  Does anyone have any experience with doing this (with off-the-shelf
parts) to create a semi-cheap RAID (maybe 10 x $250 for 250 GB disks, + 
10 x $80 IDE-SCSI converters + an expensive $800 Adaptec 2200-class card)?
Those costs are higher (~$410/disk) than doing 10 disks on a 3ware 
7500-12 (~$320/disk), costs excluding the host system, so is whatever is gained 
really worth it?
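As a quick sanity check of those per-disk numbers (the ~$700 3ware card price is inferred from the quoted ~$320/disk figure, not a price from any message here):

```shell
#!/bin/sh
# Back-of-the-envelope per-disk cost for the two options above.
# The 3ware card price is a guess backed out of the ~$320/disk quote.
awk 'BEGIN {
    scsi  = (10 * 250 + 10 * 80 + 800) / 10;  # disks + converters + Adaptec card
    tware = (10 * 250 + 700) / 10;            # disks + 3ware 7500-12
    printf "IDE-SCSI route: $%d/disk\n3ware route:    $%d/disk\n", scsi, tware
}'
# prints:
# IDE-SCSI route: $410/disk
# 3ware route:    $320/disk
```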

Doug

At 09:11 AM 8/5/2003 -1000, you wrote:
>I have 2 IDE RAID boxes from AC&C (http://www.acnc.com). They are not true
>NFS boxes; rather, they connect to a cheap $1500 server via a SCSI-3 cable,
>although they do offer an NFS box that will turn one of these arrays into a
>standalone unit.  We have had great success with these units
>(http://neptune.navships.com/images/harddrivearrays.jpg).  We first
>acquired the 8-slot chassis 2 years ago and filled it with 8 IBM 120GXPs.
>We have set it up in a RAID-5 configuration and have not yet had to replace
>even one of the drives (knocking on wood).  After a year we picked up the
>14-slot chassis and filled it with 160 GB Maxtor drives, and it has performed
>flawlessly.  I think we paid about $4000 for the 14-slot chassis.  You can
>add 14 160 GB Seagates at $129 each from newegg.com and a cheap fileserver
>for $1500, and you've got about 2 TB of storage for around $7000.
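Those round figures check out, remembering that RAID-5 gives up one drive's worth of capacity to parity:

```shell
#!/bin/sh
# Sanity-check the ~$7000 / ~2 TB figures quoted above.
awk 'BEGIN {
    cost      = 4000 + 14 * 129 + 1500;  # chassis + 14 drives + server
    usable_gb = (14 - 1) * 160;          # RAID-5: one drive of parity
    printf "total cost: $%d\nusable: %d GB\n", cost, usable_gb
}'
# prints:
# total cost: $7306
# usable: 2080 GB
```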
>
>Mitchel Kagawa
>
>----- Original Message -----
>From: "Nicholas Henke" <henken at seas.upenn.edu>
>To: "Michael T. Prinkey" <mprinkey at aeolusresearch.com>
>Cc: <beowulf at beowulf.org>
>Sent: Tuesday, August 05, 2003 5:47 AM
>Subject: Re: large filesystem & fileserver architecture issues.
>
>
> > On Tue, 2003-08-05 at 11:45, Michael T. Prinkey wrote:
> > > On 4 Aug 2003, Nicholas Henke wrote:
> > >
 > > > We have a lot of experience with IDE RAID arrays at client sites.  The
 > > > DOE lab in Morgantown, WV has about 4 TB of IDE RAID that we built for
 > > > them.  The performance is quite good (840 GB, 140 MB/sec read, 80 MB/sec
 > > > write) and the price is hard to beat.  The RAID array that serves home
 > > > directories to their clusters and workstations is backed up nightly to a
 > > > second RAID server, similar to your system.  To speed things along we
 > > > installed an extra gigabit card in the primary and backup servers and
 > > > connected the two directly.  The nightly backup (cp -auf via NFS) of 410
 > > > GB takes just over an hour using the dedicated gigabit link.  Rsync would
 > > > probably be faster.  Without the short-circuit gigabit link, it used to
 > > > run four or five times longer and seriously impact NFS performance for
 > > > the rest of the systems on the LAN.
> > >
> > > Hope this helps.
> > >
> > > Regards,
> > >
> > > Mike Prinkey
> > > Aeolus Research, Inc.
> >
 > > Definitely does -- can you recommend hardware for the IDE RAID, or list
> > what you guys have used ?
> >
> > Nic
> > --
> > Nicholas Henke
> > Penguin Herder & Linux Cluster System Programmer
> > Liniac Project - Univ. of Pennsylvania
> >
> > _______________________________________________
> > Beowulf mailing list, Beowulf at beowulf.org
 > > To change your subscription (digest mode or unsubscribe) visit
 > > http://www.beowulf.org/mailman/listinfo/beowulf
> >
> >
>
>

==============================
Doug Farley

Data Analysis and Imaging Branch
Systems Engineering Competency
NASA Langley Research Center

< D.L.FARLEY at LaRC.NASA.GOV >
< Phone +1 757 864-8141 >



