[Beowulf] Raid disk read performance issue in Linux 2.6, anyone seen this?
tmattox at gmail.com
Thu Dec 9 10:58:25 EST 2004
This just-released LWN article may shed some light on
your I/O performance issues with the 2.6 kernel:
"Which is the fairest I/O scheduler of them all?"
That article is currently subscriber-only, but will be available
to all on December 16th. (I highly recommend a subscription;
LWN is a fantastic resource.)
You should look into which IO scheduler works best for
your workload. The 2.6 kernel has a few to choose from...
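On sufficiently recent 2.6 kernels the active scheduler can be inspected and switched per device through sysfs, or set globally at boot; a rough sketch (the device name `sda` here is a placeholder for your RAID device, and runtime switching requires a kernel new enough to expose the sysfs knob):

```shell
# Show the available schedulers for a device; the active one
# is shown in brackets, e.g. "noop anticipatory deadline [cfq]"
cat /sys/block/sda/queue/scheduler

# Switch this device to the deadline scheduler (as root)
echo deadline > /sys/block/sda/queue/scheduler

# Alternatively, pick a default for all devices at boot with
# the elevator= kernel parameter, e.g.:  elevator=deadline
```

Deadline is often worth trying first for big streaming I/O workloads like the one described here, since the default (anticipatory or CFQ, depending on kernel) is tuned more toward mixed interactive loads.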
Also, you may be interested in the latest CentOS 3.3 distribution,
since it has an actively supported x86_64 port. The upcoming
cAos-2 distribution is also worth a look for x86-64 users... for
info about both see:
On Wed, 8 Dec 2004 17:10:40 +0100 (CET), Bogdan Costescu
<bogdan.costescu at iwr.uni-heidelberg.de> wrote:
> On Tue, 7 Dec 2004, Craig Tierney wrote:
> > I meant White Box distribution which is a rebuild of Red Hat Enterprise
> > 3. It was an early release for Opteron and Itanium. Since then,
> > the Gelato Foundation has taken over the rebuilds and should be
> > available.
> I don't think that this is "official" WhiteBox Linux. Their web page
> only mentions x86 and x86_64 architectures. They might not even know
> about it... ;-)
> The guy that initially built WhiteBox x86_64 and ia64, Pasi Pirhonen,
> is now their TaoLinux maintainer (along with s390(x)).
> > It isn't an exact comparison, but every single 2.6-based system I
> > have tried has shown this problem, and no 2.4-based system has.
> It is indeed strange...
> > I primarily used lmdd to test the performance of the filesystem. All
> > I care about is big streaming IO.
> ext3 and xfs (at least) care about the underlying sector size or RAID
> stripe size. Have you paid attention to this when you formatted the
> device (if you formatted after moving to the new computers) ? Do you
> use something else between the physical device and the file-system,
> like lvm (lvm1 in 2.4, lvm2 in 2.6) or software RAID ?
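As a hedged illustration of stripe-aware formatting (the geometry below, 64 KiB chunks across 4 data disks on `/dev/md0`, is invented for the example; substitute your array's real parameters):

```shell
# ext3: stride = RAID chunk size / filesystem block size
# (64 KiB chunk / 4 KiB block = 16)
mke2fs -j -b 4096 -E stride=16 /dev/md0

# xfs: pass the stripe unit (su) and stripe width in units (sw)
# directly; 64 KiB chunks, 4 data disks
mkfs.xfs -d su=64k,sw=4 /dev/md0
```

If the filesystem was simply carried over (or re-created with defaults) when the disks moved to the new machines, misaligned stripes could easily cost a large fraction of streaming throughput.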
> > I did try sgpdd which accesses the device directly and I saw the
> > same behavior.
> I have never heard of this tool, and a simple search didn't find
> anything either. Care to provide a link ?
> Could you run hdparm/zcav/dd/etc. reading directly from real SCSI
> device (no lv*, md*) ? Yes, I know, some of these tools are not very
> precise, but we're talking about almost an order of magnitude...
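A minimal raw-read check along those lines might look like this (the device name is a placeholder; reading the raw device bypasses lvm/md entirely, and `iflag=direct` only works if your coreutils and kernel support O_DIRECT there):

```shell
# Time a 1 GiB streaming read straight from the raw SCSI device,
# with no lv*/md* layers in the path
time dd if=/dev/sda of=/dev/null bs=1M count=1024

# If supported, repeat with O_DIRECT to take the page cache
# out of the measurement
time dd if=/dev/sda of=/dev/null bs=1M count=1024 iflag=direct
```

Comparing the same command on a 2.4 and a 2.6 boot of the same hardware would separate a scheduler/driver regression from a filesystem or volume-manager issue.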
> Bogdan Costescu
> IWR - Interdisziplinaeres Zentrum fuer Wissenschaftliches Rechnen
> Universitaet Heidelberg, INF 368, D-69120 Heidelberg, GERMANY
> Telephone: +49 6221 54 8869, Telefax: +49 6221 54 8868
> E-mail: Bogdan.Costescu at IWR.Uni-Heidelberg.De
> Beowulf mailing list, Beowulf at beowulf.org
> To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf
Tim Mattox - tmattox at gmail.com - http://homepage.mac.com/tmattox/