Advice on Cluster Hardware.

Nirmal Bissonauth Nirmal.Bissonauth at durham.ac.uk
Wed Feb 6 10:44:55 EST 2002


Hi 

I am actually using a proper UDMA100 cable (the 80-conductor type).
There was a patch available on the net, but it never became part of
the mainstream kernel and I never got round to incorporating it into
mine. I can't find it on the web now.
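One way to check what transfer mode the drive actually negotiated, and whether the idebus override makes any measurable difference, is hdparm (a sketch; the device name /dev/hda matches the boot log below, adjust as needed, and run as root):

```shell
# Show the drive's supported and currently selected UDMA modes;
# the active mode is marked with a *, e.g. *udma2 for UDMA33.
hdparm -i /dev/hda

# Rough buffered-read benchmark. UDMA33 tops out around 33 MB/s,
# so sustained reads well above that suggest a faster mode is active.
hdparm -t /dev/hda

# If the controller were set up correctly, one could try forcing
# UDMA5 (-X takes 64 + mode for UDMA, so 69 = UDMA5) with DMA on.
# Use with care: a wrong mode can hang or corrupt the drive.
hdparm -d1 -X69 /dev/hda
```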

Nirmal


On Wed, 6 Feb 2002, Alcino Dall Igna Junior wrote:

> Probably a silly suggestion, but are you using a proper
> cable to connect the HD?
> 
> Alcino
> 
> On Wed, 6 Feb 2002, Nirmal Bissonauth wrote:
> 
> > Hi All,
> > 
> > I have 6 Tyan Thunder K7 motherboards (dual 1.2 GHz Athlon MP) in my
> > cluster, running Red Hat 7.2 with kernel 2.4.9-13smp. The problem I get is
> > with the E-IDE driver: it does not correctly initialise the AMD7411 IDE
> > controller, so the IDE hard drives can't go any faster than UDMA33.
> > You may have the same problem with the Tyan Tiger MP.
> > 
> > Here are the kernel messages from boot-up. I can pass the override
> > parameter idebus=66, but I can't confirm that it actually makes the
> > drive any faster. The hard drive is capable of UDMA100, by the way.
> > 
> > Uniform Multi-Platform E-IDE driver Revision: 6.31
> > ide: Assuming 33MHz PCI bus speed for PIO modes; override with idebus=xx
> > AMD7411: IDE controller on PCI bus 00 dev 39
> > AMD7411: chipset revision 1
> > AMD7411: not 100% native mode: will probe irqs later
> > AMD7411: disabling single-word DMA support (revision < C4)
> >     ide0: BM-DMA at 0xf000-0xf007, BIOS settings: hda:DMA, hdb:pio
> >     ide1: BM-DMA at 0xf008-0xf00f, BIOS settings: hdc:pio, hdd:pio
> > hda: IC35L040AVER07-0, ATA DISK drive
> > ide0 at 0x1f0-0x1f7,0x3f6 on irq 14
> > hda: 80418240 sectors (41174 MB) w/1916KiB Cache, CHS=79780/16/63, UDMA(33)
> > 
> > 
> > Regards  
> > Nirmal Bissonauth
> > 
> > 
> > On Tue, 5 Feb 2002, Alberto Ramos wrote:
> > 
> > > 
> > >   Here at a university in Madrid, we are designing a Beowulf cluster for parallel
> > > computing in QCD. We will begin with a small cluster to assess the performance
> > > and later try other interconnects, such as Myrinet.
> > > 
> > >   The Hardware will be 4 nodes each consisting of:
> > >   
> > >   - 2xAMD MP 1800+ CPU
> > >   - 1x512MB RAM DDR
> > >   - 1xTyan Tiger MP
> > >   - 2x3Com 905B NIC
> > >   - 20GB HD
> > >   
> > >   The master node has 1GB of DDR RAM and one additional HD to use as /home.
> > >   
> > >   The connection will be through an HP ProCurve Switch 408 with 8 ports, to allow
> > > channel bonding.
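On a 2.4 kernel, channel bonding of the two NICs is set up with the bonding module and ifenslave; a minimal sketch (the interface names eth0/eth1 and the IP address are assumptions):

```shell
# Load the bonding driver in round-robin mode (mode=0 stripes
# traffic across both NICs); miimon=100 checks link state every 100 ms.
# A persistent setup would put the equivalent alias/options lines
# in /etc/modules.conf.
modprobe bonding mode=0 miimon=100

# Bring up the bond interface, then enslave the two 3Com NICs to it.
ifconfig bond0 192.168.1.10 netmask 255.255.255.0 up
ifenslave bond0 eth0 eth1
```

Note that all nodes (and the switch configuration) need to agree on the bonding mode for this to give any throughput benefit.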
> > > 
> > >   Now the questions:
> > >   
> > >   - Any known problems with the hardware?
> > >   - Are the NIC and the Switch good choices?
> > >   - Will the Intel Fortran 90 compiler work with this hardware?
> > >   
> > >   Thank you very much for your time.
> > >   
> > >   Alberto.
> > >   
> > > _______________________________________________
> > > Beowulf mailing list, Beowulf at beowulf.org
> > > To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf
> > > 
> > 
> 
