[Beowulf] Selection of processor choices; Requesting Guidance

Douglas Eadline deadline at clustermonkey.net
Sun Jun 18 11:41:02 EDT 2006


>> >> desktop (32 bit PCI) cards. I managed to get 14.6 HPL GFLOPS
>> >> and 4.35 GROMACS GFLOPS out of 8 nodes consisting of hardware
>> > ...
>> >> As a point of reference, a quad opteron 270 (2GHz) reported
>> >> 4.31 GROMACS GFLOPS.
>> >
>> > that's perplexing to me, since the first cluster has semp/2500's,
>> > right?  that's a 1.75 GHz K8 core with 128K L2 and 64b memory
>> > interface.  versus the same number of 2.0 GHz, 1M cores each with
>> > 4x 128b memory.  I really wouldn't expect them to be that close -
>> > any speculation on why GROMACS runs so poorly on the much better
>> > SMP machine?
>>
>> <googling for motherboard specs>
>> Aha, Socket 462. The Semprons he used are K7 based.
>
> OK, even more so - how does an even older cpu with lower clock,
> slower memory and only gigabit interconnect beat a quad-opt.
> it seems like some other factors were determining performance.


Well, I cannot speak for how the quad Opteron ran the code; all
I went by was the number reported on the GROMACS page. I am
curious myself. BTW, I recently tested GROMACS on 3.2 GHz Pentium D
(Presler) nodes: 7.57 GFLOPS on 8 processors with one core each, and
10.84 GFLOPS on 8 processors with two cores each (16 cores total).
A white paper is forthcoming.
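For what it's worth, a quick back-of-the-envelope check of the core scaling those numbers imply (the GFLOPS figures are the ones quoted above; the "ideal" case simply assumes that doubling cores doubles throughput):

```python
# Scaling check for the Pentium D (Presler) GROMACS numbers quoted above.
# Figures come from the measurements in the text; nothing else is assumed.

single_core = 7.57   # GFLOPS, 8 processors x 1 core (8 cores total)
dual_core = 10.84    # GFLOPS, 8 processors x 2 cores (16 cores total)

speedup = dual_core / single_core   # throughput gain from the 2nd core
efficiency = speedup / 2.0          # fraction of ideal 2x scaling

print(f"speedup from 2nd core: {speedup:.2f}x")
print(f"scaling efficiency:    {efficiency:.0%}")
```

So the second core buys roughly 43% more throughput, i.e. about 72% of ideal 2x scaling, which would be consistent with the two cores contending for the shared front-side bus and memory bandwidth.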

--
Doug


_______________________________________________
Beowulf mailing list, Beowulf at beowulf.org
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf


