[Beowulf] Re: dual core Opteron performance - re suse 9.3
kinghorn at pqs-chem.com
Tue Jul 12 11:36:15 EDT 2005
Hi Vincent, ...all,
The code was built on a SuSE9.2 machine with gcc/g77 3.3.4. The same
executable was run on both systems.
The kernel on the two-node setup was the stock SuSE 2.6.8-24-smp;
the 9.3 setup with the dual-core CPUs ran the stock install kernel.
Memory was fully populated on the two-node setup: four 1 GB modules per board
(there are only four slots on the Tyan 2875; I had mistakenly reported yesterday
that there was only 2 GB per board for the benchmark numbers).
The dual-core system had four 1 GB modules, two per CPU.
Important(?) BIOS settings were:
Bank interleaving: "Auto"
Node interleaving: "Auto"
Memory Hole: "Disabled" (both the hardware and software settings)
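Whether the kernel actually treats the box as NUMA (separate memory nodes per CPU) rather than one interleaved flat node can be checked from userspace. A minimal sketch, assuming a Linux host; the numactl tool may need to be installed separately, so the last line guards for its absence:

```shell
# Report the running kernel version (e.g. 2.6.8-24-smp).
uname -r
# List the NUMA nodes the kernel exports; one directory per node (node0, node1, ...).
ls /sys/devices/system/node/ 2>/dev/null
# If numactl is installed, print the per-node memory map and distances.
command -v numactl >/dev/null 2>&1 && numactl --hardware || true
```

With node interleaving enabled in the BIOS, the kernel typically sees only a single node here even on a two-socket Opteron board.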
The speedup we saw on the dual-core was less than 10% for most jobs. MP2
jobs with heavy I/O (the worst case) took around a 20% hit -- there were twice as
many processes hitting the RAID scratch space at the same time.
I still have lots of testing and tuning to do. These tests were just to see if
it was going to work and how much trouble it was going to be. (It was a LOT of
trouble getting SuSE 9.3 installed, but I think it was worth it in the end.)
Best to all
> If you 'did get better performance', that's possibly because
> you have some 2.6.x kernel now, allowing NUMA, and a newer
> gcc compiler version like 4.0.1 that has been bugfixed more than
> the very buggy 3.3.x and 3.4.x series.
> Can you show us the differences between the compiler versions and kernel
> versions you had and whether it's NUMA?
> Also, how are your memory banks configured: for 64-bit usage or 128-bit
> single-CPU usage, or are all banks filled up?
Dr. Donald B. Kinghorn Parallel Quantum Solutions LLC
Beowulf mailing list, Beowulf at beowulf.org