[Beowulf] opteron 248 vs dual-core opteron 275

Joe Landman landman at scalableinformatics.com
Sun Dec 4 15:07:19 EST 2005


Hi Marc:

Marc Noguera wrote:

[...]

> My doubts arise about the performance of 4 single-threaded applications, 
> on a one-per-core basis. Will this performance be better on a 
> 2-proc./4-core node or on a two-node, 2-proc./2-core configuration? I am 
> talking about totally independent processes. I believe that the two-node 
> configuration will perform better; does anybody know if that is the case, 
> and if so, what the relationship is between these performances?

This is entirely program dependent.  If you run programs that consume 
large amounts of memory bandwidth, you may have issues.  If your 
programs are less sensitive to memory bandwidth, then you might get good 
results.  We did a study of this a while ago; you can download it from 
this URL:

http://enterprise2.amd.com/downloadables/Dual_Core_Performance.pdf

> On the other hand, it seems clear to me that heat dissipation of the 
> dual-core configuration will be much better. Can anyone confirm that. 

For a dual core and a single core running at the same clock frequency, 
yes, the dual core is better in terms of dissipating less heat per core.

> Does the dual-core cpu dissipate approx. the same heat as a single-core 
> one?
> Are there many problems with the use of dual-core CPUs by the Linux OS, 
> or is it really a no-problemo thing, as I have read?

We have not run into serious issues.  Mostly we have run into problems 
with certain dominant North American Linux vendors' distributions (cough, 
cough) being rather poor in their support for the NUMA nature of the 
system, or using ancient kernels missing many features, or using kernels 
lacking basic drivers for modern chipsets (leading to kernel panics 
immediately after booting).

> BTW, we use our cluster mostly for computational chemistry calculations, 
> mainly gaussian, turbomole, molcas and such.

Depending upon how you run Gaussian, it can be more sensitive to IO 
performance than to other factors.  It will also consume significant 
memory bandwidth for a number of calculation types.

We have a customer who purchased a (small/tiny and not worth talking 
about, according to some here on the list) cluster with 144 cores in 
dual-core units, who have run Gaussian continuously on this system from 
the day after it was up.  No issues that we are aware of, and they have 
used about 10 CPU-years of cycles running these codes in the last 7 
weeks.  They are running lots of chemistry applications on the cluster 
without trouble.

> Thanks for your answers

No problem, glad to help if we can.

Joe

> Marc
> ---------------------------------------------------------------
> Marc Noguera Julian
> Tecnic suport a la Recerca
> Despatx c7-149
> Quimica Física - Universitat Autonoma Barcelona
> 08193, Cerdanyola del Vallès.
> Barcelona, Catalunya.
> Tef: 34 93 5812173
> Fax: 34 93 5822920
> --------------------------------------------------------------


-- 
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics LLC,
email: landman at scalableinformatics.com
web  : http://www.scalableinformatics.com
phone: +1 734 786 8423
fax  : +1 734 786 8452
cell : +1 734 612 4615
_______________________________________________
Beowulf mailing list, Beowulf at beowulf.org
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf
