charmm scalability on 2.4 kernels

Tru tru at pasteur.fr
Mon Jan 7 13:22:39 EST 2002


On Sun, Jan 06, 2002 at 05:44:50PM -0500, Rob Latham wrote:
> On Sat, Jan 05, 2002 at 11:55:42PM +0200, Eray Ozkural (exa) wrote:
> > On Saturday 05 January 2002 20:10, Steve Fellini wrote:
>  
> > This is rather surprising, I'd expect 2.4.x to actually improve upon 2.2.x. 
> > It was said before on this list that 2.4 already incorporates TCP fixes. Has 
> > anybody met a similar situation before?
> 
> it was said before that the tcp fixes don't make any difference.
> 

I think the bad speedup comes from dual vs. single-CPU nodes,
as far as the parallel behaviour of CHARMM is concerned.

YMMV, but on single-CPU Athlon nodes over Fast Ethernet,
here is what we see:

#cpus	speedup (elapsed time)
2	1.2
4	2.0
8	3.3

We still gain something, although the scaling is not good!
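The speedup column can be reproduced from the elapsed times in the details
section below (a minimal sketch; it assumes the 1-CPU x1 run, 5.80 elapsed
minutes, as the baseline):

```python
# Speedup and parallel efficiency from the elapsed times reported below.
# Keys are CPU counts, values are elapsed minutes (x1/x2/x4/x8 runs).
elapsed = {1: 5.80, 2: 4.50, 4: 2.90, 8: 1.75}

for ncpus, t in sorted(elapsed.items()):
    speedup = elapsed[1] / t          # baseline over parallel elapsed time
    efficiency = speedup / ncpus      # fraction of ideal linear scaling
    print(f"{ncpus} cpus: speedup {speedup:.1f}, efficiency {efficiency:.0%}")
```

The efficiency drops from 100% on one CPU to under 50% on eight, which is
the "not good" part in a nutshell.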

Best regards,

Tru


more details follow:
---------------------
100 steps run
CHARMM c27b4
LAM MPI 6.5.4 (recompiled)
redhat 7.1xfs (kernel 2.4.9-13SGI_XFS_1.0.2)
single-CPU nodes: 1.2GHz (9x133) Athlon
Fast Ethernet: 3Com 3c905C-TX

stock gnu compilers
k7-9x133_c27b4-lam-6.5.4_milan_pme.07-01_16h53.out-x1:
 Parallel load balance (sec.):
 Node Eext      Eint   Wait    Comm    List   Integ   Total
   0   134.1   137.1     0.0     0.0    60.6     5.0   336.8
                    $$$$$ JOB ACCOUNTING INFORMATION $$$$$
                     ELAPSED TIME:     5.80  MINUTES 
                         CPU TIME:     5.81  MINUTES 

k7-9x133_c27b4-lam-6.5.4_milan_pme.07-01_15h01.out-x2:
PARALLEL> Average timing for all nodes:
 Node Eext      Eint   Wait    Comm    List   Integ   Total
   2    66.1   157.1     0.0     4.9    31.0     2.9   261.9
                    $$$$$ JOB ACCOUNTING INFORMATION $$$$$
                     ELAPSED TIME:     4.50  MINUTES 
                         CPU TIME:     3.17  MINUTES 

k7-9x133_c27b4-lam-6.5.4_milan_pme.07-01_16h16.out-x2:
PARALLEL> Average timing for all nodes:
 Node Eext      Eint   Wait    Comm    List   Integ   Total
   2    66.1   157.0     0.0     4.9    33.8     2.9   264.7
                    $$$$$ JOB ACCOUNTING INFORMATION $$$$$
                     ELAPSED TIME:     4.60  MINUTES 
                         CPU TIME:     3.26  MINUTES 

k7-9x133_c27b4-lam-6.5.4_milan_pme.07-01_14h38.out-x4:
PARALLEL> Average timing for all nodes:
 Node Eext      Eint   Wait    Comm    List   Integ   Total
   4    33.4   103.6     0.0     7.1    18.1     1.6   163.8
                    $$$$$ JOB ACCOUNTING INFORMATION $$$$$
                     ELAPSED TIME:     2.90  MINUTES 
                         CPU TIME:     1.80  MINUTES 

k7-9x133_c27b4-lam-6.5.4_milan_pme.07-01_15h53.out-x4:
PARALLEL> Average timing for all nodes:
 Node Eext      Eint   Wait    Comm    List   Integ   Total
   4    33.4   102.9     0.0     7.3    18.0     1.6   163.1
                    $$$$$ JOB ACCOUNTING INFORMATION $$$$$
                     ELAPSED TIME:     2.88  MINUTES 
                         CPU TIME:     1.80  MINUTES 

k7-9x133_c27b4-lam-6.5.4_milan_pme.07-01_14h22.out-x8:
PARALLEL> Average timing for all nodes:
 Node Eext      Eint   Wait    Comm    List   Integ   Total
   8    17.0    58.5     0.1     8.2    10.8     0.9    95.5
                    $$$$$ JOB ACCOUNTING INFORMATION $$$$$
                     ELAPSED TIME:     1.75  MINUTES 
                         CPU TIME:     1.00  MINUTES 

k7-9x133_c27b4-lam-6.5.4_milan_pme.07-01_15h38.out-x8:
PARALLEL> Average timing for all nodes:
 Node Eext      Eint   Wait    Comm    List   Integ   Total
   8    17.0    58.0     0.1     9.4    10.8     1.0    96.2
                    $$$$$ JOB ACCOUNTING INFORMATION $$$$$
                     ELAPSED TIME:     1.75  MINUTES 
                         CPU TIME:     1.03  MINUTES 
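As a back-of-the-envelope check on the numbers above (an illustrative
sketch, not part of the original runs): assuming simple Amdahl-style
scaling, the 1-CPU and 8-CPU elapsed times imply a serial fraction of
roughly 20%.

```python
# Amdahl's law: speedup(p) = 1 / (s + (1 - s) / p), solved here for s,
# the serial (non-parallelizable) fraction of the run.
def serial_fraction(p, speedup):
    return (1.0 / speedup - 1.0 / p) / (1.0 - 1.0 / p)

t1, t8 = 5.80, 1.75                 # elapsed minutes, x1 and x8 runs
s = serial_fraction(8, t1 / t8)
print(f"implied serial fraction: {s:.0%}")  # roughly 20%
```

With a serial fraction that large (driven partly by the growing Comm
column in the per-node timings), speedup would plateau well below 5x no
matter how many nodes are added.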

-- 
Dr Tru Huynh          | http://www.pasteur.fr/recherche/unites/Binfs/
mailto:tru at pasteur.fr | tel/fax +33 1 45 68 87 37/19
Institut Pasteur, 25-28 rue du Docteur Roux, 75724 Paris CEDEX 15 France  
_______________________________________________
Beowulf mailing list, Beowulf at beowulf.org
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf


