MPI & linux compilers

William Gropp gropp at
Sun Aug 3 14:18:17 EDT 2003

At 06:14 AM 8/3/2003 +0000, Levente Horvath wrote:
>To whom it may concern,
>We have 12 PCs set up for parallel computation. All are running Linux 
>(Red Hat 7.3) and MPI.
>We would like to compute eigenvalues and eigenvectors for large matrices.
>We have managed to handle up to a 10000x10000 matrix with no problem. Our 
>program uses ScaLAPACK and BLACS
>routines. These routines require two matrices to be declared. In single 
>precision, two 10000x10000
>matrices occupy 800 MB of memory, which already exceeds the 512 MB local 
>memory of
>each computer in our cluster. This memory is distributed equally over 
>the 12 computers
>during computation. So we think that, in theory, we shouldn't have any 
>problem going
>to larger matrices, as our distributed memory is quite large: 12*512 MB.

You need to declare only the local part of the matrix that is distributed 
across the processes, not the entire matrix.  MPI doesn't provide any 
support for automatically distributing the data, though libraries written 
using MPI can do this if the data is allocated dynamically by the 
library.  Languages such as HPF can do this for you, but have their own 
limitations.


Beowulf mailing list, Beowulf at