[Beowulf] MPI performance gain with jumbo frames

Douglas Eadline deadline at eadline.org
Mon Jun 11 11:43:02 EDT 2007


1) The results you reference are rather old. Do they
   reflect your hardware?

2) To support Jumbo Frames you need both NICs and a switch
   that support them.
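
   For reference, on Linux the MTU can be raised and then verified
   end-to-end with something like the following (the device name eth0,
   the node name node02, and the 9000-byte MTU are assumptions; your
   NIC and switch may cap the frame size lower):

```shell
# Raise the MTU on the interface (assumes eth0 and a 9000-byte jumbo MTU).
ip link set dev eth0 mtu 9000

# Verify the whole path: 8972 = 9000 - 20 (IP header) - 8 (ICMP header).
# -M do sets "don't fragment", so the ping fails unless every hop
# (NIC, switch, remote NIC) actually passes jumbo frames.
ping -M do -s 8972 -c 3 node02
```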

3) It is possible to achieve wire speed with
   GigE; you need something other than a 32-bit PCI
   connection, however (PCIe, PCI-X).
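
   The back-of-the-envelope numbers, for what it's worth (classic
   32-bit/33 MHz PCI assumed):

```shell
# 32-bit/33 MHz PCI peak: 4 bytes/cycle * 33 MHz = 132 MB/s, shared by
# every device on the bus, and real-world throughput is well below that.
pci_mbs=$((4 * 33))

# GigE wire speed: 1000 Mbit/s / 8 = 125 MB/s in EACH direction, so a
# busy full-duplex link can want up to 250 MB/s of bus bandwidth.
gige_mbs=$((1000 / 8))

echo "PCI 32/33: ${pci_mbs} MB/s (shared); GigE: ${gige_mbs} MB/s per direction"
```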

4) While Jumbo Frames can help NFS, the effect on MPI
   can vary by application. Have you run any tests to
   see exactly what your network performance is?
   (e.g., NetPIPE)
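
   A NetPIPE TCP run looks roughly like this (node names are
   placeholders; NPtcp is the TCP module that ships with NetPIPE):

```shell
# On the receiver node (say, node01): start NPtcp listening.
NPtcp

# On the transmitter node: point NPtcp at the receiver. It sweeps
# message sizes and reports latency and throughput at each size, which
# shows where (or whether) jumbo frames help at MPI-typical message sizes.
NPtcp -h node01
```

   Run it once with a 1500-byte MTU and once with jumbo frames enabled,
   and compare the two curves before drawing conclusions for your apps.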

You may find these articles helpful:

http://www.clustermonkey.net//content/view/38/34/

http://www.clustermonkey.net//content/view/39/34/

--
Doug


> Hi all,
>
> New to this list, so I don't know if this is off-topic.
>
> I'd like to hear about experiences with MPI performance gains from
> jumbo frames. I manage a Beowulf cluster (42 Athlon XP nodes, Gentoo
> Linux) with gigabit ethernet where Fluent, OpenFOAM, and other MPI
> apps are run.
>
> With NFS I'm sure what kind of gain I would get, but with MPI apps I'm
> worried after seeing this page:
> http://www.scl.ameslab.gov/Projects/IBMCluster/Benchmarks.html
>
> regards
>
>
>
> _______________________________________________
> Beowulf mailing list, Beowulf at beowulf.org
> To change your subscription (digest mode or unsubscribe) visit
> http://www.beowulf.org/mailman/listinfo/beowulf
>
> 
>

