The MPI-HMMER team is pleased to announce the release of MPI-HMMER. MPI-HMMER is a multi-level optimization of the original HMMER 2.3.2 code by Sean Eddy of the HHMI Janelia Farm facility. Our implementation consists of two distinct optimizations: a portably tuned P7Viterbi function and an MPI implementation. Our MPI implementation is based on the original PVM HMMER code, with enhancements to improve the scalability and I/O of both hmmpfam and hmmsearch. The two optimizations are independent of one another, allowing future enhancements to be easily added and tested. The MPI implementation exhibits excellent speedups over the base PVM implementation. Further, we provide a verification mode in both hmmpfam and hmmsearch that ensures (at a cost in speed) that results are returned in exactly the same order as in the serial version.
Our code has been tested for stability up to 256 nodes, but should scale to as many nodes as the master node's memory allows. For 16 and 20 nodes we see timings of:
- hmmsearch rrm.hmm against uniref100.fasta, base - 3627 seconds
- hmmsearch rrm.hmm against uniref100.fasta, 20 nodes - 192 seconds
- hmmsearch rrm.hmm against uniref100.fasta, 16 nodes - 243 seconds
- hmmpfam Pfam_fs against Artemia.fa, base - 356 seconds
- hmmpfam Pfam_fs against Artemia.fa, 20 nodes - 25 seconds
- hmmpfam Pfam_fs against Artemia.fa, 16 nodes - 30 seconds
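The timings above correspond to speedups of roughly 15-19x for hmmsearch and 12-14x for hmmpfam over the serial base. A quick check of that arithmetic (the speedup formula, serial time divided by parallel time, is the standard definition and not part of the announcement):

```python
# Speedup = serial (base) runtime / parallel runtime, from the timings above.
timings = {
    "hmmsearch": {"base": 3627, 20: 192, 16: 243},
    "hmmpfam":   {"base": 356,  20: 25,  16: 30},
}

for tool, runs in timings.items():
    base = runs["base"]
    for nodes in (16, 20):
        speedup = base / runs[nodes]
        print(f"{tool}, {nodes} nodes: {speedup:.1f}x speedup")
```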
- Obtain the source from http://code.google.com/p/mpihmmer
- Compile with CC=mpicc ./configure
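The build steps above can be sketched as a shell session. The archive name and the final `make` step are assumptions based on the standard HMMER 2.3.2 autoconf layout, not details from this announcement:

```shell
# Fetch and unpack the MPI-HMMER source (archive name is illustrative)
tar xzf mpihmmer.tar.gz
cd mpihmmer

# Configure with the MPI compiler wrapper, then build (make step assumed)
CC=mpicc ./configure
make
```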
Usage
Assuming mpirun is in your path and your MPI takes a -machinefile argument:
mpirun -np <numprocs> -machinefile <machinefile> ./hmmsearch --mpi <hmmfile> <seqfile>
mpirun -np <numprocs> -machinefile <machinefile> ./hmmpfam --mpi <hmmdb> <seqfile>
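For example, a run matching the 20-node hmmsearch benchmark above might look like the following; the input file names come from the timing section, while the machinefile path is illustrative:

```shell
# Run the MPI-enabled hmmsearch on 20 processes listed in ./machines
mpirun -np 20 -machinefile ./machines ./hmmsearch --mpi rrm.hmm uniref100.fasta
```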
Bug reports and comments may be sent to jwalters (you know what to put here) wayne.edu
John Paul Walters
Visit Scalable Informatics for more information.