Select News
The news in this category has been selected by us because we thought it would be interesting to hard-core cluster geeks. Of course, you don't have to be a cluster geek to read these stories.
- Details
- Written by Douglas Eadline
- Hits: 7106
How Low can you go?
Mellanox has just announced its new ConnectX HCAs, which provide 1.2 μs MPI ping latency. Other features include 10 or 20 Gb/s InfiniBand ports, CPU offload of transport operations, end-to-end QoS and congestion control, hardware-based I/O virtualization, and TCP/UDP/IP stateless offload. The press release follows.
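Ping latency numbers like the 1.2 μs figure above are typically measured with a ping-pong microbenchmark: one rank sends a small message, the other echoes it back, and the one-way latency is half the averaged round-trip time. Here is a minimal sketch of that technique in Python, using two local processes over a pipe as a stand-in for two MPI ranks on an InfiniBand fabric (so the numbers it reports are nowhere near 1.2 μs, but the measurement loop is the same).

```python
# Ping-pong latency microbenchmark sketch. Two processes stand in for
# two MPI ranks; the pipe stands in for the interconnect.
import time
from multiprocessing import Process, Pipe

def pong(conn, iters):
    # Echo each message straight back to the sender.
    for _ in range(iters):
        conn.send(conn.recv())

def ping(iters=1000, payload=b"x"):
    parent, child = Pipe()
    p = Process(target=pong, args=(child, iters))
    p.start()
    start = time.perf_counter()
    for _ in range(iters):
        parent.send(payload)  # ping
        parent.recv()         # pong
    elapsed = time.perf_counter() - start
    p.join()
    # One-way latency is half the round-trip time, averaged over iters.
    return elapsed / iters / 2

if __name__ == "__main__":
    print(f"one-way latency: {ping() * 1e6:.1f} microseconds")
```

Real MPI latency tests (e.g. the OSU microbenchmarks) follow the same pattern with `MPI_Send`/`MPI_Recv` between two ranks.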
- Details
- Written by Administrator
- Hits: 7757
New Workstation from Appro Packs in the Cores
Appro International has just announced the XtremeWorkstation, a deskside workstation that can hold up to four (4) AMD Opteron processors. So let's do some math. With four dual-core sockets, that is eight cores in one box, and when the Barcelona quad-core comes out, that will be 16 cores in one box. Remember your first 16-node cluster? Now you can get 16 cores in a single deskside SMP (Symmetric Multi-Processing) system.

Here are some details: the XtremeWorkstation offers a maximum of 128 GB of memory, up to 4 TB of SATA disk space, dual GigE, up to two PCI-Express x16 slots for high-end graphics cards such as the nVidia Quadro FX4500 X2 or the nVidia Quadro FX5500, and plenty more (pdf data sheet). It supports Linux and Windows.
There is no interconnect other than HyperTransport. The good news is that you can run all your existing MPI codes, so there is no need to pull out messages and add threads to your application (unless you want to, of course).
- Details
- Written by Douglas Eadline
- Hits: 6973
The Advanced Research Computing (ARC) team at Georgetown University is running another of its successful Introduction to Beowulf Design, Planning, Building and Administering training sessions on January 23-26 (next week!). The previous sessions were a success, and we can look forward to more training sessions as well.
- Details
- Written by Douglas Eadline
- Hits: 7640
- Details
- Written by Douglas Eadline
- Hits: 8105
The MPI-HMMER team is pleased to announce the release of MPI-HMMER. MPI-HMMER is a multiple-level optimization of the original HMMER 2.3.2 code by Sean Eddy of the HHMI Janelia Farm facility. Our implementation consists of two distinct optimizations: a portably tuned P7Viterbi function as well as an MPI implementation. Our MPI implementation is based on the original PVM HMMER code, with enhancements to improve the scalability and I/O of both hmmpfam and hmmsearch. The two optimizations are independent of one another, allowing future enhancements to be easily added and tested. The MPI implementation exhibits excellent speedups over the base PVM implementation. Further, we provide a verification mode in both hmmpfam and hmmsearch that ensures (at a cost of speed) that results are returned in exactly the same order as in the serial version.
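The verification mode described above addresses a common master/worker problem: workers finish searches out of order, so results arrive out of order. Here is a sketch (not the actual MPI-HMMER source) of the underlying idea, assuming each result is tagged with the index of the input sequence it came from so the master can restore serial order before output.

```python
# Sketch of serial-order verification in a master/worker search:
# workers tag each result with its input-sequence index, and the
# master sorts on that index before printing, so output matches the
# order the serial program would produce.
def reorder_results(tagged_results):
    """tagged_results: list of (input_index, result) pairs arriving in
    arbitrary (worker-completion) order."""
    return [result for _, result in sorted(tagged_results, key=lambda t: t[0])]

# Example: results from three workers, finished out of order.
arrived = [(2, "seqC hit"), (0, "seqA hit"), (1, "seqB hit")]
print(reorder_results(arrived))  # ['seqA hit', 'seqB hit', 'seqC hit']
```

The cost in speed comes from the buffering this implies: the master must hold completed results until all earlier-indexed ones have arrived, rather than streaming them out as they finish.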