[Beowulf] Purdue Supercomputer
gerry.creager at tamu.edu
Wed May 7 10:09:00 EDT 2008
We're not as big as Purdue in this, but we just installed a 10TF Dell
system. We specifically designed it with 1GbE to reinforce the concept that
our new cluster is a high-throughput system rather than an HPC machine.
Jobs that can be concentrated on a node (or two) should do nicely, while
HPC jobs can run on other campus resources with bigger, badder, faster
interconnects.
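
A back-of-the-envelope cost model makes that throughput-vs-HPC split
concrete. This is only a sketch: the latency and bandwidth numbers below
are assumed 2008-era ballpark figures (MPI over GbE vs. DDR InfiniBand),
not measurements from either cluster, and the job shape is made up:

  # Crude model: per-step communication cost = per-message latency
  # plus serialized transfer time. All figures are assumed, not measured.
  GBE = {"latency_s": 50e-6, "bandwidth_Bps": 118e6}   # ~MPI over 1GbE
  IB  = {"latency_s": 2e-6,  "bandwidth_Bps": 1.5e9}   # ~DDR InfiniBand

  def comm_time(net, n_msgs, msg_bytes):
      """Latency term plus serialized transfer term, in seconds."""
      return n_msgs * net["latency_s"] + n_msgs * msg_bytes / net["bandwidth_Bps"]

  # Hypothetical job step: 10 s of compute, then 1000 exchanges of 64 KB.
  compute_s = 10.0
  for name, net in (("GbE", GBE), ("IB", IB)):
      c = comm_time(net, n_msgs=1000, msg_bytes=64 * 1024)
      print(f"{name}: comm {c:.3f} s/step = {c / (compute_s + c):.1%} of runtime")

A job that computes for seconds between exchanges barely notices GbE; a
tightly coupled solver trading small messages every few milliseconds is a
different story.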
Mark Hahn wrote:
>> everything was going. This morning, we hit the last few mis-installs.
>> Our DOA nodes were around 1% of the total order.
> One advantage of having the vendor pre-rack is that they usually also
> pre-test. Did you consider having Dell pre-assemble the cluster, and
> reject that option for cost reasons?
>> The physical networking was done in a new way for us. We used a large
>> Foundry switch and the MRJ21 cabling system for it. Each rack gets 24
>> nodes, a 24-port passive patch panel, and 4 MRJ21 cables that run back
>> to the central switch.
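
The arithmetic behind that wiring: an MRJ21 trunk typically aggregates six
1000BASE-T links, so four trunks cover a 24-node rack. A quick sanity
check (the rack count here is hypothetical, just to show the cable
reduction):

  LINKS_PER_MRJ21 = 6      # one MRJ21 trunk typically bundles six GbE links
  NODES_PER_RACK = 24
  TRUNKS_PER_RACK = 4
  assert TRUNKS_PER_RACK * LINKS_PER_MRJ21 == NODES_PER_RACK

  racks = 20               # hypothetical cluster size, not the actual order
  print(f"{racks} racks: {racks * NODES_PER_RACK} nodes, "
        f"{racks * TRUNKS_PER_RACK} trunk cables to the core switch "
        f"instead of {racks * NODES_PER_RACK} discrete Cat6 runs")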
> If I understand, this means each node has a 1Gb link to a large switch,
> right? I'm a little surprised this was cost-effective - what is the
> workload of the cluster? (I mean, given that GbE is usually considered
> high-latency and low-bandwidth.) I'd be curious to hear about your
> consideration of both 10G and IB.
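
One way to put numbers on "high-latency" is the half-performance message
size, latency times bandwidth: below that size, a transfer is dominated by
per-message latency rather than wire speed. The figures here are assumed
ballpark values for 2008-era fabrics, not vendor specs:

  # Half-performance message size n_1/2 = latency * bandwidth, in bytes.
  # All (latency s, bandwidth B/s) pairs below are assumed ballpark values.
  fabrics = {
      "1GbE":     (50e-6, 118e6),
      "10GbE":    (12e-6, 1.1e9),
      "IB (DDR)": (2e-6,  1.5e9),
  }
  for name, (lat, bw) in fabrics.items():
      n_half = lat * bw
      print(f"{name}: messages under ~{n_half / 1024:.1f} KB are latency-bound")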
Gerry Creager -- gerry.creager at tamu.edu
Texas Mesonet -- AATLT, Texas A&M University
Cell: 979.229.5301 Office: 979.862.3982 FAX: 979.862.3983
Office: 1700 Research Parkway Ste 160, TAMU, College Station, TX 77843
Beowulf mailing list, Beowulf at beowulf.org
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf