RLX?

Donald Becker becker at scyld.com
Fri Oct 17 14:38:41 EDT 2003


On Fri, 17 Oct 2003, Eduardo Cesar Cabrera Flores wrote:

> Have you ever tried or tested RLX servers for HPC?

Yes, we had access to their earliest machines and I was there at the
NYC announcement.

> What is their performance?

It depends on the generation.

The first generation was great at what it was designed to do: pump out
data, such as static web pages, from memory to two 100Mbps Ethernet
ports per blade.  It used Transmeta chips, 2.5" laptop drives, and
chassis-level fans only (none on the individual blades) to fit 24
blades in 3U.
The blades didn't do well at computational tasks or disk I/O.  A third
Ethernet port on each blade was connected to an internal repeater.  They
could only PXE boot using that port, making a flow-controlled boot
server important.
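
For a sense of the raw packaging density, here is a rough
back-of-envelope sketch in Python (purely illustrative; the 42U rack
and the two CPUs per 1U server are assumptions, only the
24-blades-in-3U figure comes from the chassis above):

    # Back-of-envelope packaging density, illustrative numbers only.
    # Assumed: a standard 42U rack and 2 CPUs per 1U server; the
    # "24 blades in 3U" figure is the RLX chassis described above.
    RACK_U = 42

    blades_per_chassis = 24
    chassis_per_rack = RACK_U // 3                   # 14 chassis of 3U each
    blades_per_rack = chassis_per_rack * blades_per_chassis   # 336 blades

    cpus_per_rack_1u_duals = RACK_U * 2              # 84 CPUs in 1U duals

    print("RLX-style blades per rack:", blades_per_rack)
    print("CPUs per rack of 1U duals: ", cpus_per_rack_1u_duals)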

The second generation switched to Intel ULV (Ultra Low Voltage)
processors in the 1GHz range.  This approximately doubled the speed over
Transmeta chips, especially with floating point.  But ULV CPUs are
designed for laptops, and the interconnect was no faster.  Thus this
still was not a computational cluster box.

The current generation blades are much faster, with full speed (and
heat) CPUs and chipset, fast interconnect and good I/O potential.

But let's look at the big picture for HPC cluster packaging:
  --> Beowulf clusters have crossed the density threshold <--
This happened about two years ago.

At the start of the Beowulf project a legitimate problem with clusters
was the low physical density.  This didn't matter in some installations,
where much larger machines had been retired, leaving plenty of empty
space, but it was a large (pun intended) issue for general use.

As we evolved to 1U rack-mount servers, the situation changed.  Starting
with the API CS-20, Beowulf cluster hardware met and even exceeded the
compute/physical density of contemporary air-cooled Crays.

Since standard 1U dual-processor machines can now exceed the air-cooled
thermal density an average room can support (a rough back-of-envelope
follows the list below), selecting non-standard packaging (blades,
back-to-back mounting, or vertical motherboard chassis) must be
motivated by some other consideration that justifies the lock-in and
higher cost.  At least with blade servers, there are a few
opportunities:
   Low-latency backplane communication
   Easier connections to shared storage
   Hot-swap capability to add nodes or replace failed hardware
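
To put a rough number on the thermal point above, here is a sketch
with assumed (not measured) figures for per-node power draw and
per-rack cooling capacity:

    # Rough thermal back-of-envelope; all figures here are assumptions.
    WATTS_PER_1U_DUAL = 250    # assumed draw of a loaded 1U dual-CPU node
    NODES_PER_RACK = 42        # full 42U rack of 1U servers
    ROOM_KW_PER_RACK = 4.0     # assumed cooling budget of an average room

    rack_load_kw = WATTS_PER_1U_DUAL * NODES_PER_RACK / 1000.0

    print("Full rack of 1U duals: ~%.1f kW" % rack_load_kw)
    print("Typical cooling budget: ~%.0f kW per rack" % ROOM_KW_PER_RACK)
    # ~10.5 kW vs ~4 kW: the rack already exceeds what the room can
    # cool, so tighter packaging alone buys nothing without better
    # cooling.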



-- 
Donald Becker				becker at scyld.com
Scyld Computing Corporation		http://www.scyld.com
914 Bay Ridge Road, Suite 220		Scyld Beowulf cluster system
Annapolis MD 21403			410-990-9993

