A Petaflop machine in 20 racks?

Christoph Best cb4 at tigertiger.de
Sun Oct 19 21:00:53 EDT 2003


 > > http://www.wired.com/news/technology/0,1282,60791,00.html

Greg Lindahl writes:
 > I think it's the Return of the Array Processor.
 > 
 > There's very little new in computing these days -- and it has the
 > usual flaws of APs: low bandwidth communication to the host.
 > 
 > So if you have a problem that actually fits in the limited memory, and
 > doesn't need to communicate with anyone else very often, it may be a
 > win for you.

They actually say in this document
 http://www.clearspeed.com/downloads/overview_cs301.pdf
that the chip can be used as a stand-alone processor and resembles a
standard RISC processor. I cannot tell whether it is SIMD or MIMD -
at least the block diagram does not show a central control unit
separate from the PEs.

Given the small on-chip memory, they will have to attach external
memory. The thing that worries me is that the external machine
balance is 32 Flops/word (on 32-bit words), so it will only be useful
for applications that do a lot of operations within a few hundred KB
of memory.
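To make the balance figure concrete: machine balance is peak flop rate divided by the word rate the memory system can deliver. A quick sketch of what the quoted 32 Flops/word implies, assuming ClearSpeed's claimed 25 GFlop/s peak for the CS301 (the peak figure is an assumption taken from their announcement, not from this thread):

```python
# Implied external memory bandwidth from the stated machine balance:
#   balance (flops/word) = peak flop rate / word bandwidth
peak_flops = 25e9    # flop/s, ClearSpeed's claimed CS301 peak (assumption)
balance = 32         # flops per 32-bit word, as quoted above
word_size = 4        # bytes per 32-bit word

word_bw = peak_flops / balance   # words/s the external memory must deliver
byte_bw = word_bw * word_size    # same figure in bytes/s

print(f"implied bandwidth = {byte_bw/1e9:.3f} GB/s")  # 3.125 GB/s
```

So the chip only needs on the order of 3 GB/s of external bandwidth to hit that balance - which is exactly why anything that streams through more than the on-chip memory will be starved.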

IBM is following a slightly different approach with the QCDOC and
BlueGene/L supercomputers, which are based on systems-on-a-chip: they
put two PowerPC cores and all support logic on a single chip, wire it
up with one or two GB of memory, and connect a lot (64K) of these
chips together. They expect 5.5 GFlop/s peak per node and to have
360 TFlop/s operational in 2004/5 (in 64 racks). You would need
about 200 racks to get to a PetaFlops machine...
  http://sc-2002.org/paperpdfs/pap.pap207.pdf
  http://www.arxiv.org/abs/hep-lat/0306023
[QCDOC is a Columbia University project in collaboration with IBM -
IBM is transitioning the technology from high-energy physics to
biology which makes a lot of sense... :-)]
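The BlueGene/L numbers above check out with a bit of arithmetic (the 64K-node and 64-rack figures are the ones quoted in this post):

```python
# Sanity-check the BlueGene/L figures quoted above.
nodes = 64 * 1024            # 64K nodes
peak_per_node = 5.5e9        # flop/s per node, peak
racks = 64

total = nodes * peak_per_node             # aggregate peak flop rate
nodes_per_rack = nodes / racks            # density implied by 64 racks
racks_for_pflop = 1e15 / (total / racks)  # racks needed for 1 PFlop/s

print(f"total peak      = {total/1e12:.0f} TFlop/s")  # 360 TFlop/s
print(f"nodes per rack  = {nodes_per_rack:.0f}")      # 1024
print(f"racks for PFlop = {racks_for_pflop:.0f}")     # 178
```

178 racks at this node density, i.e. "about 200 racks" once you round up for real-world packaging.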

To put 64 processors on a chip, I am sure ClearSpeed have to sacrifice
a lot in memory and functionality/programmability, and who wins in
this tradeoff remains to be seen. Depends on the application, too, of
course.

BTW, who or what is behind ClearSpeed? Their Bristol address is
identical to Infineon's Design Centre there, and Hewlett Packard seems
to have a lab there, too. If they have that kind of support, I am sure
they thought hard before making these design choices, and it may just
be targeted at certain problems (vector/matrix/FFT-like stuff).

-Christoph
-- 
Christoph Best                                      cbst at tigertiger.de
Bioinformatics group, LMU Muenchen                http://tigertiger.de/cb
_______________________________________________
Beowulf mailing list, Beowulf at beowulf.org