beowulf in space

Jim Lux James.P.Lux at jpl.nasa.gov
Thu Apr 17 12:41:39 EDT 2003


>If you have an orbital project or application that needs considerably
>more speed than the undoubtedly pedestrian clock of these devices can
>provide, you have a HUGE cost barrier to developing a faster processor,
>and that barrier is largely out of your (DoD) or Nasa's control -- you
>can only ask/hope for an industrial partner to make the investment
>required to up the chip generation in hardened technology with the
>promise of at least some guaranteed sales.  You also have a known per
>kilogram per liter cost for lifting stuff into space, and this is at
>least modestly under your own control.  So (presuming an efficiently
>parallelizable task) instead of effectively financing a couple of
>billion dollars in developing the nextgen hard chips to get a speedup of
>ten or so, you can engineer twelve systems based on the current,
>relatively cheap chips into a robust and fault tolerant cluster and pay
>the known immediate costs of lifting those twelve systems into orbit.
>
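
To put purely illustrative numbers on that trade (none of these figures come from the message above): at roughly $20k per kilogram to orbit and something like 5 kg per added processing node, lifting twelve extra nodes is on the order of 12 x 5 kg x $20k/kg = $1.2M, versus the billions needed to develop a next-generation hardened chip for a ~10x speedup. Even with generous margins on the assumed launch cost and node mass, the cluster of cheap parts wins by orders of magnitude.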

>A question that you or Gerry or Jim may or may not be able to answer
>(with which Chip started this discussion):  Are there any specific
>non-classified instances that you know of where an actual "cluster"
>(defined loosely as multiple identical CPUs interconnected with some
>sort of communications bus or network and running a specific parallel
>numerical task, not e.g.  task-specific processors in several parts of a
>military jet) has been engineered, built, and shot into space?

I was involved in the development of a breadboard scatterometer (a specialized type of radar that measures the radar reflectivity of a target, in this case the ocean surface) that used multiple off-the-shelf, space-qualified DSP processors to get the numerical processing crunch needed. It was more a proof of concept or feasibility demonstration than a flight instrument, and was designed to provide a reasonable basis for cost estimates for an eventual flight instrument.

It was specifically the concept you address above: you're not going to get one special processor custom-built for you at a reasonable price, but you can get a bunch of generic ones and gang them together.  The going-in constraint was that the approach had to use existing off-the-shelf, flight-qualified technology, which in this case was the rad-tolerant ADSP-21020 clone funded by ESA and made by Atmel/Temic.

We used SpaceWire as the interconnect (a routable, high-speed serial link based on IEEE 1355), wrote drivers that implement a subset of MPI, and did all the fancy stuff in fairly vanilla C, handling the interprocessor comms with calls to the MPI-like API.  The breadboard demonstrated scalability, i.e. you could add and drop identical processors to achieve any desired performance, measured either as "amount of signal processing required" or "max pulse repetition frequency handled".

Interestingly, mass wasn't a big design driver (adding a processor to the cluster adds less than 1 kg to an instrument that is already on the order of 100 kg). Power was a bit of a concern (mostly because nothing like it had been built before, so there were no firm numbers), but the real hurdle for the review boards was simply unfamiliarity with the concept of accepting some inefficiency in exchange for the use of generic parts.  Most spacecraft systems are very purpose-designed and highly customized.

>This has been interesting enough that if there are any, I may indeed add
>a chapter to the book, if/when I next actually work on it.  I got dem
>end of semester blues, at the moment...:-)
>
>   rgb
>
>--
>Robert G. Brown                        http://www.phy.duke.edu/~rgb/
>Duke University Dept. of Physics, Box 90305
>Durham, N.C. 27708-0305
>Phone: 1-919-660-2567  Fax: 919-660-2525     email:rgb at phy.duke.edu

James Lux, P.E.
Spacecraft Telecommunications Section
Jet Propulsion Laboratory, Mail Stop 161-213
4800 Oak Grove Drive
Pasadena CA 91109
tel: (818)354-2075
fax: (818)393-6875



