[Beowulf] Clusters just got more important - AMD's roadmap
Lux, Jim (337C)
james.p.lux at jpl.nasa.gov
Wed Feb 8 12:55:23 EST 2012
We can probably look back to the history of non-integrated floating point for this kind of thing. 8087/8086, etc.
I used to work with a guy who was a key mover at Floating Point Systems, probably one of the first makers of "attached special-purpose processors", and ALL of the issues we're talking about here came up in that connection, just as with coprocessors since time immemorial.
I think the real question is: "does the fact that we're doing this at a different scale change any of the fundamental limitations, or make something easier than it was the last time?"
From: beowulf-bounces at beowulf.org [mailto:beowulf-bounces at beowulf.org] On Behalf Of Mark Hahn
Sent: Wednesday, February 08, 2012 9:15 AM
To: Beowulf Mailing List
Subject: Re: [Beowulf] Clusters just got more important - AMD's roadmap
> The APU concept has a few interesting points but certainly also a few
> major problems (when comparing it to a cpu + stand alone gpu setup):
> * Memory bandwidth to all those FPUs
well, sorta. my experience with GP-GPU programming today is that your first goal is to avoid touching anything offchip anyway (spilling, etc), so I'm not sure this is a big problem. obviously, the integrated GPU is a small slice of a "real" add-in GPU, so needs proportionately less bandwidth.
> * Power (CPUs in servers today max out around 120W with GPUs at >250W)
sure, though the other way to think of this is that you have 250W or so of power overhead hanging off your GPU cards. you can amortize the "host overhead" by adding several GPUs, but...
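the amortization point can be made concrete with a little arithmetic. this sketch uses the two power figures quoted in this thread (a ~120W host CPU and ~250W per GPU card); treating the whole host draw as fixed overhead divided across the cards is my simplification, and the numbers are illustrative only:

```python
# Sketch of the amortization argument: the host's fixed power draw
# (~120W, from the CPU figure earlier in the thread) is spread across
# however many ~250W GPU cards you attach. Figures are illustrative.
HOST_W = 120.0  # fixed host overhead (assumption: one 120W CPU, nothing else)
GPU_W = 250.0   # per-card draw, from the figure quoted above

def watts_per_gpu(n_gpus: int) -> float:
    """Effective power cost per GPU once host overhead is amortized."""
    return GPU_W + HOST_W / n_gpus

for n in (1, 2, 4):
    print(f"{n} GPU(s): {watts_per_gpu(n):.0f} W effective per GPU")
```

so going from one card to four drops the effective per-GPU cost from 370W to 280W, which is the "amortize the host overhead" effect, though of course the network and memory per GPU shrink at the same time.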
think of it this way: an APU is just a low-mid-end add-in GPU with the host integrated onto it ;)
I think the real question is whether someone will produce a minimalist APU node. since Llano has on-die PCIe, it seems like you'd need only the APU, 2-4 DIMMs and a network chip or two. that's going to add up to very little beyond the APU's 65 or 100W TDP... (I figure 150W/node including PSU overhead.)
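a quick budget shows how that ~150W/node figure could come together. only the APU TDP is from the text above; the DIMM and NIC figures and the ~85% PSU efficiency are my assumptions, so treat this as a plausibility check, not a spec:

```python
# Rough node-power budget for a minimalist APU node. Only the APU TDP
# comes from the thread; the other component figures and the PSU
# efficiency are assumptions for illustration.
COMPONENTS_W = {
    "APU (Llano, high bin)": 100,  # 65 or 100W TDP, per the text
    "4x DIMM": 16,                 # assumption: ~4W per DIMM
    "NIC + misc chipset": 10,      # assumption
}
PSU_EFFICIENCY = 0.85              # assumption: typical PSU efficiency

dc_load = sum(COMPONENTS_W.values())
wall_power = dc_load / PSU_EFFICIENCY
print(f"DC load: {dc_load} W, at the wall: {wall_power:.0f} W")
```

with these guesses the wall draw lands just under 150W, consistent with the estimate above.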
_______________________________________________
Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf