- Published on Monday, 01 December 2008 11:26
- Written by Jeff Layton
Nothing stops a cluster geek, not even surgery
As some of you might know, I ended up spending a great deal of SC08 either in the hospital or in my hotel room recovering from emergency surgery. Not the best way to spend SC since it only comes once a year, but my body just didn't allow it. However, I did get out a little on the last day to run around the show floor like a madman.
SC is always a very interesting conference for me for many reasons. I get to see some cool new toys, see old friends, make new ones, and totally geek-out for a week without my family rolling their eyes at me in total embarrassment. So, without further ado, here are my (limited) impressions of SC08.
Austin's Really Great
I get to spend a great deal of time in Austin because of my day job, but it's not downtown. So it was very interesting for me to be downtown in Austin, especially near the nightlife around 6th Street. It's a much, much better destination than Reno for SC07 or Tampa for SC06. There are tons of places to eat, including some places with good steaks and BBQ (I'm a huge BBQ nut). The prices can be a little steep, but at least we could find places to eat (unlike Tampa) and we didn't have to cut through the smoke to get anywhere (like Reno). So, hats off to the SC committee - Austin was a pretty good pick. BTW - the hospital near downtown, Brackenridge, is a top-notch hospital and a major trauma center. The people there were spectacular, to say the least. But then again, I'm not going to judge the location for SC based on the quality of the hospitals. But given my rapidly advancing age, it may become one of my key criteria for future SC conferences.
I think several other people (e.g. Doug and Joe Landman), have mentioned that the show floor felt less full than usual. I don't know what the final attendance was, but I do know that a number of people who were supposed to come, canceled at the last minute. I guess the reality of operating expenses has hit just about everyone. But walking around the floor, I got the distinct impression that the attendance was down.
Another impression I got was that the number of "customer" booths was way up. Remember that SC is a unique conference in that the vendors and their customers all share the same exhibit floor. To me, it seemed as though the number of customer booths, primarily universities and national labs, was up considerably. I didn't stop to talk to many of them, but my usual favorites, TACC and aggregate.org, were there and in rare form. I did see more universities from Asia, which I think is a good sign. At the same time, the national lab booths just seem to be getting bigger and more elaborate every year. I'm waiting for the day when the largest and loudest booth with the best swag is not a vendor but rather a national lab. When that happens I think it will speak volumes about the HPC industry and funding. But, I digress.
One other impression I have is that the general buzz was different as well. It didn't seem as "fun" as past SC shows. There seemed to be a "bite" in every conversation. My favorite conversations were academics talking to vendors, or sometimes at other non-vendor booths. These conversations got very heated, with some academics raising their voices to tell vendors that they were dead wrong, that they were hurting the industry, and that if the vendors would only listen, the academics had a solution to whatever problem there was. In 5+ years of SCs, I have never heard conversations get this heated. Things were usually very pleasant and at the very least fun, technically. But when you get people who are absolutely convinced they are absolutely right, and whose sensitivity, for whatever reason, is heightened, it makes for a really argumentative environment. And I didn't get this impression from one or two discussions, but from many of them. Sigh... I hope it was just because some people were grumpy, but if the general attitude is true then I don't think it's a good sign for the community (I won't even talk about the beowulf list, which has become next to useless, but that's another story... :) ).
Cool Stuff for HPC
Since I didn't walk around the show floor too much, I will have to rely on press releases and website information to help. I always look for a "theme" or two at the show, and this year I think I can definitely find one theme and perhaps a second and third. The main theme of this year's show, at least to me, was GPUs.
Everyone was talking about or demoing GPUs for HPC. I've been following GPUs for a number of years and I was glad to see them come to the forefront this year. A number of vendors were demoing systems with GPUs, such as Cray, Bull, Dell, NEC, HP, BOXX, Mathematica, Lenovo and others. Plus there seemed to be lots of discussion about tools for GPUs, with many people expressing hope that OpenCL would be the savior of GPU coding.
Nvidia had a press release about what other companies are doing to incorporate Nvidia GPUs into Personal Supercomputers. In general, the plan is to use Nvidia's C1060 card in a workstation or rack mount system. They even have a website that discusses personal supercomputers using Tesla cards.
You can go to the Home of CUDA, Nvidia's freely available toolkit for building GPU codes. There are a number of examples of speedups obtained from running on GPUs. However, getting your application to run on GPUs is not as simple as a "make" or adding a new option to a command line (e.g. "-gpu"). You still have to rethink your algorithms to take advantage of the GPU. While this sounds easy, it's not. You have to retrain the way you think to take advantage of the GPU. But if you can coerce your code into running on GPUs, the potential for order-of-magnitude increases in performance is there. Keep in mind that not all codes or algorithms may be able to take advantage of GPUs.
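To give a feel for the kind of rethinking involved, here is a minimal sketch (my own, not taken from Nvidia's examples) of how a simple serial loop gets restructured for CUDA: the loop disappears, and each GPU thread computes one element. It assumes the CUDA toolkit and nvcc.

```cuda
// Serial CPU version: one thread walks the whole array.
//   for (int i = 0; i < n; i++) y[i] = a * x[i] + y[i];

// CUDA version: the loop is gone; each GPU thread handles one element.
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n)                                      // guard against extra threads
        y[i] = a * x[i] + y[i];
}

// Host side: data must be copied to the card and back explicitly.
void saxpy_on_gpu(int n, float a, const float *x, float *y)
{
    float *dx, *dy;
    size_t bytes = n * sizeof(float);
    cudaMalloc((void **)&dx, bytes);
    cudaMalloc((void **)&dy, bytes);
    cudaMemcpy(dx, x, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, y, bytes, cudaMemcpyHostToDevice);

    int threads = 256;                        // threads per block
    int blocks = (n + threads - 1) / threads; // enough blocks to cover n
    saxpy<<<blocks, threads>>>(n, a, dx, dy);

    cudaMemcpy(y, dy, bytes, cudaMemcpyDeviceToHost);
    cudaFree(dx);
    cudaFree(dy);
}
```

Even in this trivial case you have to think about memory transfers and thread geometry; for real algorithms with data dependencies, the restructuring is far more invasive.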
While Nvidia was the main talk in regard to GPUs, Aprius was also there showing off an interesting box called the CA8000 Computational Acceleration System. It's a 4U box that contains up to eight (8) PCIe boards - most likely computational acceleration cards (e.g. GPUs). Each slot accepts a double-wide PCIe x16 Gen 2 card that draws up to 300W. Ideally, you populate the CA8000 with a few cards such as GPUs, and then use the Aprius PCIe optical adapters in the box to connect it to a single node or multiple nodes. You can use up to four (4) of these adapters along with four (4) cards. This is perfect for situations where the compute nodes cannot handle a GPU directly (either they don't have the right kind of slot or they don't have enough power). Using the adapters you can get a 2:1 or a 4:1 accelerators/node ratio with this box.
Since AMD doesn't have an external GPU box as Nvidia does, the CA8000 is perfect for AMD GPU solutions. It also matches Nvidia's recommended ratio of no more than two GPUs per CPU. But the CA8000 does not offer the density that Nvidia's 1U S1070 box offers. Nonetheless, I think this box is very interesting for a variety of reasons: it allows nodes that can't host a GPU to connect to GPUs, and it gives AMD an external solution that comes close to Nvidia's.

Nvidia was also on the floor in full force. Their booth is always good and they have some real technical experts floating around (unlike some companies who stuff their booths with eye-candy and don't send anyone with technical skills to back them up - but that's another story). Due to my horribly limited time I didn't get to chat with Nvidia. I'm sorry I missed that, since it's always a highlight of the show.
One of the coolest announcements, and one I was really looking forward to digging into, was the Portland Group's new version (8.0) of their compiler suite. While the compilers are always good and PGI continues to make them better, I think this suite could represent the beginning of a huge trend for GPUs: integrating GPU code generation into standard compilers.
The idea is that standard compilers gain the ability to generate code for GPUs. Of course, you have to write code that the compiler recognizes or, even better, the compiler could have a compile option such as "-gpu" that would look at the code and generate GPU code where appropriate. I know this is wishful thinking, but the compiler writers at PGI are exceptional. The advantage of this approach is that it allows people to use standard compilers, which they may already be using, to build applications that will run on GPUs. This approach is even more important for Fortran, since there are no really good ways to easily port Fortran code to run on GPUs.
Keep watching CM for a follow-up article I hope to do on the new PGI compilers.