Select News

The news in this category has been selected by us because we thought it would be interesting to hard-core cluster geeks. Of course, you don't have to be a cluster geek to read the news stories.

Read about three new processors hitting the market and nothing about basketball

I waited to write a joint news story about the new processors that were due to arrive this month. This way, you can click from here to some good reviews and insights rather than hunting around the Internet. We are now permanent residents of multi-core-ville. Remember those single-core processors? Seems like a while ago. Although, I bet you still use hardware that has some of these "old" processors. Of course, the Intel and AMD announcements are pointed toward servers, but the goodness always trickles down. Other than making the Internet faster, I am still trying to figure out what Joe or Jane Sixpack will do with six cores in their bargain-of-the-week lap/desktop from the big box store.

In case you were not paying attention

I usually don't post things vendors send to Cluster Monkey because we are not a big-time news site (nor do I care for marketing piffle). I will at times summarize some important news and events with enough links to send you on your way to HPC enlightenment. Each year NVIDIA sends me a year in review, which is a good summary of Tesla HPC events -- complete with many URLs so readers can explore further. [A note to vendors: Company news (aka press releases) with URLs, good background, and no jargon may get posted here.] The NVIDIA round-up begins below:

NVIDIA Tesla - 2009 Year in Review

GPU Computing had a groundbreaking year in 2009. In just two and a half years from its launch, the Tesla brand has truly established itself in the HPC community. This wouldn't have happened without the efforts of real GPU Computing pioneers such as Prof. Wen-mei Hwu at the University of Illinois, who taught the very first courses in parallel programming on the GPU, and Prof. Satoshi Matsuoka at the Tokyo Institute of Technology, who put the first Tesla GPU-enabled supercomputer onto the Top 500 (Top 30, in fact), just one and a half years after we launched the brand.

A "tipping point" is defined as a level at which momentum for change becomes unstoppable - we genuinely believe that we are witnessing the tipping point for GPUs in the high performance computing space and the SC09 conference in Portland, Ore. in November cemented that belief.....but we'll come to that in due course :)

Some conveniently late news (after SC09) for the HPC market

I may write more about SC09 in the next few weeks, but for now the video is starting to show up at Linux Magazine, and I have already posted SC09: Three Trends Worth Watching over at HPC Community. You can get a good feel for things from those two articles. I'll also be posting more video on Linux Magazine over the next several weeks.

The weeks after SC are usually quiet as the holidays approach. This year seems to be a little different. There have been more than a few interesting announcements about the HPC market. I find it kind of interesting how all these show up after SC09. Perhaps these are the kind of announcements you don't want gobbled up by the press. In any case, I have collected the stories and added a little commentary to each. An interesting time in HPC, to say the least.

That is parallelization, not paralyzation

Recently, I had a conversation with Dmitry Tkachev of T-Platforms. Of course many of you may not know about T-Platforms because you don't buy clusters in Russia. Dmitry was not trying to sell me a cluster, however. He was pointing me to some auto-parallelizing compiler technology from a Russian software company called Optimitech.

I asked Dmitry if he could provide a brief summary of what Optimitech was developing for the HPC crowd, as they offer some auto-parallelizing patches to gcc/gfortran. In any case, I'll let Dmitry continue (I helped with the grammar a bit, although some may consider my help questionable.)
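To give a flavor of what auto-parallelization means in practice, here is a minimal sketch in plain C. This is my own illustration, not Optimitech's code: a loop whose iterations are independent, which is exactly the kind of pattern such compilers look for. Mainline gcc can already thread loops like this with its -ftree-parallelize-loops flag; the Optimitech patches target the same class of loops.

```c
/* A hypothetical example, not Optimitech's code: an independent-iteration
 * loop that an auto-parallelizing compiler can split across threads.
 * With mainline gcc this can be built as, e.g.:
 *
 *     gcc -O2 -ftree-parallelize-loops=4 -c saxpy.c
 */
#include <stddef.h>

void saxpy(size_t n, float a, const float *x, float *y)
{
    /* Each iteration writes only y[i] and reads only x[i] and y[i],
     * so iterations do not depend on one another and the compiler is
     * free to distribute them across threads. */
    for (size_t i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}
```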

I'm at the NVIDIA GPU conference, and yesterday they announced their next-generation GPU called Fermi. Here are the key points:

  • C++, complementing existing support for C, Fortran, Java, Python, OpenCL and DirectCompute.
  • ECC, a critical requirement for datacenters and supercomputing centers deploying GPUs on a large scale
  • 512 CUDA Cores™ featuring the new IEEE 754-2008 floating-point standard, surpassing even the most advanced CPUs
  • 8x the peak double precision arithmetic performance over NVIDIA’s last generation GPU. Double precision is critical for high-performance computing (HPC) applications such as linear algebra, numerical simulation, and quantum chemistry
  • NVIDIA Parallel DataCache™ - the world’s first true cache hierarchy in a GPU that speeds up algorithms such as physics solvers, raytracing, and sparse matrix multiplication where data addresses are not known beforehand
  • NVIDIA GigaThread™ Engine with support for concurrent kernel execution, where different kernels of the same application context can execute on the GPU at the same time (eg: PhysX® fluid and rigid body solvers)

To get the full technical story, grab the Fermi White Paper. The bottom line: NVIDIA is paying attention to HPC in a BIG way.
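To make the concurrent kernel execution bullet concrete, here is a minimal CUDA sketch (my own illustration using the standard CUDA runtime API, not NVIDIA's example code): two independent kernels from the same application context launched into separate streams, which Fermi-class hardware is allowed to overlap.

```cuda
// Minimal sketch: two independent kernels launched into separate streams.
// On hardware with concurrent kernel execution (Fermi and later) the two
// launches may overlap; on earlier parts they simply serialize.
#include <cuda_runtime.h>

__global__ void scale(float *v, float s, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) v[i] *= s;
}

__global__ void offset(float *v, float o, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) v[i] += o;
}

int main(void)
{
    const int n = 1 << 20;
    float *a, *b;
    cudaMalloc(&a, n * sizeof(float));
    cudaMalloc(&b, n * sizeof(float));
    cudaMemset(a, 0, n * sizeof(float));
    cudaMemset(b, 0, n * sizeof(float));

    cudaStream_t s0, s1;
    cudaStreamCreate(&s0);
    cudaStreamCreate(&s1);

    // Same application context, different streams: eligible to run concurrently.
    scale<<<(n + 255) / 256, 256, 0, s0>>>(a, 2.0f, n);
    offset<<<(n + 255) / 256, 256, 0, s1>>>(b, 1.0f, n);

    cudaDeviceSynchronize();
    cudaStreamDestroy(s0);
    cudaStreamDestroy(s1);
    cudaFree(a);
    cudaFree(b);
    return 0;
}
```

The point is simply that the two launches touch different data and go to different streams; Fermi's scheduler may run them side by side rather than one after the other.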


©2005-2023 Copyright Seagrove LLC, Some rights reserved. Except where otherwise noted, this site is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International. The Cluster Monkey Logo and Monkey Character are Trademarks of Seagrove LLC.