If Nikola could see us now!
Each year NVIDIA publishes a "year in review" that I find very interesting. It is a good summary of the year's events (from NVIDIA's perspective, of course), but informative nonetheless. Plus, there are plenty of links to facilitate further exploration. This year's round-up follows.
Tesla - A Year in Review - 2010
The growth of GPU Computing in HPC has continued unabated this year, with many new milestones achieved. Hard to believe that it's only been three and a half years since Tesla launched. At the end of last year, we talked about how it felt like we had reached a "tipping point" with Tesla, a level at which momentum for change seemed unstoppable. If I had to find two words to summarize this year, I would say that it feels like Tesla has reached escape velocity: the speed needed to break free of a gravitational field, or in Tesla's case, a level of momentum at which deployments are increasing rapidly and the question on many of our customers' lips is no longer "if" we deploy GPUs but "when".
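As an aside, the physics behind the metaphor is a one-liner: equate an object's kinetic energy with the work needed to climb out of the gravitational well, and solve for speed.

```latex
\[
  \tfrac{1}{2} m v_e^{2} = \frac{GMm}{r}
  \quad\Longrightarrow\quad
  v_e = \sqrt{\frac{2GM}{r}} \approx 11.2~\text{km/s at Earth's surface.}
\]
```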
These are our Top 10 takeaways for the year:
- CUDA by the Numbers - There are a lot of metrics we use internally to track the progress of CUDA, but however you cut it, we've seen stellar growth across the board this year in terms of developer adoption, education, and community momentum.
| Metric | 2009 | 2010 | % Increase |
| --- | --- | --- | --- |
| Attendees at GPU Technology Conference (GTC) | 1,423 | 2,166 | 52% (industry average ≈ 20%) |
| Universities teaching CUDA | 270 | 350 | 30% |
| CUDA-related videos on YouTube | 800 | 1,250 | 56% |
| Submissions to CUDA Zone | 670 | 1,235 | 85% |
| Cumulative downloads of CUDA SDK | 293,000 | 668,000 | 127% |
| CUDA-related citations on Google Scholar | 2,700 | 7,000+ | 160% |
| Submissions to speak at GTC | 67 | 334 | 398% |
- The Computational Laboratory - In January, we launched a new initiative for the bioinformatics and computational chemistry community, called the Tesla Bio Workbench. The initiative brought together more than 20 prominent computational research codes, such as AMBER, VMD, and LAMMPS, enabling scientists who rely on these codes to turn their standard PCs into "computational laboratories" capable of doing science 10 to 20 times faster through the use of Tesla.
In the case of AMBER, one of the most widely used applications among biochemists, performance increases of up to 100X are being seen and, more importantly, critical research that once required a supercomputer can now be done on a desktop workstation. The Tesla Bio Workbench site saw more than 10,000 visitors in the first two weeks alone, and since then, more than half of the 150,000+ visitors have clicked through to the specific pages belonging to the research codes.
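To see why molecular dynamics codes map so well onto GPUs, it helps to look at the shape of the computation. The sketch below is a minimal illustration of my own, not code from AMBER or any other Workbench application: a brute-force CUDA kernel that gives each thread one particle and has it accumulate Lennard-Jones forces against all the others.

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// One thread per particle: brute-force O(N^2) Lennard-Jones forces.
// Real MD codes (AMBER, LAMMPS, ...) use neighbor lists, cutoffs, and
// far more careful numerics -- this only shows the data-parallel shape.
__global__ void lj_forces(const float3 *pos, float3 *force, int n,
                          float epsilon, float sigma)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    float3 f = make_float3(0.0f, 0.0f, 0.0f);
    for (int j = 0; j < n; ++j) {
        if (j == i) continue;
        float dx = pos[i].x - pos[j].x;
        float dy = pos[i].y - pos[j].y;
        float dz = pos[i].z - pos[j].z;
        float r2   = dx * dx + dy * dy + dz * dz;
        float inv2 = 1.0f / r2;
        float s6   = sigma * sigma * inv2;
        s6 = s6 * s6 * s6;                       // (sigma/r)^6
        // Per-distance scale of the 12-6 force: 24*eps*(2*s6^2 - s6) / r^2
        float scale = 24.0f * epsilon * s6 * (2.0f * s6 - 1.0f) * inv2;
        f.x += scale * dx;
        f.y += scale * dy;
        f.z += scale * dz;
    }
    force[i] = f;
}

int main()
{
    const int n = 4096;
    size_t bytes = n * sizeof(float3);
    float3 *h_pos   = (float3 *)malloc(bytes);
    float3 *h_force = (float3 *)malloc(bytes);
    for (int i = 0; i < n; ++i)                  // toy positions along a line
        h_pos[i] = make_float3(1.5f * i, 0.0f, 0.0f);

    float3 *d_pos, *d_force;
    cudaMalloc(&d_pos, bytes);
    cudaMalloc(&d_force, bytes);
    cudaMemcpy(d_pos, h_pos, bytes, cudaMemcpyHostToDevice);

    lj_forces<<<(n + 255) / 256, 256>>>(d_pos, d_force, n, 1.0f, 1.0f);
    cudaMemcpy(h_force, d_force, bytes, cudaMemcpyDeviceToHost);

    printf("force on particle 0: (%g, %g, %g)\n",
           h_force[0].x, h_force[0].y, h_force[0].z);

    cudaFree(d_pos); cudaFree(d_force);
    free(h_pos); free(h_force);
    return 0;
}
```

Production codes replace the O(N²) inner loop with cutoffs and neighbor lists, but the one-thread-per-particle structure is what lets hundreds of GPU cores work in parallel, and it is where speedups of the 10-100X sort come from.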
- "Build it and they will come" - when I wrote this recap last year, there was
1 OEM with a Tesla SKU as a part of their line-up. Today, this number is up to 9,
with a total of 19 Tesla-specific SKUs now available, many using the Tesla M2050 GPU
Computing Module.
The list includes all the major players, such as Cray, Dell, HP, and SGI, but perhaps most notable is IBM, which in May became the first major OEM to offer a Tesla-based server solution in its iDataPlex line. For IBM, it was a sign that GPU Computing was mature enough to warrant its entry into the space. Dave Turek, IBM's VP of Deep Computing, said:
"I think what's changed is that customers have been experimenting for a long time and now they're getting ready to buy. It wasn't the technology that drove us to do this. It was the maturation of the marketplace and the attitude toward using this technology. It's as simple as that."
- To the Nebulae and Beyond - At the International Supercomputing Conference in June, the world's first Tesla GPU-enabled petaflop supercomputer made its debut. Equipped with 4,640 Tesla "Fermi" GPUs, Nebulae, at the National Supercomputing Center in Shenzhen, China, made its mark on the Top500 by entering at number 2, with sustained performance of 1.27 petaflops. Another system, from the Chinese Academy of Sciences, also entered the chart, at number 19.
This marked the beginning of what was to be an impressive year for China. As a relative newcomer to the supercomputing space, China is unrestricted by the need to support legacy software and systems, so it has been fearless in its adoption of GPU computing. The country has shown that it understands the significance of supercomputing, as it seeks to evolve from being a manufacturing powerhouse to become a global leader in science and technology.
- The Beginning of the Race for Better Science - Following the June Top500 list, the Undersecretary for Science at the DOE, Steve Koonin, wrote an op-ed for the San Francisco Chronicle. In this piece he voiced his concern about Nebulae, stating that "these challenges to U.S. leadership in supercomputing and chip design threaten our country's economic future." Undersecretary Koonin's concern is that without the latest technologies, the U.S. will fall behind the rest of the world in critical areas of industry, such as simulation for product design. Leadership here enables the U.S. to continue pushing the technology envelope while encouraging innovation.
The sentiment was echoed by others, such as Senator Mark Warner and NVIDIA's own Andy Keane, whose piece on AllThingsD encouraged a lot of lively discussion, such as this comment from insideHPC:
"I agree with Andy on this one; the Senate should get behind Senator Mark Warner (D-VA) and his amendment to the reauthorization of the America Competes Act. If we as an HPC community, or as a country for that matter, aren't agile enough to adapt, we could find ourselves being trounced by our own inventions."
The use of GPUs to further science was a topic covered in a recent pilot of a documentary series that NVIDIA produced, entitled The 3rd Pillar of Science. In this pilot, we spoke to leading medical experts who are using GPUs for groundbreaking medical techniques, such as advanced cancer treatment and real-time open-heart surgery.