Select News

The news in this category has been selected by us because we thought it would be interesting to hard-core cluster geeks. Of course, you don't have to be a cluster geek to read the news stories.

Sit up straight and pay attention

Starting March 14th, 2014, and continuing for 10 weeks, Dr. Randall J. LeVeque of the University of Washington will be teaching High Performance Scientific Computing online at Coursera. There is no cost to take the course.

From the course description: Programming-oriented course on effectively using modern computers to solve scientific computing problems arising in the physical/engineering sciences and other fields. Provides an introduction to efficient serial and parallel computing using Fortran 90, OpenMP, MPI, and Python, and software development tools such as version control, Makefiles, and debugging.
Workload: 5-10 hours/week
Taught In: English
Subtitles Available In: English

From the Long Live Fortran department

Two recent stories should bode well for the HPC market. First, OpenMP.org released the new OpenMP 4.0 specification. This specification includes many new features, among them support for accelerators (GPUs and the Intel Phi) and SIMD math units on processors (and much more). A longer description can be found below (with a link to the specification page). The OpenMP API provides a set of "pragma" directives (structured comments in Fortran) that can be placed into existing Fortran and C/C++ code. The original source code remains unmodified and usable on other systems.

The second story is that GPU/HPC vendor NVidia has bought the Portland Group. The Portland Group (PGI), known for its high-performance compilers, has developed Fortran and C/C++ compilers that can directly address NVidia GPUs (AMD GPUs were mentioned as well, but it is assumed this support is going away). Their technology, which is similar to the OpenMP approach, has brought GPU performance to many existing applications. While some consider the NVidia purchase a reduction of choice in the market, it probably signals a much stronger move toward standardization via OpenMP.

The announcement of OpenMP 4.0 means that end-users can continue to operate at the "Fortran and C/C++" level and not have to look for custom programming methods to use new hardware. In other words, they don't need to use languages like CUDA or OpenCL, which may require large amounts of re-programming. Indeed, the NVidia acquisition of PGI is a signal that NVidia believes the future of GPU programming for HPC lies in Fortran and C/C++. Coupled with the OpenMP announcement, one might conclude that CUDA may be taking a back seat to the stalwart traditional compilers. A detailed OpenMP announcement follows.

There is fast, then there is Scalable fast

Other people outside of HPC also like fast computers and storage. One of these groups is the Wall Street mavens, who operate on a simple premise: "fast makes more money than slow." Recently, very fast storage vendor Scalable Informatics had its siFlash and JackRabbit boxes tested by STAC (the Securities Technology Analysis Center), and the results were quite impressive. The actual results are described in reports KDB130528 and KDB130529, which are only available to STAC members. There is also a summary on the STAC website that provides some additional information.

In a good way, we think

Intel has recently announced a new Ethernet Open Network Platform that splits ("disaggregates") the control plane from the data plane and gives users the ability to control network aspects that were previously hidden "inside the box." The idea is to create global control of large networks, rather than relying on local switches. The control processors are, of course, of the Intel x86 variety. There is an SDK called the Data Plane Development Kit (which includes kernel-bypass tools) so that users can twiddle with the network design at a low level.

Don't expect to see products right away; these are open reference designs offered by Intel. They may offer some interesting options that allow networks to be designed and tuned for HPC. There is a good in-depth write-up at SemiAccurate.

From the notable news department

Some recent news that arrived in my inbox: Intersect360 Research, a leading market intelligence, research, and consulting advisory practice for the High Performance Computing industry, announced today that it has hired Michael Feldman, a 35-year computer industry veteran, to augment its analyst team.

Feldman, recognized as an expert in the HPC industry worldwide, is well-known as the former managing editor of HPCwire, where he spent eight years as one of the foremost predictors of HPC trends. Feldman was also the co-host, along with Intersect360 Research CEO Addison Snell, of the insightful and popular weekly HPCwire Soundbite podcast.

©2005-2023 Copyright Seagrove LLC, Some rights reserved. Except where otherwise noted, this site is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International. The Cluster Monkey Logo and Monkey Character are Trademarks of Seagrove LLC.