From the no sign of Dorothy and Toto department

There was an interesting post on Slashdot recently about a tornado simulation. Leigh Orf of Central Michigan University is a member of a research team that created a supercell thunderstorm simulation. He presented the results at the 27th Annual Severe Local Storms Conference in Madison, Wisconsin. The talk was produced entirely as high-definition video and posted to YouTube shortly after the presentation. It is yet another demonstration of the power and utter "coolness" of HPC. The talk is a bit technical unless you are a meteorologist. Check out the video and read on for more HPC background.

Taken from Slashdot:

In the simulation, the storm's updraft is so strong that it essentially peels rain-cooled air near the surface upward and into the storm's updraft, which appears to play a key role in maintaining the tornado. The simulation was based upon the environment that produced the May 24, 2011 outbreak, which included a long-track EF5 tornado near El Reno, Oklahoma (not to be confused with the May 31, 2013 EF5 tornado that killed three storm researchers).

You can read about the Blue Waters hardware profile here. Our simulation "only" utilized 20,000 of the approximately 700,000 processing cores on the machine. Blue Waters, like all major supercomputers, runs a Linux kernel tuned for HPC.

The cloud model, CM1, is a hybrid MPI/OpenMP model. Blue Waters has 16 cores (or 32, depending on how you count) per node. We run 16 MPI processes per node, and each MPI rank uses two OpenMP threads. Our decomposition is nothing special, and it works well enough at the scales we are running at.
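To make the hybrid layout concrete, here is a minimal sketch of the bookkeeping behind a 2D horizontal domain decomposition, the usual approach for cloud models. This is a generic illustration, not CM1's actual code; the function name, grid sizes, and 4x4 process grid are assumptions.

```python
# Hypothetical sketch of a 2D horizontal domain decomposition: split an
# nx x ny grid across px x py MPI ranks. (Not CM1's actual code; within
# each rank, OpenMP threads would then share the loops over its subdomain.)

def subdomain(rank, nx, ny, px, py):
    """Return the (i0, i1, j0, j1) index bounds owned by `rank`."""
    assert 0 <= rank < px * py
    ri, rj = rank % px, rank // px      # rank's position in the process grid
    cx, cy = nx // px, ny // py         # chunk sizes, assuming an even split
    return (ri * cx, (ri + 1) * cx, rj * cy, (rj + 1) * cy)

# Example: 16 MPI ranks as a 4 x 4 process grid over a 512 x 512 tile.
print(subdomain(5, 512, 512, 4, 4))    # rank 5 sits at (1, 1) in the grid
```

Each rank does halo exchanges with its neighbors in this process grid; the OpenMP threads parallelize the inner loops so the MPI rank count stays modest.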

The simulation produced on the order of 100 TB of raw data. It is easy to produce a lot of data with these simulations - data is saved as 3D floating point arrays and only compresses roughly 2:1 in aggregate form (some types of data compress better than others). I/O is a significant bottleneck for these types of simulations when you save data very frequently, which is necessary for these detailed simulations, and I've spent years working on getting I/O to work sufficiently well so that this kind of simulation and visualization was possible.
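The roughly 2:1 aggregate compression reflects a basic property of model output: fields with large smooth or uniform regions compress well, while noisy fields barely compress at all. A small illustration (the field construction here is invented for the example, not drawn from the model):

```python
import array
import zlib

# Hypothetical illustration: a float32 field with large runs of repeated
# values (as in uniform regions of a model domain) compresses far better
# than 2:1; noisy fields would compress much worse.
field = array.array("f", ((i // 100) / 1000.0 for i in range(100_000)))
raw = field.tobytes()
packed = zlib.compress(raw, level=6)
ratio = len(raw) / len(packed)
print(f"compressed {len(raw)} -> {len(packed)} bytes ({ratio:.1f}:1)")
```

The mix of smooth and turbulent regions in a real storm simulation is what lands the aggregate ratio near 2:1.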

The CM1 model is written in Fortran 90/95. The code I wrote to get all the I/O and visualization stuff to work is a combination of C, C++, and Python. The model's raw output format is HDF5, with files scattered about in a logical way. I've written a set of tools that greatly simplify working with the data: an API that accesses the data at a low level but requires the user to do nothing more than request data bounded by Cartesian coordinates.
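The author's tools aren't shown in the post, but the core idea of such an API, translating physical Cartesian bounds into grid indices so the user never touches file layout, can be sketched as follows (the function name and grid parameters are assumptions for illustration):

```python
import math

# Hypothetical sketch: map a physical interval [x0, x1] (e.g. meters) onto
# grid indices for a uniform grid with spacing dx. A real library would do
# this per axis, then gather the indexed slabs from whichever files hold them.

def bounds_to_indices(x0, x1, dx, x_origin=0.0):
    """Return the (i0, i1) index range covering the interval [x0, x1]."""
    i0 = int(math.floor((x0 - x_origin) / dx))
    i1 = int(math.ceil((x1 - x_origin) / dx))
    return i0, i1

# Request the slab from x = 250 m to x = 1000 m on a 125 m grid.
print(bounds_to_indices(250.0, 1000.0, 125.0))
```

The point of the low-level API is that this index arithmetic, and the mapping from index ranges to scattered HDF5 files, is hidden behind the coordinate-based request.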

I would have to say the biggest challenge wasn't technical (and the technical challenges are significant), but was physical: Getting a storm to produce one of these types of tornadoes. They are very rare in nature, and this behavior is mirrored in the numerical world. We hope to model more of these so we can draw more general conclusions; a single simulation is compelling, but with sensitivity studies etc. you can really start to do some neat things.

We are now working on publishing the work, which seems to have "passed the sniff test" at the Severe Local Storms conference. It's exciting, and we look forward to really teasing apart some of these interesting processes that show up in the visualizations.






©2005-2023 Copyright Seagrove LLC. Some rights reserved. Except where otherwise noted, this site is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license. The Cluster Monkey Logo and Monkey Character are Trademarks of Seagrove LLC.