
Intel 64bit (EMT) Fortran code and AMD Opteron

I'm always amazed at what comes out of seemingly innocent questions. Often, simple questions get small, simple answers. In other instances, a simple question can generate an avalanche of responses and conversations.

For example, on Oct. 28, 2004, Roland Krause asked about experiences with the Intel EM64T compiler on Opteron systems. Craig Tierney then posted asking if binaries from the Intel EM64T compiler would even work on Opterons without SSE3. He went on to say that if you don't vectorize, or only build 32-bit applications, you should be fine. "However, for most applications the vectorization is going to give you the big win." This last statement started some interesting discussion.

Greg Lindahl quickly pointed out that most people think that vectorization is "going to give you a big win" but that SIMD (Single Instruction Multiple Data) optimization doesn't help any of the codes in the SPECfp benchmark. He mentioned that the Opteron can use both floating point pipes with scalar code, which is different from the Pentium 4. He went on to say, "I'd say this myth is the #1 myth in the HPC industry right now." I think that for this observation alone, this thread was well worth reading.

The accomplished Mikhail Kuzminsky then posted that code generated with the Intel EM64T compiler would run on an Opteron as long as it did not use any SSE3 instructions. He also said it was possible to avoid SSE3 when compiling. Serguei Patchkovskii echoed Mikhail's comments.
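For reference, with the Intel compilers of that era the SSE3 question came down to the `-x` code-generation flags. A sketch follows; the flag meanings are from the 8.x-era compilers and are worth checking against your compiler's documentation, since these options changed between releases:

```shell
# Hypothetical ifort invocations illustrating the SSE3 issue.
# -xW targets SSE2 (Pentium 4 class), so the binary also runs on Opteron.
ifort -O3 -xW myprog.f90 -o myprog

# -xP adds SSE3 (Prescott); the binary fails on pre-SSE3 Opterons.
ifort -O3 -xP myprog.f90 -o myprog
```

In other words, the compiler will happily produce Opteron-safe code if you ask for an instruction set the Opteron actually implements.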

Nicholas Breakley pointed to Polyhedron's benchmarks. He mentioned that the Intel compiler came in just behind Pathscale's compiler on Opteron.

Intel's compilers have had some difficulties with Opteron processors. Here is a web page that describes the problems and offers some solutions. While many people tend to blame Intel for these problems, and I can see their point since AMD is a competitor, I also think Intel is correct that it is difficult to support the Opteron at high levels of optimization since it's not their chip. So, in the meantime, the people mentioned on the web page have been patching Intel compilers to help the AMD folks.

PVFS on 80 proc (40 node) cluster

On Oct. 30, 2004 (Halloween Eve), Jeff Candy asked about experiences people had with PVFS (Parallel Virtual File System) on a cluster with about 80 CPUs using GigE. He was considering PVFS instead of NFS. Jeff quickly added that he was running a large physics code that does about 200 KB of I/O every 10 to 60 seconds, and that every 10 minutes or so a 100 MB file is written. Jeff also said that he wanted a single file system for /home and for his working directory.
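As a rough sanity check, that I/O pattern is quite modest. A back-of-the-envelope calculation (numbers taken from Jeff's post, per job rather than aggregate) shows the average demand:

```python
# Back-of-the-envelope bandwidth for the I/O pattern in the post.
small_write_bytes = 200 * 1024       # ~200 KB of I/O ...
small_interval_s  = 10               # ... worst case, every 10 seconds
big_write_bytes   = 100 * 1024 ** 2  # ~100 MB file ...
big_interval_s    = 10 * 60          # ... every ~10 minutes

small_rate = small_write_bytes / small_interval_s  # bytes/sec
big_rate   = big_write_bytes / big_interval_s      # bytes/sec
combined_kb_s = (small_rate + big_rate) / 1024

print(f"small writes: {small_rate / 1024:.0f} KB/s sustained")
print(f"large writes: {big_rate / 1024:.0f} KB/s averaged")
print(f"combined:     {combined_kb_s:.0f} KB/s")
```

Even the worst case comes out under 200 KB/s averaged, which helps explain why plain NFS was even on the table; the interesting question is the burst when that 100 MB file lands, and how many jobs do it at once.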

Brian Smith then posted that Jeff should consider PVFS or any other parallel file system over NFS mounting for concurrent scratch space. Brian thought that PVFS2 was better than PVFS1.

Rob Latham, one of the PVFS developers, then posted to mention that with shared storage, "heartbeat," and enough hardware, you could have redundant PVFS1 and PVFS2 nodes (a previous posting had indicated that PVFS did not have redundancy built in). He added that while PVFS did not yet have software redundancy, it was a very active area of research. Rob also pointed out that people who have been using PVFS have not found reliability to be a problem: they run their applications with PVFS as the scratch space and then copy their checkpoint data to tape or long-term storage.

Of course, one solution that people didn't really put forth was to use both NFS and PVFS. You can use NFS for /home and PVFS for scratch space. You could even make your compute nodes diskless if you like. Ahhh, the flexibility of clusters.
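On a compute node, that split could look something like the following /etc/fstab sketch. The host names ("master" and "io1") are made up for illustration, and the PVFS2 mount syntax shown here should be checked against the PVFS2 documentation for your version:

```
# /etc/fstab fragment (hypothetical hosts): NFS for /home, PVFS2 for scratch
master:/home             /home         nfs    defaults         0 0
tcp://io1:3334/pvfs2-fs  /mnt/scratch  pvfs2  defaults,noauto  0 0
```

With /home on NFS and scratch on PVFS2, nothing here requires a local disk, which is what makes the diskless-node option possible.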

Sidebar One: Links Mentioned in Column

Intel Compilers on Opteron

Fortran Compiler Comparisons

This article was originally published in ClusterWorld Magazine. It has been updated and formatted for the web. If you want to read more about HPC clusters and Linux you may wish to visit Linux Magazine.





Creative Commons License
©2005-2019 Copyright Seagrove LLC, Some rights reserved. Except where otherwise noted, this site is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 2.5 License. The Cluster Monkey Logo and Monkey Character are Trademarks of Seagrove LLC.