Programming Tools



 
*[http://www.haskell.org/haskellwiki/Haskell Haskell] is an advanced purely-functional programming language. An open-source product of more than twenty years of cutting-edge research, it allows rapid development of robust, concise, correct software. With strong support for integration with other languages, built-in concurrency and parallelism, debuggers, profilers, rich libraries and an active community, Haskell makes it easier to produce flexible, maintainable, high-quality software.  
 
== Compiler Enhancements/Code Conversion ==
  
 
These enhancements are used with Fortran and C/C++ compilers.  
 
 
*[http://openmp.org/wp/ OpenMP] is a standard for parallel programming on shared-memory systems that continues to extend its reach beyond pure HPC to embedded, multicore, and real-time systems. A new version is being developed that will add support for accelerators, error handling, thread affinity, tasking extensions, and Fortran 2003. Note: OpenMP is not a cluster programming tool; it works within multi-core cluster nodes and is supported by virtually all compilers.
 
*[http://www.openacc-standard.org/ OpenACC] is an Application Program Interface (API) that describes a collection of compiler directives to specify loops and regions of code in standard C, C++ and Fortran to be offloaded from a host CPU to an attached accelerator (e.g. GPUs), providing portability across operating systems, host CPUs and accelerators.
 
*[http://people.nas.nasa.gov/~hjin/CAPO/ CAPO] (Computer-Aided Parallelizer and Optimizer) automates the insertion of compiler directives to facilitate parallel processing on shared-memory parallel (SMP) machines. While CAPO is currently integrated seamlessly into CAPTools (developed at the University of Greenwich, now marketed as ParaWise), it is independently developed at NASA Ames Research Center as one of the components of the Legacy Code Modernization (LCM) project. The current version takes serial FORTRAN programs, performs interprocedural data-dependence analysis, and generates OpenMP directives. Because the OpenMP standard is widely supported, the generated code can run on a wide range of SMP machines.
 
  
 
== Lower Level Parallel Programming Libraries ==
 
 
*[http://www.jppf.org/ The Java Parallel Processing Framework] is a suite of software libraries and tools providing convenient ways to parallelize CPU-intensive processing. It is written in the Java programming language and is platform independent.
 
 
*[http://www.csm.ornl.gov/pvm/pvm_home.html PVM] (Parallel Virtual Machine) is a software package that permits a heterogeneous collection of Unix and/or Windows computers hooked together by a network to be used as a single large parallel computer.
 
 
== Profilers and Analyzers ==
 
*[http://www.mcs.anl.gov/research/projects/perfvis/software/viewers/index.htm Jumpshot] is a Java-based visualization tool for postmortem performance analysis of MPICH2 programs.
 
*[http://mpip.sourceforge.net/ mpiP] is a lightweight profiling library for MPI applications. Because it collects only statistical information about MPI functions, mpiP generates considerably less overhead and much less data than tracing tools. All the information captured by mpiP is task-local; it uses communication only during report generation, typically at the end of the run, to merge results from all of the tasks into one output file.
 
*[http://www.openspeedshop.org/wp/ Open|SpeedShop] is explicitly designed with usability in mind for application developers and computer scientists. The base functionality includes sampling experiments, support for callstack analysis, hardware performance counters, MPI profiling and tracing, I/O profiling and tracing, and floating-point exception analysis. In addition, Open|SpeedShop is designed to be modular and extensible: it supports several levels of plug-ins that allow users to add their own performance experiments.
 
*[http://developer.amd.com/tools/codeanalyst/pages/default.aspx AMD CodeAnalyst] Performance Analyzer helps software developers improve the performance of applications, drivers, and system software. Well-tuned software delivers a better end-user experience through shorter response times, increased throughput, and better resource utilization.
 
*[http://ipm-hpc.sourceforge.net/ IPM] is a portable profiling infrastructure for parallel codes. It provides a low-overhead profile of the performance aspects and resource utilization of a parallel program, with communication, computation, and I/O as the primary focus. While the design scope targets production computing in HPC centers, IPM has also found use in application development, performance debugging, and parallel computing education.
 
*[http://hpctoolkit.org/index.html HPCToolkit] is an integrated suite of tools for measurement and analysis of program performance on computers ranging from multicore desktop systems to the nation's largest supercomputers. By using statistical sampling of timers and hardware performance counters, HPCToolkit collects accurate measurements of a program's work, resource consumption, and inefficiency and attributes them to the full calling context in which they occur. HPCToolkit works with multilingual, fully optimized applications that are statically or dynamically linked. Since HPCToolkit uses sampling, measurement has low overhead (1-5%) and scales to large parallel systems. HPCToolkit's presentation tools enable rapid analysis of a program's execution costs, inefficiency, and scaling characteristics both within and across nodes of a parallel system. HPCToolkit supports measurement and analysis of serial codes, threaded codes (e.g. pthreads, OpenMP), MPI, and hybrid (MPI+threads) parallel codes.
 
*[http://www.brendangregg.com/linuxperf.html This page] provides a map of Linux performance "zones" in the kernel and the tools used to analyze them, and includes slide decks on Linux performance.
 
 
==Parallel Debuggers==
 
*[http://padb.pittman.org.uk/ Padb] is a job-inspection tool for examining and debugging parallel programs. Primarily it simplifies the process of gathering stack traces on compute clusters, but it also supports a wide range of other functions. Padb supports a number of parallel environments and works out of the box on the majority of clusters. It is an open-source, non-interactive, command-line, scriptable tool intended for programmers and system administrators alike.
 
