
Tools -- The Biggest Barrier

Given the structural and cultural obstacles described above, it should come as no surprise that HPC opportunities are also lost to problems with the tools the HPC industry currently uses. While it is generally conceded that HPC hardware and software are hard to use, HPC companies have little reason to forge new tools and utilities, or even to adopt the tools that are widely available and compatible with all of the currently available platforms. Yet if HPC systems and software were easier to use, the CoC study found, industry could tackle more complex models in a much wider context, thus creating more business opportunities. One exacerbating factor is the cost of HPC tools relative to other business investments. And, as mentioned earlier, risk avoidance is common in many business cultures. HPC is uncharted territory for many in industry -- so convincing companies to invest in the development of new HPC tools is inherently difficult.

It's a vicious cycle. Barring the development of new tools and software, HPC systems will continue to be hard to use. But developing HPC tools is expensive, and the market is limited, so companies have little incentive to develop the needed tools. And as long as HPC systems are hard to use, only those who can already justify HPC as essential will put up with relatively crude tools and thus reap its benefits.

Hard to use means hardly used -- at least by the broader community.

Refocusing HPC

Science and engineering research has historically delivered significant advances in both theory and computer hardware, and with those advances come greater expectations. We understand physical phenomena better, and we have more capable computers, so we should be able to combine theory and hardware and reap the benefits! But there are real challenges. Companies must now begin to understand that HPC can offer benefits across many application domains. And while there are codes that will (and do) benefit from the speed-ups available from even a modest number of processors, some extremely large problems may need thousands of processors.
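
To see why a modest number of processors satisfies many codes while the largest problems demand thousands, a standard rule of thumb -- Amdahl's law, not part of the CoC study -- is helpful: the speedup on n processors is roughly 1 / ((1 - p) + p/n), where p is the fraction of the work that can run in parallel. The sketch below (hypothetical figures, assuming p = 0.95) shows that such a code gains little beyond a few dozen processors; only codes whose parallel fraction approaches one, typically because the problem itself grows with the machine, can profitably use thousands.

    /* Hypothetical illustration (not from the article): Amdahl's law,
     * speedup = 1 / ((1 - p) + p / n), for a code whose parallel
     * fraction is p, run on n processors. */
    #include <stdio.h>

    int main(void)
    {
        double p = 0.95;                    /* assumed parallel fraction */
        int counts[] = {4, 16, 1024, 65536};
        int i;

        for (i = 0; i < 4; i++) {
            int n = counts[i];
            double s = 1.0 / ((1.0 - p) + p / n);
            printf("n = %6d  speedup = %6.1f\n", n, s);
        }
        return 0;
    }

With p = 0.95 the speedup levels off near 20 no matter how many processors are added, which is why a modest machine is enough for many industrial codes and why grand-challenge work must drive the serial fraction toward zero before thousands of processors pay off.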

Unfortunately, the link between improved theory and powerful HPC hardware is the software, and here is where we have a profound problem. There is a surprisingly limited demand that the capability of the software match the power of the hardware. Interestingly enough, while the U.S. leads the world in hardware engineering, Europe and Japan are investing more strategically in computer science research focused on software. The European Union plans to sink $63 million into universities and research labs to make grid computing work for industrial projects.

HPC software that harnesses the most powerful hardware in a user-friendly, domain-oriented way is needed -- and this requires an entirely new programming paradigm. An anonymous writer in a 2004 HPCWire article argued that if programming doesn't change radically, "parallel computing will be essentially dead within ten years." The vast majority of software applications do not take advantage of parallel computing environments, and because converting serial codes requires major effort, the entire spectrum of HPC applications stands to benefit from improved parallel programming models. It's one thing to devise a parallel algorithm and quite another to make it available to the common user on a parallel machine. Furthermore, once you have developed a parallel algorithm, installing and maintaining it on multiple HPC platforms can be difficult if not impossible.
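
As a rough illustration of the effort involved -- a minimal sketch, not drawn from the article, assuming an MPI environment -- consider what happens to even a trivial serial summation loop when it is converted to run on a parallel machine: the programmer must now manage process startup, data decomposition, and communication explicitly.

    /* Hypothetical sketch: a serial reduction (one loop over 0..N-1)
     * rewritten for MPI.  Even this simple case forces the programmer
     * to handle process startup, work decomposition, and communication. */
    #include <stdio.h>
    #include <mpi.h>

    #define N 1000000

    int main(int argc, char *argv[])
    {
        int rank, size;
        long i;
        double local = 0.0, total = 0.0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Each process sums its own slice of the index range. */
        for (i = rank; i < N; i += size)
            local += (double)i * i;

        /* Combine the partial sums on rank 0. */
        MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum = %f\n", total);

        MPI_Finalize();
        return 0;
    }

Multiply this bookkeeping by the size of a real application and the cost of converting serial codes becomes obvious; a higher-level programming model would hide most of it from the domain expert.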

At the high end of the spectrum -- the really hard problems that are being computed on parallel machines -- the benefits of HPC come from "grand challenge" problems that cannot otherwise be tackled. Grand challenge problems have been the bread and butter of the national research labs for the last decade or more, bringing federal resources and funds to bear on high-end computational problems. The solutions to grand challenges usually represent several orders of magnitude improvement over previous capabilities. The fundamental scientific problems represented in the grand challenges currently being explored 1) generate increasingly complex data, 2) require more realistic simulations of the processes under study, and 3) demand greater and more intricate visualizations of the results. Interestingly, a particular barrier to these extremely high-end computational challenges is the inherent difficulty of "large-code" programming, which will be exacerbated by dramatic increases in the number of processors needed to solve the problems -- perhaps hundreds of thousands of processors. National interests mandate heroic programming efforts, and the continued investment of significant long-term funding indicates that this aspect of HPC will persist into the future. That is, heroic computing will remain a fundamental part of the HPC ecology.

HPC in an Industrial World

Given the current economic climate and industry demands, the U.S. has reached a critical juncture with regard to HPC. Room for dramatic growth exists between current industry HPC use and "heroic computing." We argue that there is an increasing need for a partnership between industry and "heroic" HPC. We must find a way to promote HPC as a full-spectrum industry -- and one way to achieve this goal is by focusing on high productivity computing languages and education.

"We are significantly expanding capabilities in computational modeling and computer-aided engineering, so we can do an increasing percentage of product and process design through virtual simulation," said A.G. Lafley, President and CEO of Proctor & Gamble at a 2003 Wall Street analysts meeting. Many large, forward-thinking firms are already making significant investments in advanced computational approaches to design and knowledge discovery. Through virtual simulation, production and process design is cheaper, quicker, and results in better products. Tom Lange, Associate Director of Corporate Engineering Technologies at P&G, states that innovation is his company's lifeblood. P&G spend $1.6 billion a year in research and development. "Explore digitally, confirm physically" is mantra for the company that has benefited from coupling supercomputer systems with knowledge in computational fluid dynamics and biomechanics to make innovative, competitive products.

Given the success of companies such as P&G, General Motors, Morgan Stanley, Merck & Co., Boeing, and Lexis-Nexis in integrating HPC into their R&D cycles, it's easy to see how focusing the national research labs on a full-spectrum HPC market would greatly improve our national competitiveness. High-end computing will be increasingly important in keeping industries competitive in the global marketplace; companies that have found a way to leverage this advantage already know it. Just as federally funded stimulation of the field allowed HPC to strengthen national security, we should now focus our innovations, advances, and education on the entire application spectrum -- not just the high end. The "small" jobs of today will become the large jobs of tomorrow. Indeed, the greatest impact will be felt across the entire computing market if applications can be scaled up and down to match the problem that needs to be solved.

The federal government is moving toward HPC as a solution to the problems of outsourcing and struggling industries. The White House is instructing executive-branch agency heads to give priority to supercomputing and cyberinfrastructure research and development in their fiscal 2006 budgets. In a memo, Office of Science and Technology Policy director John Marburger III and OMB director Joshua Bolten requested that supercomputing R&D "should be given higher relative priority due to the potential of each in further progress across a broad range of scientific and technological applications." Agency plans in supercomputing should be consistent with a recent report of the High-End Computing Revitalization Task Force that describes a coordinated R&D plan for high-end computing. The memorandum from the President's Office gives priority to research that aims to create new technologies with broad societal impact, such as high-temperature and organic superconductors, molecular electronics, wide band-gap and photonic materials, and thin magnetic films. According to the President's Council of Advisors on Science and Technology (PCAST) Subcommittee on Information Technology Manufacturing and Competitiveness, the country must maintain a strong base of university R&D and educate the workforce in advanced tools and techniques in order to remain competitive. These steps can also pave the way to creating well-paid, interesting jobs.


