- Written by Douglas Eadline
Of course it all depends, but deciding to use the Cloud for HPC is not as simple as it may seem.
Computing in the "Cloud" allows computing to be purchased as a service (like electricity) and not as a product (like a generator). Made possible by operating system (OS) virtualization and the Internet, Cloud computing allows almost any server environment to be replicated (and scaled) instantly. Many web service companies find Cloud computing more economical than purchasing (or co-locating) hardware because they can pay for computing services only when needed.
- Written by Greg Kline
Second in a series of four articles about the TeraGrid
Navajo Technical College in New Mexico is a small tribal school hardly flush with research computing equipment, said Jason Arviso, director of the information technology office and National Science Foundation Science, Technology, Engineering, and Mathematics (STEM) grant program at Navajo Technical. Conversely, Clemson University in South Carolina went from zero to nearly 50 teraflops and a place on the Top 500 list of supercomputers in a few short months. "I know that eventually we won't have enough nodes for everybody," said Barr von Oehsen, director of computational science in the Cyberinfrastructure Technology Integration Group at Clemson.
- Written by Michael Schneider, Pittsburgh Supercomputing Center
First in a series of four articles about the TeraGrid
Deep, wide, open. This three-pronged conceptualization underlies the TeraGrid, the National Science Foundation's cyberinfrastructure initiative. Deep means digital muscle: more than a petaflop of aggregated computing power, highlighted by the addition of the NSF Track 2 systems Ranger (579 teraflops) and Kraken (607 teraflops). Open means extensibility, the ability to include new resource providers and university partnerships to broaden the resource base.
Wide means that the TeraGrid wants its resources to be useful to as many researchers as possible. To that end, TeraGrid has created Science Gateways -- diverse entry points for the uninitiated to pass into the realm of computational science and get things done with the array of resources available through TeraGrid. Implemented in 2005, the Science Gateways program, led by Nancy Wilkins-Diehr of the San Diego Supercomputer Center, has grown rapidly and now comprises 35 Gateways, each of them tailored to the needs of and designed by a specific research community.
- Written by Von Welch and Olle Mulmo
Resolving Firewall and NAT issues
As discussed in the previous Grids and Clusters columns, one activity that Grids must allow is the coordination of resources not subject to central control. In practice, this means that resources are distributed across different sites on the Internet, often with firewalls in between. In this article, we discuss the use of the Globus Toolkit® for Grid computing in the presence of firewalls, explaining what the issues are and what can be done to address them.
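One common mitigation, sketched here for a Globus Toolkit installation, is to confine the toolkit's ephemeral TCP ports to a fixed range that site administrators can open in the firewall. The port numbers below are purely illustrative, not a recommendation:

```shell
# Restrict Globus Toolkit services (e.g., GridFTP data channels) to a
# known TCP port range so the site firewall can permit just that range.
# 50000-51000 is an example range; coordinate the actual values with
# your site's firewall policy.
export GLOBUS_TCP_PORT_RANGE=50000,51000
```

With this set, both ends of a transfer negotiate connections only within the advertised range, so the firewall rule set stays small and auditable.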
- Written by Jennifer M. Schopf and Keith R. Jackson
Here are some important steps to consider
Building a Grid is no simple task: It takes planning and coordination. This column discusses some rules of thumb to consider while setting up your own Grid. It is important to remember that Grids are defined by three criteria:
- A Grid must coordinate resources that are not subject to centralized control and that cross organizational boundaries.
- A Grid must use standard, open, general-purpose protocols and interfaces.
- A Grid must deliver nontrivial qualities of service.
Meeting the first criterion will be a recurring theme of this column. Grids have an added layer of complexity on top of "simple" clusters, and sites will have existing policies that must be worked around instead of relying on a centralized control that simply rewrites local policies.