When is water cooling not wet?

There is a new white paper out entitled Energy Performance Evaluation of Aquila’s Aquarius Fixed Cold Plate Cooling System at NREL’s High Performance Computing Center. Before you assume this is YAWCC (Yet Another Water Cooled Cluster), you may want to consider that no water or other liquid ever "touches" any processor. How then does it work, you may ask? Basically, Aquila uses a clever design that separates the water from the processors with a highly efficient transfer medium. This design keeps the water contained in the cooling circuit, while the servers remain "dry servers" that require no internal plumbing. Bottom line: highly efficient water cooling with no in-server plumbing and no leaks. The Executive Summary (reprinted with permission) provides more details.

In the first half of 2018, as part of a partnership with Sandia National Laboratories, Aquila installed its fixed cold plate, liquid-cooled Aquarius rack solution for high-performance computing (HPC) clustering at the National Renewable Energy Laboratory’s (NREL’s) Energy Systems Integration Facility (ESIF). This new fixed cold plate, warm-water cooling technology together with a manifold design provides easy access to service nodes and eliminates the need for server auxiliary fans altogether. Aquila and Sandia National Laboratories chose NREL’s HPC Data Center for the initial installation and evaluation because the data center is configured for liquid cooling and has the required instrumentation to measure flow and temperature differences to facilitate testing. This paper gives an overview of the Aquarius fixed cold plate cooling technology and provides results from early energy performance evaluation testing.

Sandia’s Aquila-based HPC cluster was named “Yacumama” and was configured to operate independently from all other HPC systems in the ESIF data center. There are 36 compute nodes built on Intel S2600KP motherboards. Each motherboard is configured with dual x86_64 Xeon central processing units, 128 GB of random access memory (RAM), a 128 GB solid-state drive (SSD), and an Omni-Path adapter. The supplied configuration is capable of providing more than 40 teraflops of LINPACK performance while drawing less than 15 kW of power.
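
Those last two figures imply an energy efficiency on the order of 2.7 gigaflops per watt. As a quick back-of-the-envelope check (using the rounded >40 teraflops and <15 kW values quoted above, not the exact measurements from the white paper):

```python
# Back-of-envelope efficiency from the summary's rounded figures
# (>40 TFLOPS LINPACK at <15 kW); exact measured values are in the white paper.
linpack_tflops = 40.0   # sustained LINPACK performance, TFLOPS
power_kw = 15.0         # power drawn during the run, kW

gflops_per_watt = (linpack_tflops * 1000.0) / (power_kw * 1000.0)
print(f"~{gflops_per_watt:.2f} GFLOPS/W")   # ~2.67 GFLOPS/W
```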

In building the data center, NREL’s vision was to create a showcase facility that demonstrates best practices in data center sustainability and serves as an exemplar for the community. The innovation was realized by adopting a holistic “chips to bricks” approach to the data center, focusing on three critical aspects of data center sustainability: (1) efficiently cooling the information technology equipment using direct, component-level liquid cooling with a power usage effectiveness design target of 1.06 or better; (2) capturing and reusing the waste heat produced; and (3) minimizing the water used as part of the cooling process. There is no compressor-based cooling system for NREL’s HPC data center. Cooling liquid is supplied indirectly from cooling towers.
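
Power usage effectiveness (PUE) is the ratio of total facility energy to the energy delivered to the IT equipment, so a design target of 1.06 means no more than about 6% overhead for cooling, power distribution, and other facility loads. A minimal sketch of the calculation, using made-up placeholder numbers rather than NREL measurements:

```python
# PUE = total facility energy / IT equipment energy.
# The values below are illustrative placeholders, not NREL measurements.
it_power_kw = 1000.0         # power consumed by the IT equipment
facility_overhead_kw = 60.0  # cooling, power distribution, lighting, etc.

pue = (it_power_kw + facility_overhead_kw) / it_power_kw
print(f"PUE = {pue:.2f}")    # PUE = 1.06, i.e., 6% overhead beyond the IT load
```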

The Yacumama cluster installation was straightforward and easily integrated directly into the data center’s existing hydronic system. A round of discovery testing was conducted to identify the range of reasonable supply temperatures to the fixed cold plates and the impact of adjusting facility flow. Then LINPACK tests at 100% duty cycle were run for 48 hours. Results are provided in Section 3, and the key takeaway is that this fixed cold plate design provides a very high percentage of heat capture direct to water: up to 98.3% when evaluating the compute nodes only (the figure drops to 93.4% when the system’s Powershelf is included, because the Powershelf is not direct liquid cooled).
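
The heat-capture percentage is derived by comparing the heat carried away by the facility water (computed from the measured flow rate and the supply/return temperature difference) against the electrical power drawn by the equipment. A rough sketch of that calculation follows; the flow, temperatures, and power below are placeholder values, not the measurements reported in the white paper:

```python
# Heat carried to the water: Q = m_dot * c_p * delta_T.
# All input values below are placeholders, not measured data from the paper.
density_kg_per_l = 1.0   # approximate density of water, kg/L
c_p = 4186.0             # specific heat of water, J/(kg*K)

flow_l_per_s = 0.5       # facility water flow through the rack
supply_temp_c = 24.0     # supply temperature to the fixed cold plates
return_temp_c = 30.0     # return temperature leaving the rack
it_power_w = 13000.0     # electrical power drawn by the compute nodes

m_dot_kg_per_s = flow_l_per_s * density_kg_per_l
q_water_w = m_dot_kg_per_s * c_p * (return_temp_c - supply_temp_c)
capture_fraction = q_water_w / it_power_w
print(f"Heat captured to water: {q_water_w:.0f} W "
      f"({capture_fraction:.1%} of IT power)")
```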

This cluster has been in operation for nearly 10 months, requiring zero maintenance, and no water leaks have been observed. The Yacumama system will be returned to service at Sandia’s recently completed warm-water-cooled HPC data center in early 2019.

The white paper can be found at the National Renewable Energy Laboratory (NREL) website.
