iDataCool


iDataCool is a high-performance computer cluster based on a modified IBM System x iDataPlex. The cluster serves as a research platform for cooling of IT equipment with hot water and efficient reuse of the waste heat. The project is carried out by the physics department of the University of Regensburg in collaboration with the IBM Research and Development Laboratory Böblingen and InvenSor. It is funded by the German Research Foundation (DFG), the German state of Bavaria, and IBM.

Overview

The iDataCool high-performance compute cluster is a research project on cooling with hot water and energy reuse in data centers. The waste heat of iDataCool is used to drive an adsorption chiller that generates chilled water.[1] The project pursues the following goals:

  • Proof of principle: a production-grade computer cluster can be cooled with hot water at temperatures of 65°C or above
  • Recovery of a significant fraction of the waste heat
  • Energy Reuse Effectiveness (ERE) of less than one (the metric is written out after this list)
  • Design of a cost-efficient technology prototype for future developments in high-performance computer design and operation
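
The ERE metric in the list above is commonly defined (e.g., by The Green Grid) as the total energy fed into the data center, minus the energy reused elsewhere, divided by the energy consumed by the IT equipment:

```latex
\mathrm{ERE} = \frac{E_{\text{total}} - E_{\text{reused}}}{E_{\text{IT}}}
```

Since E_total = E_IT + E_overhead, an ERE below one means that more energy is recovered from the waste heat than is spent on cooling and power distribution.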

The iDataCool cluster has been operating with hot-water cooling since 2011. The infrastructure support for energy reuse was finished in 2012. Members of the project have also been active in other supercomputer projects such as QCDOC and QPACE. SuperMUC is based on the cooling technology invented for QPACE, Aquasar, and iDataCool.[2]

The iDataCool research project was presented at the International Supercomputing Conference in Leipzig, Germany, in 2013,[1] which led to it being featured in several articles.[3][4]

Background and design target

Power and cooling of IT equipment are major concerns for modern data centers. Since 1996, the worldwide costs for power and cooling of IT infrastructure have increased by more than a factor of five.[5] Conventionally, data centers use air as the primary cooling medium for the IT equipment. While air cooling is simple and flexible, it also has disadvantages, e.g., limited packaging density and limited options for energy reuse.[6] Liquid cooling with water as the coolant is another option. Since water has a very high heat capacity, large amounts of heat can be removed from a system at moderate flow rates, allowing for a higher packaging density and hence less floor space. Liquid cooling has recently resurfaced in the sector of high-performance computing: since 2009, the Green500 list of the most energy-efficient supercomputers has been dominated by liquid-cooled designs.[7]
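
The heat-capacity argument can be made concrete with the standard relation Q = ṁ·c_p·ΔT. The following sketch uses assumed example values; the flow rate and temperature rise are illustrative, not figures from the project:

```python
# Heat removed by a water loop, Q = m_dot * c_p * delta_T.
# The flow rate and temperature rise below are assumed example values.

C_P_WATER = 4186.0  # specific heat of water, J/(kg K)

def heat_removed_watts(flow_l_per_min: float, delta_t_kelvin: float) -> float:
    """Heat carried away by the coolant for a given temperature rise."""
    mass_flow_kg_per_s = flow_l_per_min / 60.0  # water: ~1 kg per litre
    return mass_flow_kg_per_s * C_P_WATER * delta_t_kelvin

# A modest flow of 1 L/min heated by 10 K already removes about 700 W,
# roughly the load of a dual-socket server node:
print(f"{heat_removed_watts(1.0, 10.0):.0f} W")  # -> 698 W
```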

If the design of the liquid-cooling system allows for high coolant temperatures, energy can be saved or even reused, depending on the climate conditions and the local infrastructure. For example, free cooling is possible if the coolant temperature is higher than the ambient temperature; in that case the energy for chillers is saved. If the coolant temperature is higher still, the waste heat from the compute equipment can be used for heating purposes or to drive an adsorption chiller that generates chilled water. The former option is implemented, e.g., by the Leibniz-Rechenzentrum in Germany, where SuperMUC heats the data center during winter with roughly 1 MW recovered from the compute equipment. The latter option, which is the design target of iDataCool, requires heat of high quality, which can only be achieved by direct hot-water cooling. One example of direct hot-water cooling is the Aquasar project at ETH Zürich, which operates at coolant temperatures around 60°C. The aim of iDataCool was to reach coolant temperatures above 65°C, at which commercially available adsorption chillers tend to become efficient, and to demonstrate the long-term stability of a large production machine under these conditions.
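
The options just described can be summarized as a simple decision on the coolant temperature. The sketch below is only an illustration: the 65°C adsorption-chiller threshold is the figure given in the text, while the 40°C heating threshold and the function itself are assumptions made for the example:

```python
# Illustrative decision logic for the reuse options discussed above.
# Only the 65 degC adsorption-chiller threshold comes from the text;
# the 40 degC heating threshold is an assumed example value.

def reuse_option(coolant_c: float, ambient_c: float) -> str:
    if coolant_c >= 65.0:        # hot enough to drive an adsorption chiller
        return "adsorption chiller (generate chilled water)"
    if coolant_c >= 40.0:        # assumed: warm enough for building heating
        return "heating purposes"
    if coolant_c > ambient_c:    # free cooling: no chiller energy needed
        return "free cooling"
    return "conventional chiller required"

print(reuse_option(70.0, 20.0))  # -> adsorption chiller (generate chilled water)
```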

Architecture

The iDataCool installation at the University of Regensburg consists of three IBM System x iDataPlex[8] racks, each containing 72 compute nodes. A compute node comprises two Intel Xeon Westmere server processors and 24 GB of DDR3 SDRAM, organized as a shared-memory system. A switched InfiniBand fabric is used for communication among the nodes, while Gigabit Ethernet handles disk I/O, system operation, and monitoring.
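
The figures above imply the following aggregate system size (simple arithmetic on the stated numbers, not additional specifications from the source):

```python
# Aggregate size implied by the stated figures.
racks = 3
nodes_per_rack = 72
cpus_per_node = 2
ram_gb_per_node = 24

nodes = racks * nodes_per_rack              # 216 compute nodes
cpus = nodes * cpus_per_node                # 432 Xeon processors
ram_tb = nodes * ram_gb_per_node / 1024.0   # about 5.1 TB of DDR3 SDRAM

print(nodes, cpus, f"{ram_tb:.1f} TB")  # -> 216 432 5.1 TB
```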

The original iDataPlex system is entirely air-cooled. Ambient air in the data center is drawn in through perforated front doors, and the hot air is blown back into the room at the rear. The components that need cooling are the power supplies, network switches, and compute nodes. The power supplies and switches rely on built-in fans to generate the necessary airflow, while fan blocks draw air over the compute nodes, which are equipped with passive heat sinks.

In a joint effort of the particle physics group of the University of Regensburg and the IBM Research and Development Laboratory Böblingen, Germany, a water-cooling solution for the compute nodes was developed which completely replaces the original fans and heat sinks. The processors are cooled by custom-designed copper heat sinks through which water flows directly. This minimizes the temperature difference between the compute cores and the coolant. A copper pipeline provides the water flow and is also thermally coupled to passive heat sinks for other components such as memory, chipset, and voltage converters.
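
Why minimizing the core-to-coolant temperature difference matters can be seen from the usual thermal-resistance model: the core temperature sits above the coolant temperature by R_th times the dissipated power, so a smaller R_th permits a hotter coolant. All values in the sketch below are assumed for illustration and are not measurements from the project:

```python
# Thermal-resistance model T_core = T_coolant + R_th * P.
# All numbers are assumed example values, not project measurements.

def max_coolant_temp_c(core_limit_c: float, power_w: float, r_th_k_per_w: float) -> float:
    """Hottest coolant that still keeps the core at its temperature limit."""
    return core_limit_c - r_th_k_per_w * power_w

# Assumed 95 degC core limit for a 95 W processor:
print(max_coolant_temp_c(95.0, 95.0, 0.30))  # higher R_th: coolant up to 66.5 degC
print(max_coolant_temp_c(95.0, 95.0, 0.15))  # lower R_th:  coolant up to 80.75 degC
```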

All modifications of the original iDataPlex cluster were carried out at the University of Regensburg. Newly developed parts were manufactured in the machine shop of the university's physics department. The university's data center was extended to provide the liquid-cooling infrastructure. The system has been operating stably in production at coolant temperatures of up to 70°C since 2011.

Energy reuse

iDataCool allows for cooling with hot water at temperatures of up to 70°C.[1] The waste heat of iDataCool drives a low-temperature adsorption chiller (an InvenSor LTC 09) that operates efficiently at driving temperatures as low as about 65°C. The chiller generates chilled water that is used to cool other compute equipment in the data center. The installation was completed in the summer of 2012.
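
The energy balance of this reuse chain can be sketched with a thermal coefficient of performance (COP). The COP below is an assumed typical figure for low-temperature adsorption chillers; the source does not quote one:

```python
# Rough energy balance of the reuse chain. The thermal COP is an assumed
# typical value for low-temperature adsorption chillers, not project data.

def chilled_water_kw(driving_heat_kw: float, cop_thermal: float = 0.5) -> float:
    """Cooling power generated from the waste heat driving the chiller."""
    return driving_heat_kw * cop_thermal

# If, say, 50 kW of server waste heat reaches the chiller:
print(chilled_water_kw(50.0))  # -> 25.0 kW of chilled water
```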

References

  1. ^ a b c N. Meyer et al., iDataCool: HPC with Hot-Water Cooling and Energy Reuse, Lecture Notes in Computer Science 7905 (2013) 383
  2. ^ B. Michel et al., Aquasar: Der Weg zu optimal effizienten Rechenzentren (in German), 2011
  3. ^ IEEE Spectrum, New Tech Keeps Data Centers Cool in Warm Climates, 26 June 2013
  4. ^ R718.com, Waste heat driven adsorption chiller cools computing centre Archived 2014-01-17 at the Wayback Machine, 30 July 2013
  5. ^ IDC, Worldwide Server Research, 2009
  6. ^ N. Meyer et al., Data centre infrastructure requirements Archived 2017-06-03 at the Wayback Machine, European Exascale project DEEP, 2012
  7. ^ The Green500 list, http://www.green500.org/ Archived 2016-08-26 at the Wayback Machine
  8. ^ IBM System x iDataPlex dx360 M3