Data Center Cooling System: CFD Analysis

ABSTRACT

The surge in data center energy use, with some facilities consuming as much power as 25,000 homes, highlights the need for efficient cooling solutions. This study focuses on a small hospital data center, whose cooling system is analysed with Computational Fluid Dynamics (CFD). The case study revealed inefficiencies: the initial design required double the estimated airflow to meet temperature targets, and high velocity in the plenum led to uneven air distribution. Design optimizations, such as adding deflectors and adjusting airflow rates, reduced energy costs by nearly 50% and achieved uniform cooling, demonstrating CFD's effectiveness in improving data center cooling efficiency.

INTRODUCTION

The rising demand for data computing has led to the expansion of energy-intensive data centers; as of 2024, a single large facility can consume as much power as 25,000 homes, and data centers collectively account for 1-1.3% of global electricity use. With global data center energy consumption reaching around 460 TWh in 2022 and potentially exceeding 1,000 TWh by 2026, efficient energy management is crucial. Effective temperature regulation is key to maintaining equipment functionality, as a data center cooling system can consume as much energy as the servers themselves.

To address these challenges, Computational Fluid Dynamics (CFD) is applied to optimize data center cooling. CFD simulates airflow patterns, helping HVAC engineers tune supply air temperature and airflow rates and thereby reduce cooling costs. This technology enables precise management of a data center cooling system, which is essential for handling the increased energy demands driven by power-intensive applications such as AI and cryptocurrency mining.

CASE STUDY

The facility under study is a small data center located in a hospital, with a Tier III availability rating and an initial room-based cooling approach. Below is a sketch of the installation, showing the location of the CRAHs, the servers, and the grilles that connect the raised-floor plenum with the main room.

Data center sketch


We will study a 3+1 CRAH arrangement, based on the following calculation of the maximum required cooling power:

The thermal loads associated with the maximum expected equipment for this room are considered (11 kW of dissipation per cabinet, 25 racks):

  • Thermal load to dissipate in the rack room: 275 kW
  • Thermal load to dissipate in the electrical panel and UPS rooms: 28 kW
  • Thermal load due to external loads (roof and façade): 3 kW
  • Latent thermal load from infiltration: 3 kW
  • Other internal loads (lighting): 1 kW

The maximum cooling load (full installation of 25 racks) amounts to 310 kW. The equipment required for this future full installation of 25 racks is:

Equipment                | Necessary units | Active units | Reserve units | Unit power (kW) | Total installed power (kW) | Total active power (kW)
Cooler                   | 4               | 3            | 1             | 105             | 420                        | 315
Precision interior unit  | 4               | 3            | 1             | 95              | 380                        | 285
Electrical interior unit | 2               | 1            | 1             | 25              | 50                         | 25
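
As a quick plausibility check, the load tally and the n+1 cooler sizing can be verified with a few lines of arithmetic. This is a minimal sketch using only the figures listed above; nothing in it is solver output:

```python
# Arithmetic check of the load tally and the n+1 cooler sizing.
loads_kw = {
    "rack room (11 kW x 25 racks)": 11 * 25,   # 275 kW
    "electrical panel + UPS rooms": 28,
    "external loads (roof, facade)": 3,
    "latent load (infiltration)": 3,
    "other internal loads (lighting)": 1,
}
total_load = sum(loads_kw.values())
print(f"Maximum cooling load: {total_load} kW")   # 310 kW

# Coolers: 4 installed, 3 active (n+1), 105 kW per unit.
active_capacity = 3 * 105                          # 315 kW
assert active_capacity >= total_load
print(f"Active cooler capacity: {active_capacity} kW, one unit in reserve")
```

With three active coolers, the 315 kW of capacity covers the 310 kW load, which is why the fourth unit can be held in reserve.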


OBJECTIVES

The objectives that need to be achieved are:

  • A rack temperature below 40 °C (target flow rate of 11.168 m³/s, supply air at 15 °C, ΔT of 20 °C); a sizing cross-check follows this list.
  • Uniform air velocity distribution in the raised floor.
  • Cooling of the room achieved with the different CRAH configurations (n and n+1).
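
As a rough cross-check of the flow-rate target (a back-of-the-envelope estimate assuming standard air properties, ρ ≈ 1.2 kg/m³ and c_p ≈ 1005 J/(kg·K), which the study does not state), a sensible-heat balance on the 275 kW rack-room load gives:

$$\dot V = \frac{\dot Q}{\rho \, c_p \, \Delta T} \approx \frac{275\,000\ \mathrm{W}}{1.2\ \mathrm{kg/m^3} \times 1005\ \mathrm{J/(kg\,K)} \times 20\ \mathrm{K}} \approx 11.4\ \mathrm{m^3/s}$$

This is of the same order as the 11.168 m³/s target; the exact value depends on the design air density and on which loads are assigned to the CRAH airflow.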

RESULTS

The analysis of this infrastructure under the conditions described above shows concerning results. The design is inefficient, requiring twice the estimated airflow to reach the target temperatures in the racks. The high velocity in the plenum prevents it from working as intended: air distribution in the room is uneven, and part of the supplied air bypasses the racks and heads straight for the return.

Initial results of temperature and velocity in the raised floor of the data center

To achieve the goals that have been set, several improvements are implemented: five deflectors are introduced in front of each pair of racks to slow down and redirect the flow, two pairs of grilles are added inside the plenum, and the flow rate is reduced gradually towards the targeted 11 m³/s. Simulations at 13 m³/s and 12 m³/s already show early signs that the target flow rate can be reached while keeping the temperatures inside the racks below the maximum.

Results of temperature at flow rates of 13 m³/s and 12 m³/s in the data center
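
This stepwise reduction is essentially a parametric sweep over the supply flow rate. A minimal sketch of that workflow is shown below; run_cfd_case() is a hypothetical wrapper for whatever solver drives the simulations, not an API used in the study:

```python
# Hypothetical parametric sweep over supply flow rates.
T_RACK_LIMIT_C = 40.0  # maximum allowed rack temperature (see Objectives)

def run_cfd_case(flow_rate_m3s: float) -> float:
    """Placeholder: run one CFD case and return the peak rack temperature."""
    return float("nan")  # wire this to the real solver

for flow_rate in (13.0, 12.0, 11.168):
    t_max = run_cfd_case(flow_rate)
    verdict = "OK" if t_max < T_RACK_LIMIT_C else "check"
    print(f"{flow_rate:6.3f} m3/s -> peak rack T = {t_max:.1f} degC ({verdict})")
```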


Therefore, the cold aisles that connect the plenum with the air intakes of the racks are isolated, and the flow rate is reduced to the target 11 m³/s. It is then verified that the plenum operates correctly with different combinations of CRAHs.
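
Checking the n and n+1 configurations amounts to enumerating which CRAHs are active. A minimal sketch, again with hypothetical unit labels rather than names from the study:

```python
from itertools import combinations

# In the 3+1 scheme, any 3 of the 4 installed CRAHs must carry the room,
# so each failure scenario is one 3-unit subset of the 4 units.
CRAHS = ("CRAH-1", "CRAH-2", "CRAH-3", "CRAH-4")  # hypothetical labels

for active in combinations(CRAHS, 3):
    off = (set(CRAHS) - set(active)).pop()
    # Each scenario would be simulated with the remaining unit switched off,
    # checking plenum velocity and rack temperatures as in the study.
    print(f"Active: {', '.join(active)} | off: {off}")
```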


Results of temperature at a flow rate of 11 m³/s with different CRAH combinations
Results of velocity at a flow rate of 11 m³/s with different CRAH combinations


Applying these design optimisations, it is estimated that energy costs could be reduced to roughly half of the initial scenario. It is also confirmed that the target flow rate can be achieved under safe conditions: even air outflow through the grilles into the cold aisle, and temperatures below the limits inside the racks. Critical areas within the racks are also identified: equipment with lower cooling needs should be positioned there, while equipment requiring more heat extraction should be located elsewhere.
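
One driver of these savings can be illustrated with the fan affinity laws, under the simplifying assumptions that the system curve is fixed and only fan power is counted (the study's roughly 50% figure covers the whole cooling plant, so the two numbers are not directly comparable):

```python
# Fan affinity laws: for a fixed system curve, fan power scales with the
# cube of the flow rate. Illustrative only; plant-level savings are smaller.
initial_flow = 2 * 11.168  # m3/s, double the target (per the Results above)
final_flow = 11.168        # m3/s, the optimised target

power_ratio = (final_flow / initial_flow) ** 3  # = 0.125
print(f"Fan power falls to {power_ratio:.1%} of the initial value")
```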

CONCLUSION

All in all, CFD analyses provide in-depth knowledge of how the various designs proposed for the same data center installation will perform, and in every case suggest possible optimisations. Their usefulness extends from choosing the cooling approach (room, row, or rack) to sizing the equipment and setting the operating conditions (flow rate, temperature), including understanding how environmental conditions affect the outdoor units.

At Engineering Simulation Consulting, we provide advanced engineering solutions to help businesses optimize their designs, improve product performance, and reduce development costs. Contact us here for more information on how to design and optimize your data center cooling system.
