It’s an unwritten rule that the difference between the supply air temperature and the return (room) air temperature should be 20°F. For example, if the room temperature is 75°F, the supply air temperature must be 55°F.

Where did this rule come from?

As we attempted to trace the origin of this rule (or myth, as some may call it), we arrived at a few hypotheses as to how it came about.

The 1st hypothesis – The inability of a direct expansion (DX) or chilled water system to achieve more than a 20°F split

The refrigerant boiling point

We select a refrigerant based on its boiling point at a manageable pressure. What do we mean by a manageable pressure? One that is neither excessively high nor below atmospheric (a vacuum).

Refrigerant Temperature Chart

For example, the boiling point of R502 (or its HFC counterpart, R404A) at 35 PSI is 2°F. Because the Delta T across most coils is about 10°F, the air leaving the coil will be around 12°F, which makes it ideal for low-temperature applications (freezers).

On the other hand, R12 (or its counterpart R134a) has a boiling point of 35°F at the same pressure, which makes it ideal for medium-temperature applications (i.e., refrigerators).

Lastly, R22 (or its counterpart R407C) has a boiling point of 45°F at 70 PSI, which makes it ideal for high-temperature applications (i.e., conventional commercial air conditioning).

With commercial air conditioning, supply air below 55°F is too cold and uncomfortable. The comfort point is 75°F (the middle of the ASHRAE comfort zone), and thus the 20°F split was created.

The water freezing point

As we all know, water freezes at 32°F, so the coldest water a chiller can produce without the risk of freezing is around 45°F. That water yields supply air around 55°F, and with the comfort point at 75°F, once again the 20°F split was created.

The 2nd hypothesis – The comfort point (middle of the ASHRAE comfort zone)

The comfort point is 75°F and 50% RH, which corresponds to a dew point of 55°F. This is the lowest supply temperature that can be delivered without the risk of over-drying the air, and thus the 20°F split was created.
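That 55°F dew point can be sanity-checked with the Magnus approximation (a sketch, not from the original text; the constants 17.62 and 243.12 °C are one common parameterization of the formula):

```python
import math

def dew_point_f(temp_f: float, rh_percent: float) -> float:
    """Dew point (in °F) via the Magnus approximation."""
    a, b = 17.62, 243.12            # Magnus constants (valid in °C)
    t_c = (temp_f - 32) * 5 / 9     # convert dry-bulb to Celsius
    gamma = math.log(rh_percent / 100) + a * t_c / (b + t_c)
    td_c = b * gamma / (a - gamma)  # dew point in Celsius
    return td_c * 9 / 5 + 32        # back to Fahrenheit

# ASHRAE comfort point: 75°F dry bulb, 50% relative humidity
print(round(dew_point_f(75, 50)))  # → 55
```

The comfort point indeed lands on a 55°F dew point, which is where the 20°F figure falls out.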

Whether it came from the limitations of direct expansion and chilled water systems or from the demands of the comfort point, the 20°F split has become an unwritten rule among engineers and designers. But shouldn’t we ask whether it still applies?

The Data Center Age

Data centers have their roots in the huge computer rooms of the 1940s, typified by ENIAC, one of the earliest examples of a data center. Early computer systems were complex to operate and maintain and required a special environment in which to operate. Many cables were necessary to connect all the components, and methods to accommodate and organize these were devised, such as standard equipment racks, raised floors, and cable trays (installed overhead or under an elevated floor). Additionally, a single mainframe required a great deal of power and had to be cooled to avoid overheating. Lastly, security became important – computers were expensive and were often used for military purposes, so basic design guidelines for controlling access to the computer room were devised.

The data center boom came during the dot-com bubble of 1997–2000. Companies needed fast internet connectivity and non-stop operation to deploy systems and establish a presence on the internet. Installing such equipment was not viable for many smaller companies, so many providers started building very large facilities, called internet data centers (IDCs), which offered enhanced capabilities such as crossover backup: “If a Bell Atlantic line is cut, we can transfer them to … to minimize the time of the outage.”

As the data center industry has grown, it is reported that data center cooling consumes 1% of the world’s energy and 2% of all power in the United States, so maximizing efficiency is vital. ASHRAE TC 9.9 and its Thermal Guidelines for Data Processing Environments have led this effort by broadening the environmental envelope in which modern IT equipment can operate.

To improve efficiency and achieve the aggressive PUE targets set by management, data center engineers must adopt innovative cooling solutions. Data center cooling technology has partially departed from conventional DX and chilled water systems to include fresh air systems using direct and indirect evaporative cooling, recirculating systems using indirect evaporative cooling (IDEC) units, heat pipes, and hybrid systems. Yet the unwritten 20°F rule largely prevails, engraved in the back of the engineer’s mind.


Imagine a data center in the Toronto area, which has a moderately cold climate. Assume a 300 kW data center (almost 1,000,000 BTU/h) and a traditional design with a 95°F hot aisle and a 75°F cold aisle (a 20°F split). This would normally require about 48,000 CFM of air, but expanding to a 30°F split would reduce the air volume to about 32,000 CFM. The traditional 20°F split requires approximately 50% more air volume, higher power consumption, higher capital cost, and a higher PUE… but why?

Q = CFM × ΔT × 1.08

Using a 30–35°F split yields roughly a 30,000 CFM unit, while a 20°F split results in roughly a 47,000 CFM unit plus a complex control system to reheat the supply air with some of the return air. That is almost a 60% increase in CFM for no sound engineering reason, and it will certainly increase the PUE.
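The comparison above falls straight out of the sensible heat equation. A minimal sketch (using the standard conversion of roughly 3,413 BTU/h per kW; the figures in the text are rounded):

```python
def required_cfm(load_btuh: float, delta_t_f: float) -> float:
    """Airflow from the sensible heat equation Q = CFM × ΔT × 1.08."""
    return load_btuh / (1.08 * delta_t_f)

load = 300 * 3413  # 300 kW ≈ 1,023,900 BTU/h

cfm_20 = required_cfm(load, 20)  # traditional 20°F split
cfm_30 = required_cfm(load, 30)  # widened 30°F split

print(f"20°F split: {cfm_20:,.0f} CFM")          # ≈ 47,400 CFM
print(f"30°F split: {cfm_30:,.0f} CFM")          # ≈ 31,600 CFM
print(f"extra air:  {cfm_20 / cfm_30 - 1:.0%}")  # 50% more
```

Since airflow scales inversely with ΔT, a 20°F split always moves exactly 50% more air than a 30°F split for the same load, and fan power rises faster than airflow.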

At Air2O we don’t believe in a one-size-fits-all approach. We have vast experience to help and support your next data center project, and our innovative engineering team is ready to take on the challenge and design the most economical system for it.

Our motto is: “If it is physically possible, we can build it.”