What is Data Center Infrastructure? - Data Center Fundamentals

By Mike Netzer · 9/15/2020


This is an episode of HawkPodcast, datacenterHawk’s viewpoints on the data center industry. If you enjoyed this episode, you can check them all out on our blog. If you’d like to know when we release future episodes, you can subscribe here. You can also click here if you would like to subscribe to our free data center fundamentals email course.


We previously talked about how a data center is much more than a traditional office or warehouse building. When one goes offline, it can cost companies millions of dollars per minute. That’s why data centers need sophisticated support systems in place - which we refer to as infrastructure. This infrastructure significantly increases the cost to develop data center facilities, up to 5-10x that of an equivalent-sized office building. We’ll explain why below.

Within the broader umbrella of infrastructure, we’ll look specifically at security, power, cooling, and connectivity. In addition, we’ll cover the concept of redundancy across these systems.

Data Center Building Security

Data centers often hold extremely valuable information on their servers, such as banking and credit card information, identity information, and important business information. Because of this, companies need confidence that the data center they choose to work with will uphold a high level of physical security to keep their information safe and protected.

Data center security often starts on the outside with a K-rated fence bordering the entire property; to enter the building, you will then need a key card to unlock the front door. Once you’re in the lobby, the next layer of security is usually some form of man trap. A man trap is a small room connecting the lobby to the inner sections of the facility, with an entrance door and an exit door that cannot be open at the same time. This gives operators more control over who is entering the facility and helps prevent unwanted parties from reaching the sections that contain the servers. One other common security measure is a locked cage in the server room itself, protecting specific servers. So if someone were somehow able to bypass every other security measure and get into the server room, they would still have to unlock a floor-to-ceiling cage to reach the servers.

All these security measures serve to ensure that only properly designated and cleared users have access to their company’s space.

Electrical Infrastructure (Power)

Digital infrastructure such as servers and processors require power to operate. Even a fraction of a second of interruption can have significant impacts. As such, the power infrastructure is one of the most critical components of a data center.

Electricity travels along what’s called the power chain, which is how electricity gets from the utility provider all the way to the server inside the data center. A traditional power chain starts at the substation and eventually makes its way through a building transformer, a switching station, an uninterruptible power supply (UPS), a power distribution unit (PDU) and a remote power panel (RPP) before finally arriving at the racks and servers. Data centers also utilize on-site generators to power the facility if there is an interruption in the power supply from the substation.
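The power chain above can be sketched as an ordered sequence, with the generator standing in when utility power fails. This is an illustrative model only - the component names are from the article, but the code itself is a hypothetical sketch, not a real data center control system.

```python
# The traditional power chain, from utility to server, as described above.
POWER_CHAIN = [
    "utility substation",
    "building transformer",
    "switching station",
    "UPS (uninterruptible power supply)",
    "PDU (power distribution unit)",
    "RPP (remote power panel)",
    "rack / server",
]

def power_source(utility_available: bool) -> str:
    """Return which source feeds the head of the chain.

    If utility power is interrupted, the on-site generator takes over;
    the UPS battery bridges the gap while the generator starts.
    """
    return "utility substation" if utility_available else "on-site generator"
```

In the laptop analogy that follows, the UPS plays the role of the laptop battery: it carries the load only for the brief window between a utility failure and the generator coming online.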

Think of a data center like a giant laptop. The main power cord comes out of the wall (utility power) and is then transformed into usable power for the laptop (little box in the middle of your laptop cord). Finally, if any of the components of the cord fail (main power outage, transformer failure), the laptop has a battery to provide temporary power.

Each step of the process has a distinct purpose, whether it be transforming the power to a usable voltage, charging backup systems, or distributing power to where it is needed. We’ll be breaking down what each component does and why it’s important in future articles.

Mechanical Infrastructure (Cooling)

Servers produce substantial heat when operating and cooling them is critical to keeping systems online.

The amount of power a data center can consume is often limited by how much power consumption per rack can be kept cool, typically referred to as density. In general, the average data center can cool at densities between 5 and 10 kW per rack, but some can go much higher.
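To make the density figure concrete, here is a quick back-of-the-envelope calculation using hypothetical numbers (the rack count is invented for illustration; only the 5-10 kW density range comes from the article):

```python
# Hypothetical data hall: 400 racks cooled at 8 kW per rack,
# within the 5-10 kW/rack range cited above.
racks = 400
density_kw_per_rack = 8

total_it_load_kw = racks * density_kw_per_rack
print(total_it_load_kw)  # 3200 kW, i.e. 3.2 MW of IT load the cooling must remove
```

The same hall at a 10 kW/rack density would support 4 MW - which is why density, not floor space, often determines how much compute a facility can actually host.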

The most common way to cool a data center involves blowing cool air up through a raised floor. In this setup, racks are placed on a raised floor with removable tiles, usually three feet above the concrete slab. Cool air is fed underneath the raised floor and forced up through perforated tiles around the racks. The warmer air coming out of the servers rises, is pulled away from the data hall, run through chilled-water chillers to cool it, and fed back beneath the raised floor to cool the servers again.

While raised floor is a common solution, it isn’t always necessary.

Some data centers utilize isolation, where physical barriers are placed to direct cool air toward the servers and hot air away. It’s common to see high ceilings in newer data centers as well. By simply increasing the volume of air in a data hall, it’s easier to keep the room from getting too hot.

Another, less common solution is liquid cooling, where servers are stored on racks submerged in a special non-conductive fluid. This immersion approach is among the most efficient, enabling the data center to operate at extremely high densities and prolong the lifetime of the equipment.

In certain climates, data centers can also take advantage of “free cooling,” where they use the outside air to cool the servers. Instead of taking the hot air and cooling it to be used again, they allow the heat to escape and pull in cool air from outside. This process is, as expected, much cheaper and more energy-efficient than running mechanical cooling infrastructure.

Connectivity Infrastructure

A data center’s connectivity infrastructure is also important. Without it, a data center would just be a building full of computers that can’t communicate with anyone outside the building.

As data centers are the primary foundation for activities happening online, the buildings themselves need to be highly connected. Access to a variety of fiber providers connects a data center to a wide network able to provide low latency connections and reach more customers.

Fiber traditionally runs into a data center through secured “vaults” and into the building’s meet-me-room or directly to a user’s servers. A meet-me-room is a location where fiber lines from different carriers can connect and exchange traffic.

Redundancy

Given the critical nature of data center infrastructure, it isn’t sufficient to only have the systems necessary for operations. Data center users also care about the additional equipment a data center has on hand to ensure that no single system failure can take the data center, and the users’ servers, offline. This measure is called redundancy.

For example, a data center may need 10 chillers to cool their servers, but will have a total of 11 chillers on-site. The extra chiller is redundant and used in the event of another chiller failing.

Redundancy is communicated by the “need” or “N” plus the number of extra systems. The example above would be considered N+1: the data center needs 10 chillers and has one extra. If the data center above had 10 extra chillers in addition to the 10 it needed to operate, its redundancy would be double its need, or 2N.

In an N+1 scenario, a data center could lose one chiller and still operate because of the one extra chiller, but it would have no spare available if a second chiller went down. In a 2N scenario, all of the operational chillers could break and the data center would have enough spares to replace them all. Today, most data center providers find N+1 is sufficient to avoid downtime, though some industries require their data centers to be more redundant.
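The N+1 and 2N arithmetic above can be expressed in a few lines. This is a simple sketch of the labeling convention, not a formal standard - real facilities use more nuanced schemes (e.g. N+2, distributed redundancy) that this toy function doesn’t cover:

```python
def redundancy_label(needed: int, installed: int) -> str:
    """Express installed capacity relative to the need ('N')."""
    extra = installed - needed
    if extra == needed:
        return "2N"          # a full duplicate set of equipment
    if extra > 0:
        return f"N+{extra}"  # N units needed, plus 'extra' spares
    return "N"               # no redundancy at all

print(redundancy_label(10, 11))  # N+1: ten chillers needed, one spare
print(redundancy_label(10, 20))  # 2N: double the needed capacity
```

The chiller example from above maps directly: 10 needed and 11 installed yields N+1; 10 needed and 20 installed yields 2N.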

Redundancy applies to most aspects of a data center, including power supplies, generators, cooling infrastructure, and UPS systems. Some data centers have multiple power lines entering the building, or are fed from multiple substations to ensure uptime in the event a line is damaged somewhere. The same approach can be taken with fiber lines.

Data centers support the internet ecosystem that more and more of the world relies on today. As such, they require robust infrastructure to ensure there’s no interruption in the services they provide.


Don’t forget to check out the rest of our HawkPodcasts and don’t miss out on our latest release of market data for the data center industry.
