By Mike Netzer · 2/8/2021
It's a game that’s won and lost on speed. The faster information can be delivered from a data center, the more valuable the data center can be to people outside its four walls.
This is an episode of HawkPodcast, datacenterHawk’s viewpoints on the data center industry. If you enjoyed this episode, you can check them all out on our blog. If you’d like to know when we release future episodes, you can subscribe here. You can also click here if you want to read our 4Q 2020 data center overview for North America and Europe.
Customers Don’t Like Slow Delivery Times
Imagine you run a logistics company. The success of your business depends on how quickly you deliver packages. As your operation grows, you build warehouses to swap packages on and off trucks that are going to different areas. Sometimes trucks have to stop at multiple warehouses to get everything they need to take to their destination.
Ideally, your trucks can take interstate freeways directly to each warehouse instead of slogging through dense downtown traffic or winding through miles of dirt farm roads. Fast roads directly to a warehouse are better than slow roads that require a roundabout route. Traveling on slower roads or taking roundabout routes ultimately compounds into slower delivery times.
Customers don’t like slow delivery times.
This is what data center connectivity is all about.
You probably caught on that the warehouses in our example are data centers. The packages are information. The roads are the connectivity, the focus of this article. Connectivity in the data center industry is usually accomplished via fiber optic cable lines.
And just like the roads, if we want to get traffic where it needs to be as quickly as possible, it's better to use the fastest, lowest-traffic, and most direct route possible. We'll still need to make stops every now and again to pick up packages or data, but the concept remains the same.
The Importance of Connectivity
In the simpler days of the internet, one computer would talk directly to another and get everything it needed, and a delay of several milliseconds wasn't an issue. Today, companies are using increasingly complex systems to support their customers' needs.
It's not uncommon for a company to spread their IT workload across cloud, colocation, and in-house environments. Within those buckets, they may have a multitude of microservices spread across different servers. As the number of points of communication increases, so does the importance of keeping those communications as fast as possible.
From a user experience perspective, all this operational speed is typically taken for granted until something goes wrong. Human factors studies have consistently shown over the past 30 years that a delay of one second interrupts a user's flow of thought, while a delay of more than ten seconds loses their attention altogether. Users consistently bemoan slow websites and apps.
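To get a rough sense of where your own latency stands, a quick probe of round-trip times can be illuminating. Below is a minimal Python sketch that times a TCP handshake to a few endpoints and flags anything that looks slow; the host names are placeholders, not real infrastructure, and the 100 ms flag is an illustrative threshold rather than an industry standard.

```python
import socket
import time

# Hypothetical endpoints standing in for the services a request touches.
# Replace with your own hosts; these names are placeholders.
ENDPOINTS = [
    ("app.example.com", 443),
    ("api.example.com", 443),
    ("db.example.com", 5432),
]

def tcp_rtt_ms(host, port, timeout=5.0):
    """Time a single TCP handshake to approximate round-trip latency."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000

for host, port in ENDPOINTS:
    try:
        rtt = tcp_rtt_ms(host, port)
    except OSError as exc:
        print(f"{host}:{port} unreachable ({exc})")
        continue
    # A single hop near 100 ms may be fine in isolation, but a request that
    # chains many such hops can creep toward the one-second mark.
    flag = "slow" if rtt > 100 else "ok"
    print(f"{host}:{port} {rtt:.1f} ms [{flag}]")
```

The point of the sketch is the compounding effect: a user-facing request that touches several of these hops in sequence adds their delays together, which is why each individual link matters.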
In the earlier days of the internet, it was understood that as companies were growing there would be some hiccups. Twitter’s fail whale, which indicated a service outage, even became a cultural icon.
Today, however, as consumer choices on the internet proliferate, a slow load ultimately becomes a no-load as customers go elsewhere. All the more reason to focus on speed.
Connectivity Solutions Overview
So how does a company actually ensure its data gets to its destination as fast as possible? The connectivity options below are the most common building blocks.
Fiber, Carrier Neutral Data Centers, and Being On-Net
When the industry refers to fiber, it usually means a fiber-optic line that connects you to the world outside the four walls of the data center. You may have to deal with delays like congestion along the way, but it's the primary method of reaching geographically dispersed facilities.
In the earlier days of colocation, data centers would have only a single fiber provider (or Internet Service Provider, "ISP") like AT&T, Verizon, or Comcast. If you wanted to use the data center, you had to use their fiber provider. Think of it like moving into an apartment complex that signed an exclusive deal with a single internet provider.
Over time, though, the industry evolved to building data centers with multiple fiber providers. We call these facilities carrier-neutral data centers. You are not limited to a single fiber provider within the building; you can select from the multiple providers there, with none having an inherent advantage, hence "neutral."
With the advent of carrier-neutral data centers, several fiber providers have preemptively built out connections at many of these facilities so that when their customers show up, they can get service right away. This is what it means for a fiber provider to be on-net at a data center: they are already connected and live in the building, so customers don't have to wait for them to build out a point of presence there.
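If you want to see which networks claim a presence at a particular facility, PeeringDB maintains a public, crowd-sourced directory with an open API. The sketch below queries it for the networks listed at a facility; the facility ID is a placeholder, and the field names follow PeeringDB's published schema as best we understand it, so treat this as an illustration rather than a definitive integration.

```python
import requests

# Placeholder facility ID; look up the real ID for your building on peeringdb.com.
FACILITY_ID = 1234

# PeeringDB's "netfac" objects list the networks present at a facility.
resp = requests.get(
    "https://www.peeringdb.com/api/netfac",
    params={"fac_id": FACILITY_ID},
    timeout=10,
)
resp.raise_for_status()

for record in resp.json().get("data", []):
    # .get() keeps the sketch tolerant of fields that may be absent or renamed.
    print(record.get("name"), "AS", record.get("local_asn"))
```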
Cross-connect
This might be the simplest type of connectivity. If a company has servers in one area of a data center and needs to quickly communicate with a server in another area of the data center, they would request a cross-connect.
This is a direct line between physical locations within a data center. Think of it like connecting two laptops directly with a USB or Ethernet cable. There's nothing between the computers except the wire. Minimal latency. No traffic congestion. No need for switching or routing. No chance of having the connection severed by an unwitting construction crew. It's very fast and very reliable.
These cross-connects can be used to connect separate servers within a data center or connect to an ISP within the data center, which would then serve as their link to the outside world.
Campus Cross-connect
Instead of single, isolated data centers, some larger providers opt to build several data center facilities in close proximity on the same parcel of land. Such a group of data centers is called a campus. If a company has a server in one of these buildings and needs to quickly communicate with a server in the building next door, they would request a campus cross-connect.
Similar to a cross-connect, the campus cross-connect stays internal to the campus. It doesn't need to go out to the public internet and reaps many of the same benefits as a result. It's very reliable, with very little latency. The provider essentially hardwires a cross-connect to a single point in Facility A, which connects over a large private fiber-optic pipe to Facility B, where another cross-connect is hardwired to the server.
Cloud Direct Connect
Over the past ten years, companies have moved more and more IT resources to cloud providers like Amazon Web Services, Google Cloud Platform, and Microsoft Azure. If a company has material resources in the cloud, they may want to reduce their risk exposure by avoiding connecting to the cloud over the public internet. In that case, they would request a direct connect from their data center to the cloud provider.
As the name implies, a direct connect is a private, direct connection from the company's premises to the cloud provider. Think of a direct connect as the express lane on the interstate. There will likely be a higher cost associated with it, but it gives you access to a lower-traffic, higher-speed route.
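As one concrete example (assuming AWS and its boto3 SDK; Azure ExpressRoute and Google Cloud Interconnect have their own equivalents), the minimal sketch below lists any Direct Connect connections in an account along with their state, bandwidth, and the colocation facility they terminate in.

```python
import boto3

# Assumes AWS credentials are already configured in the environment;
# the region is an example, not a recommendation.
client = boto3.client("directconnect", region_name="us-east-1")

# describe_connections returns the dedicated connections in this account.
response = client.describe_connections()

for conn in response.get("connections", []):
    print(
        conn.get("connectionName"),
        conn.get("connectionState"),   # e.g. "available" or "down"
        conn.get("bandwidth"),         # e.g. "1Gbps" or "10Gbps"
        conn.get("location"),          # the colocation facility code
    )
```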
Carrier Hotel
Sometimes a company wants to connect to certain fiber providers but finds that those providers aren't physically connected to their data center. Instead of waiting for all of those providers to build out points of presence at the facility, they may opt to connect to the market's carrier hotel.
Carrier hotels pride themselves on the ecosystem they create through fiber and network providers. They want to be the single location you can go to connect to almost anyone.
Most markets will only have a single carrier hotel. On the coasts, the carrier hotel normally sits where the subsea cables come ashore. In non-coastal markets, you’ll find them near the densest infrastructure in the city.
We recently dove deep into carrier hotels on one of our podcasts if you’d like to learn more.
Connectivity Redundancy
Just as redundancy is important when it comes to power, it is also important for connectivity. If there's only a single line connecting your critical systems, it doesn't matter how redundant the nodes are if they can't talk to each other. Depending on how critical the workload is, companies will often have multiple fiber providers connected to their servers. That way, if one goes down, they avoid an outage.
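At the application level, the same idea often shows up as a simple health check that prefers a primary path and fails over to a secondary one. Below is a minimal Python sketch along those lines; the gateway addresses are hypothetical placeholders for two independently provisioned links. In practice this kind of failover usually happens in the network layer, with BGP and physically diverse paths, but the sketch captures the principle.

```python
import socket

# Hypothetical gateways for two independent fiber providers
# (placeholder documentation addresses, not real gateways).
PATHS = [
    ("primary", "203.0.113.1", 443),
    ("secondary", "198.51.100.1", 443),
]

def path_is_up(host, port, timeout=3.0):
    """Return True if a TCP connection to the gateway succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def choose_path():
    """Prefer the primary link; fall back to the secondary if it is down."""
    for name, host, port in PATHS:
        if path_is_up(host, port):
            return name
    return None

active = choose_path()
print("active path:", active or "none -- outage")
```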
Connectivity is one of many factors that influence a company's colocation decision. And there will typically be multiple ways for them to meet their connectivity needs: internet access, direct connects, cloud on-ramps, and so on. The more of these options a data center provider can offer, the better its chances of securing those customers.