What is Data Center Infrastructure? – Data center fundamentals
In Podcast 33, we continue our Data Center Fundamentals series and dive into the basics of data center infrastructure.
Electrical Infrastructure (Power)
Digital infrastructure such as servers and processors requires power to operate, and even a fraction of a second of interruption can have significant impacts. As such, the power infrastructure is one of the most critical components of a data center.
Electricity travels along what’s called the power chain, which is how electricity gets from the utility provider all the way to the server inside the data center. A traditional power chain starts at the substation and eventually makes its way through a building transformer, a switching station, an uninterruptible power supply (UPS), a power distribution unit (PDU) and a remote power panel (RPP) before finally arriving at the racks and servers. Data centers also utilize on-site generators to power the facility if there is an interruption in the power supply from the substation.
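The power chain described above can be sketched as an ordered list of stages, with the on-site generator standing in for the utility feed during an outage. This is a minimal illustrative model, not a real control-system API; all names are made up for the example.

```python
# Stages of a traditional power chain, in the order the article lists them.
POWER_CHAIN = [
    "utility substation",
    "building transformer",
    "switching station",
    "UPS",
    "PDU",
    "RPP",
    "rack/server",
]

def trace_power(utility_ok: bool) -> list[str]:
    """Return the path power takes to the servers.

    When the utility feed is interrupted, the on-site generator
    replaces the substation at the head of the chain.
    """
    source = POWER_CHAIN[0] if utility_ok else "on-site generator"
    return [source] + POWER_CHAIN[1:]

print(trace_power(utility_ok=False)[0])  # on-site generator
```

In practice the UPS battery bridges the seconds-long gap before the generator spins up, which is why the chain keeps both.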
Think of a data center like a giant laptop. The main power cord comes out of the wall (utility power) and is then transformed into usable power for the laptop (little box in the middle of your laptop cord). Finally, if any of the components of the cord fail (main power outage, transformer failure), the laptop has a battery to provide temporary power.
Mechanical Infrastructure (Cooling)
The amount of power a data center can consume is often limited by how much power per rack the facility can keep cool, typically referred to as density. In general, the average data center can cool densities of 5-10 kW per rack, but some can go much higher.
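A quick back-of-the-envelope calculation shows how density caps total IT load. The 5-10 kW/rack range comes from the text; the 200-rack data hall is a hypothetical figure for illustration.

```python
def max_it_load_kw(racks: int, density_kw_per_rack: float) -> float:
    """Total IT load the cooling plant must be able to reject."""
    return racks * density_kw_per_rack

# A hypothetical 200-rack data hall at the low and high end of the range:
low = max_it_load_kw(200, 5.0)    # 1000.0 kW
high = max_it_load_kw(200, 10.0)  # 2000.0 kW
```

Doubling the rack density a facility can cool doubles the power it can sell from the same floor space, which is why cooling capacity, not floor area, is usually the binding constraint.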
The most common way to cool a data center involves blowing cool air up through a raised floor. In this setup, racks sit on a raised floor with removable tiles, usually about three feet above the concrete slab. Cool air is fed beneath the raised floor and forced up through perforated tiles around the racks. The warmer air exhausted by the servers rises, is pulled out of the data hall, run through chilled-water cooling units, and fed back beneath the raised floor to cool the servers again.
In certain climates, data centers can also take advantage of “free cooling,” where they use outside air to cool the servers. Instead of capturing the hot air and cooling it for reuse, they let the heat escape and pull in cool air from outside. This process is, as expected, much cheaper and more energy-efficient than running additional mechanical cooling infrastructure.
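The free-cooling decision can be sketched as a simple temperature comparison. This is a deliberately simplistic model under an assumed supply-air setpoint; real economizer controls also weigh humidity, air quality, and partial mixing, none of which the article covers.

```python
SUPPLY_SETPOINT_C = 24.0  # hypothetical target supply-air temperature

def use_free_cooling(outside_temp_c: float) -> bool:
    """Use outside air whenever it is cooler than the supply setpoint;
    otherwise fall back to mechanical (chiller-based) cooling."""
    return outside_temp_c < SUPPLY_SETPOINT_C

use_free_cooling(10.0)  # cool climate: outside air does the work
use_free_cooling(35.0)  # hot day: mechanical cooling required
```

This is why free cooling is described as climate-dependent: the more hours per year the outside air sits below the setpoint, the less the chillers run.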
Connectivity Infrastructure
A data center’s connectivity infrastructure is also important. Without it, a data center would just be a building full of computers that can’t communicate with anyone outside the building.
As data centers are the primary foundation for activities happening online, the buildings themselves need to be highly connected. Access to a variety of fiber providers connects a data center to a wide network, enabling low-latency connections and reach to more customers.
Fiber traditionally runs into a data center through secured “vaults” and into the building’s meet-me-room or directly to a user’s servers. A meet-me-room is a location where fiber lines from different carriers can connect and exchange traffic.
Redundancy
Redundancy is communicated as the “need,” or “N,” plus the number of extra systems. For example, a data center that needs 10 chillers and keeps one extra would be labeled N+1. If it had 10 extra chillers in addition to the 10 it needed to operate, its redundancy would be double its need, or 2N.
In an N+1 scenario, a data center could lose one chiller and still operate because of the one extra chiller, but it would not have another spare if a second chiller went down. In a 2N scenario, all of the operational chillers could break and the data center would have enough spares to replace them all. Today, most data center providers find N+1 is sufficient to avoid downtime, though some industries require their data centers to be more redundant.
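The N+x / 2N labeling above reduces to simple arithmetic on needed versus installed units. A minimal sketch, with the function name and edge-case handling being my own choices rather than anything from the article:

```python
def redundancy_label(needed: int, installed: int) -> str:
    """Express installed capacity relative to need (N)."""
    extra = installed - needed
    if extra == needed:
        return "2N"      # a full second set of equipment
    if extra > 0:
        return f"N+{extra}"  # N plus that many spares
    return "N"           # no redundancy at all

print(redundancy_label(10, 11))  # N+1
print(redundancy_label(10, 20))  # 2N
```

Note the jump in cost between the two labels: N+1 buys one spare chiller, while 2N buys ten, which is why most providers stop at N+1 unless customers demand more.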
Redundancy applies to most aspects of a data center, including power supplies, generators, cooling infrastructure, and UPS systems. Some data centers have multiple power lines entering the building, or are fed from multiple substations to ensure uptime in the event a line is damaged somewhere. The same approach can be taken with fiber lines.
Data centers support the internet ecosystem that more and more of the world relies on today. As such, they require robust infrastructure to ensure there’s no interruption in the services they provide.
Video “What is Data Center Infrastructure? – Data center fundamentals” from the datacenterHawk channel