What is a data center?
A data center is a physical location that stores computing machines and their related hardware equipment. It contains the computing infrastructure that IT systems require, such as servers, data storage drives, and network equipment. It is the physical facility that houses a company’s digital data.
Why are data centers important?
Every business needs computing equipment to run its web applications, offer services to customers, sell products, or run internal applications for accounts, human resources, and operations management. As the business grows and IT operations increase, the scale and amount of required equipment also increase rapidly. Equipment that is distributed across several branches and locations is hard to maintain. Instead, companies use data centers to bring their devices to a central location and manage them cost-effectively. Rather than keeping equipment on premises, they can also use third-party data centers.
Data centers bring several benefits, such as:
- Backup power supplies to manage power outages
- Data replication across several machines for disaster recovery
- Temperature-controlled facilities to extend the life of the equipment
- Easier implementation of security measures for compliance with data laws
How did modern data centers evolve?
Data centers first emerged in the early 1940s, when computer hardware was complex to operate and maintain. Early computer systems required many large components that operators had to connect with many cables. They also consumed a large amount of power and required cooling to prevent overheating. To manage these computers, called mainframes, companies typically placed all the hardware in a single room, called a data center. Every company invested in and maintained its own data center facility.
Over time, innovations in hardware technology reduced the size and power requirements of computers. However, at the same time, IT systems became more complex, such as in the following ways:
- The amount of data generated and stored by companies increased exponentially.
- Virtualization technology separated software from the underlying hardware.
- Innovations in networking made it possible to run applications on remote hardware.
Modern data centers
Modern data center design evolved to better manage IT complexity. Companies used data centers to store physical infrastructure in a central location that they could access from anywhere. With the emergence of cloud computing, third-party companies manage and maintain data centers and offer infrastructure as a service to other organizations. As the world’s leading cloud services provider, AWS has created innovative cloud data centers around the globe.
What is inside a data center?
Most enterprise data center infrastructure falls into three broad categories:
- Compute
- Storage
- Network
Data center equipment also includes support infrastructure, such as power systems, that helps the main equipment function effectively.
Computing infrastructure
Computing resources include several types of servers with varying internal memory, processing power, and other specifications. We give some examples below.
Rack servers
Rack servers have a flat, rectangular design, and you can stack them in racks or shelves in a server cabinet. The cabinet has special features like mesh doors, sliding shelves, and space for other data center resources like cables and fans.
Blade servers
A blade server is a modular device, so you can stack multiple servers in a smaller area. The server itself is physically thin and typically has only memory, CPUs, integrated network controllers, and some built-in storage drives. You slide multiple servers into an enclosure called a chassis, which houses the additional components, such as power, cooling, and networking, that the servers inside it require. Blade servers take up less space than rack servers and offer higher processing speed, minimal wiring, and lower power consumption.
Storage infrastructure
The following are two types of data center storage systems.
Block storage devices
Block storage devices, such as hard disk drives and solid-state drives, store data in blocks and provide many terabytes of capacity. A storage area network (SAN) is a dedicated network that connects many such drives or drive arrays so that they act as one large block storage system.
File storage devices
File storage devices, like network-attached storage (NAS), can store a large volume of files. You can use them to create image and video archives.
Network infrastructure
A large number of networking devices, such as cables, switches, routers, and firewalls, connect the other data center components to each other and to end-user locations. They provide reliable, high-throughput data movement and connectivity across the system.
Support infrastructure
Data centers also contain these components:
- Power subsystems
- Uninterruptible power supplies (UPS)
- Backup generators
- Ventilation and cooling equipment
- Fire suppression systems
- Building security systems
These data center components support the main equipment so that you can use the data center facilities without interruption.
What are the standards in data center design?
As data centers increased in size and complexity and began to store sensitive and critical information, governments and other organizations imposed regulations on them. The Telecommunications Industry Association (TIA) publishes the ANSI/TIA-942 standard, which defines four rating levels for data centers and covers all aspects of data center design, including:
- Architecture and topology
- Environmental design
- Power and cooling systems and distribution
- Cabling systems, pathways, and redundancy
- Safety and physical security
Similarly, the Uptime Institute established four tiers to compare site performance objectively and align infrastructure investments to business goals. We list the four data center tiers below.
Tier I
A Tier I data center provides the basic capacity needed to support IT systems for an office setting and beyond. Some of the requirements for a Tier I facility include:
- Uninterruptible power supply (UPS) for power outages and spikes
- A physical area for IT systems
- Dedicated cooling equipment that runs 24/7
- A backup power generator
Tier I protects against service disruptions from human error, but not against unexpected failures or outages. You can expect up to about 29 hours of annual downtime in a Tier I data center.
Tier II
Tier II facilities add redundant power and cooling components for better maintenance and protection against disruptions. For example, these data centers must have the following:
- Engine generators
- Chillers
- Cooling units
- Pumps
Although you can remove components from Tier II data centers without shutting them down, unexpected failures can affect the system. You can expect an annual downtime of 22 hours from a Tier II data center.
Tier III
Tier III data centers provide greater redundancy, and you can maintain or replace equipment without shutting the system down. They also implement redundant support systems, such as power and cooling units, to limit annual downtime to about 1.6 hours.
Tier IV
Tier IV data centers contain several physically isolated systems to avoid disruption from both planned and unplanned events. They are completely fault-tolerant with fully redundant systems and can guarantee a downtime of only 26 minutes each year.
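The annual downtime figures above follow from the availability percentages commonly cited for the Uptime Institute tiers (roughly 99.671%, 99.741%, 99.982%, and 99.995% for Tiers I through IV). The following is a minimal Python sketch of that arithmetic, assuming an 8,760-hour year:

```python
# Approximate availability targets commonly cited for the Uptime Institute tiers.
TIER_AVAILABILITY = {
    "Tier I": 0.99671,
    "Tier II": 0.99741,
    "Tier III": 0.99982,
    "Tier IV": 0.99995,
}

HOURS_PER_YEAR = 24 * 365  # 8,760 hours

for tier, availability in TIER_AVAILABILITY.items():
    downtime_hours = HOURS_PER_YEAR * (1 - availability)
    print(f"{tier}: ~{downtime_hours:.1f} hours "
          f"(~{downtime_hours * 60:.0f} minutes) of downtime per year")
```

For example, 8,760 × (1 − 0.99995) is roughly 0.44 hours, or about 26 minutes, which matches the Tier IV figure above.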
What are the types of data center services?
You can choose from many types of data center services, depending on your requirements.
On-premises data centers
On-premises data centers are fully owned company data centers that store sensitive data and critical applications for that company. You set up the data center, manage its ongoing operations, and purchase and maintain the equipment.
Benefits: An enterprise data center can provide better security because you manage risks internally. You can also customize the data center to meet your requirements.
Limitations: Setting up your own data center and covering its ongoing staffing and running costs is expensive. You also need multiple data centers, because a single facility is a high-risk single point of failure.
Colocation data centers
Colocation facilities are large data center facilities in which you can rent space to store your servers, racks, and other computing hardware. The colocation center typically provides security and support infrastructure such as cooling and network bandwidth.
Benefits: Colocation facilities reduce ongoing maintenance costs and provide fixed monthly costs to house your hardware. You can also geographically distribute hardware to minimize latency and to be closer to your end users.
Limitations: It can be challenging to source colocation facilities in all of the geographic areas you target. Costs can also add up quickly as you expand.
Cloud data centers
In a cloud data center, you can rent both space and infrastructure. Cloud providers maintain large data centers with full security and compliance. You can access this infrastructure by using different services that give you more flexibility in usage and payment.
Benefits: A cloud data center reduces both hardware investment and the ongoing maintenance cost of any infrastructure. It gives greater flexibility in terms of usage options, resource sharing, availability, and redundancy.
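To illustrate what renting infrastructure through a service looks like in practice, the following is a minimal sketch that launches a virtual server in a cloud data center by using the AWS SDK for Python (boto3). The AMI ID is a placeholder, and the sketch assumes that boto3 is installed, that credentials are configured, and that your account has permission to launch instances.

```python
import boto3

# Create an EC2 client for a specific Region (and therefore a specific
# set of cloud data centers).
ec2 = boto3.client("ec2", region_name="us-east-1")

# Rent a small virtual server with a single API call.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="t3.micro",          # small, low-cost instance type
    MinCount=1,
    MaxCount=1,
)

instance_id = response["Instances"][0]["InstanceId"]
print(f"Launched instance {instance_id} in a cloud data center")
```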
How does AWS manage its data centers?
AWS has the concept of a Region, which is a physical location around the world where we cluster data centers. We call each group of logical data centers an Availability Zone (AZ). Each AWS Region consists of multiple, isolated, and physically separate AZs within a geographic area. Each AZ consists of one or more physical data centers, and we design each AZ to be completely isolated from the other AZs in terms of location, power, and water supply.
Unlike other cloud providers, who often define a region as a single data center, the multiple AZ design of every AWS Region offers additional advantages for our customers, such as reliability, scalability, and the lowest possible latency. For example:
- AZs allow for partitioning applications for high availability. If an application is partitioned across AZs, it is better isolated and protected from issues such as power outages, lightning strikes, tornadoes, and earthquakes.
- AZs in an AWS Region are interconnected over fully redundant, dedicated metro fiber, providing high-throughput, low-latency networking between AZs.
- Traffic between AZs is encrypted. The network performance is sufficient to accomplish synchronous replication between AZs.
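As a concrete illustration of the Region and AZ model, the following minimal sketch uses the AWS SDK for Python (boto3) to list the Availability Zones in a single Region. It assumes that boto3 is installed and that AWS credentials are configured.

```python
import boto3

# Create an EC2 client scoped to one Region.
ec2 = boto3.client("ec2", region_name="us-east-1")

# List the Availability Zones in that Region that are currently available.
response = ec2.describe_availability_zones(
    Filters=[{"Name": "state", "Values": ["available"]}]
)

for zone in response["AvailabilityZones"]:
    print(f"{zone['ZoneName']} ({zone['ZoneId']}) in {zone['RegionName']}")
```

Partitioning an application across two or more of these AZs is what provides the isolation and resilience described above.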
Additionally, AWS provides data center security in four layers.
Perimeter layer
Perimeter security measures control physical access to the facility by using:
- Security guards
- Fencing
- Security feeds
- Intrusion detection technology
- Entry control and monitoring
Infrastructure layer
Infrastructure layer security protects the equipment from damage and overheating. It includes measures such as:
- World-class cooling systems and fire suppression equipment
- Backup power equipment
- Routine machine maintenance and diagnostics
- Water, power, telecommunications, and internet connectivity backups
Data layer
Data layer security protects the data itself from unauthorized access and loss. Typical measures in this layer are:
- Threat and electronic intrusion detection systems in the data center
- Electronic control devices at server room access points
- External auditing of more than 2,600 requirements throughout the year
Environmental layer
The environmental layer is dedicated to environmental control measures that support sustainability. These are some of its measures:
- Sensors and responsive equipment that automatically detect flooding, fire, and other natural disasters
- An operations process guide outlining how to avoid and lessen disruptions due to natural disasters
- 100% renewable energy and environmental economies of scale
What are AWS Hybrid Cloud services?
AWS Hybrid Cloud services deliver a consistent AWS experience across both on-premises and cloud data centers. You can select from the broadest set of compute, networking, storage, security, identity, data integration, management, monitoring, and operations services to build hybrid architectures that meet your specific requirements and use cases. For example, you might decide to use these services:
- AWS Outposts is a fully managed service that offers the same AWS infrastructure, services, and tools to virtually any data center, colocation space, or on-premises facility for a consistent hybrid experience.
- AWS Direct Connect improves performance by connecting your network directly to AWS and bypassing the public internet. These connections are made at over 100 locations worldwide, with speeds starting at 50 Mbps and scaling up to 100 Gbps (see the sketch after this list).
- AWS Snow Family devices collect and process data between the edge and AWS so that your apps run in even the most extreme conditions.
- AWS Wavelength embeds AWS compute and storage services at the edge of 5G networks for a faster application response time.
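For example, before ordering an AWS Direct Connect connection, you can look up the Direct Connect locations that are available to your account. The following is a minimal sketch using the AWS SDK for Python (boto3); it assumes that boto3 is installed and that AWS credentials are configured.

```python
import boto3

# Create a Direct Connect client. Locations are returned for the Region
# that the client is scoped to.
directconnect = boto3.client("directconnect", region_name="us-east-1")

# List the Direct Connect locations you can connect to.
response = directconnect.describe_locations()

for location in response["locations"]:
    print(f"{location['locationCode']}: {location['locationName']}")
```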
Get started with world-class data center infrastructure by creating a free AWS account today.