Energy Efficiency | June 27, 2017

Managing Content at the Edge with Efficiency

Tier II and Tier III markets are experiencing a rise in hosting demand as content moves to the edge. These smaller facilities in outlying locations extend the “edge” of the Internet farther from traditional Internet hubs such as New York City or Silicon Valley and provide low-latency access for local users.

As content delivery networks cache data closer to customers, cities outside the traditional internet hubs have grown more important. Users increasingly expect high-quality content wherever they are, which has fueled the rapid rise of these facilities. The benefits of moving to edge data centers are extensive, from reduced costs and bandwidth to survivability. But processing data in edge data centers is still a relatively new trend, and neither users nor experts know what the future of edge computing will hold. As the market has grown rapidly, however, operators have had less control over these facilities and more operational problems.

The Problem: Scaling to Meet Demand

Edge data centers are a necessity today. Everyone wants to stream high-quality content from anywhere on the map, which means larger companies, such as Netflix and Amazon, require hosting in smaller colocation facilities at the edge. The goal of a colo is to offer inexpensive yet reliable space, power, and cooling for tenants. The influx of data from enterprise companies, however, causes operational problems for colos that weren’t built to handle such large providers and can’t scale quickly enough to manage the increased load efficiently.

Due to deficiencies in their design, construction, and operations, many of these edge facilities can’t ensure that they’re up to the challenges posed by demanding processing requirements. In effect, many existing edge data centers and their supporting network infrastructures weren’t designed and built to process varied volumes of data and constantly changing loads effectively. As multi-tenant colos move up the stack, they are expected to do everything: private cloud, public cloud, hybrid cloud, cloud managed services, rack and cage. With all of the services colos are expected to provide, they haven’t necessarily prioritized best practices for data center energy efficiency.

The Solution: Managing Resources Efficiently

In recent years, operational best practices in data centers have begun to go mainstream. Data center operators now know the importance of implementing containment and blanking panels to manage airflow, but they still struggle to engineer an environment for maximum efficiency. Best practices every data center should be using to optimize its environment include enhancements to cooling systems, airflow management, deployment of advanced data center infrastructure management (DCIM) systems and software, and utilization of proactive monitoring tools.
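
To make the monitoring piece concrete, below is a minimal sketch of the kind of proactive efficiency check such tools automate, built around Power Usage Effectiveness (PUE), the standard ratio of total facility power to IT load. The function names, meter readings, and alert threshold here are illustrative assumptions, not any particular DCIM product’s API.

```python
# Minimal sketch of a proactive efficiency check, of the sort DCIM and
# monitoring tools run on a schedule. All names, readings, and the
# threshold below are illustrative assumptions.

def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT load.
    1.0 is ideal; higher values mean more overhead (cooling, losses)."""
    if it_load_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_load_kw

def check_efficiency(total_facility_kw: float, it_load_kw: float,
                     alert_threshold: float = 1.6) -> None:
    """Flag readings whose PUE exceeds a configurable threshold."""
    value = pue(total_facility_kw, it_load_kw)
    status = "ALERT" if value > alert_threshold else "ok"
    print(f"[{status}] PUE = {value:.2f} "
          f"(facility {total_facility_kw} kW, IT {it_load_kw} kW)")

# Hypothetical readings: a well-contained room vs. one with poor
# airflow management spending more of its power on cooling.
check_efficiency(total_facility_kw=1200.0, it_load_kw=800.0)  # PUE 1.50, ok
check_efficiency(total_facility_kw=1500.0, it_load_kw=800.0)  # PUE 1.88, ALERT
```

A real deployment would pull these readings from metered PDUs and branch circuits on a schedule rather than hard-coding them, and would feed alerts into the facility’s monitoring dashboard.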

As content moves to the edge and smaller Tier II data centers are loaded with more data, they must follow efficiency best practices if they expect to handle the volume of data and the larger loads demanded of their facilities. Most colos will benefit from working with an outside consultant on a project that combines several of these solutions, allowing them to manage the rising quantity of data at the edge with efficient operations and, ultimately, the associated energy and cost savings.

The data center solutions Mantis Innovation’s Efficiency division integrated into an Atlanta data center resulted in total energy savings of 4,000,000 kWh.
