The LEGO Approach to Closing Federal Data Centers


Just closing and relocating data centers may not save all that much money. The feds need to focus on data center optimization.

Dave Gwyn is the vice president of federal for Nutanix.

The Obama administration first set out to close and consolidate federal data centers governmentwide beginning in 2010. The Federal Data Center Consolidation Initiative aims to reduce the number and size of data centers used by federal agencies without compromising performance or hindering productivity.

And according to a recent Government Accountability Office report, the effort has been largely effective, saving approximately $1.1 billion so far. Agencies are also planning an additional $2.1 billion in cost savings by the end of this fiscal year. Add all that up, and the initiative stands to deliver a total of approximately $3.3 billion in savings, or $300 million more than the original goal set by the Office of Management and Budget. An impressive feat, indeed.

That said, the report still identified challenges. One of the main issues: agencies consolidating data centers without seeing any savings. Six agencies reported closing as many as 67 data centers with limited or no savings. Although this is attributed in part to an inability to effectively determine baseline costs for data center operations, it can surely also be a result of agencies simply ripping servers out of one location and reinstalling them in another.

What’s missing is a focus on the optimization of servers.

In 2009, OMB reported the average server utilization rate to be 5 to 15 percent. Frankly, moving an underutilized server from one location to another will not help with savings metrics. These large-scale servers carry the same space, weight, power, performance and cooling issues, only in a different warehouse.

And consider this: physically moving a server cabinet takes about three people; each unit draws about 500 to 1,200 watts; and additional energy is required to keep the server room at temperatures ranging from 55 to 72 degrees Fahrenheit. Moving inefficiency will simply not help the bottom-line savings metrics.
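As a rough back-of-the-envelope sketch of what that draw adds up to (the electricity rate and around-the-clock operation below are assumptions for illustration, not figures from the report):

```python
# Back-of-the-envelope annual power cost for one underutilized server unit.
# The 500-1,200 W draw and 5-15 percent utilization are the figures cited
# above; the $0.10/kWh rate and 24x7 operation are illustrative assumptions.

HOURS_PER_YEAR = 24 * 365
RATE_PER_KWH = 0.10  # assumed commercial electricity rate, in dollars

for watts in (500, 1200):
    kwh_per_year = watts / 1000 * HOURS_PER_YEAR
    annual_cost = kwh_per_year * RATE_PER_KWH
    # At 5-15 percent utilization, most of that spend does no useful work.
    idle_low = annual_cost * (1 - 0.15)
    idle_high = annual_cost * (1 - 0.05)
    print(f"{watts} W unit: ~${annual_cost:,.0f}/yr in power, "
          f"roughly ${idle_low:,.0f}-${idle_high:,.0f} of it idle")
```

Multiply that by rows of half-empty racks, and relocating them simply moves the bill to a different address.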

Next, weigh the effectiveness of these often-antiquated servers against their risk of failure.

Managing and mitigating downtime risk, and downtime itself, is a huge factor in a data center: each hour spent fixing server errors equates to an estimated $42,000. And the main source of downtime is hardware failure, which accounts for an estimated 55 percent of incidents. Does it make sense to move these servers, let them run to end-of-life and beyond, and continue to drain resources until the hardware is finally replaced?
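Putting those two numbers together (the downtime hours per year in the sketch below are an assumption for illustration, not a figure from the source):

```python
# Rough annual cost of hardware-driven downtime.
# The $42,000-per-hour figure and the 55 percent hardware share come from
# the estimates above; the hours of downtime per year are an assumed input.

COST_PER_HOUR = 42_000   # estimated cost of one hour of server downtime
HARDWARE_SHARE = 0.55    # share of downtime attributed to hardware failure

def hardware_downtime_cost(downtime_hours_per_year: float) -> float:
    """Portion of annual downtime cost attributable to hardware failure."""
    return downtime_hours_per_year * COST_PER_HOUR * HARDWARE_SHARE

# Even a modest, assumed 10 hours of downtime a year adds up quickly.
print(f"${hardware_downtime_cost(10):,.0f}")  # -> $231,000
```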

Fortunately, server technology has evolved to shift the burden of work from hardware to software, virtualizing the process. A software-defined approach allows the server and storage tiers to be consolidated into a single, integrated, converged appliance. This can build in resiliency that renders hardware failures irrelevant to application uptime.

To use an analogy, think of software-defined data centers as Lego blocks.

Representing a “data center in a box,” these Lego blocks replace the typical server rack, which can weigh in at close to 2,500 to 3,000 pounds, with something one-twentieth the physical size.

Compact and adaptable, these smaller units take up significantly less space, provide enhanced performance and allow for easy expansion, known as ‘scale-out.’ These blocks connect within minutes, and their software links them together automatically, providing efficient, effective scalability without guesswork. Additionally, taking a software approach allows for easy updates and refreshes, and adds enhanced security capabilities for greater data and information protection.
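To make the scale-out idea concrete, here is a minimal sketch; the per-block core and storage figures are hypothetical, and a real converged platform joins new blocks automatically rather than through an explicit call:

```python
# Illustrative "scale-out" sketch: capacity grows by adding identical blocks
# rather than by buying ever-larger servers. Per-block figures are made up.

from dataclasses import dataclass

@dataclass
class Block:
    cores: int = 16           # hypothetical compute per block
    storage_tb: float = 10.0  # hypothetical storage per block

class Cluster:
    def __init__(self):
        self.blocks = []

    def add_block(self, block):
        # A software-defined platform discovers and joins the new block
        # automatically; appending it here stands in for that step.
        self.blocks.append(block)

    def capacity(self):
        return (sum(b.cores for b in self.blocks),
                sum(b.storage_tb for b in self.blocks))

cluster = Cluster()
for _ in range(4):             # start small, add blocks as demand grows
    cluster.add_block(Block())
print(cluster.capacity())      # -> (64, 40.0)
```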

Advances in this type of technology include self-diagnosing and self-correcting software that identifies errors, or at a minimum makes problems easier to find and analyze, reducing downtime and the hours spent searching for the source of an error.

In typical servers, errors cause a domino effect leading to bottlenecks and slower service. Software-defined servers reroute the traffic, alleviating the stress on the system. Pairing software with smaller, commodity-based hardware also drastically reduces the physical footprint of each unit, lowers up-front costs, cuts downtime frequency and simplifies the logistics of expansion.
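A minimal sketch of that rerouting behavior, with node names and the selection policy invented purely for illustration:

```python
# Illustrative sketch of software rerouting traffic around a failed node.
# Node names and the selection policy are hypothetical; real software-defined
# platforms handle failure detection and rebalancing transparently.

class Fabric:
    def __init__(self, nodes):
        self.healthy = {node: True for node in nodes}

    def mark_failed(self, node):
        # A failed node drops out of rotation instead of stalling requests.
        self.healthy[node] = False

    def route(self, request_id: int) -> str:
        nodes = [n for n, ok in self.healthy.items() if ok]
        if not nodes:
            raise RuntimeError("no healthy nodes available")
        # Spread requests across whatever nodes remain healthy.
        return nodes[request_id % len(nodes)]

fabric = Fabric(["node-a", "node-b", "node-c"])
fabric.mark_failed("node-b")  # simulate a hardware failure
print(fabric.route(42))       # served by node-a or node-c, not node-b
```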

To continue delivering on the goals set forth by the data center initiative, government agencies are going to need to think differently, moving away from the traditional three-tiered architecture toward modern, converged, web-scale infrastructure that is defined not by its outer hardware shell, but by the software driving it internally.

They should shift focus from reducing the overall number of data centers to reducing the total cost of data center ownership. Ultimately, there needs to be a dedicated focus on transitioning from a mentality of data center consolidation to one of transformation and optimization, in order to achieve an even greater return than the projected $5.3 billion in total planned savings.  

Incorporating this Lego-style concept will get them there.

(Image via Ekaterina_Minaeva/Shutterstock.com)