Why This is the Year of Smaller, Faster Federal Data Centers


Even data centers feel the need for speed.

With governmentwide data center deadlines looming, federal managers should rethink how they’ll avoid bandwidth bottlenecks with smaller footprints.

The Federal Data Center Consolidation Initiative requires agencies to close 25 percent of their tiered data centers and 60 percent of their non-tiered data centers, reducing physical space by almost a third.

The Data Center Optimization Initiative requires federal data center managers to evaluate network, storage and backup capabilities; monitor progress; review and remove obsolete or unnecessary applications; virtualize where possible; and migrate to provisioned services such as the cloud while retaining secure access.

At the same time, streaming real-time data, internet-of-things devices and hyperconverged infrastructures are driving insatiable demand for bandwidth. As such, the cloud is becoming ever more important for federal data centers, because expedient access to data will require reliable, high-bandwidth, low-latency network performance within a smaller footprint. Streaming applications will push this further: delivering uncompressed 4K video requires more bandwidth than a 10G link can provide.
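A rough back-of-the-envelope calculation illustrates the point; the frame rate and color sampling below are assumptions for illustration, not figures cited above:

```python
# Rough arithmetic for a single uncompressed 4K stream. The 60 fps frame rate
# and 10-bit color sampling are assumptions for illustration.
width, height, fps = 3840, 2160, 60
for label, bits_per_pixel in [("10-bit 4:2:2", 20), ("10-bit 4:4:4", 30)]:
    gbps = width * height * fps * bits_per_pixel / 1e9
    print(f"{label}: ~{gbps:.1f} Gb/s")   # ~10.0 and ~14.9 Gb/s before overhead
```

Even before transport overhead, a single stream consumes an entire 10G link.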

To meet the DCOI requirements and keep up with demand, data center managers must adopt scale-out architectures and transition from traditional 10 gigabit Ethernet speeds to 25, 40, 50, even 100GbE and beyond.

When 10 Isn’t Perfect

In 2002, the IEEE 802.3ae standard formalized 10GbE technology, which has seen steady growth in adoption ever since. As 25/40/50/100GbE becomes more widely supported by new adapters, switches, cables and servers, 10GbE is rapidly becoming a bottleneck that severely limits network throughput and application performance.

While aggregating multiple 10GbE links at certain points in a network can address higher bandwidth requirements, it can also result in suboptimal load balancing, contributing to further performance degradation, a larger footprint and higher operational costs. A single 25/40/100GbE connection can cost much less, reduce the hardware footprint and significantly improve performance in the same scenarios. As these faster connections become commonplace, it's clear that 10G won't meet near-future requirements.
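As a toy sketch of that load-balancing problem (the flow names and demands below are invented), hash-based aggregation pins each flow to a single member link, so a 4 x 10GbE bundle can leave one member congested while another sits idle:

```python
# Toy sketch of hash-based load balancing across a 4 x 10GbE bundle.
# Flow names and demands are invented; hash() stands in for a real LAG policy.
flows_gbps = {"backup-a": 8, "backup-b": 6, "vm-migration": 5, "nfs": 3, "web": 1}
members = [0] * 4                              # per-member load in Gb/s
for name, demand in flows_gbps.items():
    members[hash(name) % len(members)] += demand
print(members)  # distribution across members is often lopsided
```

In a real aggregation group the hash also caps any single flow at one member's 10 Gb/s, while a single 40GbE link would carry the same 23 Gb/s with room to spare.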

The Need for Speed

Some may argue that 40GbE—much less 100GbE—is overkill for many workloads, but predictions of bandwidth requirements rising by 25 to 35 percent each year indicate that, going forward, faster is better. The 10GbE of yesterday is today’s 40GbE, which will be tomorrow’s 100GbE—and believe it or not, 400GbE speeds are already being tested in some instances.
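Compounding those growth rates shows why. In the projection below, the 25 to 35 percent range comes from the prediction above, while the 10G baseline and the time horizons are assumptions:

```python
# Compound the cited 25-35 percent annual growth from an assumed 10G baseline.
base_gbps = 10
for growth in (0.25, 0.35):
    five_year = base_gbps * (1 + growth) ** 5
    ten_year = base_gbps * (1 + growth) ** 10
    print(f"{growth:.0%}/yr: ~{five_year:.0f} Gb/s in 5 years, ~{ten_year:.0f} Gb/s in 10")
```

Even at the low end of the range, demand roughly triples in five years and approaches 100 Gb/s within ten.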

The widespread deployment of IoT and smart devices continues to drive growth in data brought to the cloud, where billions of these nodes continually stream data back and forth. Federal agencies will increasingly rely on virtual applications that promise to cut spending and improve performance. From a hyperscale perspective, the cross-sectional traffic these technologies generate can amount to petabytes as data is continually duplicated and moved around. Once placed in the cloud, data is often replicated 10 times or more to ensure redundancy.
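A hypothetical sizing exercise makes the replication cost concrete; the daily ingest volume below is an invented figure, while the tenfold replication follows the point above:

```python
# Hypothetical sizing: sustained bandwidth needed just to place replicated copies.
# The 50 TB/day ingest is invented; the 10x replication factor follows the text above.
daily_ingest_tb = 50
replication_factor = 10
bits_written_per_day = daily_ingest_tb * 1e12 * replication_factor * 8
sustained_gbps = bits_written_per_day / (24 * 3600) / 1e9
print(f"~{sustained_gbps:.0f} Gb/s sustained, before any user traffic")   # ~46 Gb/s
```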

In response, federal data centers will need a more nimble, more efficient footprint that can easily and cost-effectively meet the demands of applications, redundancy and virtualization while still maintaining operations at the agency level.

Federal data centers are in for a whirlwind of change as agencies rush to comply with DCOI, FDCCI and FITARA requirements, all while keeping current with best practices to stave off obsolescence. For data center professionals who aren't certain whether their computing and storage systems can take full advantage of higher speeds in these consolidated data centers, it will be important to validate how the applications, backup and storage architectures introduced by cloud-based networks will perform.

A test network or a high-speed network emulator can prove a valuable tool. A test network replicates the data center and its network connections to mirror production demands. For large institutions with high data-integrity concerns, test labs may be worth the considerable capital and operational outlay. For a quick and easy testbed, a network emulator can save time and money by replicating a live network. Impairments such as delay, loss and jitter can be applied to the emulated network to validate and optimize application performance and ensure network uptime.
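For teams that build a quick testbed on commodity hardware, Linux's tc/netem is one way to apply those impairments. The sketch below uses an assumed interface name and made-up impairment values, and is no substitute for a purpose-built emulator:

```python
# Minimal sketch: apply delay, jitter and loss to outbound traffic with tc/netem.
# Interface name and impairment values are assumptions; requires root privileges.
import subprocess

def apply_impairments(iface="eth0", delay_ms=40, jitter_ms=5, loss_pct=0.1):
    """Shape `iface` to look like a lossy WAN path."""
    subprocess.run([
        "tc", "qdisc", "add", "dev", iface, "root", "netem",
        "delay", f"{delay_ms}ms", f"{jitter_ms}ms",
        "loss", f"{loss_pct}%",
    ], check=True)

def clear_impairments(iface="eth0"):
    """Remove the netem qdisc and return the interface to normal."""
    subprocess.run(["tc", "qdisc", "del", "dev", iface, "root"], check=True)

if __name__ == "__main__":
    apply_impairments()        # run the application test, then...
    # clear_impairments()      # ...restore the link when finished
```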

Since there is no point in paying for faster links if your systems can't use the full line rate, taking the time to test prior to deployment can help avoid thousands or even millions of dollars in unnecessary bandwidth investment, as well as application performance degradation.
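One straightforward pre-deployment check, assuming iperf3 is installed and a server is listening on the far end (the host name, stream count and duration below are placeholders), is to measure what the end systems can actually sustain:

```python
# Quick line-rate check with iperf3 before buying faster links.
# Host name, stream count and duration are placeholders; run `iperf3 -s` remotely.
import json, subprocess

result = subprocess.run(
    ["iperf3", "-c", "testbed-server.example", "-P", "8", "-t", "30", "--json"],
    capture_output=True, text=True, check=True,
)
achieved_gbps = json.loads(result.stdout)["end"]["sum_received"]["bits_per_second"] / 1e9
print(f"End systems sustained ~{achieved_gbps:.1f} Gb/s")
```

If the achieved rate falls well short of the planned link speed, the bottleneck is in the servers or storage rather than the network, and a faster link alone won't fix it.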

To Infinity and Beyond

As networking evolves into a more distributed, software-defined environment to better suit the data we rely on in our day-to-day lives, data requirements are only going to keep skyrocketing. With data center traffic straining the capacity of available links, faster Ethernet provides the scalability and efficiency needed to interconnect the networks of today and tomorrow.

Neal Roche is chief executive officer of Apposite Technologies.