Is it Time to Consolidate Your Data Center Hardware?

Over the last few years, data centers have been expanding at an absolutely frantic rate. Global data production is expected to hit 175 zettabytes by 2025. All of that data has got to go somewhere, and it’s probably going to end up in a data center. Accordingly, both enterprises and cloud providers have been pouring money into data center construction – Google alone increased its data center construction budget by $2 billion USD between 2017 and 2018.


If you work at a company with any kind of scale, there’s a good chance that you’re either planning to expand your private cloud or migrate it to larger premises. Before you do, consider that you may be better served by doing the opposite: consolidating.


Why It’s Time to Look at Data Center Consolidation

Research shows that across the world, 30% of all servers sit idle. Under the study’s strict definition, an idle server is powered on but has not processed any workloads for at least six months. This research is admittedly from 2015, which makes it ancient history in the computing world, but it’s a safe bet that the problem persists. Here’s why:


  • It’s Easy to Scale Up – Servers are cheap, and cloud implementations are cheaper. Server prices have dropped in part due to an influx of white box servers from manufacturers such as Inspur and Supermicro. Cloud vendors in particular are eager to use these servers, which helps keep cloud prices low.


  • There are More Obvious Penalties for Under-capacity – Everyone is terrified of a network outage, and for good reason, with outage costs approaching $5600 USD per minute. An outage caused by under-capacity would be expensive, but what does it cost to keep servers around for a usage spike that may never come?


  • Hidden Idleness – Even if your servers and instances aren’t totally idle, they may still be underutilized. For instance, developers often spin up massive environments for testing. The developers only work eight to ten hours a day, but the testing environments stay up 24 hours a day – and they cost money during that entire time.
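
That last hidden cost is easy to quantify. Here’s a rough sketch comparing an always-on test environment to one that runs only during working hours; the hourly rate and schedule below are purely illustrative assumptions, not figures from any study:

```python
# Rough cost sketch: always-on vs. business-hours test environment.
# All rates below are hypothetical placeholders for illustration.

HOURLY_COST = 2.50          # assumed cost per hour to run the environment
WORK_HOURS_PER_DAY = 10     # developers use it ~8-10 hours a day
DAYS_PER_MONTH = 30

always_on = HOURLY_COST * 24 * DAYS_PER_MONTH
work_hours_only = HOURLY_COST * WORK_HOURS_PER_DAY * DAYS_PER_MONTH

print(f"Always on:       ${always_on:,.2f}/month")
print(f"Work hours only: ${work_hours_only:,.2f}/month")
print(f"Wasted:          ${always_on - work_hours_only:,.2f}/month")
```

Even at a modest hourly rate, more than half of the monthly spend in this sketch goes to hours when nobody is using the environment.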


In short, it is astonishingly cheap – at least relative to previous computing eras – to purchase a large pile of servers, and it is the work of moments to get them set up and provisioned. Even if you don’t do anything with these servers, you may reason, they’re useful to have around in case of a capacity spike. So what are the drawbacks of this surplus equipment?


Overcapacity Today Leads to Technical Debt Tomorrow

Here’s the obvious answer to the question above: when it comes to capacity, a relatively small up-front expense can have a long tail of costs. Some of these costs are obvious, and some of them are less so.


First, there are the costs of keeping your servers on. There’s a monthly electricity cost, a cooling cost, and a cost per square foot if you happen to be renting or leasing space. There’s also a cost for software licenses, even if that software isn’t running any workloads. These costs are far from trivial – based on electricity alone, running data centers in the US will cost an aggregate $13 billion USD by 2020 and consume 50 power plants’ worth of energy.
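
To see how idle capacity alone translates into an electric bill, here’s a back-of-the-envelope sketch; the per-server wattage and price per kWh are assumed illustrative values, not figures from the study above:

```python
# Back-of-the-envelope electricity cost of idle servers.
# Wattage and rate are assumed illustrative values.

IDLE_WATTS = 200            # assumed idle power draw per server, in watts
RATE_PER_KWH = 0.12         # assumed electricity price, USD per kWh
HOURS_PER_YEAR = 24 * 365

def annual_idle_cost(num_servers: int) -> float:
    """Yearly electricity cost of powering servers that do no work."""
    kwh = num_servers * IDLE_WATTS / 1000 * HOURS_PER_YEAR
    return kwh * RATE_PER_KWH

# Example: 100 idle servers drawing power around the clock.
print(f"${annual_idle_cost(100):,.2f} per year")
```

And that’s before cooling, floor space, and licensing are added on top.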


Second, there’s maintenance to consider. Because the servers are running, parts will wear out: their power supplies and hard drives will eventually fail. The software they’re running will go out of date and need to be patched or upgraded. At the very least, they’ll need to be dusted occasionally. You need to pay people to do all of this, and time spent on maintenance detracts from more significant projects.


Lastly, there’s the question of what happens when you finally need to use those idle servers. Hopefully your documentation is up to date, so you’re at least aware you have them – by definition, they’ve been sitting idle for at least six months, and you may have bought them years ago. That means their raw specifications are out of date: their memory, CPU, and storage may no longer be up to the workloads you finally need to run. After waiting years for their moment in the spotlight, your idle servers will probably stumble.


Fix Your Idle Server Problem with Device42

With Device42, you get a transparent view into the usage patterns of your servers. Not only will you be able to see which of your servers are underutilized, you’ll be able to go into your data center and physically identify the servers that aren’t being used. In addition, our dependency maps will help you understand why those servers aren’t being utilized, and our lifecycle services will help you decommission them if you decide you no longer need them.


With Device42, you’ll be able to scale your data center intelligently, replacing racks of surplus servers with compact, high-powered machines. For more information on how our tools let you take data center optimization to the next level, contact Device42 today!