Making data center operations energy-efficient and scalable is a major focus for operators today. Operators know that as much as 50% of the energy consumed by data center technology is wasted on idle servers and other underutilized systems. Their efforts to cut this waste are driven by two goals: reducing costs and building a more sustainable business.
This article explores:
- Operational current state: How much energy global data centers consume, the carbon emissions they produce, and the metrics they use to evaluate energy efficiency.
- Data center energy efficiency strategy: Key components include optimizing hardware, virtualizing systems, adopting energy-aware software designs, and integrating renewable energy to decrease energy consumption.
- Scaling energy-efficient data centers: Scalability is becoming more critical as data center teams often manage global or regional operations and grapple with rising rack density due to artificial intelligence (AI) and other workloads.
- Creating scalability: Teams can use modular data centers, edge computing, automation, and AI to deploy new capacity when and where needed and optimize operations with standardized processes.
- Rising to meet future trends: Why data center teams are using advanced technology to improve energy efficiency now, as regulations increase worldwide.
This article will help data center, facilities, and power teams create a strategic plan and use multiple levers to improve energy efficiency and sustainability.
How Data Center Teams Measure Energy Efficiency
Data center energy efficiency reflects how well operators deliver computing services while minimizing energy consumption and waste. Data center operators use two metrics to measure energy efficiency and compare results across facilities, campuses, and competitors.
Power usage effectiveness (PUE) is the industry's standard energy-efficiency metric. Operators determine a data center’s PUE by dividing the total amount of power entering the facility by the power used to run IT equipment. Leading hyperscalers and colocation firms use advanced technology and energy management best practices to push their scores as close to the ideal of 1.0 as possible; a more typical score is 1.55. In addition, operators may track data center infrastructure efficiency (DCiE), the inverse of PUE, calculated by dividing IT equipment power by total facility power.
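As a quick illustration, both metrics can be computed directly from two power readings (a minimal sketch; the kW figures below are made up for the example):

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT power (ideal = 1.0)."""
    return total_facility_kw / it_equipment_kw

def dcie(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Data center infrastructure efficiency: the inverse of PUE, as a percentage."""
    return it_equipment_kw / total_facility_kw * 100

# A facility drawing 1,550 kW in total to deliver 1,000 kW of IT load:
print(pue(1550, 1000))              # 1.55 -- the "typical" score cited above
print(round(dcie(1550, 1000), 1))   # 64.5 -- % of power reaching IT equipment
```

The closer PUE gets to 1.0, the more of every incoming kilowatt reaches revenue-producing IT equipment rather than cooling and power-conversion overhead.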
Large operators need energy-efficient practices to be scalable. So, they use best practices that are repeatable across facilities and regions, as well as energy management systems (EMSs) that provide a lens into energy operations. With EMSs, teams can view global network performance and drill down to facilities, types of devices, and individual devices. With a wealth of data, teams can identify opportunities to improve energy efficiency across individual sites, multiple facilities, regions, and more.
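That drill-down can be sketched as a simple roll-up over meter readings. The regions, site names, and kWh values below are hypothetical, assumed only for illustration:

```python
from collections import defaultdict

# Hypothetical EMS meter readings: (region, site, device_type, device_id, kWh)
readings = [
    ("emea", "fra-01", "server", "srv-100", 12.4),
    ("emea", "fra-01", "crah",   "crh-001",  6.1),
    ("emea", "dub-02", "server", "srv-220", 11.9),
    ("apac", "sin-01", "server", "srv-310", 13.7),
]

def rollup(readings, *keys):
    """Aggregate kWh by any prefix of (region, site, device_type)."""
    fields = {"region": 0, "site": 1, "device_type": 2}
    totals = defaultdict(float)
    for row in readings:
        totals[tuple(row[fields[k]] for k in keys)] += row[4]
    return dict(totals)

print(rollup(readings, "region"))           # global view: energy by region
print(rollup(readings, "region", "site"))   # drill down to each facility
```

The same pattern scales from a global view down to device types and individual devices by adding keys to the roll-up.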
So, why is energy efficiency so crucial in the digital age? The world runs on technology, and owners are deploying new data center capacity worldwide to meet demand. However, powering and cooling these facilities takes significant energy and creates environmental impacts. Regulators around the world are developing or passing new rules that require data centers to become more energy-efficient. In addition, energy costs are often volatile due to geopolitical and other developments, increasing OpEx spending. As a result, data center owners and operators want to become more energy-efficient to future-proof their business operations.
The Current State of Data Center Energy Consumption
The world is transitioning from powering industry with fossil fuels to using hybrid energy sources, including wind, solar, and other forms of renewable energy. Data centers and transmission networks currently represent 1.0% to 1.5% of global electricity use, excluding cryptocurrency mining. By comparison, the industrial sector accounted for 37% of global energy use in 2022.
In 2022, data centers used 240 to 340 terawatt-hours (TWh) of electricity. However, energy use is growing by leaps and bounds due to enterprise interest in discriminative and generative AI and other processing-intensive workloads. Data center energy consumption is slated to rise to 2,967 TWh by 2030.
Global data centers contribute about 1% of energy-related greenhouse gas emissions, and that share is likely rising given businesses’ fast pace of digitization. To align with the Net Zero Emissions by 2050 Scenario, operators must cut emissions by 50%.
Many operators are moving ahead of expected regulations. They are greening their operations by adopting such strategies as optimizing hardware, virtualizing systems, adopting energy-aware software design, and integrating more renewable energy into their energy mix. They also seek to design for scalability, bringing greater precision and control to global energy operations. Strategies include adopting modular data centers, using edge computing to process key workloads, and adopting automation and other advanced technology to optimize resource use and management practices.
While reducing energy consumption and making other sustainability improvements will require new CapEx investments, many operators view it as mandatory for sustaining business operations long-term. In addition, new solutions, such as edge computing and modular data centers, offer data centers increased flexibility in how they deploy and use computing capacity.
Key Components of Data Center Energy Efficiency
According to the Uptime Institute, most operators have decreased PUE by implementing hot and cold air containment, optimizing cooling controls, and increasing air supply temperatures at their data centers. Having captured these gains, they are developing more far-reaching strategies to improve energy efficiency. These strategies typically include:
- Optimizing Hardware and Cooling
Servers typically account for more than half of all data center energy consumption. As a result, data center operators who adopt energy-efficient servers can significantly reduce consumption. They can accomplish this goal by replacing aging, inefficient servers with the latest technology; increasing processor utilization from low levels; running larger workloads; and improving power management. An Uptime Institute study found that leaping ahead two server generations doubled energy efficiency for data centers using AMD or Intel technology.
Many data center teams have maximized the potential of air cooling systems. As a result, they’re increasingly looking at hybrid air-liquid cooling solutions for hot-running equipment used for AI and other processing-intensive workloads that can’t be cooled efficiently by air alone. Water and other liquids are 50 to 1,000 times more efficient at removing heat than air. Because of liquid cooling’s higher cost and complexity, teams weigh use cases carefully, considering workload processing requirements, white space availability, existing infrastructure, and budget, before making cooling system decisions.
- Harnessing Virtualization and Energy-Aware Software Design
Teams can also improve energy efficiency by deploying virtual machines and containers and using energy-aware design to develop software.
A common problem that data center operators encounter is underutilized technology. Most server workloads run at around half of total machine capacity, meaning significant power and cooling are spent running partly idle systems. In addition to turning off idle machines, data center teams can virtualize infrastructure, including operating systems, servers, storage, and networks, or deploy containers. Virtualization software simulates hardware functionality, allowing teams to run multiple operating systems and virtual servers on a single physical machine. Cooling typically accounts for 50% of power usage, while the IT load consumes 37% of all power used in data centers, so reducing the number of devices requiring power and cooling can significantly reduce energy use.
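The consolidation payoff lends itself to a back-of-envelope estimate. The sketch below assumes CPU utilization is the binding resource and uses an illustrative 80% target utilization on consolidated hosts (both assumptions, not fixed industry values):

```python
import math

def hosts_after_consolidation(workloads_cpu_pct, target_util_pct=80):
    """Estimate how many physical hosts remain after consolidating
    lightly loaded servers onto fewer machines (CPU-only, illustrative)."""
    total_demand = sum(workloads_cpu_pct)
    return math.ceil(total_demand / target_util_pct)

# Ten servers each running at ~50% of capacity can be consolidated onto
# seven hosts driven to an 80% utilization target:
print(hosts_after_consolidation([50] * 10))  # 7
```

In practice memory, storage I/O, and redundancy requirements also constrain packing, so real consolidation ratios are more conservative, but the direction of the savings is the same.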
Containers provide further gains because they are lightweight, packaging only the code needed to perform application functions without additional dependencies. They require fewer racks and less energy for cooling and power while providing higher service quality and reducing costs and maintenance requirements.
Teams can notch further gains by using energy-aware software design when developing new applications. With this approach, teams seek to understand application energy consumption before building new systems, then use that information to make programming decisions that balance performance and energy consumption. Software energy optimizers, kept modular and separate from the rest of the software, can monitor ongoing performance and throttle energy consumption at runtime. Teams can also review and optimize existing code to reduce energy use. Together, these changes can reduce energy consumption by 30% to 90%.
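One hedged illustration of energy-aware design: estimate a function's energy cost from its CPU time and compare implementations before committing to one. The 65 W per-core figure is an assumption for illustration; real teams would calibrate against hardware counters (e.g., RAPL) or external meters:

```python
import time

AVG_CPU_WATTS = 65.0  # assumed per-core package power; calibrate per platform

def estimate_energy_joules(fn, *args):
    """Rough energy estimate: CPU seconds x assumed power draw.
    Illustrative only -- not a substitute for measured power data."""
    start = time.process_time()
    fn(*args)
    cpu_seconds = time.process_time() - start
    return cpu_seconds * AVG_CPU_WATTS

def naive_sum(n):
    total = 0
    for i in range(n):  # O(n) loop burns CPU time proportional to n
        total += i
    return total

def optimized_sum(n):
    return n * (n - 1) // 2  # closed-form: same answer, far less CPU time

print(estimate_energy_joules(naive_sum, 10_000_000))      # large estimate
print(estimate_energy_joules(optimized_sum, 10_000_000))  # near zero
```

The programming decision here, replacing a loop with a closed-form expression, is trivial, but the workflow of measuring energy cost before choosing an implementation is the essence of energy-aware design.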
- Integrating Renewable Energy into the Data Center Energy Mix
Data center owners and operators can also improve energy efficiency by increasing the use of renewable energy, such as solar and wind power. While solar and wind power are inexpensive and abundant, they suffer from intermittency. As a result, many data center owners are deploying microgrids to capture renewable energy and provide an always-on source of backup power. Battery energy storage systems (BESSs) capture renewable and other forms of energy, while integrated energy management systems (EMSs) enable teams to deploy energy to meet sustainability, cost, and other goals. Integrated BESS-EMS systems can also replace generators, which use “dirty” diesel fuel to provide a backup power supply.
Data center operators can also use power purchase agreements (PPAs) to increase renewable energy use at specific facilities. There are multiple ways to leverage PPAs. With onsite PPAs, a third party, such as a utility or developer, installs, owns, and operates an energy system on the data center property. Data centers receive a stable and potentially lower-cost form of electricity, while third parties capitalize on tax credits and the income the power sales provide.
Offsite PPAs include physical and virtual PPAs. With physical PPAs, data center owners agree to purchase the renewable energy output of an offsite facility, such as a wind farm or solar installation. With virtual PPAs, companies purchase renewable energy but don’t attribute it to a specific project.
Because of their flexibility and scalability, PPAs are used extensively by hyperscalers and other large data center companies as they race to achieve sustainability goals.
Leading hyperscalers like Amazon, Apple, Google, Meta, and Microsoft have all committed to transitioning to 100% renewable energy to achieve Net Zero goals. These five companies account for 45 gigawatts of corporate renewable energy purchases globally, or half of the worldwide market. As a result, they’re motivating other corporate leaders to incorporate more renewables to transition away from fossil fuels faster.
Addressing Scalability Challenges in Data Centers
Data center owners and operators often manage a network of global data centers located strategically near business demand and sources of abundant, lower-cost energy. As a result, data center teams need to be able to scale best practices and processes across multiple locations.
Data center teams are also mindful that the worldwide drive to digitization is increasing workload demands, causing energy consumption to grow. Some 39% of cloud, hosting, or SaaS providers; 36% of colocation or data center providers; and 33% of enterprise data center owners and operators report that rack densities are rising fast.
As companies accelerate their adoption of AI models, rack densities are sure to rise, creating hotter-running equipment that requires advanced power and cooling.
Teams can leverage flexible infrastructure, such as the latest-generation CPUs and GPUs, high-density racks, virtualization, hybrid air-liquid cooling, and other innovations, to meet customer demands for AI training and inference workloads and low-latency mission-critical industry applications.
Solutions for Scalable and Energy-Efficient Data Centers
Historically, data center owners have opened new capacity by siting and purchasing land, navigating permitting processes, and working with architects and contractors to build new facilities. This process can take 12 months or more, slowing business momentum and creating operational disruption before the new capacity comes online.
Now, data center owners have new options with prefabricated modular data centers (PFMs), sometimes called containerized data centers. PFMs provide building blocks of compute, integrated power and cooling, and remote monitoring capabilities. These modular data centers are predesigned and prebuilt offsite and then transported to their desired locations, where they are rapidly installed and commissioned. As a result, PFMs can be deployed much faster than stick-built facilities.
PFMs are a natural fit for meeting enterprises’ growing appetite for edge computing. Companies are standing up capacity close to business demand to power low-latency applications, such as smart manufacturing, telehealth, streaming media, and others.
PFMs help standardize edge computing deployments, ranging from a secured rack in a busy corridor to a retrofitted room to a stand-alone building. IT teams can flexibly deploy capacity in building blocks with integrated power and cooling to meet their demands. With standardized designs, it’s easy to scale deployments across data center campuses and regions and maintain systems.
Leveraging Automation and Artificial Intelligence for Resource Management
Data center teams already use remote monitoring and management platforms to oversee operations. Automation enables teams to provision and balance workloads, ensuring optimal performance. With AI, teams can predict and manage server workloads and adjust power and cooling based on demand. AI can also predict power outages and automatically switch from primary power sources, such as the grid, to always-on backup power supply, such as microgrids.
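As a toy sketch of demand-driven cooling, a controller might forecast the next IT load reading with a moving average and scale cooling output proportionally. The window size and the cooling-overhead ratio below are illustrative assumptions, and production systems use far more sophisticated models:

```python
from collections import deque

class CoolingController:
    """Toy predictive controller: forecast the next IT load with a moving
    average and set a proportional cooling setpoint (illustrative only)."""

    def __init__(self, window=5, cooling_overhead=0.4):
        self.history = deque(maxlen=window)        # recent IT load readings
        self.cooling_overhead = cooling_overhead   # assumed cooling kW per IT kW

    def observe(self, it_load_kw):
        self.history.append(it_load_kw)

    def cooling_setpoint_kw(self):
        if not self.history:
            return 0.0
        forecast = sum(self.history) / len(self.history)
        return forecast * self.cooling_overhead

ctrl = CoolingController()
for load in [800, 820, 810, 900, 950]:  # rising AI workload, in kW
    ctrl.observe(load)
print(round(ctrl.cooling_setpoint_kw(), 1))  # 342.4
```

Even this simple loop captures the core idea: cooling tracks predicted demand rather than running flat-out, which is where AI-driven controllers find their savings.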
Teams can also use digital twin technology with AI and machine learning platforms to model processes and make planned improvements or real-time adjustments, optimizing energy flows and consumption.
Keeping Pace with Market Trends and Regulatory Developments
National and regional governments are considering or passing data center energy efficiency regulations. One example is the European Union’s Energy Efficiency Directive, which requires covered data centers to create an energy management plan, conduct an energy audit, and report operational data.
Rather than waiting for regulations to take force, data center owners and operators are improving energy-related processes and practices now. They’re leveraging cloud and edge computing, virtualization, automation and AI, digital twins, next-generation energy systems, and other innovations to accomplish key goals. In the process, data center teams reduce CapEx by right-sizing hardware purchases and deliver cost savings through lower power and water bills.
As they make these changes, data center teams benefit by using IT asset management solutions to auto-discover and track all assets to reduce licensing costs and maximize benefits, such as warranties. They can also leverage configuration management databases (CMDBs) to auto-discover all assets, identify dependencies, and manage changes and configurations to hardware, software, virtualized, and cloud assets. As just one example, CMDBs can help identify underperforming assets that can either be better utilized or retired.
Future-Proof Data Center Operations with Energy-Efficient, Scalable Processes
From these examples, it’s clear that data center teams have many options available as they seek to improve energy efficiency and scalability across their facilities. These strategies and solutions can deliver real value to the business, freeing up savings that teams can use for other initiatives, while reducing companies’ carbon footprint.