CONTRIBUTOR
Director of Technical Marketing,
MemVerge

Vast crypto mining data centers, consuming electricity at the rate of a small country and discharging heat into the air, are now common pop culture references, resulting in a push to reduce crypto mining’s power requirements. For example, in 2022 Ethereum officially switched to a Proof-of-Stake (PoS) consensus mechanism as a more energy-efficient way (energy consumption decreased by about 99 percent) to validate transactions and add new blocks to the blockchain.

Source: Ethereum’s energy usage will soon decrease by ~99.95%

The power consumption of conventional data centers worldwide is comparable to that of crypto mining data centers, so the crypto mining image tends to envelop all data centers. Yet the two scenarios are very different. It is not possible to change a single algorithm to reduce the power consumption of a data center supporting a wide array of applications. In most cases, the biggest reduction in the carbon footprint associated with general purpose computing (web site hosting, database transactions, machine learning, and so on) can be achieved by moving from on-premises to the cloud.

The Value of Shared Resources 

The risk of trivializing the complexity of carbon footprint analysis notwithstanding, consider a simple analogy. Suppose your grocery store only sells milk in one-gallon jugs. Each week, you buy a gallon of milk. You consume half a gallon and by the end of the week, the remaining half gallon has soured, so you throw it out. Carbon footprint generated: One gallon of milk and one plastic jug.

It turns out your neighbor has the same pattern. You decide to buy one gallon of milk and share it. Carbon footprint generated per person: half a gallon of milk and half a plastic jug. That is a 50% reduction. The analogy can be extended further: suppose you switch from dairy to soy milk, whose production releases less carbon. That's like switching to an energy source with lower carbon intensity.

Reducing Embedded Emissions 

This is the cloud computing model. Hyperscale Cloud Service Providers (CSPs) have invested vast sums in optimizing power and heat distribution in their data centers, so that roughly 90% of the electricity drawn is used to power IT equipment directly. By using electricity from renewable energy sources and purchasing carbon offset credits, the hyperscale CSPs (Google, AWS, Azure) are either carbon neutral for their electrical power today or will get there soon. Furthermore, much of the waste heat is captured and reused for heating, the largest energy end-use, to warm homes and buildings. So even though not all the power consumed in every data center comes from renewable sources, the focus is shifting from operational emissions to other sources of carbon emissions.

CSPs purchase servers, storage arrays and other IT equipment to stock their data centers. To the CSP, these are sources of embedded emissions (or Scope 3 emissions), that is, the manufacture and transport of this equipment has an associated carbon footprint. Utilization is the key metric in apportioning the embedded emissions to end users – just like the volume of milk consumed compared to the size of the milk jug.

A typical on-premises data center may achieve server utilization of around 20%. To consume all the compute cycles that a single server running at 100% could provide, the organization would need to purchase five servers. In contrast, a CSP achieves utilization of at least 50%, so it would need to purchase only two servers. That is a 60% reduction in embedded emissions, although a server's power consumption does rise as its utilization increases.

Source: Cloud carbon footprint: Do Amazon, Microsoft and Google have their head in the clouds?
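The arithmetic above can be sketched in a few lines of Python. The utilization figures are the illustrative ones from the text, not measured values:

```python
# Illustrative sketch: how average server utilization drives the number of
# servers needed to supply a fixed amount of compute, and hence the
# embedded (Scope 3) emissions attributable to that compute.

def servers_needed(demand_in_full_servers: float, utilization: float) -> float:
    """Servers required to deliver a fixed compute demand at a given utilization."""
    return demand_in_full_servers / utilization

on_prem = servers_needed(1.0, 0.20)  # 20% utilization -> 5 servers
cloud = servers_needed(1.0, 0.50)    # 50% utilization -> 2 servers

reduction = 1 - cloud / on_prem      # 1 - 2/5 = 0.6
print(f"{on_prem:.0f} servers on-prem vs {cloud:.0f} in the cloud "
      f"-> {reduction:.0%} fewer embedded emissions")
```

The same ratio explains the milk analogy: halving the jugs purchased halves the footprint per person.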

 

Managing VM Resources to Reduce Carbon Footprint 

A modern data center server has, aside from power supplies, network interfaces, and storage devices, multiple CPU sockets and multiple memory slots per socket. Each CPU comprises physical processing units (cores). For example, Intel's third-generation Xeon Scalable processor, codenamed Ice Lake, contains up to 40 cores. The enabling technology for sharing server resources among end users is the hypervisor, which virtualizes the physical components into (ephemeral) virtual machines (VMs) of varying capacities, measured in cores and memory.

The key to reducing carbon footprint (including embedded emissions if we assume operational emissions [ref1, ref2] are zero or close to zero in hyperscale CSPs) is managing VM resources to maximize utilization. That is, size the VM for the job it needs to execute, and release the resources when they are no longer needed so that other users have access to the resources.
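As a minimal sketch of what rightsizing means in practice (the menu of VM sizes and the demand trace below are hypothetical), the idea is to repeatedly allocate the smallest VM that covers the workload's current demand and release the rest:

```python
# Hypothetical sketch of VM rightsizing: at each step, allocate the smallest
# available VM size (in cores) that covers the workload's current demand,
# instead of holding a fixed-size VM provisioned for the peak.

VM_SIZES = [2, 4, 8, 16, 32, 40]  # hypothetical menu of VM sizes (cores)

def rightsize(demand_cores: int) -> int:
    """Smallest VM size that covers the current demand."""
    for size in VM_SIZES:
        if size >= demand_cores:
            return size
    return VM_SIZES[-1]

# A hypothetical demand trace sampled over a job's run (cores needed per step).
demand = [3, 7, 12, 6, 2]

rightsized = [rightsize(d) for d in demand]      # [4, 8, 16, 8, 2]
static = [rightsize(max(demand))] * len(demand)  # a fixed VM sized for the peak

saved = 1 - sum(rightsized) / sum(static)
print(f"core-steps held with rightsizing: {sum(rightsized)} "
      f"vs static sizing: {sum(static)} ({saved:.0%} fewer cores locked up)")
```

The cores released at each step go back into the shared pool, which is what lets the CSP run fewer servers at higher utilization.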

Sophisticated checkpoint/restore mechanisms can optimize VM instances even further by migrating jobs in real time as resource demands change during execution. If resources are idle but locked up (not available to other users), the CSP is forced to deploy additional servers and incur additional carbon footprint.

Suppose, for example, that server utilization increases from 50% to 65% because of intelligent VM management. This would decrease the carbon footprint (associated with the entire server fleet) by 23%, since only 50/65, or about 77%, as many servers are needed for the same work. For the end user who is tracking the Scope 3 emissions attributable to running a workload on a cloud server, the effect of continually rightsizing the VM as the workload executes can be significant. For example, if without rightsizing an end user consumes 20% of a server's resources, and with rightsizing the end user consumes 10% of the server's resources (averaged over the run), the embedded emissions attributable to the end user are reduced by 50%.
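Both reductions quoted above fall out of the same ratio. A quick check, using the text's figures:

```python
def emissions_reduction(util_before: float, util_after: float) -> float:
    """Fractional drop in servers needed (and embedded emissions) for a fixed
    amount of work when average utilization rises from util_before to util_after."""
    return 1 - util_before / util_after

# Fleet utilization rising from 50% to 65% -> ~23% fewer servers needed.
print(f"{emissions_reduction(0.50, 0.65):.0%}")  # 23%

def share_reduction(share_before: float, share_after: float) -> float:
    """Fractional drop in the embedded emissions attributed to an end user
    whose average share of a server falls from share_before to share_after."""
    return 1 - share_after / share_before

# Rightsizing cuts the user's average share from 20% to 10% -> 50% reduction.
print(f"{share_reduction(0.20, 0.10):.0%}")  # 50%
```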

Modern hyperscale data centers are engineering marvels that are key components in reducing global carbon emissions, but underneath is an old-fashioned idea: Take only what you need and if you have extra, share it with your neighbors. It’ll save you money as well.