The Unpredictability of Data Center Energy Consumption
The modern data center is a massive energy consumer: it demands vast amounts of electricity and often relies on water-intensive cooling, so that at times of peak electricity demand it can strain both the power grid and local water infrastructure.
A data center's operations are not fixed; components like computing, cooling, and energy storage (batteries) can be adjusted. This adjustability allows a data center to act as a flexible load that can interact with the power grid, a concept known as Demand Response (DR). By participating in DR, a data center can shift its energy consumption to off-peak hours, thereby saving money and enhancing grid reliability.
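The off-peak shifting idea can be sketched in a few lines of Python. The hourly prices, peak window, job size, and deadline below are invented for illustration; they are not figures from the research:

```python
# Hypothetical time-of-use (TOU) prices in $/kWh: peak rate 9:00-21:00, off-peak otherwise.
prices = [0.30 if 9 <= h < 21 else 0.10 for h in range(24)]

def cheapest_hours(energy_kwh_per_hour, hours_needed, deadline_hour):
    """Pick the cheapest hours to run a deferrable job that must finish by `deadline_hour`."""
    candidates = sorted(range(deadline_hour), key=lambda h: prices[h])
    chosen = sorted(candidates[:hours_needed])
    cost = sum(prices[h] * energy_kwh_per_hour for h in chosen)
    return chosen, cost

# A deferrable 4-hour job drawing 500 kWh per hour, due by the end of the day:
hours, cost = cheapest_hours(500, 4, 24)
# Running the same job during peak hours would cost 4 * 500 * 0.30 = $600;
# shifting it to off-peak hours cuts that to 4 * 500 * 0.10 = $200.
```

The same greedy idea generalizes: as long as a job's deadline leaves slack, its energy draw can be moved into the cheapest available slots.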
The Problem: A Scheduling Challenge
The fundamental problem addressed in this research is how to create an optimal demand-response-based framework that minimizes the total operational costs of a data center while completing all of its computational workloads. This requires balancing two main variables:
- Workload Scheduling: When should computing tasks be processed so that every deadline is met?
- Energy Management: How should power be drawn from different sources—the main grid, local renewables (like solar PV), and battery storage—at any given time?
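The energy-management side can be sketched as a simple per-hour dispatch rule: serve load from solar first (zero marginal cost), then from the battery, and only buy the remainder from the grid at the current TOU price. This greedy ordering is an illustrative assumption, not the paper's actual optimization, and all numbers are hypothetical:

```python
def dispatch(load, solar, price, battery_kwh, battery_cap):
    """Serve one hour's load (kWh); return (grid_cost, new_battery_state)."""
    remaining = max(load - solar, 0.0)                     # solar offsets load first
    surplus = max(solar - load, 0.0)
    battery_kwh = min(battery_kwh + surplus, battery_cap)  # spare solar charges the battery
    from_battery = min(remaining, battery_kwh)             # discharge before buying from grid
    battery_kwh -= from_battery
    remaining -= from_battery
    return remaining * price, battery_kwh

total_cost, soc = 0.0, 0.0
hours = [
    # (load_kwh, solar_kwh, price_$per_kwh) -- invented figures
    (400, 600, 0.10),   # sunny off-peak hour: solar surplus charges the battery
    (500, 100, 0.30),   # peak hour: battery discharges to avoid expensive grid power
]
for load, solar, price in hours:
    cost, soc = dispatch(load, solar, price, soc, battery_cap=300)
    total_cost += cost
```

A full framework would optimize both decisions jointly over the whole horizon rather than hour by hour, but the sketch shows why the battery and solar matter: energy stored in a cheap hour displaces grid purchases in an expensive one.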
The researchers aimed to develop a model that accounts for dynamic electricity costs (especially Time-of-Use (TOU) pricing), the operating costs of other assets such as batteries, and the distinct delays and energy use associated with cooling and computing.

