Optimal Workload Scheduling and Energy Management of AI Data Centers with Demand Response

Lehigh Professors Shalinee Kishore, Alberto J. Lamadrid, and Javad Khazaei, and Ph.D. Candidate Morteza Ghorashi Develop MILP Framework for 20% Data Center Cost Reduction via Demand Response

The Unpredictability of Data Center Energy Consumption

The modern data center is a massive energy consumer: it demands vast amounts of electricity and often relies on water-intensive cooling methods that, when combined with peak electricity demand, can significantly strain both the power grid and local water infrastructure.

A data center's operations are not fixed; components like computing, cooling, and energy storage (batteries) can be adjusted. This adjustability allows a data center to act as a flexible load that can interact with the power grid, a concept known as Demand Response (DR). By participating in DR, a data center can shift its energy consumption to off-peak hours, thereby saving money and enhancing grid reliability.
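As a back-of-envelope illustration of the idea (using made-up prices, not figures from the study), shifting a deferrable block of compute from peak to off-peak hours directly cuts its energy cost:

```python
# Back-of-envelope illustration with hypothetical TOU prices.
peak_price, offpeak_price = 0.25, 0.10   # $/kWh, assumed
deferrable_kwh = 10_000                  # flexible compute energy, assumed

cost_peak = deferrable_kwh * peak_price
cost_offpeak = deferrable_kwh * offpeak_price
print(f"peak: ${cost_peak:,.0f}  off-peak: ${cost_offpeak:,.0f}  "
      f"saved: ${cost_peak - cost_offpeak:,.0f}")
```

In a real facility the same logic plays out across thousands of jobs and every hour of the day, which is what turns it into a scheduling problem.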

The Problem: A Scheduling Challenge

The fundamental problem addressed in this research is how to create an optimal demand-response-based framework that minimizes the total operational costs of a data center while meeting all its computational workloads. This requires balancing two main variables:

  1. Workload Scheduling: When should computing tasks be processed so that every job still meets its deadline?
  2. Energy Management: How should power be drawn from different sources—the main grid, local renewables (like solar PV), and battery storage—at any given time?

The researchers aimed to develop a model that accounts for the dynamic cost of electricity (especially under Time-of-Use (TOU) pricing), the cost of operating other assets, and the distinct delays and energy use associated with cooling and computing.
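To make the shape of such a formulation concrete, here is a minimal scheduling sketch in Python using the PuLP library. Every number in it (prices, PV profile, job sizes, deadlines, capacities) is a hypothetical placeholder, and the model itself is a simplification, not the researchers' actual formulation:

```python
# Minimal TOU-driven scheduling MILP sketch (hypothetical data throughout).
import pulp

T = 24
price = [0.10] * 8 + [0.25] * 12 + [0.10] * 4             # assumed TOU $/kWh
pv = [0] * 6 + [2, 4, 6, 7, 8, 8, 7, 6, 4, 2] + [0] * 8   # assumed PV output, MW
jobs = {"train_A": {"energy": 30, "deadline": 20},        # MWh required, hour due
        "train_B": {"energy": 15, "deadline": 24}}

m = pulp.LpProblem("dc_schedule", pulp.LpMinimize)
grid = pulp.LpVariable.dicts("grid_mw", range(T), lowBound=0, upBound=20)
on = pulp.LpVariable.dicts("servers_on", range(T), cat="Binary")
run = {(j, t): pulp.LpVariable(f"run_{j}_{t}", lowBound=0)
       for j in jobs for t in range(T)}                   # MW of compute for job j

# Objective: minimize the day's grid energy cost under TOU prices.
m += pulp.lpSum(price[t] * grid[t] for t in range(T))

for j, spec in jobs.items():
    # Every job must receive its full energy before its deadline...
    m += pulp.lpSum(run[j, t] for t in range(spec["deadline"])) == spec["energy"]
    for t in range(spec["deadline"], T):
        m += run[j, t] == 0                               # ...and none after it.

for t in range(T):
    # Compute capacity (10 MW) is only available when servers are on,
    # and being on costs 2 MW of idle overhead.
    m += pulp.lpSum(run[j, t] for j in jobs) <= 10 * on[t]
    # Hourly power balance: grid plus PV covers compute plus idle overhead
    # (battery and cooling terms are omitted in this sketch).
    m += grid[t] + pv[t] >= pulp.lpSum(run[j, t] for j in jobs) + 2 * on[t]

m.solve(pulp.PULP_CBC_CMD(msg=False))
print(f"daily grid cost: {pulp.value(m.objective):.1f} k$")
```

Even this toy model shows the expected behavior: work migrates toward cheap and PV-rich hours, subject to each job's deadline.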

The Methodology: A Comprehensive System Model

The team developed a mixed-integer linear programming (MILP) framework that models the data center's energy system and computing workload (with detailed power-flow modeling planned as a next step).

System Components:

The model incorporates all major elements of the energy ecosystem:

  • The Main Grid: The primary power source with dynamic pricing.
  • Renewable Energy: Local solar photovoltaic (PV)-integrated microgrids for self-generation.
  • Energy Storage: Battery energy storage system (BESS) units that can charge or discharge.
  • Data Center Facility: Encompassing the servers (computing load) and the chillers (cooling load).
  • Power and DR Constraints: The model includes equations that enforce the power balance (what's generated must equal what's consumed), limits on how the battery charges and discharges, and bounds on how fast the servers run their computations (a battery-dispatch sketch follows this list).
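To show how the storage piece can be written down, below is a standalone mini-model of battery dispatch under TOU prices, again with assumed ratings and a flat facility load rather than values from the study. The state-of-charge bookkeeping is what couples the hours of the day together:

```python
# Standalone BESS dispatch sketch under TOU prices (hypothetical data).
import pulp

T = 24
price = [0.10] * 8 + [0.25] * 12 + [0.10] * 4   # assumed TOU $/kWh
E_MAX, P_MAX, ETA = 40.0, 10.0, 0.95            # MWh, MW, one-way efficiency
load = [8.0] * T                                # assumed flat facility load, MW

m = pulp.LpProblem("bess_dispatch", pulp.LpMinimize)
ch = pulp.LpVariable.dicts("charge_mw", range(T), lowBound=0, upBound=P_MAX)
dis = pulp.LpVariable.dicts("discharge_mw", range(T), lowBound=0, upBound=P_MAX)
soc = pulp.LpVariable.dicts("soc_mwh", range(T + 1), lowBound=0, upBound=E_MAX)

# Grid import covers the load plus charging, net of battery discharge.
grid = {t: load[t] + ch[t] - dis[t] for t in range(T)}
m += pulp.lpSum(price[t] * grid[t] for t in range(T))   # minimize energy cost

m += soc[0] == 0.5 * E_MAX          # assumed initial state of charge
m += soc[T] == soc[0]               # end the day where it started
for t in range(T):
    # State-of-charge bookkeeping: charging adds energy with losses,
    # discharging draws it down.
    m += soc[t + 1] == soc[t] + ETA * ch[t] - dis[t] / ETA
    m += grid[t] >= 0               # no export back to the grid in this sketch

m.solve(pulp.PULP_CBC_CMD(msg=False))
print(f"daily energy cost: {pulp.value(m.objective):.1f} k$")
```

The optimal dispatch charges during cheap hours and discharges during expensive ones, which is exactly the arbitrage behavior DR participation rewards.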

Optimal Scheduling:

The model's goal is to minimize the total daily cost while ensuring all AI workload deadlines are met. Solving the resulting optimization problem, the framework determines:

  1. The optimal workload schedule (when to run tasks).
  2. The optimal energy management plan (when to use grid, solar, or battery power).
  3. The optimal frequency state (how fast to run the servers) for maximum efficiency (see the sketch after this list).
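On the third point, discrete frequency states fit naturally into a MILP through one binary variable per hour and state. The sketch below uses a hypothetical three-state table; the actual model's states, power curves, and coupling to the rest of the system are certainly richer:

```python
# Sketch of discrete server frequency states in a MILP (hypothetical data).
import pulp

T = 24
price = [0.10] * 8 + [0.25] * 12 + [0.10] * 4       # assumed TOU $/kWh
STATES = {"low": (0.6, 4.0), "mid": (0.8, 6.0),     # (relative throughput,
          "high": (1.0, 9.0)}                       #  power in MW) -- assumed

m = pulp.LpProblem("frequency_states", pulp.LpMinimize)
sel = {(s, t): pulp.LpVariable(f"sel_{s}_{t}", cat="Binary")
       for s in STATES for t in range(T)}

for t in range(T):
    # Exactly one frequency state is active in each hour.
    m += pulp.lpSum(sel[s, t] for s in STATES) == 1

# Hourly power and throughput become linear in the binaries, which is what
# lets frequency choices plug into a MILP power balance and deadline logic.
power = {t: pulp.lpSum(pw * sel[s, t] for s, (_, pw) in STATES.items())
         for t in range(T)}
work = {t: pulp.lpSum(th * sel[s, t] for s, (th, _) in STATES.items())
        for t in range(T)}

m += pulp.lpSum(work[t] for t in range(T)) >= 18    # daily throughput target
m += pulp.lpSum(price[t] * power[t] for t in range(T))  # minimize energy cost

m.solve(pulp.PULP_CBC_CMD(msg=False))
plan = {t: next(s for s in STATES if sel[s, t].value() > 0.5) for t in range(T)}
print(plan)   # expect higher frequency states to land in the cheaper hours
```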

Key Insights and Outlook: Proof of Value

The core value of this research was demonstrated through a case study in which the model was applied to an actual 24-hour workload and to two separate grid-connected data centers.

Early Findings

The results showed that, by using Demand Response, the data centers were able to achieve:

  • Strategic Power Use: The model intelligently schedules tasks to run during the cheapest (off-peak) hours and strategically uses the local solar PV and battery storage to avoid expensive peak-hour grid consumption.
  • Performance Scaling: The workload scheduling, battery dispatch, and server frequency states were effectively coordinated to maintain stable, efficient operation throughout the day.

What’s Next

This research provides a powerful foundation for future data center management. Building on this work, the researchers' next steps will involve solving the problem with more fine-grained and detailed cooling models, incorporating water systems and weather data into the optimization framework, assessing the economic aspects of the solutions, and evaluating the impact of external grid conditions (like reliability and network constraints) on the optimal solution.

Ultimately, this work offers a blueprint for data centers to move from being simple energy consumers to being intelligent, active participants in the power grid, dramatically reducing costs and enhancing overall grid stability.


This research was partially supported by a grant from GTI Energy.

This research was presented as part of the Innovating Energy and Water Solutions for Tomorrow's AI Data Centers Symposium hosted by the Center for Advancing Community Electrification Solutions (ACES) in October 2025.

Generative AI was used to organize this story, based on data and information captured in a research poster that was part of the ACES Symposium event. It was reviewed and edited by researchers and communications staff.