From Months to Minutes: How LBNL Accelerates Energy Policy Research#
The Challenge of Energy Policy Research#
When policymakers need answers about the future of America's energy grid, they turn to researchers like Umed Paliwal at Lawrence Berkeley National Laboratory (LBNL). As part of the Department of Energy's network of national laboratories, LBNL analyzes how policy decisions impact electricity markets, costs, and reliability. But with the rapid growth of renewable energy and increasing demands from AI, the computational challenges of this work have grown exponentially.
There are a lot of bureaucratic and regulatory pathways to building new electricity generation. The rate at which new generation comes online isn't matching demand, especially with AI driving consumption. Policymakers are really concerned about how we'll power these growing needs.
Umed Paliwal
Researcher, LBNL

Processing Petabytes: The Data Challenge of Modern Grid Planning#
The computational demands of energy policy research are staggering. Paliwal's team analyzes 300,000 sites across the United States, processing terabytes of satellite data from NASA and the European Space Agency. Each site requires detailed weather simulations at 15-minute intervals, spanning decades of historical data from 1970 to the present day.
We need very granular data, both spatially and temporally resolved, to accurately simulate how the grid will function five or ten years from now. We're working with 50-60 terabytes of data on Amazon S3, covering weather patterns at 5-kilometer resolution across the entire country.
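To put those figures in perspective, here is a quick back-of-envelope scale check, using only the numbers quoted above (approximate throughout):

```python
# Back-of-envelope scale check using the figures quoted above.
years = 55                    # roughly 1970 to the present day
steps_per_day = 24 * 4        # 15-minute intervals
time_steps = years * 365 * steps_per_day   # ~1.9 million steps per site
sites = 300_000

total_points = time_steps * sites
print(f"{total_points:.2e} site-timesteps per weather variable")  # ~5.78e11
```

Nearly 600 billion values per weather variable is far beyond what a single workstation can hold in memory.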
The analysis doesn't stop at weather data. When planning renewable energy expansion, Paliwal's team must consider dozens of land-use criteria: slope, forest proximity, environmental sensitivity, and more. This requires processing 10-meter resolution data across the entire United States - another massive computational challenge.
From Local Machines to HPC: The Search for Computing Power#
Like many researchers, Paliwal's journey to handle these computational demands evolved through several stages.
I started on a local computer - that didn't work out well. Even after spending $15,000 on a powerful PC, we hit memory constraints.
The team tried traditional High-Performance Computing (HPC) clusters, but the queuing systems and fixed resources became a source of constant frustration.
If I'm trying to do some computation, I want to start it now. I don't want to wait six hours for the queue to pick up my job and then get the result tomorrow. What if your code has an error? Then you go back, you wait another six hours. That's a really long lead time that makes debugging really difficult.
Large cloud instances seemed promising, but came with their own anxieties.
Let's say I start some computation at night and I'm paying $30 or $40 an hour for that machine. Then I wake up in the morning and the machine is still running. That's hundreds of dollars of wasted money. We've had months where our cloud costs went to $7,000-8,000, and that is significant money, especially in times when funding gets really tight.
Breaking Through with Python-Native Cloud Computing#
The breakthrough came when Paliwal discovered Coiled, which offered a hybrid approach combining local development with on-demand cloud scaling. For Paliwal, who comes from a background in civil engineering and public policy rather than computer science, the simplicity was transformative.
Since I'm not from a CS background, setting up infrastructure and getting the cloud to work properly got really daunting really fast. The setup process with Coiled was really easy - it didn't take more than 10 minutes. I was really expecting a lot of errors, but somehow it worked out the first time.
This approach lets researchers develop and test locally, then seamlessly scale to hundreds of cores in the cloud when needed. The solution particularly shines with their data science stack, built on Python tools like Dask and Xarray.
I just have to focus on writing that code. The scaling part is mostly solved, from the environment side and the infrastructure side, and it's really easy because I don't even have to log into AWS.
Given that most of the datasets I use are these multidimensional rasters stored in Zarr, or Xarray, or NetCDF, I don't have to set up anything special. I just run it like any other Python code.
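As a minimal sketch of what that looks like in practice - the bucket path, variable name, and cluster size here are hypothetical, not LBNL's actual pipeline:

```python
import coiled
import xarray as xr

# Start an on-demand Dask cluster; Coiled replicates the local Python
# environment on the cloud workers automatically.
cluster = coiled.Cluster(n_workers=100, region="us-west-2")
client = cluster.get_client()

# Lazily open a (hypothetical) multi-decade weather dataset stored as
# Zarr on S3. No data is downloaded yet; Dask just tracks the chunks.
ds = xr.open_zarr("s3://example-bucket/weather/conus-5km.zarr")

# Example aggregation: long-run mean wind speed per grid cell.
# The work runs in parallel on the cluster, close to the data.
mean_wind = ds["wind_speed"].mean(dim="time").compute()

cluster.shutdown()
```

The same script runs unchanged against a local Dask cluster during development, which is the local-to-cloud workflow described above.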
What's more, Coiled's dashboard helps optimize resource usage and costs, turning what was once a source of anxiety into a tool for efficiency.
The dashboards were really helpful in figuring out which AWS machines to use - should I use a memory-intensive one or a compute-intensive one? And then how is worker utilization going while the code is running? If you're paying for that computation, you want to ensure you're using it efficiently.
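The choices the dashboard informs can then be set when the cluster is created. A brief sketch; the instance type and sizes below are illustrative, not the team's actual configuration:

```python
import coiled

# Memory-optimized workers, e.g. for holding large raster chunks in RAM.
cluster = coiled.Cluster(
    n_workers=50,
    worker_vm_types=["r6i.2xlarge"],  # memory-optimized EC2 instance type
)

# Alternatively, request resources and let Coiled pick matching instances:
# cluster = coiled.Cluster(n_workers=50, worker_memory="64 GiB")
```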
This newfound capability is crucial for time-sensitive policy analysis.
In policy circles, when things are happening, they move really fast. There's a very short window to provide insights into what the impact of policy would be. And now we can actually deliver those insights when they're needed.
Transforming Policy Research with Rapid Analysis#
The enhanced computational capabilities haven't just transformed how LBNL supports policy decisions - they've transformed what's possible. The ability to rapidly test ideas and scale successful approaches means the team can now tackle questions that were previously out of reach, providing policymakers with unprecedented insights into complex scenarios like the impacts of the Inflation Reduction Act.
Their work extends beyond federal policy. The team has developed interactive tools that help stakeholders understand renewable energy potential across the country. Their latest project analyzes land availability near existing thermal plants, helping planners identify opportunities to transition to clean energy sources.
We look at 50 different land use layers near each plant. Can we build solar or wind here? What would the generation cost be? This helps us understand if local renewable generation could economically replace existing thermal plants.
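A simplified sketch of how such a screen can be composed with Xarray - the layer names, thresholds, and paths are invented for illustration, and a real analysis would chain dozens of layers:

```python
import xarray as xr

# Open (hypothetical) land-use rasters aligned on a common 10 m grid.
slope = xr.open_zarr("s3://example-bucket/layers/slope.zarr")["slope_pct"]
forest_dist = xr.open_zarr("s3://example-bucket/layers/forest.zarr")["dist_m"]
protected = xr.open_zarr("s3://example-bucket/layers/protected.zarr")["flag"]

# Combine per-layer screens into one boolean suitability mask.
suitable = (slope < 5.0) & (forest_dist > 100.0) & (protected == 0)

# Share of land near the plant that survives every screen.
buildable_share = float(suitable.mean().compute())
```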
The ability to process massive datasets efficiently has been key to this work.
We can keep some of the small files locally and somehow the cloud gets those local files as well. I don't have to synchronize everything with S3. That removes a huge pain point in our workflow.
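Coiled handles that synchronization itself; as a generic illustration of the same idea in plain Dask (not the mechanism Coiled uses), small local data can be pushed to the workers directly. The file and column names here are hypothetical:

```python
import pandas as pd
from dask.distributed import Client

client = Client(cluster)  # the Coiled cluster from the earlier sketch

# A small lookup table that exists only on the local machine.
site_metadata = pd.read_csv("site_metadata.csv")

# Scatter it to every worker so cloud-side tasks can use it without
# the file ever making a round trip through S3.
meta_future = client.scatter(site_metadata, broadcast=True)

# Tasks receive the scattered table as a resolved argument.
total_mw = client.submit(lambda df: df["capacity_mw"].sum(), meta_future).result()
```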
Powering America's Clean Energy Future#
As the clean energy transition accelerates, the computational demands of energy policy research will only grow. But with their enhanced capabilities, Paliwal's team is ready to tackle these challenges.
The fundamental economics really drive these changes. Solar has become the cheapest source of electricity in the U.S. and most of the world. We have 2,500 gigawatts - almost double the U.S. electricity generation capacity - waiting in the interconnection queue, and 90 percent of that is clean energy.
Umed Paliwal
Researcher, LBNL
By combining cutting-edge computing capabilities with deep domain expertise, LBNL continues to provide the insights policymakers need to navigate America's energy transition. Their work demonstrates how advanced technical capabilities, when properly applied, can directly impact the decisions shaping our energy future.