A Big Enough Lever to Move the World
Process planetary-scale data without planetary-scale stress. Your data deserves this kind of horsepower.
Xarray on terabyte datasets without running out of memory or patience
- Never run out of memory
- Process global scale datasets
- Run close to your data
import xarray as xr
import coiled

# Create cluster of hundreds of machines
cluster = coiled.Cluster(
    n_workers=500,
    region="us-west-2",
)
client = cluster.get_client()

# Process massive cloud datasets
ds = xr.open_zarr("s3://.../my-data.zarr")
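The reason this scales is that Zarr data is read and reduced chunk by chunk instead of loaded whole. Here is the idea in miniature, with plain Python lists standing in for Zarr chunks (the chunk sizes and values are made up for illustration):

```python
# A "dataset" too big to hold comfortably in one piece,
# represented as chunks -- the way Zarr stores arrays.
chunks = [[1, 2, 3], [4, 5], [6, 7, 8, 9]]

# Reduce one chunk at a time, keeping only running totals in memory.
total = 0
count = 0
for chunk in chunks:
    total += sum(chunk)
    count += len(chunk)

mean = total / count
print(mean)  # 5.0
```

Xarray and Dask apply the same strategy across hundreds of workers, so no single machine ever needs to hold the whole dataset.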
Brutally Cost-Efficient
Process 10 TiB for $1.00. Your CFO will think you're exaggerating.
Delightful to use
These people said nice things about us, and we didn't even ask them to. (We're just as surprised as you are.)
"Coiled is the Heroku of Data. Setup was a piece of cake."
Tim Cull
Leadership Swiss Army Knife, Urban Footprint
"Coiled support is amazing. I'll run into an issue and before I have a chance to mention it I have an email in my inbox. You don't get this kind of support with large companies."
Katya Potapov
Software Engineer, Floodbase
"The speed is nice, sure, but the real benefit is taking a multi-day effort and finishing it in an afternoon. Coiled changed the character of our work."
Matt Plough
Software Engineer, KoBold Metals
"We've been using Coiled in our backend for months and never think about it. It just works."
Luiz Augusto Alvim
Acoustic Engineer, RPG Acoustical
FAQ
Will this stop me from running out of memory?
Yes, and it scales beyond just memory.
Instead of wrestling with a single expensive VM that you forget to turn off (we've all been there), distribute your work across machines that shut themselves down.
- Process terabytes cheaply
- Run close to your data (goodbye egress fees)
- Scale up when you need it, scale to zero when you don't
We grew out of Pangeo
Pangeo showed that Python could handle planetary-scale data. We're taking that vision and making it trivial to deploy.
Think of this as Pangeo without the Kubernetes headaches. Your focus should be on the science, not wrestling with YAML files.
Write code for one file. We'll handle the rest.
@coiled.function()
def process(file):
    # Your normal Python code here
    return result

results = process.map(files)  # Done in about five minutes
No more for-loops that take all weekend. No more leaving your laptop running overnight.
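To try the map-over-files shape without a cloud account, here is a minimal local stand-in that swaps coiled.function for the standard library's ThreadPoolExecutor (the process body and file names are hypothetical):

```python
from concurrent.futures import ThreadPoolExecutor

def process(file):
    # Stand-in for your normal Python code: here we just
    # measure the length of each (hypothetical) file name.
    return len(file)

files = ["a.tif", "bb.tif", "ccc.tif"]

# A thread pool plays the role that Coiled's cluster plays in the cloud:
# map the same function over every input in parallel.
with ThreadPoolExecutor() as pool:
    results = list(pool.map(process, files))

print(results)  # [5, 6, 7]
```

The call pattern is the same; Coiled's version runs each call on cloud machines instead of local threads.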
Whose cloud account does this run in?
Yours. Where your data lives.
We just turn on machines in your cloud account, right next to your data. You keep complete control, we just make it all work smoothly.
- Keeps your data in your control
- Minimizes data transfer costs
- Maximizes processing speed
What does it cost?
About $0.10 per terabyte when you do it right.
We're a bit obsessed with keeping costs low:
- Spot instances when they make sense
- ARM processors (they're just better)
- Automatic shutdown (no more Monday morning surprises)
- Per-user limits (sleep well at night)
Read "Ten cents per terabyte" for a more thorough treatment of cloud costs.
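The headline numbers on this page are consistent with each other; a quick sanity check:

```python
# "About $0.10 per terabyte" times the "Process 10 TiB for $1.00" claim above.
cost_per_tib = 0.10   # dollars per TiB
data_tib = 10         # TiB processed

total = cost_per_tib * data_tib
print(f"${total:.2f}")  # $1.00
```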
Does Coiled work with workflow orchestrators?
Yep, Coiled works with all the usual suspects.
Bring your favorite scheduler:
- Prefect
- Dagster
- Airflow
Perfect for processing new satellite or simulation data as it arrives. See the Prefect page for deeper examples.
Get started
Know Python? Come use the cloud. Your first 10,000 CPU-hours per month are on us.
$ pip install coiled
$ coiled quickstart
Grant cloud access? (Y/n): Y
... Configuring ...
You're ready to go. 🎉