Run any Python function in the cloud
Serverless Python functions. Just add a decorator.
Sometimes you just need a really big machine.
- Churn through large volumes of data
- Serverless pandas/Polars/DuckDB
- Get as big a VM as AWS will give you
import coiled
import duckdb

@coiled.function(memory="512 GB")
def query_data():
    con = duckdb.connect()
    query = """
        SELECT day, AVG(tip) AS average_tip
        FROM read_parquet('s3://nyc-taxi/*.parquet')
        GROUP BY day
    """
    return con.execute(query).fetchall()
AWS Lambda Without the Limits
Your Python code deserves more than 15 minutes of fame.
Simple setup, powerful execution
- Computations run as long as you need
- As much memory as you need in any region
- Near-instant execution for repeated calls
import coiled

@coiled.function(
    memory="128 GB",
    region="us-east-2",
    keepalive="20 minutes",
)
def my_func(...):
    ...
Cloud Compute for Python People
Going from your laptop to the cloud should be easy.
Parallel Python that just works.
- Run on a big machine or a cluster of VMs
- Autoscale up and down based on your workload
- Dask clusters for distributed computing
import coiled

@coiled.function()
def simulate(trial: int = 0):
    return ...

# Run once on the cloud
result = simulate(1)

# Run in parallel on 1000 machines
results = simulate.map(range(1000))

# Retrieve results
list(results)
Trusted by Data Teams
Reliable compute that actually stays up, for mission-critical workloads
"I've been incredibly impressed with Coiled; it's quite literally the only piece of our entire ETL architecture that I never have to worry about."
Bobby George
Co-founder, Kestrel
"The speed is nice, sure, but the real benefit is taking a multi-day effort and finishing it in an afternoon. Coiled changed the character of our work."
Matt Plough
Software Engineer, KoBold Metals
"My team has started using Coiled this week. Got us up and running with clusters for ad hoc distributed workloads in no time."
Mike Bell
Data Scientist, Titan
"Coiled is natural and fun to use. It's Pythonic."
Lucas Gabriel Balista
Data Science Lead, Online Applications
FAQ
Where do my computations run?
In your cloud account, where they belong.
Your data never leaves your cloud account. We just:
- Turn on VMs when you need them
- Clean them up when you're done
- Put logs in your cloud logging system
Coiled never sees your data. See Security for more details.
How much does it cost?
Surprisingly little.
- Pay your cloud provider for compute (usually $0.02-0.05 per CPU-hour)
- First 10,000 CPU-hours per month are free
- After that, $0.05 per CPU-hour to Coiled
Most functions cost just pennies to run. See Pricing for more details.
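For a rough sense of scale, here's an illustrative back-of-the-envelope estimate (the ~$0.04/CPU-hour rate is an assumption within the range above, not a quote):

# Back-of-the-envelope cost sketch (assumed cloud rate: ~$0.04/CPU-hour)
cpus, hours = 8, 2
cpu_hours = cpus * hours       # 16 CPU-hours
cloud_cost = cpu_hours * 0.04  # ≈ $0.64 paid to your cloud provider
coiled_cost = 0.00             # covered by the free 10,000 CPU-hours/month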
What about my software environment?
We handle that automatically.
Our Package Sync technology:
- Automatically replicates your local environment
- Works across architectures (Windows/Mac/Linux, Intel/ARM)
- Updates when you pip install new things
- Handles credentials and cloud permissions securely
No Docker required, though you can use it if you want.
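As a sketch of what this looks like in practice (pandas is just a stand-in for whatever you have installed locally, and preview is a hypothetical function):

import coiled
import pandas as pd  # pip-installed locally; Package Sync ships it to the VM

@coiled.function()
def preview(path):
    # Runs on a cloud VM with the same packages as your laptop
    return pd.read_parquet(path).head()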
Will I accidentally leave machines running?
Not a chance.
We're a bit obsessive about cleaning up:
- VMs get terminated after your function completes
- Storage gets removed
- Network resources get deleted
No more "oops, I forgot to turn off that instance from last quarter" moments.
How long do function calls take to start?
Initially about a minute, then milliseconds.
- First function call: ~1 minute to provision infrastructure
- Subsequent calls: milliseconds (using warm VMs)
You get the best of both worlds: VM flexibility with near-instant execution for repeated calls.
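A minimal sketch of the warm-start behavior, reusing the keepalive parameter from the example above (ping is a hypothetical function; timings reflect the typical behavior described here):

import time
import coiled

@coiled.function(keepalive="20 minutes")  # keep the VM warm between calls
def ping():
    return "pong"

ping()  # first call provisions a VM (~1 minute)
start = time.time()
ping()  # repeated call reuses the warm VM
print(f"warm call took {time.time() - start:.3f} s")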
What if I need to process a lot of data?
Scale with parallel execution.
import coiled

@coiled.function(memory="64 GiB")
def process(x):
    # Your processing logic
    return result

# Run many tasks in parallel
futures = [process.submit(x) for x in data]
results = [future.result() for future in futures]
Functions can automatically scale to hundreds of machines when needed.
Get started
Know Python? Come use the cloud. Your first 10,000 CPU-hours per month are on us.
$ pip install coiled
$ coiled quickstart
Grant cloud access? (Y/n): Y
... Configuring ...
You're ready to go. 🎉