Run Python scripts on cloud GPUs
One command to execute any Python script on NVIDIA GPUs. Zero-setup cloud GPU computing with automatic provisioning.
42x Faster Training
PyTorch CIFAR10 training time:
- Local CPU: 400 mins
- Apple M1 GPU: 85 mins
- NVIDIA T4 GPU: 9.8 mins
Cost: $0.12 • 42x faster than CPU
GPU-accelerated Python, made simple
From local Python script to cloud GPU execution in one command.
Any GPU, Any Cloud
Zero Setup
Managed GPU Compute
From your laptop to cloud GPUs
Run a Python script on a GPU-enabled cloud machine with one line of code.
Easy to get started, extend as needed.
- Take any Python script and run it on cloud GPUs
- Scale out with --n-tasks for parallel processing
- Specify instance types with --vm-type
- Use custom Docker containers with --container
$ coiled batch run --gpu python train.py
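For example, combining the options above into a single invocation (the container image name here is illustrative):

$ coiled batch run --gpu --n-tasks 10 --vm-type g5.xlarge --container my-registry/train:latest python train.py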
From serverless GPUs to large-scale, long-running batch computing
Choose the tool that fits your workflow.
Serverless Python Functions
Serverless GPU computing: Add a decorator, run on cloud GPUs.
import coiled
import torch

@coiled.function(
    vm_type="g5.xlarge",     # A10G GPU
    keepalive="20 minutes",  # Warm starts
    region="us-west-2",      # Any region
)
def train():
    ...
    return model.to("cpu")

model = train()
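Repeated calls inside the keepalive window reuse the warm VM. For many independent inputs, Coiled Functions also offer a .map() helper; a minimal sketch, assuming a hypothetical variant of train() that accepts a learning rate (the values are illustrative; check the Coiled docs for the current signature):

# Hypothetical train(lr) variant; .map() fans the calls out across GPU VMs in parallel
models = list(train.map([1e-2, 1e-3, 1e-4]))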
Batch Jobs
Embarrassingly parallel batch jobs on cloud GPUs.
#!/usr/bin/env bash
# COILED n-tasks 10
# COILED gpu True

accelerate launch \
    --multi_gpu \
    --machine_rank $COILED_BATCH_TASK_ID \
    --main_process_ip $COILED_BATCH_SCHEDULER_ADDRESS \
    --main_process_port 12345 \
    --num_machines $COILED_BATCH_TASK_COUNT \
    --num_processes $COILED_BATCH_TASK_COUNT \
    nlp_example.py
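The # COILED comment lines set job options (task count, GPU), and each task reads its identity from environment variables, as the accelerate flags above show. A minimal Python sketch of sharding work across tasks, assuming those same variables; the file paths and per-file step are placeholders:

import os

# Set by Coiled for every batch task (same variables used in the script above)
task_id = int(os.environ["COILED_BATCH_TASK_ID"])
n_tasks = int(os.environ["COILED_BATCH_TASK_COUNT"])

# Hypothetical input listing; each task handles every n-th file
files = [f"data/part-{i:04d}.parquet" for i in range(1_000)]
for path in files[task_id::n_tasks]:
    print(f"task {task_id}/{n_tasks} processing {path}")  # stand-in for real GPU work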
Trusted by Data Teams
Reliable GPU compute for mission-critical workloads
"I've been incredibly impressed with Coiled; it's quite literally the only piece of our entire ETL architecture that I never have to worry about."
Bobby George
Co-founder, Kestrel
"The speed is nice, sure, but the real benefit is taking a multi-day effort and finishing it in an afternoon. Coiled changed the character of our work."
Matt Plough
Software Engineer, KoBold Metals
"My team has started using Coiled this week. Got us up and running with clusters for ad hoc distributed workloads in no time."
Mike Bell
Data Scientist, Titan
"Coiled is natural and fun to use. It's Pythonic."
Lucas Gabriel Balista
Data Science Lead, Online Applications
FAQ
AWS Lambda doesn't support GPUs, but Coiled does.
If you're looking for "AWS Lambda with GPU support," Coiled is what you need:
- Annotate your Python functions with @coiled.function(gpu=True)
- Auto-provisioning: Spins up GPU instances automatically (like Lambda for CPUs)
- Zero infrastructure management: No servers to manage, just like Lambda
Perfect for ML inference, training, and any GPU-accelerated Python workload.
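Putting the first bullet into a complete, minimal sketch (the function name and body are illustrative):

import coiled

@coiled.function(gpu=True)  # Coiled provisions a GPU instance for this function
def infer(batch):
    ...  # hypothetical inference body (PyTorch, TensorFlow, etc.)
    return batch

results = infer([1, 2, 3])  # runs on a cloud GPU; the result is returned locally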
You can use any Python library with Coiled.
Some popular GPU-accelerated libraries:
- PyTorch: Automatic CUDA version matching
- TensorFlow: GPU-enabled by default
- OpenCV: GPU-accelerated computer vision
- CuPy: NumPy-like GPU computing
- Numba: CUDA kernel compilation
- RAPIDS: GPU-accelerated data science
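For instance, a minimal CuPy sketch, assuming cupy is installed in your software environment:

import cupy as cp

x = cp.arange(10_000_000)  # array allocated on the GPU
print(float(x.mean()))     # reduction runs on the GPU; result copied back to Python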
Your Python code automatically detects and uses available GPUs.
Example GPU detection:
import torch

# PyTorch automatically uses the GPU if available
if torch.cuda.is_available():
    device = torch.device("cuda")
    print(f"Using GPU: {torch.cuda.get_device_name()}")
else:
    device = torch.device("cpu")
Coiled ensures CUDA drivers and libraries are properly configured.
Yes! Coiled Functions provide true serverless GPU computing.
With the @coiled.function decorator:
@coiled.function(vm_type="g5.xlarge")  # GPU instance
def process_image(image_data):
    # Your GPU code here (PyTorch, OpenCV, etc.)
    return processed_result
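Calling it then looks like any local function call; the argument here is illustrative:

result = process_image(image_data)  # executes on a cloud A10G GPU; result returned locally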
Ready to run Python on GPUs?
Get started in under 2 minutes. Your first 500 CPU hours per month are free.
$ pip install coiled
$ coiled quickstart
Grant cloud access? (Y/n): Y
... Configuring ...
You're ready to go. 🎉