Github Actions let you launch automated jobs from your Github repository. Coiled lets you scale your Python code to the cloud. Combining the two gives you lightweight workflow orchestration for heavy ETL (extract-transform-load) jobs that can run in the cloud without any complicated infrastructure provisioning or DevOps.
This Github Action runs an ETL job on 15 GB of data and writes the results to an Amazon S3 bucket. Running this on Github Actions alone would not be possible: the default virtual machines that host Github Actions have only 7 GB of memory. Running the action on a Coiled cluster lets you crunch the entire dataset. And since the Coiled cluster runs on AWS, the results can easily be written to an S3 bucket.
This post will demonstrate how running Github Actions on Coiled can be a useful way to schedule automated data-processing jobs. We’ll start with some background on Github Actions and Coiled and then walk through an example script in detail.
Many data analysts have tasks that need to be run on a fixed schedule. For example, you might want to fetch new data every day at 8AM, preprocess it, and use it to train or update a machine learning model. Workflow orchestration tools allow you to schedule jobs like this without human intervention. Popular orchestration tools include Apache Airflow, Prefect, Argo and others.
Managing these workflow orchestration tools can be more work than you want to do. Github Actions is a lightweight and cheap (or even free) alternative that lets you run automated jobs based on predefined trigger events. Since many developers and data analysts already have their code in Github repositories, this can be an ideal way to run automated jobs without having to learn another tool or manage a full-fledged orchestration deployment.
Github Actions are run on virtual machines that are limited in the hardware resources they let you access. This makes sense because Github Actions are free (entirely for public repositories, up to a limit for private ones) and Github probably doesn’t want you running massive workloads on their free service. By default, Github Actions are hosted on ‘runners’ with 7GB of RAM.
So what do you do if you want to run a scheduled ETL job with Github Actions that requires more memory resources than the Github Action default?
If you want to use Github Actions for more compute-intensive or memory-bound tasks, you’ll need to deploy the task to another virtual machine or cluster of machines. There are ways to do this yourself with Amazon ECS, Azure, or Google Kubernetes Engine, but deploying Github Actions to the cloud yourself is complex and time-intensive. Github also offers self-hosted runners for Github Actions, but these require extensive installation, setup, and maintenance.
Coiled makes it easy to deploy Python code to the cloud. Coiled handles all the infrastructure setup for you, including software environments, security and cost-management. Coiled Clusters are launched on-demand and can be stopped immediately once the computation is completed, so there will be no unexpected costs. Even better: Coiled currently provides a generous free tier, so you’ll only be paying your own cloud account fees.
We’ve put together a Github Action that you can customize to run your own automated ETL jobs. Let’s walk through the script in detail. If you’re totally new to Github Actions, we recommend following the official Github tutorial first.
Let’s break down the workflow that our Github Action performs step by step.
We’ll set this ETL job to run on a schedule: it will be triggered at 12pm (noon) UTC every day.
```yaml
on:
  schedule:
    # run at noon every day
    - cron: "0 12 * * *"
```
Github Actions uses the cron format to specify dates and times; note that these schedules are evaluated in UTC. You can also have your Github Actions triggered by other events, such as a push to the repository.
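For example, a trigger that runs the workflow on every push to a main branch (the branch name here is just an assumption) would look like this:

```yaml
on:
  push:
    branches:
      - main
```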
We’ll then define our quickstart job:
```yaml
jobs:
  quickstart:
    runs-on: "ubuntu-latest"
    strategy:
      fail-fast: false
```
And define the steps this job will consist of:
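A minimal sketch of this steps block is shown below. The specific actions and secret names (actions/checkout, actions/setup-python, DASK_COILED__TOKEN, and the AWS variables) are assumptions, so adapt them to your own setup and to however you named your repository secrets:

```yaml
    steps:
      # Step 1: check out the repository so the script is available to the runner
      - name: Checkout repository
        uses: actions/checkout@v2

      # Step 2: set up Python (your workflow may use a conda setup action instead)
      - name: Set up Python
        uses: actions/setup-python@v2
        with:
          python-version: "3.9"

      # Step 3: install dependencies and run the ETL script,
      # passing credentials in as environment variables
      - name: Run ETL job on Coiled
        run: |
          pip install coiled "dask[complete]" s3fs
          python scripts/quickstart.py
        env:
          # hypothetical secret names -- match these to your repository secrets
          DASK_COILED__TOKEN: ${{ secrets.DASK_COILED__TOKEN }}
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          AWS_DEFAULT_REGION: ${{ secrets.AWS_DEFAULT_REGION }}
```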
The first and second steps are standard setup steps. The third step is the one that actually performs our ETL job: it runs the Python script quickstart.py located in the scripts directory.
Notice that we passed four environment variables to this script through the env block, each pulled from a secret. Secrets are stored in your Github repository settings. Follow these steps if you do not know how to store secrets in your Github repository.
Now let’s take a closer look at the quickstart.py script that our Github Action will run.
After importing dependencies, the script starts by launching a Coiled cluster:
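The launch code isn’t reproduced here, but a minimal sketch, assuming the coiled.Cluster API and the cluster size described below (the cluster name is illustrative), looks roughly like this:

```python
import os

import coiled

# Launch a cluster of 10 workers with 8 GB of RAM each, giving it a
# unique name based on the run ID that Github Actions provides
cluster = coiled.Cluster(
    name=f"github-actions-{os.environ['GITHUB_RUN_ID']}",
    n_workers=10,
    worker_memory="8 GiB",
)
```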
This spins up a cluster of 10 virtual machines with 8GB of RAM each. Coiled automatically replicates the local Python environment in your cluster. You can customize your cluster to specify different instance types and hardware specs. Each cluster that is launched is given a unique name using the Github Run ID.
We then connect the Python client running the script (in this case the virtual machine that’s running the Github Action) to our Coiled cluster:
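This is the standard Dask pattern of pointing a distributed Client at the cluster object:

```python
from dask.distributed import Client

# Connect this Python process (the Github Actions runner) to the Coiled cluster
client = Client(cluster)
```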
Now we’re all set to load in our large dataset. We’ll use Dask to do this. Dask is a parallel computing library for Python that makes it easy to scale your pandas and NumPy workloads to larger-than-memory datasets. Read this blog to learn how to speed up a pandas query 10x with Dask.
The code below loads the Parquet data and performs a groupby aggregation:
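The dataset location and column names below are placeholders rather than the original ones, but the overall shape, reading Parquet from S3 with Dask and computing a groupby aggregation on the cluster, would look something like this:

```python
import dask.dataframe as dd

# Lazily read the larger-than-memory Parquet dataset from S3
# (replace the path with your own data)
df = dd.read_parquet(
    "s3://<your-data-bucket>/dataset/",
    storage_options={"anon": True},
)

# Example aggregation: mean value per group, computed on the Coiled cluster
result = df.groupby("group_column").value_column.mean().compute()
```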
Chances are that once you have this result, you’d like to store it somewhere. Let’s write the result to an Amazon S3 bucket. We’ll write to a dedicated S3 bucket and create a unique directory for each Github Action run:
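Because result is a regular pandas object after .compute(), it can be written straight to S3 via pandas and s3fs; the bucket name here is a placeholder, and the Github Run ID keeps each run’s output separate:

```python
import os

# Write the aggregated result to a unique per-run directory in S3.
# Replace the bucket name with your own; AWS credentials are read from
# the environment variables passed in by the Github Action.
result.to_csv(
    f"s3://<your-results-bucket>/{os.environ['GITHUB_RUN_ID']}/result.csv"
)
```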
Nobody likes an unexpectedly expensive cloud bill, so let’s shut down our cluster:
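The exact call isn’t shown here; closing the client and the cluster object is the usual way to release the machines:

```python
# Disconnect the client and shut the cluster down so no idle
# machines keep running (and costing money) after the job finishes
client.close()
cluster.close()
```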
All set! Once pushed to the repository, the Github Action will run this script at noon every day. You can of course adjust the cron schedule to run this at a different interval. After a few runs, your S3 bucket UI will look something like this, with a unique directory for each completed run:
While there are other ways to run your Github Actions on cloud-computing resources (or with self-hosted runners), these all require complicated setup processes and careful maintenance to implement security protocols and avoid unnecessary costs. Coiled takes care of all of this DevOps for you so you can focus on writing and running your Python code.
Coiled is also very scriptable and is not picky about where you are running your Python code. Unlike some of the other cloud-computing services, Coiled does not limit you to launching clusters from hosted notebooks. This means you can launch Coiled clusters from anywhere you can run Python code, including Github Actions.
At Coiled, we actually use this feature ourselves to run benchmarking tests for the coiled-runtime metapackage on a ~200GB dataset.
If you’re ready to build your own Github Actions that leverage cloud-computing resources, follow these steps:
Note that spinning up Coiled clusters requires you to have an AWS or GCP account in which resources will be provisioned. Coiled currently provides 10,000 CPU hours free of charge. After your 10,000 free CPU hours, we charge $0.05 per CPU hour plus whatever you pay your cloud. We can’t make your own cloud costs go away: you still have to pay AWS/GCP.
Let us know how you get on by tweeting at us; we’d love to see what you build!