Fixing Mismatched Versions with Dask and Coiled

The Coiled Team, August 25, 2022


By far the most common issue users encounter when running Dask in a truly distributed fashion in the cloud is mismatched software environments. Your computer has one version of a library like Dask or pandas, and your distributed cluster has another. Sometimes this is harmless; other times it results in difficult-to-understand bugs. It’s both common and painful.

Why do different versions matter? Well, there’s a wonderful array of errors that can crop up (a quick way to catch a mismatch up front is sketched just after this list):

  • Deserialization: Dask uses cloudpickle to serialize complex data types and send them between client/workers. This is highly sensitive to changes in those data types! A new attribute or a removed one will cause a deserialization error. 

  • Bugs: A newer/older version of a package may introduce or resolve a bug, but only on your workers, leaving you unable to replicate the problem locally for debugging. Imagine if an update to one of your packages changed the precision it uses for floats; suddenly you’d be getting different numbers locally compared to on the cluster! 

  • Connection issues: This is rarer, but desynced Dask versions can lead to communication issues, as the protocol may change between versions.

  • Scheduler decisions: A newer/older version of the Dask scheduler on your cluster may make different decisions about how to schedule work, again leading to potentially unreproducible issues.
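
Dask itself can help you catch this class of problem early. As a minimal sketch (the scheduler address below is just a placeholder), Client.get_versions(check=True) compares the package versions reported by the client, the scheduler, and every worker, and raises an error if they disagree:

from distributed import Client

# Connect to a running cluster (the address here is a placeholder).
client = Client("tcp://scheduler-address:8786")

# Compare package versions across the client, scheduler, and workers.
# With check=True, a mismatch raises an error instead of passing silently.
client.get_versions(check=True)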

I personally used to maintain several physical machines as an on-prem Dask cluster, and ran into this many times. While I don’t hand maintain physical machines like that anymore, except for the NAS in my closet, the same problems can still crop up. Experts and beginners alike will run into these problems during development.

How does it happen?

So how does your environment get out of sync? Let’s run through some examples, ranging from the easy to the insidious.

For a straightforward one, you pip installed something and forgot to run create_software_environment with your new package, so now the library is entirely missing on the cluster. Doh!
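The fix there is simply to rebuild the software environment so it includes the new dependency. A minimal sketch, assuming a hypothetical environment name and package list (see the Coiled docs for the full set of options):

import coiled

# Hypothetical example: rebuild the cluster's software environment after
# pip installing a new dependency locally. The name and packages are illustrative.
coiled.create_software_environment(
    name="my-analysis-env",
    pip=["dask==2022.7.1", "distributed==2022.7.1", "s3fs"],
)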

In another scenario, you try to get more specific about how to replicate your environment with a conda environment.yml file (this could also be a requirements.txt file for pip):

channels:
  - conda-forge
dependencies:
  - dask==2022.07.01
  - distributed==2022.07.01

On paper, this looks ok. After all, you’ve pinned the exact versions of the packages you need. But let’s look at what this produces:

bokeh==2.4.3
brotlipy==0.7.0
bzip2==1.0.8
ca-certificates==2022.6.15
certifi==2022.6.15
cffi==1.15.1
click==8.1.3
cloudpickle==2.1.0
cryptography==37.0.1
cytoolz==0.12.0
dask==2022.7.1
dask-core==2022.7.1
distributed==2022.7.1
freetype==2.10.4
fsspec==2022.7.1
heapdict==1.0.1
idna==3.3
jinja2==3.1.2
jpeg==9e
lcms2==2.12
lerc==4.0.0
libblas==3.9.0
libcblas==3.9.0
libcxx==14.0.6
libdeflate==1.13
libffi==3.4.2
libgfortran==5.0.0.dev0
libgfortran5==11.0.1.dev0
liblapack==3.9.0
libopenblas==0.3.21
libpng==1.6.37
libsqlite==3.39.2
libtiff==4.4.0
libwebp-base==1.2.4
libxcb==1.13
libzlib==1.2.12
llvm-openmp==14.0.4
locket==1.0.0
lz4==4.0.0
lz4-c==1.9.3
markupsafe==2.1.1
msgpack-python==1.0.4
ncurses==6.3
numpy==1.23.1
openjpeg==2.5.0
openssl==3.0.5
packaging==21.3
pandas==1.4.3
partd==1.3.0
pillow==9.2.0
pip==22.2.2
psutil==5.9.1
pthread-stubs==0.4
pycparser==2.21
pyopenssl==22.0.0
pyparsing==3.0.9
pysocks==1.7.1
python==3.10.5
python-dateutil==2.8.2
python_abi==3.10
pytz==2022.2.1
pyyaml==6.0
readline==8.1.2
setuptools==65.0.0
six==1.16.0
sortedcontainers==2.4.0
sqlite==3.39.2
tblib==1.7.0
tk==8.6.12
toolz==0.12.0
tornado==6.1
typing_extensions==4.3.0
tzdata==2022b
urllib3==1.26.11
wheel==0.37.1
xorg-libxau==1.0.9
xorg-libxdmcp==1.1.3
xz==5.2.6
yaml==0.2.5
zict==2.2.0
zstd==1.5.2

Over 80 packages are installed by conda, and only two of them are pinned, which means any of the rest could change at any time. Oh dear. We forgot to include Python too, so even the Python version could change! We really only pinned the very tip of our environment iceberg.
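
If you want to see the whole iceberg for yourself, a small standard-library sketch lists everything actually installed in your local environment, not just the packages you pinned by hand:

from importlib.metadata import distributions

# List every package actually installed in the current environment,
# not just the handful pinned in environment.yml.
installed = sorted((dist.metadata["Name"], dist.version) for dist in distributions())
for name, version in installed:
    print(f"{name}=={version}")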

If you installed this environment locally and created a Coiled software environment immediately, you’d hope things would at least be in sync, but alas, this is not the case. Different packages often have different requirements across platforms, so installing this on macOS will lead to a noticeably different environment compared to the Linux environment on the cluster. Most of the time the differences aren’t enough to cause major issues, but the few times they are, it’s very painful.

Using some combination of pipenv, pdm, poetry, conda-lock, and Docker is how people often try to solve this for production workflows, frequently with a high level of success. If you run your client with the exact same Docker image as the cluster and install all your dependencies from a lock file, it works! In a perfect world everyone uses cutting-edge best practices for replicating a Python environment, everyone has exactly the same Python environment everywhere, and everyone has the time to produce an always-available environment artifact in the cloud for their clusters to use.

In reality, things are messy, and setting up a perfect system can take much longer than the work you actually set out to do!

The Solution

We’ve been working hard on our new feature, package sync. Enabling it is simple:

from coiled import Cluster

with Cluster(package_sync=True) as cluster:
    # dask work!
    pass

Package sync will then scan your local Python environment and replicate it on your cluster. Your slightly imperfect environment that works great locally is now in the cloud, where it will continue to work great! You’re not creating your cluster environments from the tip of the iceberg anymore; with package sync, Coiled looks much deeper!
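
For a slightly fuller sketch of what that looks like in practice (the worker count and the toy computation are illustrative):

import coiled
import dask.array as da
from distributed import Client

# package_sync replicates the local environment on every worker, so the same
# versions of dask, numpy, pandas, etc. run locally and on the cluster.
with coiled.Cluster(package_sync=True, n_workers=4) as cluster:
    with Client(cluster) as client:
        x = da.random.random((10_000, 10_000), chunks=(1_000, 1_000))
        print(x.mean().compute())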

 

We even handle editable package installations (i.e., those you’ve installed with pip install -e <package>) and packages installed from git! You can read more about how this works (and some of the caveats) in our docs. Give it a go and let us know what you think!

 

We’ve gotten very positive feedback from the Dask core contributors we work with:

[I] couldn't imagine trying to develop in this repo without package_sync. Especially as a non-primary developer in this repo who just wants to come in, add some tests, and move on without too much fuss, it's invaluable.

… spent a fair amount of time the last few days looking at performance testing and looking at some specific performance regressions, and this [package_sync] has been invaluable…

Happy syncing!

