Privacy-Preserving Machine Learning

Katharine Jarmul, privacy activist and Head of Product at Cape Privacy, joins Matt Rocklin and Hugo Bowne-Anderson to chat about how distributed computing and privacy support one another, especially for today’s data science and machine learning problems.

We’ll cover both the opportunities and the challenges of protecting distributed data science workloads using the newly released Cape Python open source package:

  1. Learn about privacy-enhancing techniques and when to use them;
  2. See how to write policy for privacy-enhancing techniques and apply them to a pandas DataFrame;
  3. Explore when transformations matter during distributed data processing, and how distributed computing in machine learning can pave the way for advanced privacy techniques such as federated learning and secure multi-party computation.
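To give a flavor of item 2 above, here is a minimal sketch of the kinds of privacy-enhancing transformations a policy might apply to a pandas DataFrame — tokenizing identifiers and coarsening numeric values. The column names and the `apply_policy` helper are illustrative assumptions, not Cape Python's actual API:

```python
import hashlib

import pandas as pd


def tokenize(series: pd.Series, key: str = "secret") -> pd.Series:
    """Replace each value with a keyed, irreversible token."""
    return series.map(
        lambda v: hashlib.sha256((key + str(v)).encode()).hexdigest()[:16]
    )


def round_numeric(series: pd.Series, precision: int = -1) -> pd.Series:
    """Coarsen numeric values to reduce re-identification risk."""
    return series.round(precision)


def apply_policy(df: pd.DataFrame) -> pd.DataFrame:
    """Apply a tiny hard-coded 'policy': mask names, round salaries."""
    out = df.copy()
    out["name"] = tokenize(out["name"])
    out["salary"] = round_numeric(out["salary"], precision=-3)
    return out


df = pd.DataFrame({"name": ["Ada", "Grace"], "salary": [101234, 98765]})
masked = apply_policy(df)
```

In a real policy-driven system, the transformations and the columns they apply to would come from a declarative policy file rather than being hard-coded.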

If you know a bit of machine learning, you’ll learn how to reason about privacy policy using Cape Python.

If you’re comfortable with Dask or other forms of distributed computing, you’ll learn how distributed pipelines can incorporate privacy-enhancing transformations as a preprocessing step, and what the future of fully distributed machine learning workflows and pipelines might look like!
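As a hedged sketch of "privacy as a preprocessing step" in a partitioned pipeline: the masking function runs independently on each partition before any downstream step sees raw identifiers. Plain threads stand in here for a distributed scheduler; in Dask the same pattern would be `ddf.map_partitions(mask_partition)`. The column names and masking function are hypothetical, not Cape Python's API:

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

import pandas as pd


def mask_partition(part: pd.DataFrame) -> pd.DataFrame:
    """Tokenize an identifier column within a single partition."""
    part = part.copy()
    part["user_id"] = part["user_id"].map(
        lambda v: hashlib.sha256(str(v).encode()).hexdigest()[:12]
    )
    return part


df = pd.DataFrame({"user_id": range(6), "value": [1.0] * 6})

# Split into partitions, as a distributed DataFrame would be.
partitions = [df.iloc[i:i + 2] for i in range(0, len(df), 2)]

# Each partition is masked in parallel, mimicking per-worker preprocessing.
with ThreadPoolExecutor() as pool:
    masked_parts = list(pool.map(mask_partition, partitions))

masked = pd.concat(masked_parts, ignore_index=True)
```

Because the transformation is embarrassingly parallel across partitions, it composes naturally with the rest of a distributed preprocessing pipeline.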

Join us this Thursday, July 23rd at 9am US Eastern time on our YouTube channel as we dive into how privacy-preserving data science and distributed computing are interrelated and can help each other.
