Dask allows you to do parallel and distributed computing in Python. It’s easy to adopt with a syntax similar to the PyData ecosystem of tools, and useful for problems that take a long time to compute, such as complex data processing or machine learning algorithms.
Dask was originally developed to scale NumPy and pandas. As Matthew Rocklin says in Dask History, the Python community quickly became interested in the Dask Engine and used it to build many more specialized tools. Dask remains a powerful tool for scaling data science work, but most of its use today is indirect. For example, you use Dask while using RAPIDS, PyTorch, Prefect, xarray, and more!
Dask is rooted in the Python open-source and data science community. It was designed by the members of this community for other community members. This is reflected in how Dask interoperates with other tools in this ecosystem and in the current Dask community of users, contributors, and enthusiasts.
NumPy is the Python library for working with multidimensional arrays, while Dask is the Python library for scalable computing. Dask Array is the Dask collection that extends the NumPy API to enable parallel and distributed workflows, allowing you to work with larger-than-memory data using a familiar syntax.
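As a minimal sketch of this idea, the snippet below builds a chunked Dask array with the same syntax you would use in NumPy; the array shape, chunk sizes, and computation are illustrative choices, not prescribed by Dask:

```python
import dask.array as da

# A 4000x4000 array of ones, split into 1000x1000 chunks that
# can be processed in parallel (and need not all fit in memory at once).
x = da.ones((4000, 4000), chunks=(1000, 1000))

# Familiar NumPy syntax; nothing is computed yet (Dask is lazy).
result = (x + x.T).sum()

# .compute() actually runs the chunked computation.
total = result.compute()
print(total)  # 32000000.0
```

Until `.compute()` is called, Dask only records a task graph describing the work, which is what lets it schedule the chunks in parallel.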
Many PyData tools are limited by single-core performance and available RAM. Dask allows these tools to scale. It is Python-native and hence interoperates well with the entire PyData stack.
The Python stack is a broad collection of Python packages used for scientific computing and data science, in both industry and basic research. At the bottom of the stack sits the Python standard library.
PyData refers to an interoperable collection of Python tools for data analysis, visualization, statistical inference, machine learning, interactive computing, and more that is used across various types of industries and academic domains.
Dask Delayed is a low-level Dask API that allows you to write custom parallel Python code. You can apply parallelizable operations to any general Python code and access the Dask Engine directly for your computations.
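A small sketch of Dask Delayed in action is shown below; the functions `inc` and `add` are made-up examples used only to illustrate how `@delayed` turns ordinary Python calls into a lazy task graph:

```python
from dask import delayed

# Ordinary Python functions; @delayed defers their execution.
@delayed
def inc(x):
    return x + 1

@delayed
def add(a, b):
    return a + b

# These calls build a task graph instantly; nothing runs yet.
a = inc(1)       # lazily represents 2
b = inc(10)      # lazily represents 11
total = add(a, b)

# .compute() executes the graph; independent tasks (the two inc
# calls) can run in parallel.
print(total.compute())  # 13
```

Because the two `inc` calls do not depend on each other, the Dask scheduler is free to run them concurrently before combining the results in `add`.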
Dask Bag can be used to work with large-scale unstructured data formats like JSON. You can use the Bag API to perform standard operations like map, filter, and fold, and then convert the data into other structures like a Dask DataFrame object for dedicated analysis.
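To make this concrete, here is a small sketch using Dask Bag to parse and filter JSON records; the inline records and the score threshold are invented for illustration (in practice you would typically start from files with `db.read_text`):

```python
import json
import dask.bag as db

# A bag of raw JSON strings (in practice: db.read_text("data/*.json")).
records = db.from_sequence([
    '{"name": "alice", "score": 81}',
    '{"name": "bob", "score": 54}',
    '{"name": "carol", "score": 97}',
])

# Standard bag operations: map to parse, filter to select.
passing = (records
           .map(json.loads)                     # parse each JSON record
           .filter(lambda r: r["score"] >= 60)  # keep passing scores
           .map(lambda r: r["name"]))

print(sorted(passing.compute()))  # ['alice', 'carol']
```

Once the semi-structured data has been cleaned this way, a call like `bag.to_dataframe()` converts it into a Dask DataFrame for more structured analysis.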
There are three main ways to implement parallel processing with Dask:
- Using the high-level Dask APIs: Dask provides parallel alternatives to common PyData libraries like NumPy, pandas, and scikit-learn, with familiar syntax you can use directly.
- Using the low-level Dask APIs: Dask also allows you to write custom code, both parallel and distributed, with low-level APIs that access the Dask Engine.
- Using the tools built on Dask: There are numerous libraries built on Dask, such as Prefect, PyTorch, and RAPIDS, that you can use for specialized use cases.
A pandas DataFrame is a data structure that stores two-dimensional, labeled data along with some metadata. A Dask DataFrame consists of multiple pandas DataFrames, called partitions. Hence, a Dask DataFrame computation can be understood as performing the relevant computation on each of its partitions (pandas DataFrames) in parallel.
Dask DataFrame is a high-level Dask API that extends the pandas API for parallel and distributed computing in Dask. It allows you to work with larger-than-memory data on your local machine, or even with TB-scale data on distributed clusters on the cloud, all while following a syntax similar to pandas.
PyData libraries like NumPy, pandas, and scikit-learn are widely popular for data science today, both with individual data professionals and enterprises. These libraries are great; however, they are limited to single-core usage and do not scale beyond the available RAM. This is where you would need Dask. It is Python-native, builds on top of these familiar PyData libraries, and allows you to leverage the parallel and distributed computing potential of your hardware.
SQL and NoSQL are both great database models, and each has clear advantages over the other depending on your use case. SQL databases are better suited for datasets that have a consistent schema and a need for vertical scalability.
SQL is primarily designed for querying structured data efficiently. SQL stands for Structured Query Language, and it is especially useful for storing, manipulating, and accessing data from enormous databases. On the other hand, Python is a general-purpose programming language with sophisticated features to do everything from game development to data analysis.
Common SQL commands include SELECT, INSERT, UPDATE, DELETE, CREATE, and DROP, which can be used to accomplish a lot in a database.
- SELECT: Used to retrieve a subset of rows from a table.
- INSERT: Used to add a new row to a table.
- UPDATE: Used to modify specific rows within a table.
- DELETE: Used to delete rows from a table.
- CREATE: Used to create a new table.
- DROP: Used to delete an entire table (or, via ALTER TABLE, a column) from your database.
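The commands above can all be exercised against an in-memory SQLite database using Python's built-in `sqlite3` module; the `users` table and its rows are made up for this sketch:

```python
import sqlite3

# In-memory SQLite database to demonstrate the core SQL commands.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")  # CREATE
cur.execute("INSERT INTO users (name) VALUES ('alice')")               # INSERT
cur.execute("INSERT INTO users (name) VALUES ('bob')")
cur.execute("UPDATE users SET name = 'Bob' WHERE name = 'bob'")        # UPDATE
cur.execute("DELETE FROM users WHERE name = 'alice'")                  # DELETE

rows = cur.execute("SELECT id, name FROM users").fetchall()            # SELECT
print(rows)  # [(2, 'Bob')]

cur.execute("DROP TABLE users")                                        # DROP
conn.close()
```

SQLite implements a large subset of standard SQL, so the same statements carry over (with minor dialect differences) to databases like PostgreSQL and MySQL.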
SQL, which stands for Structured Query Language, is a programming language designed for managing structured data in relational databases. It provides useful statements that allow for quick data access and manipulation.