
Multiprocessing in Python



Last Updated on May 3, 2022

When you work on a computer vision project, you probably need to preprocess a lot of image data. This is time-consuming, and it would be great if you could process multiple images in parallel. Multiprocessing is the ability of a system to run multiple processes at the same time. A computer with a single processor would switch between processes to keep all of them running, but most computers today have at least a multi-core processor, allowing several processes to execute at once. The Python multiprocessing module is a tool to increase your scripts’ efficiency by allocating tasks to different processes.

After completing this tutorial, you will know:

  • Why we would want to use multiprocessing
  • How to use basic tools in the Python multiprocessing module

Let’s get started.

Multiprocessing in Python
Photo by Thirdman. Some rights reserved.

Overview

This tutorial is divided into four parts; they are:

  • Benefits of multiprocessing
  • Basic multiprocessing
  • Multiprocessing for real use
  • Using joblib

Benefits of Multiprocessing

You may ask, “Why Multiprocessing?” Multiprocessing can make a program substantially more efficient by running multiple tasks in parallel instead of sequentially. A similar term is multithreading, but they are different.

A process is a program loaded into memory to run and does not share its memory with other processes. A thread is an execution unit within a process. Multiple threads run in a process and share the process’s memory space with each other.

Python’s Global Interpreter Lock (GIL) allows only one thread to run at a time under the interpreter, which means you can’t enjoy the performance benefit of multithreading if the Python interpreter is required. This is what gives multiprocessing an upper hand over threading in Python. Multiple processes can run in parallel because each process has its own interpreter that executes the instructions allocated to it. Also, the OS sees your program as multiple processes and schedules them separately, i.e., your program gets a larger share of computer resources in total. So multiprocessing is faster when the program is CPU-bound. In cases where your program does a lot of I/O, threading may be the more efficient choice because most of the time the program is waiting for I/O to complete; for computation-heavy work, however, multiprocessing is generally better because the processes run truly in parallel.

Basic Multiprocessing

Let’s use the Python multiprocessing module to write a basic program that demonstrates how to do concurrent programming.

Let’s look at this function, task(), which sleeps for 0.5 seconds and prints before and after the sleep:
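A minimal version of it might look like this:

    import time

    def task():
        print('Sleeping for 0.5 seconds')
        time.sleep(0.5)
        print('Finished sleeping')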

To create a process, we simply instantiate a Process object from the multiprocessing module:
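For example, two processes that each run task() (the names p1 and p2 match the discussion below):

    import multiprocessing

    p1 = multiprocessing.Process(target=task)
    p2 = multiprocessing.Process(target=task)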

The target argument to Process() specifies the function that the process will run. These processes, however, do not run until we start them:
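Like so:

    # launch both processes; each begins running task() concurrently
    p1.start()
    p2.start()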

A complete concurrent program would be as follows:
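One plausible version, timing the work with time.perf_counter() (the name finish_time is the one referenced below; the exact print formatting is illustrative):

    import multiprocessing
    import time

    def task():
        print('Sleeping for 0.5 seconds')
        time.sleep(0.5)
        print('Finished sleeping')

    if __name__ == "__main__":
        start_time = time.perf_counter()

        # create and start two processes running task()
        p1 = multiprocessing.Process(target=task)
        p2 = multiprocessing.Process(target=task)
        p1.start()
        p2.start()

        finish_time = time.perf_counter()
        print(f"Program finished in {finish_time - start_time:.3f} seconds")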

We must fence our main program under if __name__ == "__main__", or otherwise the multiprocessing module will complain. This safety construct ensures the process-creating code runs only in the main program: a newly spawned child process re-imports the module, and without the guard it would try to create sub-processes of its own.

However, there is a problem with the code, as the program timer is printed before the processes we created are even executed. Here’s the output for the code above:
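The exact timings vary from run to run, but the shape is always the same, with the timer line printed before the task messages:

    Program finished in 0.012 seconds
    Sleeping for 0.5 seconds
    Sleeping for 0.5 seconds
    Finished sleeping
    Finished sleeping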

We need to call the join() function on the two processes to make the main process wait for them before the time prints. This is because three processes are going on: p1, p2, and the main process. The main process is the one that keeps track of the time and prints the time taken to execute. We should make the finish_time line run no earlier than the processes p1 and p2 have finished. We just need to add this snippet of code immediately after the start() function calls:
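    # wait until both processes have finished
    p1.join()
    p2.join()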

The join() function makes the calling process wait until the process it is called on has completed. Here’s the output with the join statements added:
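Again with illustrative timings, the task messages now come first, and the reported time is a little over the 0.5 seconds each process sleeps:

    Sleeping for 0.5 seconds
    Sleeping for 0.5 seconds
    Finished sleeping
    Finished sleeping
    Program finished in 0.520 seconds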

With similar reasoning, we can make more processes run. The following is the complete code modified from above to have 10 processes:
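A sketch of one way to write it, keeping the processes in a list:

    import multiprocessing
    import time

    def task():
        print('Sleeping for 0.5 seconds')
        time.sleep(0.5)
        print('Finished sleeping')

    if __name__ == "__main__":
        start_time = time.perf_counter()

        # create 10 processes, start them all, then wait for them all
        processes = [multiprocessing.Process(target=task) for _ in range(10)]
        for p in processes:
            p.start()
        for p in processes:
            p.join()

        finish_time = time.perf_counter()
        print(f"Program finished in {finish_time - start_time:.3f} seconds")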

Multiprocessing for Real Use

Starting a new process and then joining it back to the main process is how multiprocessing works in Python (as in many other languages). The reason we want to run multiprocessing is probably to execute many different tasks concurrently for speed. It can be an image-processing function that we need to run on thousands of images. It can also be converting PDFs into plaintext for subsequent natural language processing tasks, where we need to process a thousand PDFs. Usually, we create a function that takes an argument (e.g., a filename) for such tasks.

Let’s consider a function:
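For example, one that computes the cube of a number (the name cube matches the discussion below):

    def cube(x):
        return x ** 3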

If we want to run it with arguments 1 to 1,000, we can create 1,000 processes and run them in parallel:
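Naively, under the same if __name__ == "__main__" guard, that would be a sketch like this (the approach warned against below):

    # one process per argument, far too many to be practical
    processes = [multiprocessing.Process(target=cube, args=(x,)) for x in range(1, 1000)]
    for p in processes:
        p.start()
    for p in processes:
        p.join()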

However, this will not work, as you probably have only a handful of cores in your computer. Running 1,000 processes creates too much overhead and overwhelms the capacity of your OS. It may also exhaust your memory. The better way is to run a process pool to limit the number of processes that can run at a time:
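A sketch using a pool of three worker processes (the pool size here is illustrative):

    import multiprocessing

    def cube(x):
        return x ** 3

    if __name__ == "__main__":
        pool = multiprocessing.Pool(3)
        # submit one task per argument; each call returns an AsyncResult
        async_results = [pool.apply_async(cube, args=(x,)) for x in range(1, 1000)]
        # get() blocks until the corresponding task has finished
        results = [r.get() for r in async_results]
        pool.close()
        print(results[:5])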

The argument for multiprocessing.Pool() is the number of processes to create in the pool. If omitted, Python will make it equal to the number of cores you have in your computer.

We use the apply_async() function to pass the arguments to the function cube in a list comprehension. This creates tasks for the pool to run. It is called “async” (asynchronous) because we don’t wait for the task to finish, and the main process may continue to run. Therefore, apply_async() does not return the result itself but an object whose get() method we can call to wait for the task to finish and retrieve the result. Since we collect the results in a list comprehension, their order corresponds to the arguments we used to create the asynchronous tasks. However, this does not mean the processes start or finish in this order inside the pool.

If you think writing lines of code to start processes and join them is too explicit, you can consider using map() instead:
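For example, with the same cube function and pool size as before:

    import multiprocessing

    def cube(x):
        return x ** 3

    if __name__ == "__main__":
        with multiprocessing.Pool(3) as pool:
            results = pool.map(cube, range(1, 1000))
        print(results[:5])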

We don’t have the start and join here because they are hidden behind the pool.map() function. What it does is split the iterable range(1, 1000) into chunks and run each chunk in the pool. The map function is a parallel version of the list comprehension:
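    results = [cube(x) for x in range(1, 1000)]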

But the modern-day alternative is to use map from concurrent.futures, as follows:
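A version using ProcessPoolExecutor (max_workers of 3 again illustrative):

    from concurrent.futures import ProcessPoolExecutor

    def cube(x):
        return x ** 3

    if __name__ == "__main__":
        with ProcessPoolExecutor(max_workers=3) as executor:
            results = list(executor.map(cube, range(1, 1000)))
        print(results[:5])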

This code is running the multiprocessing module under the hood. The beauty of doing so is that we can change the program from multiprocessing to multithreading by simply replacing ProcessPoolExecutor with ThreadPoolExecutor. Of course, you have to consider whether the global interpreter lock is an issue for your code.

Using joblib

The package joblib is a set of tools to make parallel computing easier. It is a common third-party library for multiprocessing. It also provides caching and serialization functions. To install the joblib package, use the command in the terminal:
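    pip install joblib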

We can convert our previous example into the following to use joblib:
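A sketch of the joblib version (n_jobs=3 matches the discussion below):

    from joblib import Parallel, delayed

    def cube(x):
        return x ** 3

    # run cube() on 1..999 with three parallel jobs
    results = Parallel(n_jobs=3)(delayed(cube)(x) for x in range(1, 1000))
    print(results[:5])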

Indeed, it is intuitive to see what it does. The delayed() function is a wrapper around another function that creates a “delayed” version of the function call, which means calling it does not execute the wrapped function immediately.

Then we call the delayed function multiple times with the different sets of arguments we want to pass to it. For example, when we give the integer 1 to the delayed version of the function cube, instead of computing the result, we produce a tuple, (cube, (1,), {}), holding the function object, the positional arguments, and the keyword arguments, respectively.

We created the engine instance with Parallel(). When it is invoked like a function with the list of tuples as an argument, it will actually execute the job as specified by each tuple in parallel and collect the result as a list after all jobs are finished. Here we created the Parallel() instance with n_jobs=3, so there will be three processes running in parallel.

We can also write the tuples directly. Hence the code above can be rewritten as:
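That is, passing the tuples to Parallel() directly:

    from joblib import Parallel

    def cube(x):
        return x ** 3

    # each tuple is (function, positional args, keyword args)
    results = Parallel(n_jobs=3)((cube, (x,), {}) for x in range(1, 1000))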

A further benefit of using joblib is that we can run the code in multiple threads instead of multiple processes by simply adding an argument:
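One such argument is prefer="threads", which asks joblib to use its threading backend:

    results = Parallel(n_jobs=3, prefer="threads")(delayed(cube)(x) for x in range(1, 1000))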

And this hides all the details of running functions in parallel; we simply use syntax not much different from a plain list comprehension.

Summary

In this tutorial, you learned how we run Python functions in parallel for speed. In particular, you learned:

  • How to use the multiprocessing module in Python to create new processes that run a function
  • The mechanism of launching and completing a process
  • The use of a process pool in multiprocessing for controlled parallelism, and the counterpart syntax in concurrent.futures
  • How to use the third-party library joblib for multiprocessing

