An Elegant Guide to Testing Your Data Science Pipeline Using Pytest


A Comprehensive Guide to Pytest for Data Science Projects

Photo by Annie Spratt on Unsplash

One of the biggest hurdles data science teams face today is transitioning their data-driven pipelines from Jupyter notebooks to executable, reproducible, and organized files comprising functions and classes.

For most of us, working in a Jupyter notebook is one of the more enjoyable parts of the life cycle of a data science project.

However, building an error-free, well-tested, and reliable data pipeline that does not break in production isn’t.

For instance, imagine a situation where you wrote a function in your data pipeline that divides two numbers:
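The original snippet is not reproduced here; based on the function name referenced later (divide_two_numbers() in project.py), a minimal sketch might look like this:

# project.py
def divide_two_numbers(num1, num2):
    # Naive implementation -- no guard against a zero denominator
    return num1 / num2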

While working in a Jupyter notebook, you, as the programmer, know what goes into the function and what to expect as output.

Thinking that nothing could possibly go wrong with a simple division, you push the function to production without testing it, and guess what?

The pipeline collapses when the denominator is zero!

Therefore, to reduce errors in the data pipeline and to ensure that its functions work as expected, testing them against different example inputs (or data situations) becomes an essential step before pushing your code to production.

In other words, the objective here is to create a reliable automation framework for the pipeline.

For starters, a test automation framework is an organized approach to writing test cases, executing them, and reporting the results.

Overview of Testing Framework (Image by Author)

You should always test the functions in your data science projects because testing:

  • Ensures that the code works as expected end-to-end — improving reliability.
  • Helps you detect edge cases where different methods might fail.
  • Allows other team members to understand your code with the help of your tests — reducing code complexity.
  • Lets you swap existing code for an optimized/improved version without breaking other functions.
  • Generates actionable insights about the code, such as run time and expected inputs/outputs.

Thankfully, Python offers a handful of testing libraries such as unittest, nose, and behave, but the one that stands out in the Python developer community is Pytest.

Therefore, in this post, I’ll provide a detailed walkthrough of how you can leverage Pytest to build a testing suite for your data science project.

Let’s begin 🚀!

As a quick introduction, Pytest is a testing framework for writing test code, executing it against a Python code pipeline, and generating the corresponding test reports.

The best thing about Pytest, in my opinion, is how intuitive and uncomplicated it is to integrate with a data pipeline.

Testing is a job that many data scientists don’t look forward to with much interest.

In this regard, Pytest genuinely makes writing test suites easy and even fun, which in turn helps immensely in establishing the reliability of a data science project.

To install Pytest, execute the following command:
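pip install pytest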

Next, you should import Pytest to leverage its core testing functionalities.
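In your test file, that is simply:

import pytest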

With this, you are ready to write test cases and execute them.

We have coded the divide_two_numbers() method in project.py.

Next, let’s create a separate file for tests (test_file.py).

project
├── project.py
└── test_file.py

For the method we defined above (divide_two_numbers()), let’s write a test function.

This will check the data type of the output returned by the method against different inputs.
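The original test snippet isn’t shown in this version of the article; a sketch consistent with the description (asserting on the data type of the output, with a zero denominator in the second case) might be:

# test_file.py
from project import divide_two_numbers

def test_divide_two_numbers_method():
    # A regular division should return a float
    assert isinstance(divide_two_numbers(10, 2), float)
    # A zero denominator -- this raises ZeroDivisionError and fails the test
    assert isinstance(divide_two_numbers(10, 0), float)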

To execute this test function, open a terminal in the current working directory and run the pytest command.

As expected, the test fails on the second assertion, where the denominator is zero, which points you to the case your function needs to handle.

All this is fine, but now you should have two questions in mind:

  1. In the above pytest command, we never specified the name of the test file. Yet the results mention only test_file.py. How did Pytest infer that test_file.py is the file in which we listed all our test cases?
  2. Next, how did Pytest find the methods it needs to execute? In other words, how did Pytest differentiate divide_two_numbers() from test_divide_two_numbers_method() and figure out that the test cases are listed in the latter?

The answer lies in the technique of “Test Searching” (also called “Test Discovery”) in Pytest.

Pytest implements the following test discovery procedure to find test files, functions, and Python classes:

  1. If no directory (or file) is specified during Pytest execution, Pytest starts discovering from the current directory and recurses into sub-directories.
  2. In each directory, it searches for files whose names match test_*.py or *_test.py. That is why, in the run above, project.py was not selected.
  3. From the pattern-matched files, it collects:
  • all "test” prefixed methods outside a Python class. For instance, from the methods below, only test_method1() will be collected but not method2_test().
  • all "test" prefixed methods inside a "Test” prefixed Python class that does not have an __init__() method.

For instance, in the above code block, from Test_Class1, only test_method1() will be collected as it is prefixed by the keyword "test".

Moreover, no methods will be collected from Test_Class2 as it has the __init__() method.

Finally, no methods will be collected from Class3_Test as the class name is not prefixed by the keyword "Test".

Customizing Test Search

While the default test discovery mechanism of Pytest is intuitive and straightforward, there might be situations where you may need to customize the test search pattern.

To achieve this, you can configure a different naming convention in a configuration file (pytest.ini) declared in the root directory of the project.

Pytest also supports alternative configuration file formats, which you can read about in the Pytest documentation.

The directory is now structured as follows:

project
├── project.py
├── test_file.py
└── pytest.ini

We set three variables in the configuration file to customize the search pattern. These are python_files, python_classes and python_functions. Here is an example:
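The configuration shown in the original post isn’t reproduced here; based on the rules described next, it would look roughly like this:

# pytest.ini
[pytest]
python_files = check_*.py
python_classes = *_Check
python_functions = check_*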

With this search pattern, Pytest will search for test functions in files prefixed with "check_".

Moreover, the test functions should also be prefixed with "check_".

Lastly, test classes, if any, should end with the keyword "_Check" to be considered for testing.

Now, if we run the pytest command, Pytest identifies no tests, because the new test search pattern does not match any of the files or functions in the directory.

To correct this, rename test_file.py to check_file.py and test_divide_two_numbers_method() to check_divide_two_numbers_method().

project
├── project.py
├── check_file.py
└── pytest.ini

After complying with the new search pattern, we re-run the command and the tests now pass. (Here, I removed the divide-by-zero test case.)

To get verbose test output, use the -v option when running the pytest command.

Imagine you have various modules in your pipeline that work together to accomplish a common goal.

A code pipeline with multiple modules (Image by Author)

Each module, of course, may also have its own testing suite.

Assume you made some modifications to a specific module (say, Module 1), which you wish to test before committing the changes.

In this situation, running the entire testing suite would be redundant and time-consuming, given that you didn’t make any changes to the other modules of the pipeline.

Moreover, the other modules may have been thoroughly tested in previous testing runs.

This is where Test Marks in Pytest come in handy.

As the name suggests, Marking allows you to organize tests into different categories.

While testing, you can use these markers to make Pytest run only specific tests — avoiding unnecessary computation and run time.

Technically, you can mark a test using the @pytest.mark.{marker-name} decorator.

Implementing a Marker

In the division example above, let’s define four different test methods, check_function1() through check_function4() as follows:
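The original code isn’t reproduced here; a sketch consistent with the markers described below (the exact inputs are illustrative) might be:

# check_file.py
from pytest import mark
from project import divide_two_numbers

@mark.non_zero_input
def check_function1():
    assert isinstance(divide_two_numbers(10, 2), float)

@mark.non_zero_input
def check_function2():
    assert isinstance(divide_two_numbers(7, 5), float)

@mark.non_zero_input
def check_function3():
    assert isinstance(divide_two_numbers(-4, 2), float)

@mark.zero_input
def check_function4():
    # Fails until divide_two_numbers() handles a zero denominator
    assert isinstance(divide_two_numbers(10, 0), float)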

To mark test functions, you first need to import the mark decorator from pytest, as shown above.

Next, we mark the first three methods with the mark.non_zero_input decorator and the last method with the mark.zero_input decorator.

When you define a custom marker (like we did above), it is a good practice to specify it in the configuration file (pytest.ini).

This has two primary benefits:

  1. It avoids any unnecessary warnings that Pytest may throw.
  2. If someone wants to understand your pipeline and its test suite, declaring custom markers in the configuration file helps them understand each marker’s role and use your code without hassle.

You can declare custom markers in the configuration file as follows:
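For the two markers used in this example, the declaration might look like this (the description text is illustrative, and the markers option is added to the existing [pytest] section):

# pytest.ini
[pytest]
markers =
    non_zero_input: tests that divide by a non-zero denominator
    zero_input: tests that divide by a zero denominator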

Next, say you want to run only those test functions that correspond to the non_zero_input marker.

You can do this using the command: pytest -m {marker-name}. This is demonstrated below:
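With the marker used in this example, the command is:

pytest -m non_zero_input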

When you run this, Pytest ignores check_function4() and collects only the first three methods, which carry the non_zero_input marker.

Things To Know About Markers in Pytest

In addition to the overview of Pytest markers above, there are a few more things you should know about them.

→ #1 A function can have multiple Markers

A test function is not restricted to a single marker in Pytest. Instead, you can stack multiple markers as decorators on a single function, as follows:
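For instance (the second marker, smoke, is just an illustrative name and would also need to be registered in pytest.ini):

from pytest import mark
from project import divide_two_numbers

@mark.non_zero_input
@mark.smoke              # illustrative second marker
def check_function1():
    assert isinstance(divide_two_numbers(10, 2), float)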

→ #2 You can define a single Marker for the whole class

When writing modular test code, Python classes are a natural way to group related tests, and a class usually contains multiple test methods.

To declare a marker for the functions defined in a class, you can specify a single marker for the entire class, and Pytest extends it to every method in the class.
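A sketch of a class-level marker (the class and method names here are illustrative and follow the customized check_/_Check naming from the configuration above):

from pytest import mark
from project import divide_two_numbers

@mark.non_zero_input          # applies to every test method in the class
class Division_Check:
    def check_positive(self):
        assert isinstance(divide_two_numbers(10, 2), float)

    def check_negative(self):
        assert isinstance(divide_two_numbers(-10, 2), float)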

However, if you want different methods to belong to different markers, you can also declare markers individually for every method in the class.

→ #3 Some pre-defined Markers in Pytest

In addition to the custom markers discussed above, Pytest also provides a handful of built-in markers covering general functionality that developers may need during testing.

  • mark.skip: As the name suggests, this marker is declared on a test function to intentionally skip it during testing. For instance, consider the tests below:
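The original tests aren’t reproduced here; a sketch might be:

from pytest import mark
from project import divide_two_numbers

def check_regular_division():
    assert isinstance(divide_two_numbers(10, 2), float)

@mark.skip(reason="zero denominator not handled yet")   # the reason appears in the report
def check_zero_division():
    assert isinstance(divide_two_numbers(10, 0), float)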

Now, if you execute the pytest command, the output reports one test as SKIPPED, along with the reason specified in the skip decorator.

  • mark.xfail: Another helpful built-in marker provided by Pytest is xfail (short for expected failure).

Say you are aware that a specific test function is not coded correctly. Therefore, you expect it to fail in certain situations.

With the xfail marker, you can mark the test function as an expected failure in your test suite.
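A sketch of how this might look for the division example:

from pytest import mark
from project import divide_two_numbers

@mark.xfail(reason="divide_two_numbers() does not handle a zero denominator yet")
def check_zero_division():
    # Raises ZeroDivisionError, so the test is reported as XFAIL rather than FAILED
    assert isinstance(divide_two_numbers(10, 0), float)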

Now, if you execute the pytest command, the output reports the test as XFAIL, along with the reason specified in the xfail decorator.

You can find a comprehensive list of pre-defined markers in Pytest using the following command:
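pytest --markers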

Writing modularized code is a critical step in developing large projects. A good development practice says that:

If you are writing the same piece of code twice, you are doing it wrong!

The same is applicable while designing a testing suite.

Fixtures in Pytest are analogous to general functions in a project that can be used anywhere in the pipeline.

The only difference is that ordinary functions must be explicitly imported, while fixtures don’t need to be.

Fixtures not only reduce the amount of code you write, but they also make your testing suite concise and easy to understand.

Creating a Fixture

Usually, in Pytest, you should declare your fixtures in a separate file from your test cases.

For demonstration purposes, let’s create a file conftest.py.

project
├── project.py
├── test_file.py
├── conftest.py
└── pytest.ini

A fixture is declared with the help of the fixture decorator as follows:
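The fixture shown in the original post isn’t reproduced here; a sketch consistent with the fixture name and scope referenced below (the actual values are an assumption) might be:

# conftest.py
import pytest

@pytest.fixture(scope='function')
def test_inputs():
    # Inputs shared by the addition and subtraction tests (values assumed)
    return [5, 10]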

In this example, our objective is to test an addition and subtraction module against the same inputs.

A naive way of doing this would be to define the test inputs in both functions.

However, with fixtures, you can pass the test inputs as an argument to the test method.

The argument received by the test function should be the same as the name of the fixture (test_inputs).
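A sketch of the two test functions (add_two_numbers() and subtract_two_numbers() are hypothetical helpers assumed to live in project.py, the expected values follow from the assumed fixture inputs [5, 10], and the default test_ naming conventions are assumed here):

# test_file.py
from project import add_two_numbers, subtract_two_numbers  # hypothetical helper names

def test_addition(test_inputs):
    num1, num2 = test_inputs
    assert add_two_numbers(num1, num2) == 15       # 5 + 10

def test_subtraction(test_inputs):
    num1, num2 = test_inputs
    assert subtract_two_numbers(num1, num2) == -5  # 5 - 10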

Moreover, notice that we never imported the fixture.

Remember that any fixture you declare in conftest.py becomes accessible to every test function in the current directory and, recursively, in every sub-directory.

Now, if we execute the pytest command, both tests run against the inputs returned by the fixture.

To summarize, we defined a fixture in a separate file with the help of the pytest.fixture decorator.

When the name of the fixture was passed as an argument in any test method, Pytest fetched the object returned by the fixture.

Scope of a Fixture

In the definition of the fixture above, we passed the scope parameter in the decorator.

Essentially, the main objective of a fixture is to supply data to tests.

Scope, in the context of a fixture declaration, defines how often the fixture should return a fresh copy of the object when called by a test function.

For instance, in the above example, scope='function' signifies that the fixture creates a fresh copy of the input list for each test function.

In other words, fixtures are created when first requested by a test and are destroyed based on their scope, which can take one of the following five values:

  • scope = 'function': This is the default scope; a new copy of the fixture is created for every test function.
  • scope = 'class': A single copy of the fixture is shared by all the test methods in a class.
  • scope = 'module': The fixture is invoked once per module.
  • scope = 'package': The fixture is invoked once per package.
  • scope = 'session': The fixture is created only once per run of the test suite.

With this, we come to the end of an introductory blog on developing your test suite using Pytest. Congratulations!

I hope this article gave you a detailed understanding of why testing is essential and how to leverage Pytest to test your data science project.

There are still a few more topics, such as parametrization and generating test reports, which I have kept for a separate blog post coming soon 🙂

Thanks for reading!

