Continuous integration in Python, Volume 1: automated tests with pytest
(Edit: I initially thought it would be cute to number from 0. But it turns out it becomes rather obnoxious to relate English (first, second, …) to 0-indexing. So this was formerly volume 0. But everything else remains the same.)
I just finished the process of setting up continuous integration from scratch for one of my projects, cellom2tif, a simple image file converter/liberator. I thought I would write a blog post about that process, but it has slowly mutated into a hefty document that I thought would work better as a series. I'll cover automated testing, test coverage, and how to get these to run automatically for your project with Travis-CI and Coveralls.
Without further ado, here goes the first post: how to set up automated testing for your Python project using pytest.
Automated tests, and why you need them
Software engineering is hard, and it's incredibly easy to mess things up, so you should write tests for all your functions, which ensure that nothing obviously stupid is going wrong. Tests can take a lot of different forms, but here's a really basic example. Suppose this is a function in your file, maths.py:
def square(x):
    return x ** 2
Then, elsewhere in your package, you should have a file test_maths.py with the following function definition:
from maths import square

def test_square():
    x = 4
    assert square(x) == 16
This way, if someone (such as your future self) comes along and messes with the code in square(), test_square will tell you whether they broke it.
Testing in Python with pytest
A whole slew of testing frameworks, such as nose, will then traverse your project, look for files and functions whose names begin with test_, run them, and report any errors or assertion failures.
I've chosen to use pytest as my framework because:
- it is a strict superset of both nose and Python's built-in unittest, so that if you run it on projects set up with those, it'll work out of the box;
- its output is more readable than nose's; and
- its fixtures provide a very nice syntax for test setup and cleanup (see the sketch after this list).
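For example, a fixture can build a piece of shared test data once and hand it to any test that asks for it by name. Here's a minimal sketch, assuming the maths.py module from above; the test file, the fixture name values, and the test function are all made up for illustration:

import pytest
from maths import square

@pytest.fixture
def values():
    # setup: pytest calls this before each test that requests `values`
    return [1, 2, 3]

def test_square_values(values):
    # the fixture's return value is passed in as the argument
    assert [square(v) for v in values] == [1, 4, 9]

Fixtures can also register teardown code, which is what makes them handy for cleanup, but the basic pattern above covers most simple cases.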
But the basics are very simple: sprinkle files named test_something.py throughout your project, each containing one or more test_function() definitions; then type py.test on the command line (at your project root directory), and voila! Pytest will traverse all your subdirectories, gather up all the test files and functions, and run your tests.
Here's the output for the minimal maths project described above:
~/projects/maths $ py.test
============================= test session starts ==============================
platform darwin -- Python 2.7.8 -- py-1.4.20 -- pytest-2.5.2
collected 1 items

test_maths.py .

=========================== 1 passed in 0.03 seconds ===========================
In addition to the test functions described above, there is a Python standard called doctest in which properly formatted usage examples in your documentation are automatically run as tests. Here's an example:
def square(x):
    """Return the square of a number `x`.

    [...]

    Examples
    --------
    >>> square(5)
    25
    """
    return x ** 2
(See my post on NumPy docstring conventions for what should go in the ellipsis above.)
Depending on the complexity of your code, doctests, test functions, or both will be the appropriate route. Pytest supports doctests with the --doctest-modules flag. (This runs both your standard tests and doctests.)
~/projects/maths $ py.test --doctest-modules
============================= test session starts ==============================
platform darwin -- Python 2.7.8 -- py-1.4.20 -- pytest-2.5.2
collected 2 items

maths.py .
test_maths.py .

=========================== 2 passed in 0.06 seconds ===========================
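If you always want doctests collected, you don't have to remember to type the flag: pytest reads default command-line options from an ini file. Here's a minimal sketch, assuming you add a pytest.ini at your project root (the file itself is a hypothetical addition to the maths project):

# pytest.ini -- at the project root
[pytest]
addopts = --doctest-modules

With that in place, a bare py.test behaves like py.test --doctest-modules.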
Test-driven development
That was easy! And yet most people, my past self included, neglect tests, thinking they'll do them eventually, when the software is ready. This is backwards. You've probably heard the phrase "Test-driven development (TDD)"; this is what they're talking about: writing your tests before you've written the functionality to pass them. It might initially seem like wasted effort, like you're not making progress in what you actually want to do, which is write your software. But it's not:
https://twitter.com/zspencer/status/514447236239859712
By spending a bit of extra effort to prevent bugs down the road, you will get to where you want to go faster.
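To make that concrete, here's what the workflow looks like for a hypothetical cube() function that doesn't exist yet: write the test first, run py.test and watch it fail, then write just enough code to make it pass.

# test_maths.py -- written first; this fails until cube() exists
from maths import cube

def test_cube():
    assert cube(3) == 27

# maths.py -- written second, with just enough code to make the test pass
def cube(x):
    return x ** 3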
That's it for volume 1! Watch out for the next post: ensuring your tests are thorough by measuring test coverage.