# Continuous integration in Python, Volume 2: measuring test coverage

(Edit: I initially thought it would be cute to number from 0. But it turns out it becomes rather obnoxious to relate English (first, second, ...) to 0-indexing. So this was formerly volume 1. But everything else remains the same.)

This is the second post in a series about setting up continuous integration for a Python project from scratch. For the first post, see Automated tests with pytest.

After you've written some test cases for a tiny project, it's easy to check which code you have tested. For even moderately big projects, though, you'll want a tool that automatically reports which parts of your code your tests actually exercise. The proportion of lines of code that run at least once during your tests is called your test coverage.
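To make that definition concrete, here's a toy sketch (not how you'd measure coverage in practice, and assuming Python 3) that records which lines of a function a "test" actually executes, using `sys.settrace`. Real coverage measurement is done by the coverage.py package, which the tooling below builds on.

```python
import sys

def absolute(x):
    """Toy function with a branch the test below never takes."""
    if x < 0:
        return -x
    return x

executed = set()

def tracer(frame, event, arg):
    # Record the line number of each line executed inside absolute().
    if event == "line" and frame.f_code.co_name == "absolute":
        executed.add(frame.f_lineno)
    return tracer

sys.settrace(tracer)
assert absolute(3) == 3   # the "test suite": only exercises the positive path
sys.settrace(None)

# absolute() has three executable lines; the test ran only two of them,
# so this toy suite achieves 2/3 line coverage of absolute().
print(f"coverage: {len(executed)}/3 lines")
```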

For the same reasons that testing is important, measuring coverage is important. Pytest can measure coverage for you with its coverage plugin, found in the pytest-cov package (`pip install pytest-cov`). Once you've installed the plugin, a test coverage measurement is just a command-line option away:

[code lang=text]
~/projects/maths $ py.test --doctest-modules --cov .
============================= test session starts ==============================
platform darwin -- Python 2.7.8 -- py-1.4.25 -- pytest-2.6.3
plugins: cov
collected 2 items

maths.py .
test_maths.py .
--------------- coverage: platform darwin, python 2.7.8-final-0 ----------------
Name         Stmts   Miss  Cover
--------------------------------
maths            2      0   100%
test_maths       4      0   100%
--------------------------------
TOTAL            6      0   100%

=========================== 2 passed in 0.07 seconds ===========================
[/code]


(The --cov takes a directory as input, which I find obnoxious, given that py.test so naturally defaults to the current directory. But it is what it is.)

Now, if I add a function without a test, I'll see my coverage drop:

[code lang=python]
def sqrt(x):
    """Return the square root of x."""
    return x * 0.5
[/code]


(The typo is intentional.)

[code lang=text]
--------------- coverage: platform darwin, python 2.7.8-final-0 ----------------
Name         Stmts   Miss  Cover
--------------------------------
maths            4      1    75%
test_maths       4      0   100%
--------------------------------
TOTAL            8      1    88%
[/code]


With one more option, --cov-report term-missing (so the full invocation is py.test --doctest-modules --cov . --cov-report term-missing), I can see exactly which lines I haven't covered, so I can design tests specifically for that code:

[code lang=text]
--------------- coverage: platform darwin, python 2.7.8-final-0 ----------------
Name         Stmts   Miss  Cover   Missing
------------------------------------------
maths            4      1    75%   24
test_maths       4      0   100%
------------------------------------------
TOTAL            8      1    88%
[/code]


Do note that 100% coverage does not ensure correctness. For example, suppose I test my sqrt function like so:

[code lang=python]
def sqrt(x):
    """Return the square root of x.

    Examples
    --------
    >>> sqrt(4.0)
    2.0
    """
    return x * 0.5
[/code]


Even though my test is correct, and I now have 100% test coverage, I haven't detected my mistake: sqrt(4.0) happens to equal 4.0 * 0.5, so the buggy implementation gives the right answer for this one input. Oops!
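The cure for this blind spot is to test more than one input. Here, assuming the same buggy sqrt, a second input makes the wrong answer visible even though line coverage was already 100%:

```python
def sqrt(x):
    """The buggy implementation from above."""
    return x * 0.5

# sqrt(4.0) coincidentally returns the right answer...
assert sqrt(4.0) == 2.0
# ...but the bug surfaces on almost any other input:
# sqrt(9.0) returns 4.5, while the true square root is 3.0.
assert sqrt(9.0) == 4.5
```

A doctest like `>>> sqrt(9.0)` expecting `3.0` would fail for the same reason, turning the silent bug into a red test.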

But, keeping that caveat in mind, full test coverage is a wonderful thing: code you don't test is code whose errors you're guaranteed not to catch. Further, my example above is quite contrived, and in most situations full test coverage will spot most errors.

That's it for part 2. Tune in next time to learn how to turn on Travis continuous integration for your GitHub projects!