4 votes

I have a Python project that uses pytest-cov for unit testing and code coverage measurement.

The directory structure for my project is:

rift-python
+- rift                        # The package under test
|  +- __init__.py
|  +- __main__.py
|  +- cli_listen_handler.py
|  +- cli_session_handler.py
|  +- table.py
|  +- ...lots more...
+- tests                       # The tests 
|  +- test_table.py
|  +- test_sys_2n_l0_l1.py
|  +- ...more...
+- README.md
+- .travis.yml
+- ...

I use Travis to run pytest --cov=rift tests for every check-in, and I use codecov to view the code coverage results.

The package under test offers a command line interface (CLI) which reads commands from stdin and produces output on stdout. It is started as python rift.
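
For illustration, here is a greatly simplified, hypothetical sketch of the CLI loop; the real rift/__main__.py is considerably more involved:

# rift/__main__.py (hypothetical, greatly simplified sketch of the CLI loop)

import sys

def main():
    for line in sys.stdin:              # commands arrive on stdin
        command = line.strip()
        if command == "exit":
            break
        # The real package parses and dispatches the command; here we
        # just echo something so there is output on stdout to match.
        print("executed: " + command)
        sys.stdout.flush()              # make the output visible immediately

if __name__ == "__main__":
    main()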

The tests directory contains two types of tests.

The first type consists of traditional unit tests that each test an individual class. For example, test_table.py imports table.py and performs traditional pytest tests (using assert etc.). Code coverage measurement works as expected for these tests: codecov accurately reports which lines in the rift package are or are not covered by the test.

# test_table.py (codecov works)

import table

def test_simple_table():
    tab = table.Table()
    tab.add_row(['Animal', 'Legs'])
    tab.add_rows([['Ant', 6]])
    ...
    tab_str = tab.to_string()
    assert (tab_str == "+--------+------+\n"
                       "| Animal | Legs |\n"
                       "+--------+------+\n"
                       "| Ant    | 6    |\n"
                       "+--------+------+\n"
                       ...
                       "+--------+------+\n")

The second type of test uses pexpect: it calls pexpect.spawn("python rift") to start the rift package, then uses pexpect.sendline to inject commands into the CLI (stdin) and pexpect.expect to check the output of the commands on the CLI (stdout). The test functionality works fine, but codecov does not report any code coverage for these tests.

# test_sys_2n_l0_l1.py (codecov does not pick up coverage of rift package)
# Greatly simplified example

import pexpect

def test_basic():
    rift = pexpect.spawn("python rift")
    rift.sendline("cli command")
    rift.expect("expected output")  # Throws exception if expected output not seen

QUESTION: How can I get code coverage measurements to report the covered lines in the spawned rift package for the second type of test, which uses pexpect?

Note: I omitted several details that I believe are not relevant; the full source code is at https://github.com/brunorijsman/rift-python (UPDATE: this repo now contains the working solution suggested in the answer)

Check out the answer I gave to a similar question (measure code coverage of a server spawned by an external command). There is a working example of a fixture that updates the coverage measured by pytest-cov; it should be pretty much what you want. (hoefling)

@hoefling Thanks for the pointer. This is useful, and I might need it if my scenario gets more complicated. (Bruno Rijsman)

2 Answers

2 votes

Use coverage run to run your pexpect program and gather data:

If you usually do:

pexpect.spawn("python rift")

Then instead do:

pexpect.spawn("coverage run rift")

(Source)

After testing, you will likely want to combine the pexpect results with the "regular" unit test results. coverage.py can combine multiple data files into one for reporting.

Once you have created a number of these files, you can copy them all to a single directory, and use the combine command to combine them into one .coverage data file:

$ coverage combine

(Source)
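
Locally, the resulting workflow might look roughly like this (a sketch; as noted below, pytest --cov runs the combine step automatically):

$ coverage combine        # merge the .coverage.* data files into a single .coverage file
$ coverage report -m      # per-file coverage report, including missed line numbers
$ coverage xml            # write coverage.xml, which can be uploaded to codecov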

Two additional details from testing:

  • In the test program (test_sys_2n_l0_l1.py in this example), make sure there is a delay between the moment you terminate the pexpect spawn and the moment the test itself terminates. Otherwise, coverage will not have time to write its results to .coverage. I added a sleep(1.0).

  • Use "coverage run --parallel-mode rift". This was needed to (a) make sure .coverage was not overwritten by later runs, and (b) make "coverage combine" work (which "pytest --cov" runs automatically). Both details are reflected in the sketch below.
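
Putting the answer and these two details together, the pexpect test ends up looking roughly like this (a sketch; the exact exit sequence and the sleep duration will depend on your CLI):

# test_sys_2n_l0_l1.py (sketch of the coverage-enabled variant)

import time
import pexpect

def test_basic():
    # Spawn the CLI under coverage instead of plain "python rift".
    # --parallel-mode writes a uniquely named .coverage.* data file,
    # so later runs do not overwrite it.
    rift = pexpect.spawn("coverage run --parallel-mode rift")
    rift.sendline("cli command")
    rift.expect("expected output")   # Throws exception if expected output not seen
    rift.sendline("exit")            # Ask the CLI to terminate (exit command is hypothetical)
    rift.expect(pexpect.EOF)         # Wait for the spawned process to exit
    time.sleep(1.0)                  # Give coverage time to write its data file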

0 votes

You basically have to enable subprocess coverage tracking.

I recommend using https://pypi.org/project/coverage_enable_subprocess/ to enable this easily.

Using parallel = 1 in your coverage configuration is then recommended (effectively required), and you have to export COVERAGE_PROCESS_START, e.g. export COVERAGE_PROCESS_START="$PWD/.coveragerc".
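
A sketch of what that could look like (the source = rift line is an assumption based on the question, not something the package requires):

# .coveragerc (sketch)
[run]
parallel = 1
source = rift

# In the shell, before running the tests:
$ pip install coverage_enable_subprocess
$ export COVERAGE_PROCESS_START="$PWD/.coveragerc"
$ pytest --cov=rift tests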