Basic Doctest in Python

Doctest will be the mainstay of your testing toolkit. You’ll be using it for tests, of course, but also for things that you may not think of as tests right now. For example, program specifications and API documentation both benefit from being written as doctests and checked alongside your other tests.

Like program source code, doctest tests are written in plain text. Doctest extracts the tests and ignores the rest of the text, which means that the tests can be embedded in human-readable explanations or discussions. This is the feature that makes doctest so suitable for non-classical uses such as program specifications.

Time for action – creating and running your first doctest

We’ll create a simple doctest to demonstrate the fundamentals of using doctest.

  1. Open a new text file in your editor, and name it test.txt.
  2. Insert the following text into the file:
      This is a simple doctest that checks some of Python's arithmetic
      >>> 2 + 2
      4
      >>> 3 * 3
      10
  3. We can now run the doctest. The details of how we do that depend on which version of Python we’re using. At the command prompt, change to the directory where you saved test.txt.
  4. If you are using Python 2.6 or higher, type:
      $ python -m doctest test.txt
  5. If you are using Python 2.5 or lower, the above command may seem to work, but it won’t produce the expected result. This is because Python 2.6 is the first version in which doctest looks for test file names on the command line when you invoke it this way.
  6. If you’re using an older version of Python, you can run your doctest by typing:
      $ python -c "__import__('doctest').testfile('test.txt')"
  7. When the test is run, doctest compares each expected output against what Python actually produces, and prints a report describing any mismatches.
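You can also reproduce the same run programmatically using the doctest module’s own API. The following is a minimal sketch that feeds the test text to doctest.DocTestParser and doctest.DocTestRunner (both part of the standard library); the expected outputs 4 and 10 are included, so the second example fails:

```python
import doctest

# The same two arithmetic checks from test.txt, held in a string.
sample = """
This is a simple doctest that checks some of Python's arithmetic
>>> 2 + 2
4
>>> 3 * 3
10
"""

# Parse the text into a DocTest object and run it.
parser = doctest.DocTestParser()
test = parser.get_doctest(sample, {}, "test.txt", "test.txt", 0)
runner = doctest.DocTestRunner()

# run() prints a failure report for the 3 * 3 example and returns
# a TestResults tuple: here failed=1, attempted=2.
results = runner.run(test)
```

Running `python -m doctest test.txt` performs essentially the same parsing and comparison behind the scenes.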

What just happened?

You wrote a doctest file that describes a couple of arithmetic operations, and executed it to check whether Python behaved as the tests said it should. You ran the tests by telling Python to execute doctest on the files that contained the tests.

In this case, Python’s behavior differed from the tests because, according to the tests, three times three equals ten. Python disagrees. Since doctest expected one thing and Python did another, doctest presented you with an error report showing where to find the failed test and how the actual result differed from the expected result. At the bottom of the report is a summary showing how many tests failed in each file tested, which is helpful when you have more than one file containing tests.

Remember, doctest files are for computer and human consumption. Try to write the test code in a way that human readers can easily understand, and add in plenty of plain language commentary.

The syntax of doctests

You might have guessed from looking at the previous example: doctest recognizes tests by looking for sections of text that look like they’ve been copied and pasted from a Python interactive session. Anything that can be expressed in Python is valid within a doctest.

Lines that start with a >>> prompt are sent to a Python interpreter. Lines that start with a ... prompt are sent as continuations of the code from the previous line, allowing you to embed complex block statements into your doctests. Finally, any lines that don’t start with >>> or ..., up to the next blank line or >>> prompt, represent the expected output of the statement. The expected output appears as it would in an interactive Python session: anything printed to the console, followed by the repr of the statement’s return value if it isn’t None. If you don’t have any output lines, doctest takes that to mean the statement is expected to produce no visible result on the console.

Doctest ignores anything in the file that isn’t part of a test, which means that you can place explanatory text, HTML, line-art diagrams, or whatever else strikes your fancy in between your tests. We took advantage of that in the previous doctest, to add an explanatory sentence before the test itself.

Time for action – writing a more complex test

We’ll write another test (you can add it to test.txt if you like) which shows off most of the details of doctest syntax.

  1. Insert the following text into your doctest file (test.txt), separated from the existing tests by at least one blank line:
    Now we're going to take some more of doctest's syntax for a spin.
    >>> import sys
    >>> def test_write():
    ...     sys.stdout.write("Hello\n")
    ...     return True
    >>> test_write()
    Hello
    True

    Think about it for a moment: What does this do? Do you expect the test to pass, or to fail?

  2. Run doctest on the test file, just as we discussed before. Because we added the new tests to the same file containing the tests from before, we still see the notification that three times three does not equal ten. Now, though, we also see that five tests were run, which means our new tests ran and succeeded.

What just happened?

As far as doctest is concerned, we added three tests to the file.

  • The first one says that when we import sys, nothing visible should happen.
  • The second test says that when we define the test_write function, nothing visible should happen.
  • The third test says that when we call the test_write function, Hello and True should appear on the console, in that order, on separate lines.

Since all three of these tests pass, doctest doesn’t bother to say much about them. All it did was increase the number of tests reported at the bottom from two to five.
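To double-check the counting, here is a sketch that runs just these three examples through doctest’s Python API (DocTestParser and DocTestRunner from the standard library):

```python
import doctest

# The import, the function definition, and the function call
# each count as one doctest example.
sample = '''
>>> import sys
>>> def test_write():
...     sys.stdout.write("Hello\\n")
...     return True
>>> test_write()
Hello
True
'''

parser = doctest.DocTestParser()
test = parser.get_doctest(sample, {}, "test.txt", "test.txt", 0)

# All three examples pass: failed=0, attempted=3.
results = doctest.DocTestRunner().run(test)
```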

Expecting exceptions

That’s all well and good for testing that things work as expected, but it is just as important to make sure that things fail when they’re supposed to fail. Put another way: sometimes your code is supposed to raise an exception, and you need to be able to write tests that check that behavior as well.

Fortunately, doctest follows nearly the same principle in dealing with exceptions that it does with everything else: it looks for text that looks like a Python interactive session. That means it looks for text resembling a Python traceback, and matches it against any exception that gets raised.

Doctest does handle exceptions a little differently from other output, though. It doesn’t simply match the text precisely and report a failure on any mismatch, because exception tracebacks tend to contain many details that are not relevant to the test, but which can change unexpectedly. Doctest deals with this by ignoring the traceback body entirely: it is only concerned with the first line, Traceback (most recent call last):, which tells it that you expect an exception, and the part after the traceback, which tells it which exception you expect. Doctest reports a failure only if one of these parts does not match.

That’s helpful for a second reason as well: manually figuring out what the traceback would look like when you’re writing your tests would require significant effort and gain you nothing. It’s better to simply omit it.
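As a small illustration of this matching rule, the following sketch defines a doctest that expects a KeyError; the traceback body is omitted entirely, and doctest compares only the header line and the final exception line:

```python
import doctest

# A doctest that expects an exception. The lines between the
# traceback header and the exception line are left out.
sample = """
>>> d = {}
>>> d['missing']
Traceback (most recent call last):
KeyError: 'missing'
"""

parser = doctest.DocTestParser()
test = parser.get_doctest(sample, {}, "exceptions_demo", "exceptions_demo", 0)

# Both examples pass: the assignment produces no output, and the
# lookup raises exactly the expected KeyError. failed=0, attempted=2.
results = doctest.DocTestRunner().run(test)
```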

Time for action – expecting an exception

This is yet another test that you can add to test.txt, this time testing some code that ought to raise an exception.

  1. Insert the following text into your doctest file (the last line of the expected output is a single line, even if it wraps on your screen):
      Here we use doctest's exception syntax to check that Python is
      correctly enforcing its grammar.
      >>> def faulty():
      ...     yield 5
      ...     return 7
      Traceback (most recent call last):
      SyntaxError: 'return' with argument inside generator (<doctest test.txt[5]>, line 3)
  2. The test is supposed to raise an exception, so it will fail if it doesn’t raise the exception, or if it raises the wrong exception. Make sure you have your mind wrapped around that: if the test code executes successfully, the test fails, because it expected an exception.
  3. Run the tests using doctest, just as before, and examine the report it prints.

What just happened?

Since Python doesn’t allow a function to contain both yield statements and return statements with values, having the test define such a function caused an exception. In this case, the exception was a SyntaxError with the expected value. As a result, doctest considered it a match with the expected output, and the test passed. When dealing with exceptions, it is often desirable to be able to use a wildcard matching mechanism. Doctest provides this facility through its ellipsis directive, which we’ll discuss later.

Expecting blank lines in the output

Doctest uses the first blank line to identify the end of the expected output. So what do you do when the expected output actually contains a blank line?

Doctest handles this situation by matching a line that contains only the text <BLANKLINE> in the expected output, with a real blank line in the actual output.
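A minimal sketch, assuming Python 3’s print function:

```python
import doctest

# The printed output contains a real blank line; in the expected
# output that line must be written as <BLANKLINE>.
sample = """
>>> print("first\\n\\nsecond")
first
<BLANKLINE>
second
"""

parser = doctest.DocTestParser()
test = parser.get_doctest(sample, {}, "blankline_demo", "blankline_demo", 0)

# The single example passes: failed=0, attempted=1.
results = doctest.DocTestRunner().run(test)
```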

Using directives to control doctest

Sometimes, the default behavior of doctest makes writing a particular test inconvenient. That’s where doctest directives come to our rescue. Directives are specially formatted comments that you place after the source code of a test, which tell doctest to alter its default behavior in some way.

A directive comment begins with # doctest:, after which comes a comma-separated list of options that either enable or disable various behaviors. To enable a behavior, write a + (plus symbol) followed by the behavior name. To disable a behavior, write a - (minus symbol) followed by the behavior name.

Ignoring part of the result

It’s fairly common that only part of the output of a test is actually relevant to determining whether the test passes. By using the +ELLIPSIS directive, you can make doctest treat the text ... (called an ellipsis) in the expected output as a wildcard, which will match any text in the output.

When you use an ellipsis, doctest will scan ahead until it finds text matching whatever comes after the ellipsis in the expected output, and continue matching from there. This can lead to surprising results such as an ellipsis matching against a 0-length section of the actual output, or against multiple lines. For this reason, it needs to be used thoughtfully.
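Here is a small sketch of the directive in action; the ... wildcard stands in for most of a long list, so the expected output stays short:

```python
import doctest

# The "# doctest: +ELLIPSIS" directive makes ... in the expected
# output match any text, so we don't spell out all twenty elements.
sample = """
>>> list(range(20))  # doctest: +ELLIPSIS
[0, 1, ..., 19]
"""

parser = doctest.DocTestParser()
test = parser.get_doctest(sample, {}, "ellipsis_demo", "ellipsis_demo", 0)

# The example passes: failed=0, attempted=1.
results = doctest.DocTestRunner().run(test)
```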

