Run Django Tests

2019-04-30

Code coverage is a simple tool for checking which lines of your application code are run by your test suite. 100% coverage is a laudable goal, as it means every line is run at least once.

Coverage.py is the Python tool for measuring code coverage. Ned Batchelder has maintained it for an incredible 14 years!

I like adding Coverage.py to my Django projects, like fellow Django Software Foundation member Sasha Romijn.


Let’s look at how we can integrate it with a Django project, and how to get that golden 100% (even if it means ignoring some lines).

Configuring Coverage.py

Install coverage with pip install coverage. It includes a C extension for speed-up, so it's worth checking that this installs properly - see the installation docs for information.

Then set up a configuration file for your project. The default file name is .coveragerc, but since that's a hidden file I prefer to use the option to store the configuration in setup.cfg.

This INI file was originally used only by setuptools but now many tools have the option to read their configuration from it. For Coverage.py, we put our settings there in sections prefixed with coverage:.

The Run Section

This is where we tell Coverage.py what coverage data to gather.

We tell Coverage.py which files to check with the source option. In a typical Django project this is as easy as specifying the current directory (source = .) or the app directory (source = myapp/*). Add it like so:
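```ini
[coverage:run]
source = .
```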

(Remove the coverage: prefix if you’re using .coveragerc.)

An issue I’ve seen on a Django project is Coverage.py finding Python files from a nested node_modules. It seems Python is so great even JavaScript projects have a hard time resisting it! We can tell coverage to ignore these files by adding omit = */node_modules/*.

When you come to a fork in the road, take it.


—Yogi Berra

An extra I like to add is branch coverage. This ensures that your code runs through both the True and False paths of each conditional statement. You can set this up by adding branch = True in your run section.

As an example, take this code:
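```python
# A sketch (the names here are illustrative): with only red widgets in
# our tests, every line below runs, so statement coverage reports 100%.
# But the False path of the `if` - skipping the label - is never taken,
# which branch coverage catches.
def prepare_for_shipping(widget):
    if widget.colour == "red":
        widget.add_fragile_label()
    widget.pack()
```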

With branch coverage off, we can get away with tests that pass in a red widget. Really, we should be testing with both red and non-red widgets. Branch coverage enforces this, by counting both paths from the if.

The Report Section

This is where we tell Coverage.py how to report the coverage data back to us.

I like to add three settings here.

  1. fail_under = 100 requires us to reach that sweet 100% goal to pass. If we’re under our target, the report command fails.
  2. show_missing = True adds a column to the report with a summary of which lines (and branches) the tests missed. This makes it easy to go from a failure to fixing it, rather than using the HTML report.
  3. skip_covered = True avoids outputting file names with 100% coverage. This makes the report a lot shorter, especially if you have a lot of files and are getting to 100% coverage.

Add them like so:
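```ini
[coverage:report]
fail_under = 100
show_missing = True
skip_covered = True
```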

(Again, remove the coverage: prefix if you’re using .coveragerc.)

Template Coverage

Your Django project probably has a lot of template code. It’s a great idea to test its coverage too. This can help you find blocks or whole template files that the tests didn’t run.

Lucky for us, the primary plugin listed on the Coverage.py plugins page is the Django template plugin.

See the django_coverage_plugin PyPI page for its installation instructions. It just needs a pip install and activation in [coverage:run].
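The activation looks like this:

```ini
[coverage:run]
plugins =
    django_coverage_plugin
```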

Git Ignore

If your project is using Git, you’ll want to ignore the files that Coverage.py generates. GitHub’s default Python .gitignore already ignores Coverage’s files. If your project isn’t using this, add these lines to your .gitignore:
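```
.coverage
.coverage.*
htmlcov/
```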

Using Coverage in Tests

This bit depends on how you run your tests. I prefer using pytest with pytest-django. However, many projects use the default Django test runner, so I’ll describe that first.

With Django’s Test Runner

If you’re using manage.py test, you need to change the way you run it. You need to wrap it with three coverage commands like so:
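```bash
coverage erase
coverage run manage.py test
coverage report
```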

99% - looks like I have a little bit of work to do on my test application!

Having to run three commands sucks. That’s three times as many commands as before!

We could wrap the tests with a shell script containing this code:
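```bash
#!/bin/sh -e
# Clear old data, run the test suite under coverage, then report.
coverage erase
coverage run manage.py test
coverage report
```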

Update (2020-01-06): Previously the below section recommended a custom test management command. However, since this will only be run after some imports, it's not possible to record 100% coverage this way. Thanks to Hervé Le Roy for reporting this.

However, there’s a more integrated way of achieving this inside Django.We can patch manage.py to call Coverage.py’s API to measure when we run the test command.Here’s how, based on the default manage.py in Django 3.0:
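```python
#!/usr/bin/env python
"""Django's command-line utility for administrative tasks."""
import os
import sys


def main():
    # Replace 'example.settings' with your project's settings module.
    os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'example.settings')

    # Detect the test command before anything is imported, so coverage
    # can start as early as possible.
    running_tests = len(sys.argv) >= 2 and sys.argv[1] == 'test'
    if running_tests:
        from coverage import Coverage
        cov = Coverage()
        cov.erase()
        cov.start()

    try:
        from django.core.management import execute_from_command_line
    except ImportError as exc:
        raise ImportError(
            "Couldn't import Django. Are you sure it's installed and "
            "available on your PYTHONPATH environment variable? Did you "
            "forget to activate a virtual environment?"
        ) from exc
    execute_from_command_line(sys.argv)

    if running_tests:
        cov.stop()
        cov.save()
        covered = cov.report()
        if covered < 100:
            sys.exit(1)


if __name__ == '__main__':
    main()
```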

Notes:

  1. The two customizations are the blocks before and after the execute_from_command_line block, guarded with if running_tests:.

  2. You need to add manage.py to omit in the configuration file, since it runs before coverage starts. For example:
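    ```ini
    [coverage:run]
    source = .
    omit =
        */node_modules/*
        manage.py
    ```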

    (It's fine, and good, to put them on multiple lines. Ignore the furious red from my blog's syntax highlighter.)

  3. The .report() method doesn’t exit for us like the command-line report command does. Instead we do our own check on the coverage percentage it returns. This means we can remove fail_under from the [coverage:report] section in our configuration file.

Run the tests again and you'll see it in use:

Yay!

(Okay, it’s still 99%. Spoiler: I’m actually not going to fix that in this post because I’m lazy.)

With pytest

It’s less work to set up Coverage testing in the magical land of pytest. Simply install the pytest-cov plugin and follow its configuration guide.

The plugin will ignore the [coverage:report] section and source setting in the configuration, in favour of its own pytest arguments. We can set these in our pytest configuration’s addopts setting. For example, in our pytest.ini we might have:
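```ini
[pytest]
addopts = --cov=. --cov-branch --cov-fail-under=100 --cov-report=term-missing:skip-covered
```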

(Ignore the angry red from my blog’s syntax highlighter.)

Run pytest again and you’ll see the coverage report at the end of the pytest report:

Hooray!

(Yup, still 99%.)

Browsing the Coverage HTML Report

The terminal report is great but it can be hard to join this data back with your code. Looking at uncovered lines requires:

  1. Remembering the file name and line numbers from the terminal report
  2. Opening the file in your text editor
  3. Navigating to those lines
  4. Repeating this for each set of lines in each file

This gets tiring quickly!

Coverage.py has a very useful feature to automate this merging, the HTML report.

After running coverage run, the coverage data is stored in the .coverage file. Run this command to generate an HTML report from this file:
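```bash
coverage html
```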

This creates a folder called htmlcov. Open up htmlcov/index.html and you’ll see something like this:

Click on an individual file to see line by line coverage information:

The highlighted red lines are not covered and need work.

Django itself uses this on its Jenkins test server. See the “HTML Coverage Report” on the djangoci.com project django-coverage.

With PyCharm

Coverage.py is built into this editor, in the “Run <name> with coverage” feature.

This is great for individual development but less so for a team, as other developers may not use PyCharm. Also, it won’t be automatically run in your tests or your Continuous Integration pipeline.

See more in this Jetbrains feature spotlight blog post.

Is 100% (Branch) Coverage Too Much?

Some advocate for 100% branch coverage on every project. Others are skeptical, and even believe it to be a waste of time.

For examples of this debate, see this Stack Overflow question and this one.

Like most things, it depends.

First, it depends on your project’s maturity. If you’re writing an MVP and moving fast with few tests, coverage will definitely slow you down. But if your project is supporting anything of value, it’s an investment for quality.

Second, it depends on your tests. If your tests are low quality, Coverage won’t magically improve them. That said, it can be a tool to help you work towards smaller, better targeted tests.

100% coverage certainly does not mean your tests cover all scenarios. Indeed, it’s impossible to cover all scenarios, due to the combinatorial explosion from multiplying branches. (See all-pairs testing for one way of tackling this explosion.)

Third, it depends on your code.Certain types of code are harder to test, for example branches dealing with concurrent conditions.

IF YOU’RE HAVING CONCURRENCY PROBLEMS I FEEL BAD FOR YOU SON

99 AIN’T GOT I BUT PROBLEMS CONCURRENCY ONE

—[@quinnypig on Twitter](https://twitter.com/QuinnyPig/status/1110567694837800961)

Some tools, such as unittest.mock, help us reach those hard branches. However, it might be a lot of work to cover them all, taking time away from other means of verification.

Fourth, it depends on your other tooling. If you have good code review, quality tests, fast deploys, and detailed monitoring, you already have many defences against bugs. Perhaps 100% coverage won’t add much, but normally these areas are all a bit lacking or not possible. For example, if you’re working on a solo project, you don’t have code review, so 100% coverage can be a great boon.

To conclude, I think that coverage is a great addition to any project, but it shouldn’t be the only priority. A pragmatic balance is to set up Coverage for 100% branch coverage, but to be unafraid of adding # pragma: no cover. These comments may be ugly, but at least they mark untested sections intentionally. If code marked no cover crashes in production, you should be less surprised.

Also, review these comments periodically with a simple search.You might learn more and change your mind about how easy it is to test those sections.

Fin

Go forth and cover your tests!

If you used this post to improve your test suite, I’d love to hear your story. Tell me via Twitter or email - contact details are on the front page.

—Adam

Thanks to Aidas Bendoraitis for reviewing this post.



Tutorial

The author selected the Open Internet/Free Speech Fund to receive a donation as part of the Write for DOnations program.

Introduction

It is nearly impossible to build websites that work perfectly the first time without errors. For that reason, you need to test your web application to find these errors and work on them proactively. In order to improve the efficiency of tests, it is common to break down testing into units that test specific functionalities of the web application. This practice is called unit testing. It makes it easier to detect errors because the tests focus on small parts (units) of your project independently from other parts.

Testing a website can be a complex task to undertake because it is made up of several layers of logic like handling HTTP requests, form validation, and rendering templates. However, Django provides a set of tools that makes testing your web application seamless. In Django, the preferred way to write tests is to use the Python unittest module, although it is possible to use other testing frameworks.

In this tutorial, you will set up a test suite in your Django project and write unit tests for the models and views in your application. You will run these tests, analyze their results, and learn how to find the causes of failing tests.

Prerequisites

Before beginning this tutorial, you’ll need the following:

  • Django installed on your server with a programming environment set up. To do this, you can follow one of our How To Install the Django Web Framework and Set Up a Programming Environment tutorials.
  • A Django project created with models and views. In this tutorial, we have followed the project from our Django Development tutorial series.

Step 1 — Adding a Test Suite to Your Django Application

A test suite in Django is a collection of all the test cases in all the apps in your project. To make it possible for the Django testing utility to discover the test cases you have, you write the test cases in scripts whose names begin with test. In this step, you’ll create the directory structure and files for your test suite, and create an empty test case in it.

If you followed the Django Development tutorial series, you’ll have a Django app called blogsite.

Let’s create a folder to hold all our testing scripts. First, activate the virtual environment:
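```bash
cd ~/my_blog_app
. env/bin/activate  # adjust if your virtualenv has a different name or location
```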

Then navigate to the blogsite app directory, the folder that contains the models.py and views.py files, and then create a new folder called tests:
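```bash
cd ~/my_blog_app/blog/blogsite
mkdir tests
```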

Next, you’ll turn this folder into a Python package, so add an __init__.py file:
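```bash
touch tests/__init__.py
```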

You’ll now add a file for testing your models and another for testing your views:
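```bash
touch tests/test_models.py tests/test_views.py
```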

Finally, you will create an empty test case in test_models.py. You will need to import the Django TestCase class and make it a superclass of your own test case class. Later on, you will add methods to this test case to test the logic in your models. Open the file test_models.py:
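```bash
nano tests/test_models.py  # or open it in your preferred editor
```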

Now add the following code to the file:
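```python
from django.test import TestCase


class ModelsTestCase(TestCase):
    pass
```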

You’ve now successfully added a test suite to the blogsite app. Next, you will fill out the details of the empty model test case you created here.

Step 2 — Testing Your Python Code

In this step, you will test the logic of the code written in the models.py file. In particular, you will be testing the save method of the Post model to ensure it creates the correct slug of a post’s title when called.

Let’s begin by looking at the code you already have in your models.py file for the save method of the Post model:

You’ll see the following:

~/my_blog_app/blog/blogsite/models.py
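```python
# (The essential part of the model; the surrounding fields are elided
# here, and your imports may differ slightly.)
from django.utils.text import slugify


class Post(models.Model):
    ...

    def save(self, *args, **kwargs):
        # Generate a slug from the title if one wasn't provided
        if not self.slug:
            self.slug = slugify(self.title)
        super().save(*args, **kwargs)
```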

We can see that it checks whether the post about to be saved has a slug value, and if not, calls slugify to create a slug value for it. This is the type of logic you might want to test to ensure that slugs are actually created when saving a post.

Close the file.


To test this, go back to test_models.py:
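```bash
nano tests/test_models.py
```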

Then update it to the following, adding in the highlighted portions:
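```python
from django.test import TestCase

from blogsite.models import Post


class ModelsTestCase(TestCase):
    def test_post_has_slug(self):
        """Posts are given slugs correctly when saving"""
        post = Post(title="My first post")
        # The author field's type depends on your model; a simple
        # string value is assumed here.
        post.author = "test author"
        post.save()

        self.assertEqual(post.slug, "my-first-post")
```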

This new method test_post_has_slug creates a new post with the title 'My first post' and then gives the post an author and saves the post. After this, using the assertEqual method from the Python unittest module, it checks whether the slug for the post is correct. The assertEqual method checks whether the two arguments passed to it are equal as determined by the == operator and raises an error if they are not.

Save and exit test_models.py.


This is an example of what can be tested. The more logic you add to your project, the more there is to test. If you add more logic to the save method or create new methods for the Post model, you would want to add more tests here. You can add them to the test_post_has_slug method or create new test methods, but their names must begin with test.

You have successfully created a test case for the Post model where you asserted that slugs are correctly created after saving. In the next step, you will write a test case to test views.

Step 3 — Using Django’s Test Client


In this step, you will write a test case that tests a view using the Django test client. The test client is a Python class that acts as a dummy web browser, allowing you to test your views and interact with your Django application the same way a user would. You can access the test client by referring to self.client in your test methods. For example, let us create a test case in test_views.py. First, open the test_views.py file:
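```bash
nano tests/test_views.py
```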

Then add the following:

~/my_blog_app/blog/blogsite/tests/test_views.py
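```python
from django.test import TestCase


class ViewsTestCase(TestCase):
    def test_index_loads_properly(self):
        """The index page loads properly"""
        response = self.client.get('/')
        self.assertEqual(response.status_code, 200)
```

(The test client addresses URLs by path, so '/' is used here for the index page.)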

The ViewsTestCase contains a test_index_loads_properly method that uses the Django test client to visit the index page of the website (http://your_server_ip:8000, where your_server_ip is the IP address of the server you are using). Then the test method checks whether the response has a status code of 200, which means the page responded without any errors. As a result, you can be sure that when a user visits it, the page will respond without errors too.

Apart from the status code, you can read about other properties of the test client response you can test in the Django Documentation Testing Responses page.

In this step, you created a test case for testing that the view rendering the index page works without errors. There are now two test cases in your test suite. In the next step you will run them to see their results.

Step 4 — Running Your Tests

Now that you have finished building a suite of tests for the project, it is time to execute these tests and see their results. To run the tests, navigate to the blog folder (containing the application’s manage.py file):
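```bash
cd ~/my_blog_app/blog
```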

Then run them with:
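```bash
python manage.py test
```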

You’ll see output similar to the following in your terminal:
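```
Creating test database for alias 'default'...
System check identified no issues (0 silenced).
..
----------------------------------------------------------------------
Ran 2 tests in 0.007s

OK
Destroying test database for alias 'default'...
```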

In this output, there are two dots .., each of which represents a passed test case. Now you’ll modify test_views.py to trigger a failing test. First open the file with:
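```bash
nano blogsite/tests/test_views.py
```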

Then change the highlighted code to:
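```python
self.assertEqual(response.status_code, 404)
```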

Here you have changed the status code from 200 to 404. Now run the test again from your directory with manage.py:
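```bash
python manage.py test
```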

You’ll see the following output:
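```
Creating test database for alias 'default'...
System check identified no issues (0 silenced).
.F
======================================================================
FAIL: test_index_loads_properly (blogsite.tests.test_views.ViewsTestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
  File ".../blogsite/tests/test_views.py", line 8, in test_index_loads_properly
    self.assertEqual(response.status_code, 404)
AssertionError: 200 != 404

----------------------------------------------------------------------
Ran 2 tests in 0.008s

FAILED (failures=1)
Destroying test database for alias 'default'...
```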

You see that there is a descriptive failure message that tells you the script, test case, and method that failed. It also tells you the cause of the failure, the status code not being equal to 404 in this case, with the message AssertionError: 200 != 404. The AssertionError here is raised at the highlighted line of code in the test_views.py file:

~/my_blog_app/blog/blogsite/tests/test_views.py
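```python
self.assertEqual(response.status_code, 404)
```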

It tells you that the assertion is false, that is, the response status code (200) is not what was expected (404). Preceding the failure message, you can see that the two dots .. have now changed to .F, which tells you that the first test case passed while the second didn’t.


Conclusion

In this tutorial, you created a test suite in your Django project, added test cases to test model and view logic, learned how to run tests, and analyzed the test output. As a next step, you can create new test scripts for Python code not in models.py and views.py.

Following are some articles that may prove helpful when building and testing websites with Django:

  • The Django Unit Tests documentation
  • The Scaling Django tutorial series


You can also check out our Django topic page for further tutorials and projects.