Test driven development
Test driven development is a concept that's been around for a long time now. The key principle is that tests should be written before the code they test. Kent Beck is often credited with creating test driven development; however, he claims only to have rediscovered it:
The original description of TDD was in an ancient book about programming. It said you take the input tape, manually type in the output tape you expect, then program until the actual output tape matches the expected output. After I'd written the first xUnit framework in Smalltalk I remembered reading this and tried it out. That was the origin of TDD for me. When describing TDD to older programmers, I often hear, "Of course. How else could you program?" Therefore I refer to my role as "rediscovering" TDD.
This post is going to go over some of the background to test driven development, and will hopefully be followed up with a future post on using the Python unittest module to do test driven development in Python.
Developing without tests
One of the most common approaches to development is a simple loop: write some code, run it, fix any errors, then repeat.
Certainly when I initially learnt to program I would regularly write small sections of code, run the code with temporary print statements, debug any errors, then go back to writing the next section of code.
This approach is certainly intuitive and works well for small sections of code. Unfortunately as projects get bigger a few problems start to creep in:
- Debugging becomes harder as you have more code to look through when identifying issues.
- Refactoring code becomes a scary prospect because you're not sure what the knock-on effects will be.
- When coming back to old code it's very easy to forget environment requirements. For example, if you're using a slightly different version of a library this can introduce subtle bugs which are hard to track down quickly.
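The write-run-debug loop above might look like the toy sketch below (`average` is a made-up example function, and the print statements are the sort that get deleted once the output looks right):

```python
def average(values):
    """Compute the arithmetic mean of a list of numbers."""
    return sum(values) / len(values)

# Temporary print statements used to eyeball the results by hand.
print(average([1, 2, 3]))   # should be 2.0
print(average([10, 20]))    # should be 15.0
```

This works fine for a ten-line script, but the manual eyeballing is exactly what stops scaling as the project grows.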
To address the issues above you can start writing tests. In theory you could just write manual tests and run them yourself; however, this quickly becomes time consuming, and repeating tests manually is no fun. To avoid this, test cases can be automated with a test framework like the unittest module in Python's standard library.
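Once you have a framework, a first automated test can be as short as the sketch below (`add` is just a made-up stand-in for your own code). Saved as `test_add.py`, it can be run with `python -m unittest test_add`:

```python
import unittest

def add(a, b):
    """A stand-in for the code you actually want to test."""
    return a + b

class TestAdd(unittest.TestCase):
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-1, -1), -2)
```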
In essence automated tests are just code which programmatically checks that the code you're working on behaves as expected. However, it's worth learning the terms below if you're new to automated testing:
test fixture: A test fixture represents the preparation needed to perform one or more tests, and any associated cleanup actions. This may involve, for example, creating temporary or proxy databases, directories, or starting a server process.
test case: A test case is the individual unit of testing. It checks for a specific response to a particular set of inputs.
unittest provides a base class, TestCase, which may be used to create new test cases.
test suite: A test suite is a collection of test cases, test suites, or both. It is used to aggregate tests that should be executed together.
test runner: A test runner is a component which orchestrates the execution of tests and provides the outcome to the user. The runner may use a graphical interface, a textual interface, or return a special value to indicate the results of executing the tests.
Note: the definitions above were taken from the Python unittest docs.
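These terms map directly onto unittest features. In the sketch below (the scratch-file scenario is illustrative), `setUp` and `tearDown` form the test fixture, the `test_*` method is a test case, and the loader and runner at the bottom build a test suite and execute it:

```python
import os
import tempfile
import unittest

class TestScratchFiles(unittest.TestCase):
    def setUp(self):
        # Test fixture: create a temporary directory before each test...
        self.scratch = tempfile.TemporaryDirectory()

    def tearDown(self):
        # ...and clean it up again afterwards.
        self.scratch.cleanup()

    def test_write_and_read(self):
        # Test case: one specific check against one set of inputs.
        path = os.path.join(self.scratch.name, "data.txt")
        with open(path, "w") as f:
            f.write("hello")
        with open(path) as f:
            self.assertEqual(f.read(), "hello")

# Test suite + test runner: collect the cases and execute them.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestScratchFiles)
result = unittest.TextTestRunner().run(suite)
```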
When I initially started writing automated tests I would normally follow a similar pattern: write the code first, check it by hand, then add automated tests at the end.
This is fairly natural because the initial development model remains the same, and tests are only introduced after an initial version of the code is ready. Unfortunately there are a few problems with this approach:
- Writing test cases is often a bit monotonous; as a result, it's very tempting to just skip writing automated tests.
- Like proofreading, writing tests for code you're already familiar with is tricky. It's very easy to make assumptions because you already know how the code works, and to miss important test cases. Getting someone to peer review your code can help with this, but that isn't always possible.
- In theory the code you have is already working, so unless you go back and modify the code to verify that your tests fail when they should, there is a good chance your tests won't catch errors.
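The last point is easy to demonstrate with a contrived sketch (`find_evens` is a made-up example). The test below looks plausible but can never fail, because any non-empty list is truthy, so the bug slips straight through:

```python
import unittest

def find_evens(values):
    # Buggy implementation: actually returns the odd numbers.
    return [v for v in values if v % 2 == 1]

class TestFindEvens(unittest.TestCase):
    def test_returns_something(self):
        # Faulty test: passes for any non-empty result,
        # so it never catches the bug above.
        self.assertTrue(find_evens([1, 2, 3, 4]))

result = unittest.TextTestRunner().run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestFindEvens)
)
```

Because the test was never seen to fail, there was nothing to flag that it wasn't really checking anything.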
Writing tests during development
As the name implies, in test driven development test cases are written before the code that makes them pass. First a test case is written and executed to confirm it fails; then code is written to make the test pass; finally the passing code is refactored before moving on to the next test case.
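That cycle can be sketched in a single file (the `multiply` function is a made-up example): the test is written first and would fail on its own, then just enough code is written to make it pass, then the green test gives you cover to refactor:

```python
import unittest

# Step 1: write the test first. Run at this point, it fails,
# because multiply() does not exist yet.
class TestMultiply(unittest.TestCase):
    def test_multiply(self):
        self.assertEqual(multiply(3, 4), 12)

# Step 2: write just enough code to make the test pass.
def multiply(a, b):
    return a * b

# Step 3: re-run the test (now green) and refactor with confidence.
result = unittest.TextTestRunner().run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestMultiply)
)
```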
Unlike the methods above, test cases are written during the code development cycle. This has a few advantages:
- Tests are seen to first fail and then pass during development. This reduces the chance of faulty test cases which always pass, even when the implementation is incorrect.
- Testing is part of the development process, so the temptation to just skip writing tests is reduced.
- Because tests are written before the code, test cases are less likely to be influenced by the implementation.
Unfortunately there are also some disadvantages associated with test driven development:
- You need a good understanding of how to write test cases; without this, development will be very slow.
- Because tests are written up front, initial development is slower. This isn't great if you decide you're going to take a different implementation approach and throw away your initial work.