
In this article by Viktor Farcic and Alex Garcia, the authors of the book Test-Driven Java Development, we will go through TDD: a simple procedure of writing tests before the actual implementation. It's an inversion of the traditional approach, where testing is performed after the code is written.


Red-green-refactor

Test-driven development is a process that relies on the repetition of a very short development cycle. It is based on the test-first concept of extreme programming (XP) that encourages simple design with a high level of confidence. The procedure that drives this cycle is called red-green-refactor.

The procedure itself is simple and it consists of a few steps that are repeated over and over again:

  1. Write a test.
  2. Run all tests.
  3. Write the implementation code.
  4. Run all tests.
  5. Refactor.
  6. Run all tests.

Since a test is written before the actual implementation, it is supposed to fail. If it doesn’t, the test is wrong. It describes something that already exists or it was written incorrectly. Being in the green state while writing tests is a sign of a false positive. Tests like these should be removed or refactored.

While writing tests, we are in the red state. When the implementation of a test is finished, all tests should pass and then we will be in the green state.

If the last test failed, the implementation is wrong and should be corrected. Either the test we just finished is incorrect, or the implementation of that test did not meet the specification we had set. If any test other than the last one failed, we broke something and the changes should be reverted.

When this happens, the natural reaction is to spend as much time as needed to fix the code so that all tests are passing. However, this is wrong. If a fix is not done in a matter of minutes, the best thing to do is to revert the changes. After all, everything worked not long ago. An implementation that broke something is obviously wrong, so why not go back to where we started and think again about the correct way to implement the test? That way, we waste only minutes on a wrong implementation instead of much more time correcting something that was not done right in the first place. Existing test coverage (excluding the implementation of the last test) should be sacred. We change the existing code through intentional refactoring, not as a way to fix recently written code.

Do not make the implementation of the last test final, but provide just enough code for this test to pass.
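As a minimal sketch of what "just enough" means (reusing the StringCalculator example that appears later in this article; the hard-coded implementation is ours, for illustration only), suppose we start from this failing test:

@Test
public void whenOneNumberIsUsedThenReturnValueIsThatSameNumber() {
    Assert.assertEquals(3, StringCalculator.add("3"));
}

The fastest way back to green is an implementation that is deliberately incomplete:

public static int add(String numbers) {
    return 3; // Just enough to pass; the next test will force us to generalize.
}

The hard-coded value looks wrong on purpose. It is the next test (for example, one that adds two numbers) that will drive the real implementation.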

Write the code in any way you want, but do it fast. Once everything is green, we have the confidence of a safety net in the form of tests. From this moment on, we can proceed to refactor the code. This means that we are making the code better and more optimal without introducing new features. While refactoring is under way, all tests should be passing at all times.

If, while refactoring, one of the tests fails, the refactoring broke existing functionality and, as before, the changes should be reverted. At this stage we are not changing any features, nor are we introducing any new tests. All we're doing is making the code better while continuously running all tests to make sure that nothing got broken. At the same time, we're proving code correctness and cutting down on future maintenance costs.

Once refactoring is finished, the process is repeated. It’s an endless loop of a very short cycle.

Speed is the key

Imagine a game of ping pong (or table tennis). The game is very fast; sometimes it is hard to even follow the ball when professionals play the game. TDD is very similar. TDD veterans tend not to spend more than a minute on either side of the table (test and implementation). Write a short test and run all tests (ping), write the implementation and run all tests (pong), write another test (ping), write implementation of that test (pong), refactor and confirm that all tests are passing (score), and then repeat—ping, pong, ping, pong, ping, pong, score, serve again. Do not try to make the perfect code. Instead, try to keep the ball rolling until you think that the time is right to score (refactor).

Time between switching from tests to implementation (and vice versa) should be measured in minutes (if not seconds).

It’s not about testing

The T in TDD is often misunderstood. Test-driven development is the way we approach design. It is a way to force us to think about the implementation and about what the code needs to do before writing it. It is a way to focus on the requirements and implementation of just one thing at a time, to organize our thoughts and better structure the code. This does not mean that tests resulting from TDD are useless; far from it. They are very useful and they allow us to develop with great speed without being afraid that something will be broken. This is especially true when refactoring takes place. Being able to reorganize the code while having the confidence that no functionality is broken is a huge boost to quality.

The main objective of test-driven development is testable code design with tests as a very useful side product.

Testing

Even though the main objective of test-driven development is the approach to code design, tests are still a very important aspect of TDD and we should have a clear understanding of two major groups of techniques as follows:

  • Black-box testing
  • White-box testing

Black-box testing

Black-box testing (also known as functional testing) treats the software under test as a black box, without knowing its internals. Tests use software interfaces and try to ensure that they work as expected. As long as the functionality of the interfaces remains unchanged, tests should pass even if the internals change. The tester is aware of what the program should do, but does not know how it does it. Black-box testing is the most commonly used type of testing in traditional organizations that have testers as a separate department, especially when those testers are not proficient in coding and have difficulties understanding it. This technique provides an external perspective on the software under test.

Some of the advantages of black-box testing are as follows:

  • Efficient for large segments of code
  • Code access, understanding the code, and ability to code are not required
  • Separation between user’s and developer’s perspectives

Some of the disadvantages of black-box testing are as follows:

  • Limited coverage, since only a fraction of test scenarios is performed
  • Inefficient testing, due to the tester’s lack of knowledge about software internals
  • Blind coverage, since the tester has limited knowledge about the application

If tests are driving the development, they are often written in the form of acceptance criteria that are later used as the definition of what should be developed.

Automated black-box testing relies on some form of automation such as behavior-driven development (BDD).

White-box testing

White-box testing (also known as clear-box testing, glass-box testing, transparent-box testing, and structural testing) looks inside the software that is being tested and uses that knowledge as part of the testing process. If, for example, an exception should be thrown under certain conditions, a test might want to reproduce those conditions. White-box testing requires internal knowledge of the system and programming skills. It provides an internal perspective on the software under test.

Some of the advantages of white-box testing are as follows:

  • Efficient in finding errors and problems
  • The required knowledge of the internals of the software under test is beneficial for thorough testing
  • Allows finding hidden errors
  • Encourages programmer introspection
  • Helps optimize the code
  • Due to the required internal knowledge of the software, maximum coverage is obtained

Some of the disadvantages of white-box testing are as follows:

  • It might not find unimplemented or missing features
  • Requires a high level of knowledge of the internals of the software under test
  • Requires code access
  • Tests are often tightly coupled to the implementation details of the production code, causing unwanted test failures when the code is refactored

White-box testing is almost always automated and, in most cases, has the form of unit tests.

When white-box testing is done before the implementation, it takes the form of TDD.

The difference between quality checking and quality assurance

Approaches to testing can also be distinguished by the objectives they are trying to accomplish. Those objectives are often split between quality checking (QC) and quality assurance (QA). While quality checking is focused on defect identification, quality assurance tries to prevent defects. QC is product-oriented and intends to make sure that results are as expected. On the other hand, QA is more focused on processes that assure that quality is built in. It tries to make sure that the correct things are done in the correct way.

While quality checking had a more important role in the past, with the emergence of TDD, acceptance test-driven development (ATDD), and later on behavior-driven development (BDD), focus has been shifting towards quality assurance.

Better tests

No matter whether one is using black-box, white-box, or both types of testing, the order in which they are written is very important.

Requirements (specifications and user stories) are written before the code that implements them. They come first, so they define the code, not the other way around. The same can be said for tests. If they are written after the code is done, in a certain way that code (and the functionalities it implements) defines the tests. Tests that are defined by an already existing application are biased. They have a tendency to confirm what the code does, not to test whether the client's expectations are met or that the code behaves as expected. With manual testing, that is less the case, since it is often done by a siloed QC department (even though it's often called QA). Such departments tend to work on test definitions in isolation from developers. That in itself leads to bigger problems caused by inevitably poor communication and the police syndrome, where testers are not trying to help the team write applications with quality built in, but to find faults at the end of the process. The sooner we find problems, the cheaper it is to fix them.

Tests written in the TDD fashion (including its flavors such as ATDD and BDD) are an attempt to develop applications with quality built-in from the very start. It’s an attempt to avoid having problems in the first place.

Mocking

In order for tests to run fast and provide constant feedback, code needs to be organized in such a way that methods, functions, and classes can be easily replaced with mocks and stubs. A common term for this type of replacement of the actual code is test double. The speed of execution can be severely affected by external dependencies; for example, our code might need to communicate with a database. By mocking external dependencies, we are able to increase that speed drastically. Execution of a whole unit test suite should be measured in minutes, if not seconds. Designing the code so that it can be easily mocked and stubbed forces us to better structure that code by applying separation of concerns.

More important than speed is the benefit of removing external factors. Setting up databases, web servers, external APIs, and other dependencies that our code might need is both time consuming and unreliable. In many cases, those dependencies might not even be available. For example, we might need to create code that communicates with a database while someone else creates the schema. Without mocks, we would need to wait until that schema is set up.

With or without mocks, the code should be written in a way that we can easily replace one dependency with another.
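For illustration, here is a minimal sketch using JUnit and the Mockito library; the UserRepository and GreetingService types are hypothetical and exist only for this example:

import static org.mockito.Mockito.*;
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class GreetingServiceSpec {

    // Hypothetical collaborator that would normally talk to a database.
    interface UserRepository {
        String findNameById(long id);
    }

    // Hypothetical class under test; it depends only on the interface.
    static class GreetingService {
        private final UserRepository repository;
        GreetingService(UserRepository repository) {
            this.repository = repository;
        }
        String greet(long id) {
            return "Hello, " + repository.findNameById(id) + "!";
        }
    }

    @Test
    public void whenUserExistsThenGreetingContainsName() {
        // The mock replaces the real database-backed implementation.
        UserRepository repository = mock(UserRepository.class);
        when(repository.findNameById(42L)).thenReturn("Viktor");

        GreetingService service = new GreetingService(repository);

        // No database is involved, so the test runs in milliseconds.
        assertEquals("Hello, Viktor!", service.greet(42L));
        verify(repository).findNameById(42L);
    }
}

Because GreetingService depends on an interface rather than on a concrete database class, swapping the real implementation for a test double requires no changes to the production code.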

Executable documentation

Another very useful aspect of TDD (and well-structured tests in general) is documentation. In most cases, it is much easier to find out what the code does by looking at its tests than at the implementation itself. What is the purpose of a method? Look at the tests associated with it. What is the desired functionality of some part of the application UI? Look at the tests associated with it. Documentation written in the form of tests is one of the pillars of TDD and deserves further explanation.

The main problem with (traditional) software documentation is that it is not up to date most of the time. As soon as some part of the code changes, the documentation stops reflecting the actual situation. This statement applies to almost any type of documentation, with requirements and test cases being the most affected.

The necessity to document code is often a sign that the code itself is not well written. Moreover, no matter how hard we try, documentation inevitably gets outdated.

Developers shouldn’t rely on system documentation because it is almost never up to date. Besides, no documentation can provide as detailed and up-to-date description of the code as the code itself.

Using code as documentation does not exclude other types of documents. The key is to avoid duplication. If details of the system can be obtained by reading the code, other types of documentation can provide quick guidelines and a high-level overview. Non-code documentation should answer questions such as what the general purpose of the system is and what technologies the system uses. In many cases, a simple README is enough to provide the quick start that developers need. Sections such as project description, environment setup, installation, and build and packaging instructions are very helpful for newcomers. From there on, code is the bible.

Implementation code provides all needed details while test code acts as the description of the intent behind the production code.

Tests are executable documentation with TDD being the most common way to create and maintain it.

Assuming that some form of Continuous Integration (CI) is in use, if some part of the test-documentation is incorrect, it will fail and be fixed soon afterwards. CI solves the problem of incorrect test-documentation, but it does not ensure that all functionality is documented. For this reason (among many others), test-documentation should be created in the TDD fashion. If all functionality is defined as tests before the implementation code is written, and the execution of all tests is successful, then the tests act as complete and up-to-date information that can be used by developers.

What should we do with the rest of the team? Testers, customers, managers, and other non-coders might not be able to obtain the necessary information from the production and test code.

As we saw earlier, the two most common types of testing are black-box and white-box testing. This division is important since it also divides testers into those who know how to write or at least read code (white-box testing) and those who don't (black-box testing). In some cases, testers can do both types. More often than not, however, they do not know how to code, so documentation that is usable for developers is not usable for them. If documentation needs to be decoupled from the code, unit tests are not a good match. That is one of the reasons why BDD came into being.

BDD can provide documentation necessary for non-coders, while still maintaining the advantages of TDD and automation.

Customers need to be able to define new functionality of the system, as well as to get information about all the important aspects of the current system. That documentation should not be too technical (code is not an option), but it must still always be up to date. BDD narratives and scenarios are one of the best ways to provide this type of documentation. The ability to act as acceptance criteria (written before the code), to be executed frequently (preferably on every commit), and to be written in natural language makes BDD stories not only always up to date, but also usable by those who do not want to inspect the code.
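As a hypothetical illustration, a scenario in the Gherkin format used by tools such as Cucumber might look as follows (the feature and steps are invented for this example):

Feature: String calculator

  Scenario: Add two numbers
    Given the calculator is ready
    When the string "3,6" is added
    Then the result is 9

Narratives like this one can be read and written by non-coders, yet executed automatically once developers bind each step to code.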

Documentation is an integral part of the software. As with any other part of the code, it needs to be tested often so that we’re sure that it is accurate and up to date.

The only cost-effective way to have accurate and up-to-date information is to have executable documentation that can be integrated into your continuous integration system.

TDD as a methodology is a good way to move in this direction. On a low level, unit tests are the best fit. On the other hand, BDD provides a good way to work on a functional level while maintaining a shared understanding written in natural language.

No debugging

We (authors of this article) almost never debug applications we’re working on!

This statement might sound pompous, but it’s true. We almost never debug because there is rarely a reason to debug an application. When tests are written before the code and the code coverage is high, we can have high confidence that the application works as expected. This does not mean that applications written using TDD do not have bugs—they do. All applications do. However, when that happens, it is easy to isolate them by simply looking for the code that is not covered with tests.

Tests themselves might not include some cases. In that situation, the action is to write additional tests.

With high code coverage, finding the cause of some bug is much faster through tests than spending time debugging line by line until the culprit is found.

With all this in mind, let’s go through the TDD best practices.

Best practices

Coding best practices are a set of informal rules that the software development community has learned over time, which can help improve the quality of software. While each application needs a level of creativity and originality (after all, we’re trying to build something new or better), coding practices help us avoid some of the problems others faced before us. If you’re just starting with TDD, it is a good idea to apply some (if not all) of the best practices generated by others.

For easier classification of test-driven development best practices, we divided them into four categories:

  • Naming conventions
  • Processes
  • Development practices
  • Tools

As you’ll see, not all of them are exclusive to TDD. Since a big part of test-driven development consists of writing tests, many of the best practices presented in the following sections apply to testing in general, while others are related to general coding best practices. No matter the origin, all of them are useful when practicing TDD.

Take the advice with a certain dose of skepticism. Being a great programmer is not only about knowing how to code, but also about being able to decide which practice, framework or style best suits the project and the team. Being agile is not about following someone else’s rules, but about knowing how to adapt to circumstances and choose the best tools and practices that suit the team and the project.

Naming conventions

Naming conventions help to organize tests better, so that it is easier for developers to find what they’re looking for. Another benefit is that many tools expect that those conventions are followed. There are many naming conventions in use, and those presented here are just a drop in the ocean. The logic is that any naming convention is better than none. Most important is that everyone on the team knows what conventions are being used and are comfortable with them. Choosing more popular conventions has the advantage that newcomers to the team can get up to speed fast since they can leverage existing knowledge to find their way around.

Separate the implementation from the test code

Benefits: It avoids accidentally packaging tests together with production binaries; many build tools expect tests to be in a certain source directory.

Common practice is to have at least two source directories. Implementation code should be located in src/main/java and test code in src/test/java. In bigger projects, the number of source directories can increase but the separation between implementation and tests should remain as is.

Build tools such as Gradle and Maven expect source directories separation as well as naming conventions.

You might have noticed that the build.gradle files that we used throughout this article did not explicitly specify what to test nor what classes to include in the .jar file. Gradle assumes that tests are in src/test/java and that the implementation code that should be packaged into a JAR file is in src/main/java.
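For illustration, a minimal build.gradle along these lines (the JUnit version is only an example) is enough for Gradle to compile whatever is in src/main/java and run the tests it finds in src/test/java:

apply plugin: 'java'

repositories {
    mavenCentral()
}

dependencies {
    // Test-only dependency; the version shown is an example.
    testCompile 'junit:junit:4.12'
}

No source directories are declared; the java plugin's conventions take care of that.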

Place test classes in the same package as implementation

Benefits: Knowing that the tests are in the same package as the code helps us find the code faster.

As stated in the previous practice, even though the packages are the same, the classes are in separate source directories.

All exercises throughout this article followed this convention.

Name test classes in a similar fashion to the classes they test

Benefits: Knowing that tests have a similar name to the classes they are testing helps in finding the classes faster.

One commonly used practice is to name tests the same as the implementation classes, with the suffix Test. If, for example, the implementation class is TicTacToe, the test class should be TicTacToeTest.

However, in all cases, with the exception of those we used throughout the refactoring exercises, we prefer the suffix Spec. It helps to make a clear distinction that test methods are primarily created as a way to specify what will be developed. Testing is a great by-product of those specifications.

Use descriptive names for test methods

Benefits: It helps in understanding the objective of tests.

Using method names that describe tests is beneficial when trying to figure out why some tests failed or when the coverage should be increased with more tests. It should be clear what conditions are set before the test, what actions are performed, and what the expected outcome is.

There are many different ways to name test methods and our preferred method is to name them using the Given/When/Then syntax used in the BDD scenarios. Given describes (pre)conditions, When describes actions, and Then describes the expected outcome. If some test does not have preconditions (usually set using @Before and @BeforeClass annotations), Given can be skipped.

Let's take a look at one of the specifications we created for our TicTacToe application:

@Test
public void whenPlayAndWholeHorizontalLineThenWinner() {
    ticTacToe.play(1, 1); // X
    ticTacToe.play(1, 2); // O
    ticTacToe.play(2, 1); // X
    ticTacToe.play(2, 2); // O
    String actual = ticTacToe.play(3, 1); // X
    assertEquals("X is the winner", actual);
}

Just by reading the name of the method, we can understand what it is about. When we play and the whole horizontal line is populated, then we have a winner.

Do not rely only on comments to provide information about the test objective. Comments do not appear when tests are executed from your favorite IDE nor do they appear in reports generated by CI or build tools.

Processes

TDD processes are the core set of practices. Successful implementation of TDD depends on practices described in this section.

Write a test before writing the implementation code

Benefits: It ensures that testable code is written; ensures that every line of code gets tests written for it.

By writing or modifying the test first, the developer is focused on requirements before starting to work on the implementation code. This is the main difference compared to writing tests after the implementation is done. The additional benefit is that with the tests written first, we are avoiding the danger that the tests work as quality checking instead of quality assurance. We’re trying to ensure that quality is built in as opposed to checking later whether we met quality objectives.

Only write new code when the test is failing

Benefits: It confirms that the test does not work without the implementation.

If tests pass without the need to write or modify the implementation code, then either the functionality is already implemented or the test is defective. If the new functionality is indeed missing, then the test passes regardless of the implementation and is therefore useless. Tests should fail for the expected reason. Even though there are no guarantees that the test is verifying the right thing, with fail-first and failure for the expected reason, confidence that the verification is correct should be high.

Rerun all tests every time the implementation code changes

Benefits: It ensures that there is no unexpected side effect caused by code changes.

Every time any part of the implementation code changes, all tests should be run. Ideally, tests are fast to execute and can be run by the developer locally. Once the code is submitted to version control, all tests should be run again to ensure that there was no problem due to code merges. This is especially important when more than one developer is working on the code. Continuous integration tools such as Jenkins (http://jenkins-ci.org/), Hudson (http://hudson-ci.org/), Travis (https://travis-ci.org/), and Bamboo (https://www.atlassian.com/software/bamboo) should be used to pull the code from the repository, compile it, and run the tests.

All tests should pass before a new test is written

Benefits: The focus is maintained on a small unit of work; implementation code is (almost) always in working condition.

It is sometimes tempting to write multiple tests before the actual implementation. In other cases, developers ignore problems detected by existing tests and move towards new features. This should be avoided whenever possible. In most cases, breaking this rule will only introduce technical debt that will need to be paid with interest. One of the goals of TDD is that the implementation code is (almost) always working as expected. Some projects, due to pressures to reach the delivery date or maintain the budget, break this rule and dedicate time to new features, leaving the task of fixing the code associated with failed tests for later. These projects usually end up postponing the inevitable.

Refactor only after all tests are passing

Benefits: This type of refactoring is safe.

If all implementation code that can be affected has tests and they are all passing, it is relatively safe to refactor. In most cases, there is no need for new tests. Small modifications to existing tests should be enough. The expected outcome of refactoring is to have all tests passing both before and after the code is modified.

Development practices

Practices listed in this section are focused on the best way to write tests.

Write the simplest code to pass the test

Benefits: It ensures cleaner and clearer design; avoids unnecessary features.

The idea is that the simpler the implementation, the better and easier it is to maintain the product. This adheres to the keep it simple, stupid (KISS) principle, which states that most systems work best if they are kept simple rather than made complex; therefore, simplicity should be a key goal in design, and unnecessary complexity should be avoided.

Write assertions first, act later

Benefits: This clarifies the purpose of the requirements and tests early.

Once the assertion is written, the purpose of the test is clear and the developer can concentrate on the code that will accomplish that assertion and, later on, on the actual implementation.
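As a sketch (reusing the StringCalculator example from the following section), the assertion is typed first, and only then is the act part above it filled in:

@Test
public final void whenEmptyStringIsUsedThenReturnValueIsZero() {
    // Written second: the action that produces the result.
    int result = StringCalculator.add("");
    // Written first: the assertion that pins down the expected outcome.
    Assert.assertEquals(0, result);
}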

Minimize assertions in each test

Benefits: This avoids assertion roulette; allows execution of more asserts.

If multiple assertions are used within one test method, it might be hard to tell which of them caused a test failure. This is especially common when tests are executed as part of the continuous integration process. If the problem cannot be reproduced on a developer’s machine (as may be the case if the problem is caused by environmental issues), fixing the problem may be difficult and time consuming.

When one assert fails, execution of that test method stops. If there are other asserts in that method, they will not be run and information that can be used in debugging is lost.

Last but not least, having multiple asserts creates confusion about the objective of the test.

This practice does not mean that there should always be only one assert per test method. If there are other asserts that test the same logical condition or unit of functionality, they can be used within the same method.

Let's go through a few examples:

@Test
public final void whenOneNumberIsUsedThenReturnValueIsThatSameNumber() {
    Assert.assertEquals(3, StringCalculator.add("3"));
}

@Test
public final void whenTwoNumbersAreUsedThenReturnValueIsTheirSum() {
    Assert.assertEquals(3+6, StringCalculator.add("3,6"));
}

The preceding code contains two specifications that clearly define what the objective of those tests is. By reading the method names and looking at the assert, there should be clarity on what is being tested. Consider the following for example:

@Test
public final void whenNegativeNumbersAreUsedThenRuntimeExceptionIsThrown() {
    RuntimeException exception = null;
    try {
        StringCalculator.add("3,-6,15,-18,46,33");
    } catch (RuntimeException e) {
        exception = e;
    }
    Assert.assertNotNull("Exception was not thrown", exception);
    Assert.assertEquals("Negatives not allowed: [-6, -18]",
            exception.getMessage());
}

This specification has more than one assert, but they are testing the same logical unit of functionality. The first assert is confirming that the exception exists, and the second that its message is correct. When multiple asserts are used in one test method, they should all contain messages that explain the failure. This way debugging the failed assert is easier. In the case of one assert per test method, messages are welcome, but not necessary since it should be clear from the method name what the objective of the test is.

@Test
public final void whenAddIsUsedThenItWorks() {
    Assert.assertEquals(0, StringCalculator.add(""));
    Assert.assertEquals(3, StringCalculator.add("3"));
    Assert.assertEquals(3+6, StringCalculator.add("3,6"));
    Assert.assertEquals(3+6+15+18+46+33, StringCalculator.add("3,6,15,18,46,33"));
    Assert.assertEquals(3+6+15, StringCalculator.add("3,6n15"));
    Assert.assertEquals(3+6+15, StringCalculator.add("//;n3;6;15"));
    Assert.assertEquals(3+1000+6, StringCalculator.add("3,1000,1001,6,1234"));
}

This test has many asserts. It is unclear what the functionality is, and if one of them fails, it is unknown whether the rest would work or not. It might be hard to understand the failure when this test is executed through some of the CI tools.

Do not introduce dependencies between tests

Benefits: Tests work in any order and independently of one another, whether all of them or only a subset is run.

Each test should be independent from the others. Developers should be able to execute any individual test, a set of tests, or all of them. Often, due to the test runner’s design, there is no guarantee that tests will be executed in any particular order. If there are dependencies between tests, they might easily be broken with the introduction of new ones.
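Here is a hypothetical sketch of the anti-pattern: the second test passes only if the first one ran before it, so executing it alone, or in a different order, breaks it:

import java.util.ArrayList;
import java.util.List;
import org.junit.Assert;
import org.junit.Test;

public class DependentTestsSpec {

    // Shared mutable state is what creates the hidden dependency.
    private static final List<String> players = new ArrayList<>();

    @Test
    public void whenPlayerIsAddedThenListHasOneElement() {
        players.add("X");
        Assert.assertEquals(1, players.size());
    }

    @Test
    public void whenFirstPlayerIsCheckedThenItIsX() {
        // Fails whenever the previous test did not run first.
        Assert.assertEquals("X", players.get(0));
    }
}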

Tests should run fast

Benefits: These tests are used often.

If it takes a lot of time to run tests, developers will stop using them or will run only a small subset related to the changes they are making. The benefit of fast tests, besides fostering their use, is fast feedback. The sooner a problem is detected, the easier it is to fix it, while knowledge of the code that produced the problem is still fresh. If developers have already started working on the next feature while waiting for the tests to finish, they might decide to postpone fixing the problem until that new feature is developed. On the other hand, if they drop their current work to fix the bug, time is lost in context switching.

Tests should be so quick that developers can run all of them after each change without getting bored or frustrated.

Use test doubles

Benefits: This reduces code dependency and test execution will be faster.

Mocks are a prerequisite for the fast execution of tests and for the ability to concentrate on a single unit of functionality. By mocking dependencies external to the method that is being tested, the developer is able to focus on the task at hand without spending time setting them up. In the case of bigger teams, those dependencies might not even be developed yet. Also, the execution of tests without mocks tends to be slow. Good candidates for mocks are databases, other products, services, and so on.

Use set-up and tear-down methods

Benefits: This allows set-up and tear-down code to be executed before and after the class or each method.

In many cases, some code needs to be executed before the test class or before each method in a class. For this purpose, JUnit has the @BeforeClass and @Before annotations that should be used for the setup phase. @BeforeClass executes the associated method once, before any of the test methods in the class are run. @Before executes the associated method before each test is run. Both should be used when tests require certain preconditions. The most common example is setting up test data in a (hopefully in-memory) database.

At the opposite end are the @After and @AfterClass annotations, which should be used for the tear-down phase. Their main purpose is to destroy data or state created during the setup phase or by the tests themselves. As stated in one of the previous practices, each test should be independent of the others. Moreover, no test should be affected by the others. The tear-down phase helps to maintain the system as if no test was previously executed.
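A minimal sketch of all four phases follows; the Database and Session types are hypothetical stand-ins for whatever resources the tests need:

import org.junit.After;
import org.junit.AfterClass;
import org.junit.Assert;
import org.junit.Before;
import org.junit.BeforeClass;
import org.junit.Test;

public class RepositorySpec {

    private static Database database; // hypothetical in-memory database
    private Session session;          // hypothetical connection to it

    @BeforeClass
    public static void startDatabase() {
        // Runs once, before the first test method in this class.
        database = Database.startInMemory();
    }

    @Before
    public void openSession() {
        // Runs before each test, so every test gets a fresh session.
        session = database.openSession();
    }

    @After
    public void closeSession() {
        // Runs after each test; discards whatever the test created.
        session.close();
    }

    @AfterClass
    public static void stopDatabase() {
        // Runs once, after the last test method in this class.
        database.stop();
    }

    @Test
    public void whenSessionIsOpenThenItIsEmpty() {
        Assert.assertEquals(0, session.count());
    }
}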

Do not use base classes in tests

Benefits: It provides test clarity.

Developers often approach test code in the same way as implementation code. One common mistake is to create base classes that are extended by tests. This practice avoids code duplication at the expense of test clarity. Having to navigate from the test class to its parent, the parent of the parent, and so on in order to understand the logic behind tests often introduces unnecessary confusion. When possible, base classes used for testing should be avoided or limited. Clarity in tests should be more important than avoiding code duplication.

Tools

TDD, and coding and testing in general, depend heavily on other tools and processes. Some of the most important ones are as follows. Each of them is too big a topic to be explored in this article, so they will be described only briefly.

Code coverage and Continuous integration (CI)

Benefits: It gives assurance that everything is tested.

Code coverage practice and tools are very valuable in determining that all code, branches, and complexity are tested. Some of the tools are JaCoCo (http://www.eclemma.org/jacoco/), Clover (https://www.atlassian.com/software/clover/overview), and Cobertura (http://cobertura.github.io/cobertura/).

Continuous Integration (CI) tools are a must for all except the most trivial projects. Some of the most used tools are Jenkins (http://jenkins-ci.org/), Hudson (http://hudson-ci.org/), Travis (https://travis-ci.org/), and Bamboo (https://www.atlassian.com/software/bamboo).

Use TDD together with BDD

Benefits: Both developer unit tests and functional, customer-facing tests are covered.

While TDD with unit tests is a great practice, in many cases it does not provide all the testing that projects need. TDD is fast to develop, helps the design process, and gives confidence through fast feedback. On the other hand, BDD is more suitable for integration and functional testing, provides a better process for requirement gathering through narratives, and is a better way of communicating with clients through scenarios. Both should be used, and together they provide a full process that involves all stakeholders and team members. TDD (based on unit tests) and BDD should be driving the development process. Our recommendation is to use TDD for high code coverage and fast feedback, and BDD as automated acceptance tests. While TDD is mostly oriented towards white-box testing, BDD often aims at black-box testing. Both TDD and BDD try to focus on quality assurance instead of quality checking.

Summary

You learned that TDD is a way to design code through a short and repeatable cycle called red-green-refactor. Failure is an expected state that should not only be embraced, but enforced throughout the TDD process. The cycle is so short that we move from one phase to another with great speed.

While code design is the main objective, tests created throughout the TDD process are a valuable asset that should be utilized, and they significantly impact our view of traditional testing practices. We went through the most common of those practices, such as white-box and black-box testing, tried to put them into the TDD perspective, and showed the benefits that they can bring to each other. You discovered that mocks are a very important tool that is often a must when writing tests. Finally, we discussed how tests can and should be utilized as executable documentation and how TDD can make debugging much less necessary.

Now that we are armed with theoretical knowledge, it is time to set up the development environment and get an overview and comparison of different testing frameworks and tools.

