In this article by John Brett, the author of the book Getting Started with Hapi.js, we are going to explore the topic of testing in node and hapi. We will look at what is involved in writing a simple test using hapi’s test runner, lab, how to test hapi applications, techniques to make testing easier, and finally how to achieve the all-important 100% code coverage.

The benefits and importance of testing code

Technical debt is development work that still needs to be done before a particular job can be considered complete; if it is left undone, future changes become much harder to implement. A codebase without tests is a clear indication of technical debt. Let's explore this statement in more detail.

Even very simple applications will generally comprise:

  • Features, which the end user interacts with
  • Shared services, such as authentication and authorization, that features interact with

These will all generally depend on some direct persistent storage or API. Finally, to implement most of these features and services, we will use libraries, frameworks, and modules, regardless of language. So, even for simpler applications, we have already arrived at a few dependencies to manage, where a break-causing change in one place could break everything further up the chain.

So let's take a common use case, in which a new version of one of your dependencies is released. This could be a new hapi version, a smaller library, your persistent storage engine, MySQL, MongoDB, or even an operating system or language version. SemVer, as mentioned previously, attempts to mitigate this somewhat, but you are taking someone at their word when they say that they have adhered to it correctly, and SemVer is not used everywhere. So, in the case of a break-causing change, will the current application work with this new dependency version? What will fail? What percentage of tests fail? What's the risk if we don't upgrade? Will support eventually be dropped, including security patches? Without a good automated test suite, these questions have to be answered by manual testing, which is a huge waste of developer time. Development progress stops every time these tasks have to be done, meaning that these types of tasks are rarely done, building further technical debt. Apart from this, humans are proven to be poor at repetitive tasks and prone to error, and I know I personally don't enjoy testing manually, which makes me poor at it. I view repetitive manual testing like this as time wasted, as these questions could easily be answered by running a test suite against the new dependency, freeing developer time for something more productive.

Now, let’s look at a worse and even more common example: a security exploit has been identified in one of your dependencies. As mentioned previously, if it’s not easy to update, you won’t do it often, so you could be on an outdated version that won’t receive this security update. Now you have to jump multiple versions at once and scramble to test them manually. This usually means many quick fixes, which often just cause more bugs. In my experience, code changes under pressure are what deteriorate the structure and readability in a codebase, lead to a much higher number of bugs, and are a clear sign of poor planning.

A good development team will, instead of looking at what is currently available, look ahead to what is in beta and will know ahead of time if they expect to run into issues. The questions asked will be: Will our application break in the next version of Chrome? What about the next version of node? Hapi does this by running the full test suite against future versions of node in order to alert the node community of how planned changes will impact hapi and the node community as a whole. This is what we should all aim to do as developers.

A good test suite has even bigger advantages when working in a team or when adding new developers to a team. Most development teams start out small and grow, meaning all the knowledge of the initial development needs to be passed on to new developers joining the team. So, how do tests lead to a benefit here?

For one, tests are a great documentation on how parts of the application work for other members of a team. When trying to communicate a problem in an application, a failing test is a perfect illustration of what and where the problem is.

When working as a team, for every code change from yourself or another member of the team, you’re faced with the preceding problem of changing a dependency. Do we just test the code that was changed? What about the code that depends on the changed code? Is it going to be manual testing again? If this is the case, how much time in a week would be spent on manual testing versus development? Often, with changes, existing functionality can be broken along with new functionality, which is called regression. Having a good test suite highlights this and makes it much easier to prevent. These are the questions and topics that need to be answered when discussing the importance of tests.

Writing tests can also improve code quality. For one, identifying dead code is much easier when you have a good testing suite. If you find that you can only get 90% code coverage, what does the extra 10% do? Is it used at all if it’s unreachable? Does it break other parts of the application if removed? Writing tests will often improve your skills in writing easily testable code.

Software applications usually grow to be complex pretty quickly—it happens, but we always need to be active in dealing with this, or software complexity will win. A good test suite is one of the best tools we have to tackle this.

This is not an exhaustive list of the benefits of writing tests for your code, but hopefully it has convinced you of the importance of having a good testing suite. So, now that we know why we need to write good tests, let's look at hapi's test runner, lab, and its assertion library, code, and how, along with some tools from hapi, they make the process of writing tests much easier and a more enjoyable experience.

Introducing hapi’s testing utilities

The test runner in the hapi ecosystem is called lab. If you're not familiar with test runners, they are command-line tools used to run your test suite. Lab was inspired by a similar test tool called mocha and, in fact, began as a fork of the mocha codebase. But as hapi's needs diverged from the original focus of mocha, lab was born.

The assertion library commonly used in the hapi ecosystem is code. An assertion library forms the part of a test that performs the actual checks to judge whether a test case has passed or not, for example, checking that the value of a variable is true after an action has been taken.

Let's look at our first test script; then, we can take a deeper look at lab and code, how they function under the hood, and some of the differences they have from other commonly used libraries, such as mocha and chai.

Installing lab and code

You can install lab and code the same as any other module on npm:

npm install lab code --save-dev

Note the --save-dev flag added to the install command here. Remember your package.json file, which describes an npm module? This adds the modules to the devDependencies section of your npm module. These are dependencies that are required for the development and testing of a module but are not required for using the module.

The reason why these are separated is that when we run npm install in an application codebase, it only installs the dependencies and devDependencies of package.json in that directory. For all the modules installed, only their dependencies are installed, not their development dependencies. This is because we only want to download the dependencies required to run that application; we don’t need to download all the development dependencies for every module.

To recap: the npm install command installs all the dependencies and devDependencies of the package.json in the current working directory, but only the dependencies of the other installed modules, not their devDependencies. To install the development dependencies of a particular module, navigate to the root directory of that module and run npm install there.
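As a rough illustration, the relevant sections of a package.json for an application built on hapi might look like the following sketch (the version ranges here are examples only):

...
"dependencies": {
    "hapi": "11.x.x"
},
"devDependencies": {
    "code": "2.x.x",
    "lab": "7.x.x"
},
...

Running npm install in this directory installs all three modules, but anyone installing this package as a dependency of their own project will only get hapi.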

After you have installed lab, you can then run it with the following:

./node_modules/lab/bin/lab test.js

This is quite long to type every time, but fortunately due to a handy feature of npm called npm scripts, we can shorten it. If you look at package.json generated by npm init in the first chapter, depending on your version of npm, you may see the following (some code removed for brevity):

...
"scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
},
...

Scripts are a list of commands related to the project; they can be for testing purposes, as we will see in this example; to start an application; for build steps; and to start extra servers, among many other options. They offer huge flexibility in how these are combined to manage scripts related to a module or application, and I could spend a chapter, or even a book, on just these, but they are outside the scope of this book, so let’s just focus on what is important to us here.
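For illustration, a scripts section for a hypothetical web application might look like the following sketch (the start and build scripts here, and the server.js file they refer to, are invented for this example):

...
"scripts": {
    "start": "node server.js",
    "build": "browserify ./lib/index.js -o ./dist/bundle.js"
},
...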

To get a list of the available scripts for a module or application, simply run the following in its directory:

$ npm run

To run one of the listed scripts, such as test, you can then use:

$ npm run test

As you can see, this gives a very clean API for scripts and the documentation for each of them in the project’s package.json. From this point on in this book, all code snippets will use npm scripts to test or run any examples. We should strive to use these in our projects to simplify and document commands related to applications and modules for ourselves and others.

Let’s now add the ability to run a test file to our package.json file. This just requires modifying the scripts section to be the following:

...
"scripts": {
    "test": "./node_modules/lab/bin/lab ./test/index.js"
},
...

It is common practice in node to place all tests in a project within the test directory.

A handy addition to note here is that when calling a command with npm run, the bin directory of every module in your node_modules directory is added to PATH when running these scripts, so we can actually shorten this script to:

...
"scripts": {
    "test": "lab ./test/index.js"
},
...

This type of module install is considered local, as the dependency is local to the application directory it is being run in. While I believe this is how we should all install our modules, it is worth pointing out that it is also possible to install a module globally. This means that when installing something like lab, it is immediately added to PATH and can be run from anywhere. We do this by adding a -g flag to the install, as follows:

$ npm install lab code -g

This may appear handier than adding npm scripts or running commands locally, but it should be avoided where possible. Often, installing globally requires sudo to run, meaning you are taking a script from the Internet and allowing it complete access to your system. Hopefully, the security concerns here are obvious.

Other than that, different projects may use different versions of test runners, assertion libraries, or build tools, which can have unknown side effects and cause debugging headaches.

The only time I would use globally installed modules is for command-line tools that I may use outside a particular project, for example, a node-based terminal IDE such as slap (https://www.npmjs.com/package/slap) or a process manager such as PM2 (https://www.npmjs.com/package/pm2), but never with sudo!

Now that we are familiar with installing lab and code and the different ways of running them inside and outside of npm scripts, let's write our first test script and take a more in-depth look at lab and code.

Our first test script

Let’s take a look at what a simple test script in lab looks like using the code assertion library:

const Code = require('code');                       [1]
const Lab = require('lab');                         [1]
const lab = exports.lab = Lab.script();             [2]

lab.experiment('Testing example', () => {           [3]

    lab.test('fails here', (done) => {              [4]

        Code.expect(false).to.be.true();            [4]
        return done();                              [4]
    });                                             [4]

    lab.test('passes here', (done) => {             [4]

        Code.expect(true).to.be.true();             [4]
        return done();                              [4]
    });                                             [4]
});

This script, even though small, includes a number of new concepts, so let’s go through it with reference to the numbers in the preceding code:

  • [1]: Here, we just include the code and lab modules, as we would any other node module.
  • [2]: As mentioned before, it is common convention to place all test files within the test directory of a project. However, there may be JavaScript files in there that aren’t tests, and therefore should not be tested. To avoid this, we inform lab of which files are test scripts by calling Lab.script() and assigning the value to lab and exports.lab.
  • [3]: The lab.experiment() function (aliased as lab.describe()) is just a way to group tests neatly. In test output, tests will have the experiment string prefixed to the message, for example, “Testing example fails here”. This is optional, however.
  • [4]: These are the actual test cases. Here, we define the name of the test and pass a callback function that receives a done function as its parameter. We also see code in action, managing our assertions. Finally, we call done() when we are finished with the test case.

Things to note here: lab tests are always asynchronous. In every test, we have to call done() to finish the test; there is no counting of function parameters or checking whether synchronous functions have completed in order to ensure that a test is finished. Although this requires the boilerplate of calling the done() function at the end of every test, it means that all tests, synchronous or asynchronous, have a consistent structure.
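To illustrate this consistent structure, here is a sketch of a synchronous test alongside an asynchronous one, assuming the same Code and lab references as in the preceding script; the setTimeout call simply stands in for any real asynchronous operation, such as a database query:

lab.experiment('Consistent test structure', () => {

    lab.test('a synchronous test', (done) => {

        Code.expect(1 + 1).to.equal(2);
        return done();      // still required, even with no asynchronous work
    });

    lab.test('an asynchronous test', (done) => {

        setTimeout(() => {

            Code.expect(1 + 1).to.equal(2);
            return done();  // called only once the asynchronous work finishes
        }, 10);
    });
});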

In Chai, which was originally used for hapi, some assertions, such as .ok, .true, and .false, are properties rather than functions, while others, like .equal() and .above(), are functions. This inconsistency makes it easy to forget that an assertion should be a method call and to omit the parentheses, which means the assertion is never run and the test may pass as a false positive. Code's API is more consistent in that every assertion is a function call. Here is a comparison of the two:

Chai:
expect('hello').to.equal('hello');
expect(foo).to.exist;

Code:
expect('hello').to.equal('hello');
expect(foo).to.exist();

Notice the difference in the second exist() assertion. In Chai, you see the property form of the assertion, while in Code, you see the required function call. Through this, lab can make sure all assertions within a test case have actually been called before done() completes the test, or it will fail the test.

So let’s try running our first test script. As we already updated our package.json script, we can run our test with the following command:

$ npm run test

This will generate output reporting the results of our two tests. There are a couple of things to note from this output. Tests run are symbolized with a . or an X, depending on whether they pass or fail. You can get lab to list the full test titles by adding the -v or --verbose flag to our npm test script command.

There are lots of flags for customizing the running and output of lab, so I recommend using the full labels for each of these, for example, --verbose and --lint instead of -v and -L, in order to save you the time spent referring back to the documentation each time.

You may have noticed the No global variable leaks detected message at the bottom. Lab assumes that the global object won't be polluted and checks that no extra properties have been added after running tests. Lab can be configured to skip this check or to whitelist certain globals. Details of this are in the lab documentation available at https://github.com/hapijs/lab.
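For example, a test like the following sketch would trigger the leak detection in non-strict code, as assigning to an undeclared variable attaches a new property to the global object:

lab.test('leaks a global', (done) => {

    leakedValue = true; // no var/let/const, so this becomes global.leakedValue
    return done();
});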

Testing approaches

There are many known approaches to building a test suite, such as BDD (Behavior Driven Development), and like most test runners in node, lab is unopinionated about how you structure your tests. Details of how to structure your tests in a BDD style can again be found easily in the lab documentation.
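For example, lab exposes BDD-flavored aliases such as lab.describe() and lab.it(), so our earlier experiment could also be written in the following style; this sketch assumes the same requires as our first test script:

lab.describe('Testing example', () => {

    lab.it('passes here', (done) => {

        Code.expect(true).to.be.true();
        return done();
    });
});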

Testing with hapi

As I mentioned before, testing is considered paramount in the hapi ecosystem: every module in the ecosystem has to maintain 100% code coverage at all times, as do all of its module dependencies.

Fortunately, hapi provides us with some tools to make the testing of hapi apps much easier through a module called Shot, which simulates network requests to a hapi server. Let’s take the example of a Hello World server and write a simple test for it:

const Code = require('code');
const Lab = require('lab');
const Hapi = require('hapi');
const lab = exports.lab = Lab.script();

lab.test('It will return Hello World', (done) => {

    const server = new Hapi.Server();
    server.connection();

    server.route({
        method: 'GET',
        path: '/',
        handler: function (request, reply) {

            return reply('Hello World\n');
        }
    });

    server.inject('/', (res) => {

        Code.expect(res.statusCode).to.equal(200);
        Code.expect(res.result).to.equal('Hello World\n');
        done();
    });
});

Now that we are more familiar with what a test script looks like, most of this will look familiar. However, you may have noticed that we never started our hapi server, which means no port was ever assigned; thanks to the shot module (https://github.com/hapijs/shot), we can still make requests against it using the server.inject API. Not having to start a server means less setup and teardown before and after tests and means that a test suite can run quicker, as fewer resources are required. server.inject can still be used with the same API whether the server has been started or not.
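server.inject also accepts an options object in place of the URL string, which is useful for testing other HTTP methods, headers, or payloads. A sketch, assuming a hypothetical POST route at /users exists on the server, might look like this:

server.inject({ method: 'POST', url: '/users', payload: { name: 'John' } }, (res) => {

    Code.expect(res.statusCode).to.equal(200);
    done();
});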

Code coverage

As I mentioned earlier in the article, having 100% code coverage is paramount in the hapi ecosystem and, in my opinion, hugely important for any application to have. Without a code coverage target, writing tests can feel like an empty or unrewarding task where we don’t know how many tests are enough or how much of our application or module has been covered. With any task, we should know what our goal is; testing is no different, and this is what code coverage gives us. Even with 100% coverage, things can still go wrong, but it means that at the very least, every line of code has been considered and has at least one test covering it. I’ve found from working on modules for hapi that trying to achieve 100% code coverage actually gamifies the process of writing tests, making it a more enjoyable experience overall.

Fortunately, lab has code coverage integrated, so we don't need to rely on an extra module to achieve this. It's as simple as adding the --coverage or -c flag to our test script command. Under the hood, lab will then build an abstract syntax tree so it can evaluate which lines are executed, thus producing our coverage, which will be added to the console output when we run tests. The coverage report also highlights which lines are not covered by tests, which is extremely useful in identifying where to focus your testing effort.

It is also possible to enforce a minimum percentage of code coverage required for a suite of tests to pass, using lab's --threshold or -t flag followed by an integer. This is used for all the modules in the hapi ecosystem, and all thresholds are set to 100.
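Combining these flags, an npm test script that enforces full coverage might look like the following sketch:

...
"scripts": {
    "test": "lab ./test/index.js --coverage --threshold 100"
},
...

With this in place, the suite fails whenever coverage drops below 100%, even if every individual test passes.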

Having a threshold of 100% for code coverage makes it much easier to manage changes to a codebase. When any update or pull request is submitted, the test suite is run against the changes, so we can know that all tests have passed and all code covered before we even look at what has been changed in the proposed submission. There are services that even automate this process for us, such as TravisCI (https://travis-ci.org/).

It's also worth knowing that the coverage report can be displayed in a number of formats. For a full list of these reporters, with explanations, I suggest reading the lab documentation available at https://github.com/hapijs/lab.

Let’s now look at what’s involved in getting 100% coverage for our previous example. First of all, we’ll move our server code to a separate file, which we will place in the lib folder and call index.js.

It's worth noting here that it is good testing practice, and also the typical module structure in the hapi ecosystem, to place all module code in a folder called lib and the associated tests for each file within lib in a folder called test, preferably with a one-to-one mapping like we have done here, where all the tests for lib/index.js are in test/index.js. When trying to find out how a feature within a module works, this one-to-one mapping makes it much easier to find the associated tests and see examples of the feature in use.
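For our small example, that layout looks like this:

.
├── lib
│   └── index.js
├── test
│   └── index.js
└── package.json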

So, having separated our server from our tests, let’s look at what our two files now look like; first, ./lib/index.js:

const Hapi = require('hapi');

const server = new Hapi.Server();
server.connection();

server.route({
    method: 'GET',
    path: '/',
    handler: function (request, reply) {

        return reply('Hello World\n');
    }
});

module.exports = server;

The main change here is that we export our server at the end, so that another file can acquire it and start it if necessary. Our test file at ./test/index.js will now look like this:

const Code = require('code');
const Lab = require('lab');
const server = require('../lib/index.js');
const lab = exports.lab = Lab.script();

lab.test('It will return Hello World', (done) => {

    server.inject('/', (res) => {

        Code.expect(res.statusCode).to.equal(200);
        Code.expect(res.result).to.equal('Hello World\n');
        done();
    });
});
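Since ./lib/index.js now only exports the server rather than starting it, a separate entry point is needed to run the application outside of tests. A minimal sketch of such a file (for example, a server.js in the project root, which is not part of our test suite) might be:

const server = require('./lib/index.js');

server.start((err) => {

    if (err) {
        throw err;
    }

    console.log('Server running at:', server.info.uri);
});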

Finally, to test our code coverage, we update our npm test script to include the coverage flag --coverage or -c. The final example of this is in the second example of the source code of Chapter 4, Adding Tests and the Importance of 100% Coverage, which is supplied with this book. If you run this, you'll find that we actually already have 100% of the code covered with this one test. An interesting exercise here would be to find out which versions of hapi this code functions correctly with. At the time of writing, this code was written for hapi version 11.x.x on node.js version 4.0.0. Will it work if run with hapi version 9 or 10? You can test this now by installing an older version with the help of the following command:

$ npm install hapi@10

This will give you an idea of how easy it can be to check whether your codebase works with different versions of libraries. If you have some time, it would be interesting to see how this example runs on different versions of node (Hint: it breaks on any version earlier than 4.0.0).

In this example, we got 100% code coverage with one test. Unfortunately, we are rarely this fortunate: as the complexity of our codebase increases, so does the complexity of our tests, which is where knowledge of writing testable code comes in. This is something that comes with practice, by writing tests while writing application or module code.

Linting

Also built into lab is linting support. Linting enforces a consistent code style, which can be specified through an .eslintrc or .jshintrc file. By default, lab will enforce the hapi style guide rules.

The idea of linting is that all code will have the same structure, making it much easier to spot bugs and keep code tidy. As JavaScript is a very flexible language, linters are used regularly to forbid bad practices such as global or unused variables.

To enable the lab linter, simply add the linter flag to the test command, which is --lint or -L. I generally stick with the default hapi style guide rules, as they are chosen to promote easy-to-read code that is easily testable and to forbid many bad practices. However, it's easy to customize the linting rules used; for this, I recommend referring to the lab documentation.

Summary

In this article, we covered testing in node and hapi and how testing and code coverage are paramount in the hapi ecosystem. We saw justification for their need in application development and where they can make us more productive developers.

We also introduced the ecosystem's test runner, lab, and assertion library, code. We saw the justification for their use, how to write simple tests with them, and how to use the tools provided by lab and hapi to test hapi applications.

We also learned about some of the extra features baked into lab, such as code coverage and linting. We looked at how to test the code coverage of an application and get it to 100%, and how the hapi ecosystem applies the hapi style guide to all modules using lab's linting integration.
