
In this article by Ben Augarten, Marc Kuo, Eric Lin, Aidha Shaikh, Fabiano Pereira Soriani, Geoffrey Tisserand, Chiqing Zhang, and Kan Zhang, authors of the book Express.js Blueprints, we will see how Node.js uses Google Chrome's JavaScript engine, V8, to execute code. Node.js is single-threaded and event-driven. It uses non-blocking I/O to squeeze every ounce of processing power out of the CPU. Express builds on top of Node.js, providing all of the tools necessary to develop robust web applications with Node.

In addition, by utilizing Express, one gains access to a host of open source software to help solve common pain points in development. The framework is unopinionated, meaning it does not guide you one way or the other in terms of implementation or interface. Because it is unopinionated, the developer has more control and can use the framework to accomplish nearly any task; however, the power Express offers is easily abused. In this book, you will learn how to use the framework in the right way by exploring the following different styles of an application:

  • Setting up Express for a static site
  • Local user authentication
  • OAuth with passport
  • Profile pages
  • Testing


Setting up Express for a static site

To get our feet wet, we’ll first go over how to respond to basic HTTP requests. In this example, we will handle several GET requests, responding first with plaintext and then with static HTML. However, before we get started, you must install two essential tools: node and npm, which is the node package manager.

Navigate to https://nodejs.org/download/ to install node and npm.

Saying Hello, World in Express

For those unfamiliar with Express, we will start with a basic example—Hello World! We’ll start with an empty directory. As with any Node.js project, we will run the following code to generate our package.json file, which keeps track of metadata about the project, such as dependencies, scripts, licenses, and even where the code is hosted:

$ npm init

The package.json file keeps track of all of our dependencies so that we don’t have versioning issues, don’t have to include dependencies with our code, and can deploy fearlessly. You will be prompted with a few questions. Choose the defaults for all except the entry point, which you should set to server.js.

There are many generators out there that can help you generate new Express applications, but we'll create the skeleton this time around. Let's install Express. To install a module, we use npm to install the package. We use the --save flag to tell npm to add the dependency to our package.json file; that way, we don't need to commit our dependencies to source control. We can just install them based on the contents of the package.json file (npm makes this easy):

$ npm install --save express 

We’ll be using Express v4.4.0 throughout this book.

Warning: Express v4.x is not backwards compatible with the versions before it.

You can create a new file server.js as follows:

var express = require('express');
var app = express();

app.get('/', function(req, res, next) {
  res.send('Hello, World!');
});

app.listen(3000);
console.log('Express started on port 3000');

This file is the entry point for our application. It is here that we generate an application, register routes, and finally listen for incoming requests on port 3000. The require('express') method returns a generator of applications.

We can continually create as many applications as we want; in this case, we only created one, which we assigned to the variable app. Next, we register a GET route that listens for GET requests on the server root, and when requested, sends the string 'Hello, World!' to the client. Express has methods for all of the HTTP verbs, so we could have also done app.post, app.put, app.delete, or even app.all, which responds to all HTTP verbs. Finally, we start the app listening on port 3000, then log to standard out.

It’s finally time to start our server and make sure everything works as expected.

$ node server.js

We can validate that everything is working by navigating to http://localhost:3000 in a browser or running curl -v localhost:3000 in a terminal.

Jade templating

We are now going to extract the HTML we send to the client into a separate template. After all, it would be quite difficult to render full HTML pages simply by using res.send. To accomplish this, we will use a templating language frequently used in conjunction with Express: Jade. There are many templating languages that you can use with Express. We chose Jade because it greatly simplifies writing HTML and was created by the same developer as the Express framework.

$ npm install --save jade

After installing Jade, we’re going to have to add the following code to server.js:

app.set('view engine', 'jade');
app.set('views', __dirname + '/views');
 
app.get('/', function(req, res, next) {
  res.render('index');
});

The preceding code sets the default view engine for Express—sort of like telling Express that in the future it should assume that, unless otherwise specified, templates are in the Jade templating language. Calling app.set sets a key-value pair in Express internals; you can think of this as application-wide configuration. We could call app.get('view engine') to retrieve the set value at any time.

We also specify the folder that Express should look into to find view files. That means we should create a views directory in our application and add a file, index.jade to it. Alternatively, if you want to include many different template types, you could execute the following:

app.engine('jade', require('jade').__express);
app.engine('html', require('ejs').__express);

app.get('/html', function(req, res, next) {
  res.render('index.html');
});

app.get('/jade', function(req, res, next) {
  res.render('index.jade');
});

Here, we set custom template rendering based on the extension of the template we want to render. We use the Jade renderer for .jade extensions and the ejs renderer for .html extensions and expose both of our index files by different routes. This is useful if you choose one templating option and later want to switch to a new one in an incremental way. You can refer to the source for the most basic of templates.

Local user authentication

The majority of applications require user accounts. Some applications only allow authentication through third parties, but not all users are interested in authenticating through third parties for privacy reasons, so it is important to include a local option. Here, we will go over best practices when implementing local user authentication in an Express app. We’ll be using MongoDB to store our users and Mongoose as an ODM (Object Document Mapper). Then, we’ll leverage passport to simplify the session handling and provide a unified view of authentication.

Downloading the example code

You can download the example code files from your account at http://www.packtpub.com for all the Packt Publishing books you have purchased. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you.

User object modeling

We will leverage passportjs to handle user authentication. Passport centralizes all of the authentication logic and provides convenient ways to authenticate locally in addition to third parties, such as Twitter, Google, Github, and so on. First, install passport and the local authentication strategy as follows:

$ npm install --save passport passport-local

In our first pass, we will implement a local authentication strategy, which means that users will be able to register locally for an account. We start by defining a user model using Mongoose. Mongoose provides a way to define schemas for objects that we want to store in MongoDB, and then provides a convenient way to map between stored records in the database and an in-memory representation.

Mongoose also provides convenient syntax to make many MongoDB queries and perform CRUD operations on models. Our user model will only have an e-mail, password, and timestamp for now. Before getting started, we need to install Mongoose:

$ npm install --save mongoose bcrypt validator

Now we define the schema for our user in models/user.js as follows:

var mongoose = require('mongoose');

var userSchema = new mongoose.Schema({
  email: {
    type: String,
    required: true,
    unique: true
  },
  password: {
    type: String,
    required: true
  },
  created_at: {
    type: Date,
    default: Date.now
  }
});

userSchema.pre('save', function(next) {
  if (!this.isModified('password')) {
    return next();
  }
  this.password = User.encryptPassword(this.password);
  next();
});

Here, we create a schema that describes our users. Mongoose has convenient ways to describe the required and unique fields as well as the type of data that each property should hold. Mongoose does all the validations required under the hood. We don’t require many user fields for our first boilerplate application—e-mail, password, and timestamp to get us started.

We also use Mongoose middleware to rehash a user’s password if and when they decide to change it. Mongoose exposes several hooks to run user-defined callbacks. In our example, we define a callback to be invoked before Mongoose saves a model. That way, every time a user is saved, we’ll check to see whether their password was changed.

Without this middleware, it would be possible to store a user’s password in plaintext, which is not only a security vulnerability but would break authentication. Mongoose supports two kinds of middleware – serial and parallel. Parallel middleware can run asynchronous functions and gets an additional callback to invoke; you’ll learn more about Mongoose middleware later in this book.

Now, we want to add validations to make sure that our data is correct. We’ll use the validator library to accomplish this, as follows:

var validator = require('validator');

var User = mongoose.model('User', userSchema);

User.schema.path('email').validate(function(email) {
  return validator.isEmail(email);
});

User.schema.path('password').validate(function(password) {
  return validator.isLength(password, 6);
});

module.exports = User;

We added validations for e-mail and password length using a library called validator, which provides a lot of convenient validators for different types of fields. Validator has validations based on length, URL, int, upper case; essentially, anything you would want to validate (and don’t forget to validate all user input!).

We also added a host of helper functions for registration and authentication, as well as for encrypting passwords, which you can find in models/user.js. We added these to the user model to help encapsulate the variety of interactions we want using the abstraction of a user.

For more information on Mongoose, see http://mongoosejs.com/. You can find more on passportjs at http://passportjs.org/.

This lays out the beginning of a design pattern called MVC—model, view, controller. The basic idea is that you encapsulate separate concerns in different objects: the model code knows about the database, storage, and querying; the controller code knows about routing and requests/responses; and the view code knows what to render for users.

Introducing Express middleware

Passport is authentication middleware that can be used with Express applications. Before diving into passport, we should go over Express middleware. Express is a Connect-style framework, which means it uses Connect middleware. Internally, Connect keeps a stack of functions that handle requests.

When a request comes in, the first function in the stack is given the request and response objects along with the next() function. The next() function, when called, delegates to the next function in the middleware stack. Additionally, you can specify a path for your middleware, so it is only called for certain paths.

Express lets you add middleware to an application using the app.use() function. In fact, the HTTP handlers we already wrote are a special kind of middleware. Internally, Express has one level of middleware for the router, which delegates to the appropriate handler.

Middleware is extraordinarily useful for logging, serving static files, error handling, and more. In fact, passport utilizes middleware for authentication. Before anything else happens, passport looks for a cookie in the request, finds metadata, loads the user from the database, adds it to req.user, and then continues down the middleware stack.
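The dispatch mechanism described above is easy to sketch in plain JavaScript. This toy version (not Connect's actual source) shows how next() walks a stack of (req, res, next) functions:

```javascript
// Toy sketch of a connect-style middleware stack: each function gets
// (req, res, next), and calling next() hands control to the next layer.
var stack = [];

function use(fn) { stack.push(fn); }

function handle(req, res) {
  var index = 0;
  function next() {
    var layer = stack[index++];
    if (layer) layer(req, res, next);   // stop when the stack is exhausted
  }
  next();
}

use(function(req, res, next) { req.log = ['logger']; next(); });
use(function(req, res, next) { req.log.push('auth'); next(); });
use(function(req, res, next) { res.body = 'handled after: ' + req.log.join(', '); });

var req = {}, res = {};
handle(req, res);
console.log(res.body); // handled after: logger, auth
```

Note that a layer that never calls next() ends the chain, which is exactly how a route handler terminates a request.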

Setting up passport

Before we can make full use of passport, we need to tell it how to do a few important things. First, we need to instruct passport how to serialize a user to a session. Then, we need to deserialize the user from the session information. Finally, we need to tell passport how to tell if a given e-mail/password combination represents a valid user as given in the following:

// passport.js
var passport = require('passport');
var LocalStrategy = require('passport-local').Strategy;
var User = require('mongoose').model('User');
 
passport.serializeUser(function(user, done) {
  done(null, user.id);
});

passport.deserializeUser(function(id, done) {
  User.findById(id, done);
});

Here, we tell passport that when we serialize a user, we only need that user's id. Then, when we want to deserialize a user from session data, we just look up the user by their ID! This is used in passport's middleware: after the request is finished, we take req.user and serialize their ID to our persistent session. When we first get a request, we take the ID stored in our session, retrieve the record from the database, and populate the request object with a user property. All of this functionality is provided transparently by passport, as long as we provide definitions for these two functions as given in the following:

function authFail(done) {
  done(null, false, { message: 'incorrect email/password combination' });
}

passport.use(new LocalStrategy({
  // passport-local reads req.body.username by default;
  // tell it to use the email field instead
  usernameField: 'email'
}, function(email, password, done) {
  User.findOne({
    email: email
  }, function(err, user) {
    if (err) return done(err);
    if (!user) {
      return authFail(done);
    }
    if (!user.validPassword(password)) {
      return authFail(done);
    }
    return done(null, user);
  });
}));

We tell passport how to authenticate a user locally. We create a new LocalStrategy() function, which, when given an e-mail and password, will try to look up a user by e-mail. We can do this because we required the e-mail field to be unique, so there should only be one user. If there is no user, we return an error. If there is a user, but they provided an invalid password, we still return an error. If there is a user and they provided the correct password, then we tell passport that the authentication request was a success by calling the done callback with the valid user.
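Stepping back, the session round trip that serializeUser and deserializeUser enable can be sketched without passport or a real database at all; here plain objects stand in for MongoDB and the session store (the id and e-mail are made up for illustration):

```javascript
// Passport-free sketch of the serialize/deserialize round trip.
// A plain object stands in for MongoDB; another for the session store.
var db = { '42': { id: '42', email: 'user@example.com' } };
var session = {};

function serializeUser(user, done) { done(null, user.id); }
function deserializeUser(id, done) { done(null, db[id]); }

// End of an authenticated request: only the id goes into the session.
serializeUser(db['42'], function(err, id) { session.userId = id; });

// Start of the next request: the id is exchanged for the full record,
// which passport would attach as req.user.
var req = {};
deserializeUser(session.userId, function(err, user) { req.user = user; });
console.log(req.user.email); // user@example.com
```

Keeping only the id in the session keeps session data small and means a user record edited in the database is always fresh on the next request.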

Registering users

Now, we add routes for registration, both a view with a basic form and backend logic to create a user. First, we will create a user controller. Up until now, we have thrown our routes in our server.js file, but this is generally bad practice. What we want to do is have separate controllers for each kind of route that we want. We have seen the model portion of MVC. Now it’s time to take a look at controllers. Our user controller will have all the routes that manipulate the user model. Let’s create a new file in a new directory, controllers/user.js:

// controllers/user.js
var User = require('mongoose').model('User');

module.exports.showRegistrationForm = function(req, res, next) {
  res.render('register');
};

module.exports.createUser = function(req, res, next) {
  User.register(req.body.email, req.body.password, function(err, user) {
    if (err) return next(err);
    req.login(user, function(err) {
      if (err) return next(err);
      res.redirect('/');
    });
  });
};

Note that the User model takes care of the validations and registration logic; we just provide a callback. Doing this helps consolidate the error handling and generally makes the registration logic easier to understand. If the registration was successful, we call req.login, a function added by passport, which creates a new session for that user, and that user will be available as req.user on subsequent requests.

Finally, we register the routes. At this point, we also extract the routes we previously added to server.js to their own file. Let’s create a new file called routes.js as follows:

// routes.js
var userRoutes = require('./controllers/user');

app.get('/users/register', userRoutes.showRegistrationForm);
app.post('/users/register', userRoutes.createUser);

Now we have a file dedicated to associating controller handlers with actual paths that users can access. This is generally good practice because now we have a place to come visit and see all of our defined routes. It also helps unclutter our server.js file, which should be exclusively devoted to server configuration.

For details, as well as the registration templates used, refer to the source.

Authenticating users

We have already done most of the work required to authenticate users (or rather, passport has). Really, all we need to do is set up routes for authentication and a form to allow users to enter their credentials. First, we’ll add handlers to our user controller:

// controllers/user.js
var passport = require('passport');

module.exports.showLoginForm = function(req, res, next) {
  res.render('login');
};

module.exports.createSession = passport.authenticate('local', {
  successRedirect: '/',
  failureRedirect: '/login'
});

Let's deconstruct what's happening in our login POST handler. We create a handler that is the result of calling passport.authenticate('local', ...). This tells passport that the handler uses the local authentication strategy. So, when someone hits that route, passport will delegate to our LocalStrategy. If they provided a valid e-mail/password combination, our LocalStrategy will give passport the now-authenticated user, and passport will redirect the user to the server root. If the e-mail/password combination was unsuccessful, passport will redirect the user to /login so they can try again.

Then, we will bind these callbacks to routes in routes.js:

app.get('/users/login', userRoutes.showLoginForm);
app.post('/users/login', userRoutes.createSession);

At this point, we should be able to register an account and log in with those same credentials (see tag 0.2 for where we are right now).

OAuth with passport

Now we will add support for logging into our application using Twitter, Google, and GitHub. This functionality is useful if users don’t want to register a separate account for your application. For these users, allowing OAuth through these providers will increase conversions and generally make for an easier registration process for users.

Adding OAuth to user model

Before adding OAuth, we need to keep track of several additional properties on our user model. We keep track of these properties so that we can look up user accounts by provider information, ensure we don't allow duplicate accounts, and allow users to link multiple third-party accounts, using the following code:

var userSchema = new mongoose.Schema({
  email: {
    type: String,
    required: true,
    unique: true
  },
  password: {
    type: String
  },
  created_at: {
    type: Date,
    default: Date.now
  },
  twitter: String,
  google: String,
  github: String,
  tokens: Array,
  profile: {
    name: { type: String, default: '' },
    gender: { type: String, default: '' },
    location: { type: String, default: '' },
    website: { type: String, default: '' },
    picture: { type: String, default: '' }
  }
});

First, we add a property for each provider, in which we will store the unique identifier that the provider gives us when the user authorizes with that provider. Next, we will store an array of tokens, so we can conveniently access a list of providers that are linked to this account; this is useful if you ever want to let a user register through one and then link to others for viral marketing or extra user information. Finally, we keep track of some demographic information about the user that the providers give to us so we can provide a better experience for our users.

Getting API tokens

Now, we need to go to the appropriate third parties and register our application to receive application keys and secret tokens. We will add these to our configuration. We will use separate tokens for development and production purposes (for obvious reasons!). For security reasons, we will only have our production tokens as environment variables on our final deploy server, not committed to version control.

I’ll wait while you navigate to the third-party websites and add their tokens to your configuration as follows:

// config.js
twitter: {
   consumerKey: process.env.TWITTER_KEY || 'VRE4lt1y0W3yWTpChzJHcAaVf',
   consumerSecret: process.env.TWITTER_SECRET || 'TOA4rNzv9Cn8IwrOi6MOmyV894hyaJks6393V6cyLdtmFfkWqe',
   callbackURL: '/auth/twitter/callback'
},
google: {
   clientID: process.env.GOOGLE_ID || '627474771522-uskkhdsevat3rn15kgrqt62bdft15cpu.apps.googleusercontent.com',
   clientSecret: process.env.GOOGLE_SECRET || 'FwVkn76DKx_0BBaIAmRb6mjB',
   callbackURL: '/auth/google/callback'
},
github: {
   clientID: process.env.GITHUB_ID || '81b233b3394179bfe2bc',
   clientSecret: process.env.GITHUB_SECRET || 'de0322c0aa32eafaa84440ca6877ac5be9db9ca6',
   callbackURL: '/auth/github/callback'
}

Of course, you should never commit your development keys publicly either. Be sure to either not commit this file or to use private source control. The best idea is to only have secrets live on machines ephemerally (usually as environment variables). You especially should not use the keys that I provided here!

Third-party registration and login

Now we need to install and implement the various third-party registration strategies. To install them, run the following command:

$ npm install --save passport-twitter passport-google-oauth passport-github

Most of these are extraordinarily similar, so I will only show the TwitterStrategy, as follows:

passport.use(new TwitterStrategy(config.twitter, function(req, accessToken, tokenSecret, profile, done) {
  User.findOne({ twitter: profile.id }, function(err, existingUser) {
    if (err) return done(err);
    if (existingUser) return done(null, existingUser);
    var user = new User();
    // Twitter will not provide an email address. Period.
    // But a person's twitter username is guaranteed to be unique
    // so we can "fake" a twitter email address as follows:
    // username@twitter.mydomain.com
    user.email = profile.username + "@twitter." + config.domain + ".com";
    user.twitter = profile.id;
    user.tokens.push({ kind: 'twitter', accessToken: accessToken, tokenSecret: tokenSecret });
    user.profile.name = profile.displayName;
    user.profile.location = profile._json.location;
    user.profile.picture = profile._json.profile_image_url;
    user.save(function(err) {
      done(err, user);
    });
  });
}));

Here, I included one example of how we would do this. First, we pass a new TwitterStrategy to passport. The TwitterStrategy takes our Twitter keys and callback information, along with a callback used to register the user with that information. If the user is already registered, then it's a no-op; otherwise, we save their information and pass along the error and/or successfully saved user to the callback. For the others, refer to the source.

Profile pages

It is finally time to add profile pages for each of our users. To do so, we're going to discuss more about Express routing and how to pass request-specific data to Jade templates. Often, when writing a server, you want to capture some portion of the URL to use in the controller; this could be a user ID, a username, or anything! We'll use Express's ability to capture URL parts to get the id of the user whose profile page was requested.

URL params

Express, like any good web framework, supports extracting data from URL parts. For example, you can do the following:

app.get('/users/:id', function(req, res, next) {
  console.log(req.params.id);
});

In the preceding example, we will print whatever comes after /users/ in the request URL. This provides an easy way to specify per-user routes—routes that only make sense in the context of a specific user; for example, a profile page only makes sense when you specify a specific user. We will use this kind of routing to implement our profile page. For now, we want to make sure that only the logged-in user can see their own profile page (we can change this functionality later):

app.get('/users/:id', function(req, res, next) {
  if (!req.user || (req.user.id != req.params.id)) {
    return next('Not found');
  }
  res.render('users/profile', { user: req.user.toJSON() });
});

Here, we check first that the user is signed in and that the requested user’s id is the same as the logged-in user’s id. If it isn’t, then we return an error. If it is, then we render the users/profile.jade template with req.user as the data.
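Under the hood, Express compiles path strings like '/users/:id' into regular expressions (it uses the path-to-regexp module for this). The following is a rough sketch of the idea, not Express's actual implementation:

```javascript
// Toy sketch of URL-parameter matching: turn '/users/:id' into a regex
// with one capture group per named parameter.
function compile(pattern) {
  var keys = [];
  var source = pattern.replace(/:([^\/]+)/g, function(_, key) {
    keys.push(key);           // remember the parameter name
    return '([^/]+)';         // match one path segment
  });
  var re = new RegExp('^' + source + '$');
  return function(path) {
    var m = re.exec(path);
    if (!m) return null;
    var params = {};
    keys.forEach(function(key, i) { params[key] = m[i + 1]; });
    return params;
  };
}

var match = compile('/users/:id');
console.log(match('/users/123')); // { id: '123' }
console.log(match('/posts/123')); // null
```

Express does the same work for us and exposes the captured values on req.params.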

Profile templates

We already looked at models and controllers at length, but our templates have been underwhelming. Finally, we’ll show how to write some basic Jade templates. This section will serve as a brief introduction to the Jade templating language, but does not try to be comprehensive. The code for Profile templates is as follows:

html
  body
    h1
      = user.email
    h2
      = user.created_at
    - for (var prop in user.profile)
      if user.profile[prop]
        h4
          = prop + "=" + user.profile[prop]

Notably, because in the controller we passed in the user to the view, we can access the variable user, and it refers to the logged-in user! We can execute arbitrary JavaScript and render its result into the template by prefixing it with =. In these blocks, we can do anything we would normally do, including string concatenation, method invocation, and so on.

Similarly, we can include JavaScript code whose result is not written into the HTML by prefixing it with -, as we did with the for loop. This basic template prints out the user's e-mail, the created_at timestamp, as well as all of the properties in their profile, if any.

For a more in-depth look at Jade, please see http://jade-lang.com/reference/.

Testing

Testing is essential for any application. I will not dwell on the whys, but instead assume that you are angry with me for skipping this topic in the previous sections. Testing Express applications tends to be relatively straightforward and painless. The general format is that we make fake requests and then make certain assertions about the responses.

We could also implement finer-grained unit tests for more complex logic, but up until now almost everything we did is straightforward enough to be tested on a per route basis. Additionally, testing at the API level provides a more realistic view of how real customers will be interacting with your website and makes tests less brittle in the face of refactoring code.

Introducing Mocha

Mocha is a simple, flexible test framework and runner. First, I would suggest installing Mocha globally so you can easily run tests from the command line, and also saving it as a development dependency, as follows:

$ npm install -g mocha
$ npm install --save-dev mocha

The --save-dev option saves Mocha as a development dependency, meaning we don't have to install Mocha on our production servers. Mocha is just a test runner; we also need an assertion library. There are a variety of solutions, but should.js, written by the same person as Express and Mocha, provides a clean syntax for making assertions:

$ npm install --save-dev should

The should.js syntax provides BDD assertions, such as 'hello'.should.equal('hello') and [1,2].should.have.length(2). We can start with a Hello World test example by creating a test directory with a single file, hello-world.js, as shown in the following code:

var should = require('should');

describe('The World', function() {
  it('should say hello', function() {
    'Hello, World'.should.equal('Hello, World');
  });

  it('should say hello asynchronously!', function(done) {
    setTimeout(function() {
      'Hello, World'.should.equal('Hello, World');
      done();
    }, 300);
  });
});

We have two different tests, both in the same namespace, The World. The first test is an example of a synchronous test: Mocha executes the function we give it, sees that no exception is thrown, and the test passes. If, instead, we accept a done argument in our callback, as we do in the second example, Mocha will intelligently wait until we invoke the callback before checking the validity of our test. For the most part, we will use the second version, in which we must explicitly invoke the done argument to finish our test, because asynchronous tests make more sense for Express applications.
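The way Mocha decides between the two styles can be sketched in a few lines: a toy runner (not Mocha's actual source) treats a test as synchronous when its function declares no parameters, and otherwise hands it a done callback:

```javascript
// Toy sketch of how a runner can dispatch on a test function's arity:
// fn.length is the number of declared parameters.
function runTest(fn, report) {
  if (fn.length === 0) {
    // Synchronous test: pass if no exception is thrown.
    try { fn(); report(null); } catch (e) { report(e); }
  } else {
    // Async-style test: wait for done(err) to be invoked.
    fn(function(err) { report(err || null); });
  }
}

var outcomes = [];
runTest(function() { /* passes */ }, function(err) { outcomes.push(err === null ? 'pass' : 'fail'); });
runTest(function(done) { done(); }, function(err) { outcomes.push(err === null ? 'pass' : 'fail'); });
runTest(function() { throw new Error('boom'); }, function(err) { outcomes.push(err === null ? 'pass' : 'fail'); });

console.log(outcomes); // [ 'pass', 'pass', 'fail' ]
```

Real Mocha additionally enforces a timeout so a test that never calls done() fails rather than hanging forever.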

Now, if we go back to the command line, we should be able to run Mocha (or node_modules/.bin/mocha if you didn’t install it globally) and see that both of the tests we wrote pass!

Testing API endpoints

Now that we have a basic understanding of how to run tests using Mocha and make assertions with should syntax, we can apply it to test local user registration. First, we need to introduce another npm module that will help us test our server programmatically and make assertions about what kind of responses we expect. The library is called supertest:

$ npm install --save-dev supertest

The library makes testing Express applications a breeze and provides chainable assertions. Let’s take a look at an example usage to test our create user route,
as shown in the following code:

var should = require('should'),
    request = require('supertest'),
    app = require('../server').app,
    User = require('mongoose').model('User');

describe('Users', function() {
  before(function(done) {
    User.remove({}, done);
  });

  describe('registration', function() {
    it('should register valid user', function(done) {
      request(app)
        .post('/users/register')
        .send({
          email: "test@example.com",
          password: "hello world"
        })
        .expect(302)
        .end(function(err, res) {
          res.text.should.containEql("Redirecting to /");
          done(err);
        });
    });
  });
});

First, notice that we used two namespaces: Users and registration. Now, before we run any tests, we remove all users from the database. This is useful to ensure we know where we're starting the tests. This will delete all of your saved users, though, so it's useful to use a different database in the test environment. Node applications conventionally detect the environment by looking at the NODE_ENV environment variable; typically, it is test, development, staging, or production. We can do so by changing the database URL in our configuration file to use a different local database when in a test environment and then run the Mocha tests with NODE_ENV=test mocha.
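As a sketch of what that switch might look like in config.js (the file layout and database URLs here are assumptions for illustration, not the book's actual configuration):

```javascript
// config.js (sketch): pick a database URL based on NODE_ENV so that
// running tests never touches development or production data.
var env = process.env.NODE_ENV || 'development';

var databaseUrls = {
  development: 'mongodb://localhost/myapp-dev',
  test: 'mongodb://localhost/myapp-test',
  production: process.env.MONGODB_URL   // hypothetical env var for deploys
};

var config = { env: env, db: databaseUrls[env] };
module.exports = config;
```

With this in place, NODE_ENV=test mocha runs the suite against the throwaway test database.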

Now, on to the interesting bits! Supertest exposes a chainable API to make requests and assertions about responses. To make a request, we use request(app). From there, we specify the HTTP method and path. Then, we can specify a JSON body to send to the server; in this case, an example user registration form. On registration, we expect a redirect, which is a 302 response. If that assertion fails, then the err argument in our callback will be populated, and the test will fail when we use done(err). Additionally, we validate that we were redirected to the route we expect, the server root /.

Automate builds and deploys

All of this development is relatively worthless without a smooth process to build and deploy your application. Fortunately, the node community has written a variety of task runners. Among these are Grunt and Gulp, two of the most popular task runners. Both work seamlessly with Express and provide a set of utilities for us to use, including concatenating and uglifying JavaScript, compiling sass/less, and reloading the server on local file changes. We’ll focus on Grunt, for simplicity.

Introducing the Gruntfile

Grunt itself is a simple task runner, but its extensibility and plugin architecture let you install third-party scripts to run in predefined tasks. To give us an idea of how we might use Grunt, we're going to write our css in sass and then use Grunt to compile sass to css. Through this example, we'll explore the different ideas that Grunt introduces. First, you need to install the Grunt command-line interface globally, along with the plugin that compiles sass to css:

$ npm install -g grunt-cli 
$ npm install --save grunt grunt-contrib-sass

Now we need to create Gruntfile.js, which contains instructions for all of the tasks and build targets that we need. To do this, perform the following:

// Gruntfile.js
module.exports = function(grunt) {
  grunt.loadNpmTasks('grunt-contrib-sass');
  grunt.initConfig({
    sass: {
      dist: {
        files: [{
          expand: true,
          cwd: "public/styles",
          src: ["**/*.scss"],
          dest: "dist/styles",
          ext: ".css"
        }]
      }
    }
  });
};

Let’s go over the major parts. Right at the top, we load the plugin we will use, grunt-contrib-sass. This tells Grunt that we are going to configure a task called sass. In our definition of the task sass, we specify a target, dist, which is commonly used for tasks that produce production files (minified, concatenated, and so on).

In that task, we build our file list dynamically, telling Grunt to look in /public/styles/ recursively for all .scss files, then compile them all to the same paths in /dist/styles. It is useful to have two parallel static directories, one for development and one for production, so we don’t have to look at minified code in development. We can invoke this target by executing grunt sass or grunt sass:dist.

It is worth noting that we don’t explicitly concatenate the files in this task, but if we use @import in our main Sass file, the compiler will concatenate everything for us.

We can also configure Grunt to run our test suite. To do this, let’s add another plugin with npm install --save-dev grunt-mocha-test. Now we have to add the following code to our Gruntfile.js file:

grunt.loadNpmTasks('grunt-mocha-test');
grunt.registerTask('test', 'mochaTest');
...
// inside the grunt.initConfig call, alongside sass:
mochaTest: {
  test: {
    src: ["test/**/*.js"]
  }
}

Here, the task is called mochaTest and we register a new task called test that simply delegates to the mochaTest task. This way, it is easier to remember how to run tests. Similarly, we could have specified a list of tasks to run if we passed an array of strings as the second argument to registerTask. This is a sampling of what can be accomplished with Grunt. For an example of a more robust Gruntfile, check out the source.
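To illustrate how registerTask aliases expand, here is a toy model (the grunt stub below stands in for the real task runner, purely to show the expansion order):

```javascript
// A toy model of Grunt's task aliasing: registered names either map to
// other task names (aliases) or are leaf tasks run directly.
const tasks = {};
const grunt = {
  registerTask(name, taskList) {
    tasks[name] = Array.isArray(taskList) ? taskList : [taskList];
  },
  run(name) {
    // Expand aliases recursively into a flat execution order.
    return tasks[name].flatMap(t => (tasks[t] ? grunt.run(t) : [t]));
  }
};

grunt.registerTask('test', 'mochaTest');              // alias: test -> mochaTest
grunt.registerTask('default', ['sass:dist', 'test']); // a list of tasks

console.log(grunt.run('default')); // [ 'sass:dist', 'mochaTest' ]
```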

Continuous integration with Travis

Travis CI provides free continuous integration for open source projects, as well as paid options for closed source applications. It uses a git hook to automatically test your application after every push. This is useful to ensure no regression was introduced. Also, there can be dependency problems that only CI reveals and local development masks; Travis is the first line of defense for these bugs. It takes your source, runs npm install to install the dependencies specified in package.json, and then runs npm test to run your test suite.

Travis accepts a configuration file called .travis.yml, which typically looks like this:

language: node_js
node_js:
- "0.11"
- "0.10"
- "0.8"
services:
- mongodb

We can specify the versions of node that we want to test against as well as the services that we rely on (specifically MongoDB). Now we have to update our test command in package.json to run grunt test. Finally, we have to set up a webhook for the repository in question. We can do this on Travis by enabling the repository. Now we just have to push our changes and Travis will make sure all the tests pass! Travis is extremely flexible and you can use it to accomplish most tasks related to continuous integration, including automatically deploying successful builds.
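Updating the test command amounts to pointing the scripts.test entry of package.json at Grunt; a minimal fragment (other fields elided):

```json
{
  "scripts": {
    "test": "grunt test"
  }
}
```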

Deploying Node.js applications

One of the easiest ways to deploy Node.js applications is to utilize Heroku, a platform as a service provider. Heroku has its own toolbelt to create and deploy Heroku apps from your machine. Before getting started with Heroku, you will need to install its toolbelt.

Please go to https://toolbelt.heroku.com/ to download the Heroku toolbelt.

Once installed, you can register via the web UI and then log in from your machine by running heroku login. Heroku uses a special file, called the Procfile, which specifies exactly how to run your application.

  1. Our Procfile looks like this:
    web: node server.js

    Extraordinarily simple: in order to run the web server, just run node server.js.

  2. In order to verify that our Procfile is correct, we can run the following locally:
    $ foreman start
  3. Foreman looks at the Procfile and uses that to try to start our server. Once that runs successfully, we need to create a new application and then deploy our application to Heroku. Be sure to commit the Procfile to version control:
    $ heroku create
    $ git push heroku master

    Heroku will create a new application and URL, as well as a git remote named heroku. Pushing to that remote triggers a deploy of your code.

    If you do all of this, unfortunately your application will not work. We don’t have a Mongo instance for our application to talk to!

  4. First we have to request MongoDB from Heroku:
    $ heroku addons:add mongolab  # don't worry, it's free

    This spins up a shared MongoDB instance and gives our application an environment variable named MONGOLAB_URI, which we should use as our MongoDB connect URI. We need to change our configuration file to reflect these changes.

    In our configuration file, in production, our database URL should come from the environment variable MONGOLAB_URI. Also, be sure that Express is listening on process.env.PORT || 3000, or else you will receive strange errors.

  5. With all of that set up, we can commit our changes and push the changes once again to Heroku. Hopefully, this time it works! To view the application logs for debugging purposes, just use the Heroku toolbelt:

    $ heroku logs
  6. One last thing about deploying Express applications: sometimes applications crash; software isn’t perfect. We should anticipate crashes and have our application respond accordingly (by restarting itself). There are many process monitoring tools, including pm2 and forever. We use forever because of its simplicity.
    $ npm install --save forever
  7. Then, we update our Procfile to reflect our use of forever:

    # Procfile
    web: node_modules/.bin/forever server.js

Now, forever will automatically restart our application if it crashes for any reason. You can also set up Travis to automatically push successful builds to your server, but that goes beyond the deployment we will do in this book.

Summary

In this article, we got our feet wet in the world of Node.js and the Express framework. We went over everything from Hello World and MVC to testing and deployments. You should feel comfortable using basic Express APIs, but also feel empowered to own a Node.js application across the entire stack.
