
In this article by Vilic Vane, author of the book TypeScript Design Patterns, we’ll study architecture and patterns that are closely related to the language or its common applications. Many topics in this article are related to asynchronous programming. We’ll start with a web architecture for Node.js that’s based on Promise. This is a large topic with interesting ideas involved, including abstractions of response and permission, as well as error handling tips. Then, we’ll talk about how to organize modules with ES module syntax.

Due to the limited length of this article, some of the related code is aggressively simplified, so only the ideas themselves, rather than the code, should be applied in practice.


Promise-based web architecture

The most exciting thing about Promise may be the benefits it brings to error handling. In a Promise-based architecture, throwing an error is safe and pleasant. You don’t have to explicitly handle errors when chaining asynchronous operations, which makes it much less likely for mistakes to slip through.

With the growing adoption of ES2015-compatible runtimes, Promise is now available out of the box. We also have plenty of polyfills for Promise (including my ThenFail, written in TypeScript), as people who write JavaScript are roughly the same group of people who reinvent wheels.

Promises work great with other Promises:

  • A Promises/A+ compatible implementation should work with other Promises/A+ compatible implementations
  • Promises do their best in a Promise-based architecture

If you are new to Promise, you may be reluctant to try Promise in a callback-based project. You may intend to use helpers provided by Promise libraries, such as Promise.all, but it turns out that you have better alternatives, such as the async library.

So, the reason that makes you decide to switch should not be these helpers (as there are a lot of them for callbacks). It should be that there’s an easier way to handle errors, or that you want to take advantage of the ES async and await features, which are based on Promise.
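As a quick taste of the latter, here is a small sketch of how async and await (built on Promise) flatten a chain and centralize error handling. The readFile parameter and the 'config.json' file name here are hypothetical, for illustration only:

```typescript
// A sketch of async/await error handling, built on Promise. The
// `readFile` parameter and the 'config.json' file name are hypothetical.
async function loadConfig(
    readFile: (path: string) => Promise<string>
): Promise<any> {
    try {
        const text = await readFile('config.json');
        // An error thrown here (for example, from JSON.parse) flows into
        // the same catch block as a rejection from readFile would.
        return JSON.parse(text);
    } catch (error) {
        return {}; // Fall back to an empty configuration.
    }
}
```

A single try/catch covers every step of the chain, which is exactly the error handling benefit described above.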

Promisifying existing modules or libraries

Though Promises do their best with a Promise-based architecture, it is still possible to begin using Promise with a smaller scope by promisifying existing modules or libraries.

Taking Node.js style callbacks as an example, this is how we use them:

import * as FS from 'fs';

FS.readFile('some-file.txt', 'utf-8', (error, text) => {
    if (error) {
        console.error(error);
        return;
    }

    console.log('Content:', text);
});

You may expect a promisified version of readFile to look like the following:

FS
    .readFile('some-file.txt', 'utf-8')
    .then(text => {
        console.log('Content:', text);
    })
    .catch(reason => {
        console.error(reason);
    });

Implementing the promisified version of readFile can be as easy as the following:

function readFile(path: string, options: any): Promise<string> {
    return new Promise((resolve, reject) => {
        FS.readFile(path, options, (error, result) => {
            if (error) {
                reject(error);
            } else {
                resolve(result);
            }
        });
    });
}

I am using any here for the parameter options to reduce the size of the demo code, but I would suggest that you avoid any whenever possible in practice.

There are libraries that can promisify methods automatically. Unfortunately, you may need to write declaration files yourself for the promisified methods if no declaration file of the promisified version is available.
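As a sketch of what such a library does under the hood, a generic wrapper could look like the following. The helper name promisify and the overload set are assumptions of this sketch; real libraries (for example, Node.js’s util.promisify) cover more arities and edge cases:

```typescript
// A generic sketch of automatic promisification. Overloads give typed
// results for common arities; the helper name is an assumption, and real
// libraries handle more edge cases.
type NodeCallback<T> = (error: any, result?: T) => void;

function promisify<A, T>(
    fn: (arg: A, callback: NodeCallback<T>) => void
): (arg: A) => Promise<T>;
function promisify<A, B, T>(
    fn: (a: A, b: B, callback: NodeCallback<T>) => void
): (a: A, b: B) => Promise<T>;
function promisify(fn: Function): (...args: any[]) => Promise<any> {
    return (...args: any[]) =>
        new Promise((resolve, reject) => {
            // Append a Node.js style callback that settles the Promise.
            fn(...args, (error: any, result: any) => {
                if (error) {
                    reject(error);
                } else {
                    resolve(result);
                }
            });
        });
}
```

With this, something like promisify(FS.readFile) would give the promisified version shown earlier, though, as noted, you would still need to write declarations yourself if the library ships none.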

Views and controllers in Express

Many of us may have already been working with frameworks such as Express. This is how we render a view or send back JSON data in Express:

import * as Path from 'path';
import * as express from 'express';

let app = express();

app.set('view engine', 'hbs');
app.set('views', Path.join(__dirname, '../views'));

app.get('/page', (req, res) => {
    res.render('page', {
        title: 'Hello, Express!',
        content: '...'
    });
});

app.get('/data', (req, res) => {
    res.json({
        version: '0.0.0',
        items: []
    });
});

app.listen(1337);

We will usually separate the controller from routing, as follows:

import { Request, Response } from 'express';

export function page(req: Request, res: Response): void {
    res.render('page', {
        title: 'Hello, Express!',
        content: '...'
    });
}

Thus, we get a better overview of existing routes, and controllers can be managed more easily. Furthermore, automated routing can be introduced so that we don’t always need to update the routing manually:

import * as glob from 'glob';

let controllersDir = Path.join(__dirname, 'controllers');

let controllerPaths = glob.sync('**/*.js', {
    cwd: controllersDir
});

for (let path of controllerPaths) {
    let controller = require(Path.join(controllersDir, path));
    let urlPath = path.replace(/\\/g, '/').replace(/\.js$/, '');

    for (let actionName of Object.keys(controller)) {
        app.get(
            `/${urlPath}/${actionName}`,
            controller[actionName]
        );
    }
}

The preceding implementation is certainly too simple to cover daily usage. However, it illustrates the rough idea of how automated routing could work: via conventions based on file structures.

Now, if we are working with asynchronous code that is written in Promises, an action in the controller could be like the following:

export function foo(req: Request, res: Response): void {
    Promise
        .all([
            Post.getContent(),
            Post.getComments()
        ])
        .then(([post, comments]) => {
            res.render('foo', {
                post,
                comments
            });
        });
}

We use destructuring of an array within a parameter. Promise.all returns a Promise of an array with elements corresponding to the values of the resolvables that are passed in. (A resolvable means a normal value or a Promise-like object that may resolve to a normal value.)

However, this is not enough; we still need to handle errors properly, or in some cases the preceding code may fail silently (which is terrible). In Express, when an error occurs, you should call next (the third argument passed into the callback) with the error object, as follows:

import { Request, Response, NextFunction } from 'express';

export function foo(
    req: Request,
    res: Response,
    next: NextFunction
): void {
    Promise
        // ...
        .catch(reason => next(reason));
}

Now, this approach is correct, but it is simply not how Promise-based code is supposed to be written. Explicit error handling with callbacks can be eliminated in the scope of controllers, and the easiest way to do this is to return the Promise chain and hand it over to the code that performs the routing logic. So, the controller could be written like the following:

export function foo(req: Request, res: Response) {
    return Promise
        .all([
            Post.getContent(),
            Post.getComments()
        ])
        .then(([post, comments]) => {
            res.render('foo', {
                post,
                comments
            });
        });
}

Or, can we make this even better?

Abstraction of response

We’ve already been returning a Promise to tell whether an error occurred. So, for a server error, the Promise actually indicates the result, or in other words, the response of the request. However, why are we still calling res.render() to render the view? The returned Promise object could be an abstraction of the response itself.

Think about the following controller again:

export class Response { }

export class PageResponse extends Response {
    constructor(view: string, data: any) {
        super();
    }
}

export function foo(req: Request) {
    return Promise
        .all([
            Post.getContent(),
            Post.getComments()
        ])
        .then(([post, comments]) => {
            return new PageResponse('foo', {
                post,
                comments
            });
        });
}

The response object that is returned could vary for a different response output. For example, it could be either a PageResponse like it is in the preceding example, a JSONResponse, a StreamResponse, or even a simple Redirection.
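A minimal sketch of these response classes might look like the following. The applyTo method matches its use in the router code later in this section; the narrow Applyable interface stands in for Express’s Response type so that the sketch stays self-contained:

```typescript
// The subset of Express's response API this sketch relies on; in a real
// project you would use the Response type from 'express' directly.
interface Applyable {
    render(view: string, data: any): void;
    json(data: any): void;
    redirect(url: string): void;
}

abstract class Response {
    abstract applyTo(res: Applyable): void;
}

class PageResponse extends Response {
    constructor(private view: string, private data: any) {
        super();
    }

    applyTo(res: Applyable): void {
        res.render(this.view, this.data);
    }
}

class JSONResponse extends Response {
    constructor(private data: any) {
        super();
    }

    applyTo(res: Applyable): void {
        res.json(this.data);
    }
}

class Redirection extends Response {
    constructor(private url: string) {
        super();
    }

    applyTo(res: Applyable): void {
        res.redirect(this.url);
    }
}
```

Each subclass only knows how to apply itself to a response object, so the router never needs to care which kind of response an action produced.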

As PageResponse or JSONResponse applies in most cases, and the view of a PageResponse can usually be implied from the controller path and action name, it is useful to have these two responses automatically generated from a plain data object with the proper view to render, as follows:

export function foo(req: Request) {
    return Promise
        .all([
            Post.getContent(),
            Post.getComments()
        ])
        .then(([post, comments]) => {
            return {
                post,
                comments
            };
        });
}

This is how a Promise-based controller should respond. With this idea in mind, let’s update the routing code with an abstraction of responses. Previously, we were passing controller actions directly as Express request handlers. Now, we need to wrap the actions by resolving the return value and applying operations based on the resolved result, as follows:

  1. If it fulfills with an instance of Response, apply it to the res object that is passed in by Express.
  2. If it fulfills with a plain object, construct a PageResponse, or a JSONResponse if no matching view is found, and apply it to the res object.
  3. If it rejects, call the next function with the rejection reason.

As seen previously, our code was like the following:

app.get(`/${urlPath}/${actionName}`, controller[actionName]);

Now, it takes a few more lines, as follows:

let action = controller[actionName];

app.get(`/${urlPath}/${actionName}`, (req, res, next) => {
    Promise
        .resolve(action(req))
        .then(result => {
            if (result instanceof Response) {
                result.applyTo(res);
            } else if (existsView(actionName)) {
                new PageResponse(actionName, result).applyTo(res);
            } else {
                new JSONResponse(result).applyTo(res);
            }
        })
        .catch(reason => next(reason));
});

 

However, so far we can only handle GET requests, as we hardcoded app.get() in our router implementation. Moreover, the poor view-matching logic can hardly be used in practice. We need to make these actions configurable, and ES decorators can do a good job here:

export default class Controller {
    @get({
        view: 'custom-view-path'
    })
    foo(req: Request) {
        return {
            title: 'Action foo',
            content: 'Content of action foo'
        };
    }
}

I’ll leave the implementation to you; feel free to make it awesome.
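As a starting point, here is one possible sketch: the decorator simply stashes its options onto the action function so that the router can read them back when wiring routes. All names here are assumptions, and TypeScript’s experimentalDecorators option would need to be enabled to use the @get syntax shown above:

```typescript
interface ActionOptions {
    method?: string; // HTTP method, 'get' if omitted.
    view?: string;   // Explicit view path overriding the convention.
}

// The decorator attaches options to the action function itself; the
// router can then call app[options.method || 'get'](...) and use
// options.view when constructing a PageResponse.
function route(options: ActionOptions = {}) {
    return (
        target: any,
        name: string,
        descriptor: PropertyDescriptor
    ) => {
        descriptor.value.options = options;
    };
}

const get = (options: ActionOptions = {}) =>
    route({ ...options, method: 'get' });
const post = (options: ActionOptions = {}) =>
    route({ ...options, method: 'post' });
```

The router loop then reads action.options instead of guessing the HTTP method and view from conventions alone.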

Abstraction of permission

Permission plays an important role in a project, especially in systems that have different user groups, for example, a forum. The abstraction of permission should be extendable to satisfy changing requirements, and it should be easy to use as well.

Here, we are going to talk about the abstraction of permission at the level of controller actions. Consider the eligibility to perform one or more actions a privilege. The permission of a user may consist of several privileges, and usually most users at the same level will have the same set of privileges. So, we may have a larger concept, namely groups.

The abstraction could either work based on both groups and privileges, or work based on only privileges (groups are now just aliases to sets of privileges):

  • Abstraction that validates based on privileges and groups at the same time is easier to build. You do not need to create a large list of which actions can be performed by a certain group of users, as granular privileges are only required when necessary.
  • Abstraction that validates based on privileges has better control and more flexibility to describe the permission. For example, you can remove a small set of privileges from the permission of a user easily.

However, both approaches have similar upper-level abstractions and differ mostly in implementation. The general structure of the permission abstractions that we’ve talked about is shown in the following diagram:

The participants include the following:

  • Privilege: This describes a detailed privilege corresponding to specific actions
  • Group: This defines a set of privileges
  • Permission: This describes what a user is capable of doing, consisting of the groups that the user belongs to and the privileges that the user has
  • Permission descriptor: This describes how the permission of a user works and consists of possible groups and privileges
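Under the privileges-only interpretation (with groups as aliases for privilege sets), a sketch of these participants could look like the following; all names and shapes here are assumptions for illustration:

```typescript
// A privilege names the eligibility to perform specific actions.
type Privilege = string;

// A group is just an alias for a set of privileges in this sketch.
interface Group {
    name: string;
    privileges: Privilege[];
}

class Permission {
    constructor(
        public groups: Group[],
        public privileges: Privilege[]
    ) { }

    // A user qualifies if the privilege is granted directly or through
    // any of the user's groups.
    can(privilege: Privilege): boolean {
        return (
            this.privileges.indexOf(privilege) >= 0 ||
            this.groups.some(
                group => group.privileges.indexOf(privilege) >= 0
            )
        );
    }
}
```

Removing a small set of privileges from a user then only touches the privileges array, leaving group membership intact, which is the flexibility mentioned above.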

Expected errors

A great concern that is wiped away by using Promises is that, most of the time, we no longer need to worry about whether throwing an error in a callback will crash the application. The error will flow through the Promise chain and, if not caught, will be handled by our router. Errors can be roughly divided into expected errors and unexpected errors. Expected errors are usually caused by incorrect input or foreseeable exceptions; unexpected errors are usually caused by bugs or by other libraries that the project relies on.

For expected errors, we usually want to give users a friendly response with readable error messages and codes, so that they can search for the error themselves or report it to us with useful context. For unexpected errors, we also want a reasonable response (usually a message describing an unknown error), a detailed server-side log (including the real error name, message, stack information, and so on), and even alerts to let the team know as soon as possible.

Defining and throwing expected errors

The router will need to handle different types of errors, and an easy way to achieve this is to subclass a universal ExpectedError class and throw its instances, as follows:

import ExtendableError from 'extendable-error';

class ExpectedError extends ExtendableError {
    constructor(
        message: string,
        public code: number
    ) {
        super(message);
    }
}

The extendable-error is a package of mine that handles the stack trace and the message property. You can also directly extend the Error class.

Thus, when receiving an expected error, we can safely output the error name and message as part of the response. If it is not an instance of ExpectedError, we can display a predefined unknown error message instead.
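The branching described above could be sketched as follows. This ExpectedError extends Error directly, with the prototype fix that packages like extendable-error take care of when targeting ES5; the response shape is an assumption of this sketch:

```typescript
class ExpectedError extends Error {
    constructor(
        message: string,
        public code: number
    ) {
        super(message);
        // When compiling to ES5, subclassing Error breaks the prototype
        // chain and thus instanceof; restoring it here keeps the check
        // below working. Packages like extendable-error handle this (and
        // the stack trace) for you.
        Object.setPrototypeOf(this, ExpectedError.prototype);
    }
}

function buildErrorResponse(error: any): { code: number; message: string } {
    if (error instanceof ExpectedError) {
        // Safe to expose: the message was written for end users.
        return { code: error.code, message: error.message };
    } else {
        // Unexpected: log the details server-side, show a generic message.
        console.error((error && error.stack) || error);
        return { code: -1, message: 'Unknown error' };
    }
}
```

The router’s catch-all handler can call something like buildErrorResponse before applying a JSONResponse or rendering an error page.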

Transforming errors

Some errors, such as those caused by unstable networks or remote services, are expected. We may want to catch these errors and throw them again as expected errors. However, it can be rather tedious to actually do this everywhere. A centralized error-transforming process can be applied to reduce the effort required to manage these errors.

The transforming process includes two parts: filtering (or matching) and transforming. These are the approaches to filter errors:

  • Filter by error class: Many third-party libraries throw errors of certain classes. Taking Sequelize (a popular Node.js ORM) as an example, it has DatabaseError, ConnectionError, ValidationError, and so on. By checking whether errors are instances of a certain error class, we can easily pick the target errors out of the pile.
  • Filter by string or regular expression: Sometimes a library might throw errors that are instances of the Error class itself rather than its subclasses, which makes these errors hard to distinguish from others. In this situation, we can filter such errors by matching their messages against keywords or regular expressions.
  • Filter by scope: It’s possible that instances of the same error class with the same error message should result in different responses. One reason may be that the operation throwing a certain error is at a lower level but is used by upper structures within different scopes. Thus, a scope mark can be added to these errors to make them easier to filter.

There could be more ways to filter errors, and they can usually cooperate with each other as well. By properly applying these filters and transforming the errors, we can reduce noise, analyze what’s going on within the system, and locate problems faster when they occur.
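Putting filtering and transforming together, a centralized transformer could be sketched as follows; the rule shape, class names, and example keywords are assumptions of this sketch:

```typescript
// Each rule pairs a filter with a transform; the first matching rule
// wins, and errors that match no rule pass through unchanged.
interface TransformRule {
    filter(error: any): boolean;
    transform(error: any): Error;
}

class ErrorTransformer {
    private rules: TransformRule[] = [];

    add(rule: TransformRule): void {
        this.rules.push(rule);
    }

    transform(error: any): Error {
        for (let rule of this.rules) {
            if (rule.filter(error)) {
                return rule.transform(error);
            }
        }
        return error;
    }
}

let transformer = new ErrorTransformer();

// A rule filtering by message keywords; class-based and scope-based
// filters would plug in the same way (for example, a filter checking
// error instanceof ConnectionError).
transformer.add({
    filter: error =>
        /ECONNREFUSED|ETIMEDOUT/.test((error && error.message) || ''),
    transform: () =>
        new Error('A remote service is temporarily unavailable'),
});
```

The router (or a Promise chain near the error source) would then pass caught errors through transformer.transform before deciding how to respond.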

Modularizing project

Before ES2015, there were already a lot of module solutions for JavaScript that worked. The most famous two of them might be AMD and CommonJS. AMD is designed for asynchronous module loading, which is mostly applied in browsers, while CommonJS performs module loading synchronously, which is the way the Node.js module system works.

To make it work asynchronously, writing an AMD module takes more characters. Due to the popularity of tools such as browserify and webpack, CommonJS became popular even for browser projects.

Proper granularity of internal modules can help a project keep a healthy structure. Consider a project structure like the following:

project
├─controllers
├─core
│  │  index.ts
│  │
│  ├─product
│  │    index.ts
│  │    order.ts
│  │    shipping.ts
│  │
│  └─user
│       index.ts
│       account.ts
│       statistics.ts
├─helpers
├─models
├─utils
└─views

Let’s assume that we are writing a controller file that’s going to import a module defined by the core/product/order.ts file. Previously, using CommonJS style require, we would write the following:

const Order = require('../core/product/order');

Now, with the new ES import syntax, this would be like the following:

import * as Order from '../core/product/order';

Wait, isn’t this essentially the same? Sort of. However, you may have noticed the several index.ts files that I’ve put into folders. Now, in the core/product/index.ts file, we could have the following:

import * as Order from './order';
import * as Shipping from './shipping';

export { Order, Shipping }

Or, we could also have the following:

export * from './order';
export * from './shipping';

What’s the difference? The idea behind these two approaches of re-exporting modules can vary. The first style works better when we treat Order and Shipping as namespaces, under which the identifier names may not be easy to distinguish from one another. With this style, the files are the natural boundaries for building these namespaces. The second style weakens the namespace property of the two files and uses them as tools to organize objects and classes under the same larger category.

A good thing about using these files as namespaces is that multiple levels of re-exporting are fine, while weakening namespaces makes it harder to distinguish the identifier names as the number of re-exporting levels grows.

Summary

In this article, we discussed some interesting ideas and an architecture formed by these ideas. Most of these topics focused on limited examples and did their own jobs. However, we also discussed ideas about putting a whole system together.
