Design patterns in Node.js

Design patterns are part of the day to day of any software developer, whether they realize it or not.

In this article, we will look at how to identify these patterns out in the wild and how you can start using them in your own projects.

What are design patterns?

Design patterns, simply put, are ways to structure your solution’s code so that you gain some kind of benefit, such as faster development speed, code reusability, and so on.

All of these patterns lend themselves quite easily to the OOP paradigm, although, given JavaScript’s flexibility, you can implement them in non-OOP projects as well.

When it comes to design patterns, there are far too many to cover in a single article. In fact, entire books have been written exclusively about this topic, and new patterns appear every year, leaving those lists incomplete.

A very common classification for these patterns is the one used in the GoF book (the Gang of Four book), but since I’m only going to review a handful of them, I will ignore the classification and simply present a list of patterns you can spot and start using in your code right now.

Immediately Invoked Function Expressions (IIFE)

The first pattern I’m going to show you is one that allows you to define and call a function at the same time. Due to the way JavaScript scoping works, using IIFEs can be great for simulating things like private properties in classes. In fact, this particular pattern is sometimes used as part of the requirements of other, more complex ones. We’ll see how in a bit.

What does an IIFE look like?

But before we delve into the use cases and the mechanics behind it, let me quickly show you what it looks like exactly:

(function() {
   var x = 20;
   var y = 20;
   var answer = x + y;
   console.log(answer);
})();

By pasting the above code into a Node.js REPL or even your browser’s console, you’d immediately get the result because, as the name suggests, you’re executing the function as soon as you define it.

The template for an IIFE consists of an anonymous function declaration wrapped in a set of parentheses (which turns the definition into a function expression), followed by a set of calling parentheses at the very end. Like so:

(function(/*received parameters*/) {
//your code here
})(/*parameters*/)

Use cases

Although it might sound crazy, there are actually a few benefits and use cases where using an IIFE can be a good thing, for example:

Simulating static variables

Remember static variables from other languages such as C or C#? If you’re not familiar with them, a static variable is initialized only once and then keeps whatever value you last assigned to it. The benefit is that if you define a static variable inside a function, that variable is shared across every call to the function, no matter how many times you call it, which greatly simplifies cases like this:

function autoIncrement() {
    static let number = 0
    number++
    return number
}

The above function would return a new number every time we call it (assuming, of course, that the static keyword were available to us in JS, which it is not). We could achieve this with generators in JS, that’s true, but pretending we don’t have access to them, you can simulate a static variable like this:

let autoIncrement = (function() {
    let number = 0

    return function () {
     number++
     return number
    }
})()

What you’re seeing there is the magic of closures wrapped up inside an IIFE. You’re basically returning a new function that gets assigned to the autoIncrement variable (thanks to the immediate execution of the IIFE). And thanks to JavaScript’s scoping mechanics, that function will always have access to the number variable, as if it were a global one.
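
For reference, the generator-based alternative mentioned above could look roughly like this:

function* autoIncrementGenerator() {
    let number = 0
    while(true) {
        number++
        yield number //pause here and hand the current value back to the caller
    }
}

const gen = autoIncrementGenerator()
console.log(gen.next().value) // 1
console.log(gen.next().value) // 2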

Simulating private variables

As you may (or may not) already know, ES6 classes treat every member as public, meaning there are no private properties or methods out of the box. But thanks to IIFEs, you can simulate them if you want to.

const autoIncrementer = (function() {
  let value = 0;

  return {
    incr() {
        value++
    },

    get value() {
        return value
    }
  };
})();
> autoIncrementer.incr()
undefined
> autoIncrementer.incr()
undefined
> autoIncrementer.value
2
> autoIncrementer.value = 3
3
> autoIncrementer.value
2

The above code shows one way to do it. Mind you, you’re not defining a class you can instantiate afterward; you are defining a structure, a set of properties and methods that can make use of variables common to the object you’re creating, but that are not accessible from outside (as shown by the failed assignment).

Factory method pattern

This one, in particular, is one of my favorite patterns, since it acts as a tool you can implement to clean your code up a bit.

In essence, the factory method allows you to centralize the logic of creating objects (meaning, which object to create and why) in a single place. This allows you to forget about that part and focus on simply requesting the object you need and then using it.

This might seem like a small benefit, but bear with me for a second, it’ll make sense, trust me.

What does the factory method pattern look like?

This particular pattern would be easier to understand if you first look at its usage, and then at its implementation.

Here is an example:

( _ => {

    let factory = new MyEmployeeFactory()

    let types = ["fulltime", "parttime", "contractor"]
    let employees = [];
    for(let i = 0; i < 100; i++) {
     employees.push(factory.createEmployee({type: types[Math.floor(Math.random() * types.length)]}))
    }

    //....
    employees.forEach( e => {
     console.log(e.speak())
    })

})()

The key takeaway from the above code is the fact that you’re adding objects to the same array, all of which share the same interface (in the sense they have the same set of methods) but you don’t really need to care about which object to create and when to do it.

You can now look at the actual implementation. There is a bit more code to go through, but it’s quite straightforward:

class Employee {

    speak() {
     return "Hi, I'm a " + this.type + " employee"
    }

}

class FullTimeEmployee extends Employee{
    constructor(data) {
     super()
     this.type = "full time"
     //....
    }
}


class PartTimeEmployee extends Employee{
    constructor(data) {
     super()
     this.type = "part time"
     //....
    }
}


class ContractorEmployee extends Employee{
    constructor(data) {
     super()
     this.type = "contractor"
     //....
    }
}

class MyEmployeeFactory {

    createEmployee(data) {
     if(data.type == 'fulltime') return new FullTimeEmployee(data)
     if(data.type == 'parttime') return new PartTimeEmployee(data)
     if(data.type == 'contractor') return new ContractorEmployee(data)
    }
}

Use case

The previous code already shows a generic use case, but if we wanted to be more specific, one particular use case I like to use this pattern for is handling error object creation.

Imagine having an Express application with about 10 endpoints, where every endpoint needs to return between two and three different errors based on the user input. We’re talking about 30 statements like the following:

if(err) {
  res.json({error: true, message: "Error message here"})
}

Now, that wouldn’t be a problem, until the day you suddenly had to add a new attribute to the error object. At that point you’d have to go over your entire project, modifying all 30 places. That could be solved by moving the definition of the error object into a class, which would be great, unless of course you had more than one type of error object, and once again you’d have to decide which object to instantiate based on logic only you know. See where I’m going with this?

If you were to centralize the logic for creating the error object then all you’d have to do throughout your code would be something like:

if(err) {
  res.json(ErrorFactory.getError(err))
}

That is it, you’re done, and you never have to change that line again.
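
The ErrorFactory itself is not defined above, so here is a rough sketch of what it could look like; the error classes and the checks inside getError are hypothetical and would depend on your application:

class ValidationError {
    constructor(err) {
        this.error = true
        this.type = "validation"
        this.message = err.message
    }
}

class NotFoundError {
    constructor(err) {
        this.error = true
        this.type = "not found"
        this.message = err.message
    }
}

class GenericError {
    constructor(err) {
        this.error = true
        this.type = "generic"
        this.message = err.message
    }
}

class ErrorFactory {
    //centralize the decision of which error object to create
    static getError(err) {
        if(err.name === "ValidationError") return new ValidationError(err)
        if(err.statusCode === 404) return new NotFoundError(err)
        return new GenericError(err)
    }
}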

Singleton pattern

This one is another oldie but goodie. It’s quite a simple pattern, mind you, but it helps you keep track of how many instances of a class you create. Actually, it helps you keep that number at exactly one, all of the time. In essence, the singleton pattern lets you instantiate an object once and then reuse that same instance every time you need it, instead of creating a new one each time, and without having to keep track of a reference to it yourself, either globally or by passing it around as a dependency everywhere.

What does the singleton pattern look like?

Normally, other languages implement this pattern using a single static property that stores the instance once it exists. The problem here is that, as I mentioned before, we don’t have access to static class properties in JS. So we could implement this in one of two ways: one would be to use an IIFE instead of a class (I’ll show a quick sketch of that approach after the example below).

The other would be to use an ES6 module and have our singleton class keep the instance in a module-scoped variable. By doing this, the class itself gets exported out of the module, but the variable holding the instance remains local to the module.

I know, but trust me, it sounds a lot more complicated than it looks:

let instance = null

class SingletonClass {

    constructor() {
     this.value = Math.random()
    }

    printValue() {
     console.log(this.value)
    }

    static getInstance() {
     if(!instance) {
         instance = new SingletonClass()
     }

     return instance
    }
}

module.exports = SingletonClass

And you could use it like this:

const Singleton = require("./singleton")

const obj = Singleton.getInstance()
const obj2 = Singleton.getInstance()

obj.printValue()
obj2.printValue()

console.log("Equals:: ", obj === obj2)

The output of course being:

0.5035326348000628
0.5035326348000628
Equals::  true

Confirming that indeed, we’re only instantiating the object once, and returning the existing instance.
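
By the way, the IIFE-based approach I mentioned earlier works too. Here is a minimal sketch of it, reusing the same closure trick from the IIFE section (the names here are just illustrative):

const SingletonIIFE = (function() {
    let instance = null

    return {
        getInstance() {
            if(!instance) {
                instance = {
                    value: Math.random(),
                    printValue() { console.log(this.value) }
                }
            }
            return instance
        }
    }
})()

const a = SingletonIIFE.getInstance()
const b = SingletonIIFE.getInstance()
console.log("Equals:: ", a === b) //true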

Use cases

When trying to decide if you need a singleton-like implementation or not, you need to consider something: how many instances of your classes will you really need? If the answer is 2 or more, then this is not your pattern.

But there are times, such as when dealing with database connections, when you might want to consider it.

Think about it, once you’ve connected to your database, it might be a good idea to keep that connection alive and accessible throughout your code. Mind you, this can be solved in a lot of different ways, yes, but this pattern is indeed, one of them.

Using the above example, we can extrapolate it into something like this:

const driver = require("...")

let instance = null


class DBClass {

    constructor(props) {
     this.props = props
     this._conn = null
    }

    connect() {
     this._conn = driver.connect(this.props)
    }

    get conn() {
     return this._conn
    }

    static getInstance(props) {
     if(!instance) {
         instance = new DBClass(props)
     }

     return instance
    }
}

module.exports = DBClass

And now you’re sure that, no matter where you are, if you use the getInstance method, you’ll get back the one and only active connection (if any).
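
To make the usage a bit more concrete, here is a minimal sketch of the consuming side; the file name db.js and the connection options are, of course, hypothetical:

//assuming the class above lives in db.js and that its driver accepts
//a plain options object; both are placeholders for your actual setup
const DBClass = require("./db")

const db = DBClass.getInstance({ host: "localhost", user: "app" })
db.connect()

//...somewhere else, possibly in another module:
const sameDb = DBClass.getInstance()
console.log(db === sameDb) //true, both variables point to the same connection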

Observer pattern

This one is a very interesting pattern, in the sense that it allows you to respond to certain input by being reactive to it, instead of proactively checking if the input is provided. In other words, with this pattern, you can specify what kind of input you’re waiting for and passively wait until that input is provided in order to execute your code. It’s a set and forget kind of deal, if you will.

Here, the observers are your objects: they know the type of input they want to receive and the action to respond with, and they are meant to “observe” another object and wait for it to communicate with them.

The observable, on the other hand, will let the observers know when a new input is available, so they can react to it, if applicable. If this sounds familiar, it’s because it is, anything that deals with events in Node is implementing this pattern.

What does the observer pattern look like?

Have you ever written your own HTTP server? Something like this:

const http = require('http');


const server = http.createServer((req, res) => {
  res.statusCode = 200;
  res.setHeader('Content-Type', 'text/plain');
  res.end('Your own server here');
});

server.on('error', err => {
    console.log("Error:: ", err)
})

server.listen(3000, '127.0.0.1', () => {
  console.log('Server up and running');
});

There, hidden in the above code, you’re looking at the observer pattern in the wild, or at least an implementation of it. Your server object acts as the observable, whilst your callback function is the actual observer. The event-like interface here, with the on method and the event name, might obscure the view a bit, but consider the following implementation:

class Observable {

    constructor() {
     this.observers = {}
    }

    on(input, observer) {
     if(!this.observers[input]) this.observers[input] = []
     this.observers[input].push(observer)
    }

    triggerInput(input, params) {
     this.observers[input].forEach( o => {
         o.apply(null, params)    
     })
    }
}

class Server extends Observable {

    constructor() {
     super()
    }


    triggerError() {
     let errorObj = {
         errorCode: 500,
         message: 'Port already in use'
     }
     this.triggerInput('error', [errorObj])
    }
}

You can now instantiate this new Server class and set the same observer, in exactly the same way:

const server = new Server()

server.on('error', err => {
    console.log("Error:: ", err)
})

And if you were to call the triggerError method (which is there to show you how you would let your observers know that there is new input for them), you’d get the exact same output:

Error:: { errorCode: 500, message: 'Port already in use' }

If you were to be considering using this pattern in Node.js, please look at the EventEmitter object first, since it’s Node.js’ own implementation of this pattern, and might save you some time.
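
For reference, the same Server example implemented on top of EventEmitter would look something like this:

const EventEmitter = require("events")

class Server extends EventEmitter {

    triggerError() {
        //EventEmitter keeps track of the observers for us
        this.emit("error", { errorCode: 500, message: "Port already in use" })
    }
}

const server = new Server()

server.on("error", err => {
    console.log("Error:: ", err)
})

server.triggerError()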

Use cases

This pattern is, as you might have already guessed, great for dealing with asynchronous calls, since getting the response from an external request can be considered a new input. And what do we have in Node.js, if not a constant influx of asynchronous code into our projects? So next time you’re having to deal with an async scenario consider looking into this pattern.

Another widely spread use case for this pattern, as you’ve seen, is that of triggering particular events. This pattern can be found on any module that is prone to having events triggered asynchronously (such as errors or status updates). Some examples are the HTTP module, any database driver, and even socket.io, which allows you to set observers on particular events triggered from outside your own code.
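
Node’s streams are a good example of this in practice: the read stream below acts as the observable, and each callback we register is an observer reacting to a particular kind of input:

const fs = require("fs")

const stream = fs.createReadStream("./logfile.log") //any existing file will do

stream.on("data", chunk => {
    console.log("Received", chunk.length, "bytes")
})

stream.on("end", () => {
    console.log("Done reading the file")
})

stream.on("error", err => {
    console.error("Something went wrong:", err)
})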

Chain of responsibility

The chain of responsibility pattern is one that many of us in the world of Node.js have used without even realizing it.

It consists of structuring your code in a way that lets you decouple the sender of a request from the object that can fulfill it. In other words, with object A sending request R, you might have three different receiving objects R1, R2, and R3. How can A know which one it should send R to? Should A even care about that?

The answer to the last question is: no, it shouldn’t. So instead, if A shouldn’t care about who’s going to take care of the request, why don’t we let R1, R2 and R3 decide by themselves?

Here is where the chain of responsibility comes into play, we’re creating a chain of receiving objects, which will try to fulfill the request and if they can’t, they’ll just pass it along. Does it sound familiar yet?

What does the chain of responsibility look like?

Here is a very basic implementation of this pattern. As you can see at the bottom, we have four possible values (or requests) that we need to process, but we don’t care who gets to process them; we just need at least one function to handle each of them, so we send each one into the chain and let every link decide whether to handle the request or pass it along.

function processRequest(r, chain) {

    let lastResult = null
    let i = 0
    do {
     lastResult = chain[i](r)
     i++
    } while(lastResult != null && i < chain.length)
    if(lastResult != null) {
     console.log("Error: request could not be fulfilled")
    }
}

let chain = [
    function (r) {
     if(typeof r == 'number') {
         console.log("It's a number: ", r)
         return null
     }
     return r
    },
    function (r) {
     if(typeof r == 'string') {
         console.log("It's a string: ", r)
         return null
     }
     return r
    },
    function (r) {
     if(Array.isArray(r)) {
         console.log("It's an array of length: ", r.length)
         return null
     }
     return r
    }
]

processRequest(1, chain)
processRequest([1,2,3], chain)
processRequest('[1,2,3]', chain)
processRequest({}, chain)

The output being:

It's a number:  1
It's an array of length:  3
It's a string:  [1,2,3]
Error: request could not be fulfilled

Use cases

The most obvious case of this pattern in our ecosystem is the middlewares for ExpressJS. With that pattern, you’re essentially setting up a chain of functions (middlewares) that evaluate the request object and decide to act on it or ignore it. You can think of that pattern as the asynchronous version of the above example, where instead of checking if the function returns a value or not, you’re checking what values are passed to the next callback they call.

var app = express();

app.use(function (req, res, next) {
  console.log('Time:', Date.now());
  next(); //call the next function on the chain
});

Middlewares are a particular implementation of this pattern since instead of only one member of the chain fulfilling the request, one could argue that all of them could do it. Nevertheless, the rationale behind it is the same.
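
To make that “act on it or ignore it” idea concrete, here is a small sketch of a middleware that either fulfills the request itself or passes it along; the header check is just an illustrative example:

const express = require("express")
const app = express()

app.use(function (req, res, next) {
  if(!req.headers["authorization"]) {
    //this link handles the request itself, so the chain stops here
    return res.status(401).json({error: true, message: "Missing credentials"})
  }
  next() //otherwise, pass the request along to the next link in the chain
})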

Module pattern

The module pattern is definitely one of the most common ones because it seems to have been born out of the necessity for control over what to share and what to hide from your modules.

Let me explain. A very common practice in Node.js (and JavaScript in general) is to organize your code into modules (i.e., sets of related functions grouped into a single file and exported from it). By default, Node’s modules already let you pick what to share and what to hide, so no problem there.

But if you’re either using plain old JavaScript or maybe have several modules inside the same file, this pattern helps you hide parts while, at the same time, letting you choose what to share.

The way you create a module is with an IIFE, like this:

const myLogger = ( _ => {
    const FILE_PATH = "./logfile.log"
    const fs = require("fs")
    const os = require("os")

    function writeLog(txt) {
        fs.appendFile(FILE_PATH, txt + os.EOL, err => {
            if(err) console.error(err)
        })
    }

    function info(txt) {
        writeLog("[INFO]: " + txt)
    }

    function error(txt) {
        writeLog("[ERROR]: " + txt)
    }
    return {
        info, 
        error
    }
})()


myLogger.info("Hey there! This is an info message!")
myLogger.error("Damn, something happened!")

Now, with the above code, you’re literally simulating a module that is exporting only the info and error functions (of course, that is if you were using Node.js).

The code sample is quite simple, but you get the point: you could achieve a similar result by creating a class, yes, but you’d lose the ability to hide methods such as writeLog, or even the constants I used here.

Use cases for the module pattern

This is a very straightforward pattern, so the code speaks for itself. That being said, I can cover some of the direct benefits of using this pattern in your code.

Cleaner namespace

By using the module pattern, you’re making sure that the variables, constants, and functions your exported functions rely on are not exposed to user code. And by user code, I mean any code that makes use of your module.

This helps you keep things organized, avoid naming conflicts or even user code affecting the behavior of your functions by modifying any possible global variable you might have.

Disclaimer: I do not condone nor am I saying global variables are a good coding standard or something you should even be attempting to do, but considering you’re encapsulating them inside your module’s scope, they’re not global anymore. So make sure you think twice before using this pattern, but also consider the benefits provided by it!

Avoid import name collision

Let me explain this one. If you happen to be using several external libraries (especially when you’re working with plain JavaScript for your browser) they might be exporting their code into the same variable (name collision). So if you don’t use the module pattern like I’m going to show you, you might run into some unwanted behavior.

Have you ever used jQuery? Remember how once you include it into your code, besides the jQuery object, you also have available the $ variable at the global scope? Well, there were a few other libraries doing the same back in the day. So if you wanted your code to work with jQuery by using the $ anyways, you’d have to do something like this:

( $ => {
   var hiddenBox = $( "#banner-message" );
   $( "#button-container button" ).on( "click", function( event ) {
     hiddenBox.show();
   });
})(jQuery);

That way, your module is safe and runs no risk of a naming collision if it’s included in other codebases that already make use of the $ variable. And this last bit is the most important part: if you’re developing code that will be used by others, you need to make sure it’ll be compatible, and the module pattern lets you clean up the namespace and avoid name collisions.

Adapter pattern

The adapter pattern is another very simple, yet powerful one. Essentially it helps you adapt one API (and by API here I mean the set of methods a particular object has) into another.

By that I mean the adapter is basically a wrapper around a particular class or object, which provides a different API and utilizes the object’s original one in the background.

What does it look like?

Assuming a logger class that looks like this:

const fs = require("fs")

class OldLogger { 

    constructor(fname) {
        this.file_name = fname
    }

    info(text) {
        fs.appendFile(this.file_name, `[INFO] ${text}`, err => {
            if(err) console.error(err)
        })
    }

    error(text) {
        fs.appendFile(this.file_name, `[ERROR] ${text}`, err => {
            if(err) console.error(err)
        })
    }
}

You already have your code using it, like this:

let myLogger = new OldLogger("./file.log")
myLogger.info("Log message!")

If suddenly, the logger changes its API to be:

class NewLogger { 

    constructor(fname) {
        this.file_name = fname
    }

    writeLog(level, text) {
        fs.appendFile(this.file_name, `[${level}] ${text}`, err => {
            if(err) console.error(err)
        })
    }
}

Then, your code will stop working, unless, of course, you create an adapter for your logger, like so:

class LoggerAdapter extends NewLogger {

    constructor(fname) {
        super(fname)
    }

    info(txt) {
        this.writeLog("INFO", txt)
    }

    error(txt) {
        this.writeLog("ERROR", txt)
    }
}

And with that, you’ve created an adapter (or wrapper) around the new logger, so your existing code can keep using the old API even though the logger itself no longer complies with it.

Use cases for the adapter pattern

This pattern is quite simple, yet the use cases I’ll mention are quite powerful in the sense that they work towards helping with isolating code modifications and mitigating possible problems.

On one side, you can use it to provide extra compatibility for an existing module, by providing an adapter for it.

Case in point, the package request-promise-native provides an adapter for the request package allowing you to use a promise-based API instead of the default one provided by request.

So with the promise adapter, you can do the following:

const request = require("request")
const rp = require("request-promise-native")

request //default API for request
  .get('http://www.google.com/', function(err, response, body) {
    console.log("[CALLBACK]", body.length, "bytes") 
  })


rp("http://www.google.com") //promise based API
  .then( resp => {
    console.log("[PROMISE]", resp.length, "bytes")
  })

On the other hand, you can also use the adapter pattern to wrap a component you already know might change its API in the future and write code that works with your adapter’s API. This will help you avoid future problems if your component either changes APIs or has to be replaced altogether.

One example of this would be a storage component: you can write one that wraps around your MySQL driver and provides generic storage methods. If, in the future, you need to swap your MySQL database for something like AWS RDS, you can simply rewrite the adapter, plug it in instead of the old driver, and the rest of your code remains unaffected.
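
As a rough sketch of that idea, the adapter below exposes generic save and find methods and translates them into calls on whatever driver it wraps; the driver methods used here (insert, select) are hypothetical placeholders:

class StorageAdapter {
    constructor(driver) {
        this.driver = driver //the actual MySQL (or any other) driver instance
    }

    save(table, record) {
        //translate the generic call into whatever the wrapped driver expects
        return this.driver.insert(table, record)
    }

    find(table, query) {
        return this.driver.select(table, query)
    }
}

//the rest of your code only ever talks to the adapter, so swapping the
//database later only means rewriting the adapter:
//const storage = new StorageAdapter(require("./mysql-driver"))
//storage.save("users", { name: "Fernando" })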

Decorator pattern

The decorator pattern is definitely one of my top five favorite design patterns because it helps extend the functionality of an object in a very elegant way. This pattern is used to dynamically extend, or even change, the behavior of an object at run-time. The effect might seem a lot like class inheritance, but this pattern allows you to switch between behaviors during the same execution, which is something inheritance does not allow.

This is such an interesting and useful pattern that there is a formal proposal to incorporate it into the language itself.

What does this pattern look like?

Thanks to JavaScript’s flexible syntax and parsing rules, we can implement this pattern quite easily. Essentially, all we have to do is create a decorator function that receives an object and returns the decorated version, either with new methods and properties or with existing ones changed.

For example:

class IceCream { 
    constructor(flavor) {
        this.flavor = flavor
    }

    describe() {
        console.log("Normal ice cream,", this.flavor, " flavored")
    }
}

function decorateWith(object, decoration) {
    object.decoration = decoration
    let oldDescr = object.describe //saving the reference to the method so we can use it later
    object.describe = function() {
        oldDescr.apply(object)
        console.log("With extra", this.decoration)
    }
    return object
}

let oIce = new IceCream("vanilla") //A normal vanilla flavored ice cream...
oIce.describe() 

let vanillaWithNuts = decorateWith(oIce, "nuts") //... and now we add some nuts on top of it
vanillaWithNuts.describe()

As you can see, the example is quite literally decorating an object (in this case, our vanilla ice cream). The decorator here adds one attribute and overrides a method. Notice how we still call the original version of the method, thanks to the fact that we saved a reference to it before overwriting it.

We could’ve also added extra methods to it just as easily.
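
For instance, a decorator that adds a brand new method instead of overriding an existing one could be as simple as this (the sprinkles are, of course, just for illustration):

function decorateWithSprinkles(object) {
    //add a method that wasn't there before, without touching the class
    object.addSprinkles = function() {
        console.log("Adding sprinkles on top of the", this.flavor, "ice cream")
    }
    return object
}

decorateWithSprinkles(vanillaWithNuts).addSprinkles()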

Use cases for the decorator pattern

In practice, the whole point of this pattern is to encapsulate new behavior in separate functions or extra classes that decorate your original object. That gives you the ability to add new behaviors individually with minimum effort, or to change existing ones without affecting the related code everywhere else.

With that being said, the following example tries to show exactly that with the idea of a pizza company’s back-end, trying to calculate the price of an individual pizza which can have a different price based on the toppings added to it:

class Pizza {
    constructor() {
        this.base_price = 10
    }
    calculatePrice() {
        return this.base_price
    }
}

function addTopping(pizza, topping, price) {

    let prevMethod = pizza.calculatePrice
    pizza.toppings = [...(pizza.toppings || []), topping]
    pizza.calculatePrice = function() {
        return price + prevMethod.apply(pizza)
    }
    return pizza
}

let oPizza = new Pizza()

oPizza = addTopping(
            addTopping(
                oPizza, "muzzarella", 10
            ), "anana", 100
        )

console.log("Toppings: ", oPizza.toppings.join(", "))
console.log("Total price: ", oPizza.calculatePrice())

We’re doing something similar to the previous example here, but with a more realistic approach. Every call to addTopping would be made from the front-end to your back-end somehow, and because of the way we’re adding extra toppings, we’re chaining the calls to calculatePrice all the way back to the original method, which simply returns the base price of the pizza.

And thinking of an even more relevant example — text formatting. Here I’m formatting text in my bash console, but you could be implementing this for all your UI formatting, adding components that have small variations and other similar cases.

const chalk = require("chalk")

class Text {
    constructor(txt) {
        this.string = txt
    }
    toString() {
        return this.string
    }
}

function bold(text) {
    let oldToString = text.toString

    text.toString = function() {
        return chalk.bold(oldToString.apply(text))
    }
    return text
}

function underlined(text) {
    let oldToString = text.toString

    text.toString = function() {
        return chalk.underline(oldToString.apply(text))
    }
    return text
}

function color(text, color) {
    let oldToString = text.toString

    text.toString = function() {
        if(typeof chalk[color] == "function") {
            return chalk[color](oldToString.apply(text))
        }
        return oldToString.apply(text) //unknown color: return the text unchanged
    }
    return text
}

console.log(bold(color(new Text("This is Red and bold"), "red")).toString())
console.log(color(new Text("This is blue"), "blue").toString())
console.log(underlined(bold(color(new Text("This is blue, underlined and bold"), "blue"))).toString())

Chalk, by the way, is a small and useful library for formatting text on the terminal. For this example, I created three different decorators which, just like the toppings, you can compose to build the end result from their individual calls.

Command pattern

Finally, the last pattern I’ll review today is my favorite: the command pattern. This little fellow allows you to encapsulate complex behavior inside a single module (or class, mind you), which outsiders can then use through a very simple API.

The main benefit of this pattern is that by having the business logic split into individual command classes, all with the same API, you can do things like adding new ones or modifying existing code with minimum effect to the rest of your project.

What does it look like?

Implementing this pattern is quite simple; all you have to remember is to keep a common API across your commands. Sadly, since JavaScript doesn’t have the concept of an interface, we can’t use that construct to help us here.

class BaseCommand {
    constructor(opts) {
        if(!opts) {
            throw new Error("Missing options object")
        }
    }
    run() {
        throw new Error("Method not implemented")
    }
}

class LogCommand extends BaseCommand{
    constructor(opts) {
        super(opts)
        this.msg = opts.msg
        this.level = opts.level
    }
    run() {
        console.log("Log(", this.level, "): ", this.msg)
    }
}

class WelcomeCommand extends BaseCommand {
    constructor(opts) {
        super(opts)
        this.username = opts.usr
    }
    run() {
        console.log("Hello ", this.username, " welcome to the world!")
    }
}

let commands = [
    new WelcomeCommand({usr: "Fernando"}),
    new WelcomeCommand({usr: "reader"}),
    new LogCommand({
        msg: "This is a log message, careful now...",
        level: "info"
    }),
    new LogCommand({
        msg: "Something went terribly wrong! We're doomed!",
        level: "error"
    })
]

commands.forEach( c => {
    c.run()
})

The example showcases the ability to create different commands which have a very basic run method, which is where you would put the complex business logic. Notice how I used inheritance to try and force the implementation of some of the methods required.

Use cases for the command pattern

This pattern is amazingly flexible and, if you play your cards right, can provide a great amount of scalability for your code.

I particularly like to use it in conjunction with the require-dir module, which requires every module in a folder. That lets you keep a command-specific folder, naming each file after the command it contains. The module will require them all in a single line of code and return a single object with the filenames as keys (i.e., the command names). This, in turn, allows you to keep adding commands without having to touch any existing code: simply create the file, drop it into the folder, and your code will require it and use it automatically.

The standard API will ensure you’re calling the right methods, so again, nothing to change there. Something like this would help you get there:

function executeCommand(commandId) {
  const requireDir = require("require-dir")
  let commands = requireDir("./commands")
  if(commands[commandId]) {
    commands[commandId].run()
  } else {
    throw new Error("Invalid command!")
  }
}

With that simple function, you’re free to keep growing your library of commands without having to change anything! It’s the magic of a well-designed architecture!

In practice, this pattern is great for things like:

  • Taking care of the actions associated with a menu bar
  • Receiving commands from a client application, such as would be the case for a game, where the client application keeps sending command messages to the back-end server for it to process, run them and return the results
  • A chat server that receives events from different clients and needs to process them individually

The list can keep going, since you can implement pretty much anything that reacts to some form of input with a command-based approach. But the point here is the huge value added by structuring that logic (whatever it is for you) this way: you gain amazing flexibility and the ability to scale or refactor with minimal effect on the rest of the code.

Conclusion

I hope this helped shed some light on these patterns, their implementations, and use cases. Understanding when to use them and, most importantly, why you should use them helps you gain their benefits and improve the quality of your code.

If you have any questions or comments about the code I showed, please leave a message down in the comments!
