5.3 Code Patterns
Digging a bit deeper into more specific elements of architecture design, in this section we’ll explore a few of the most common patterns for creating boundaries that complexity cannot escape, for encapsulating functionality, and for communicating across these boundaries or application layers.
5.3.1 Revealing Module
The revealing module pattern has become a staple in the world of JavaScript. The premise is simple enough: expose precisely what consumers should be able to access, and avoid exposing anything else. The reasons for this are manifold. Preventing unwarranted access to implementation details reduces the likelihood of your module’s interface being abused for unsupported use cases that might bring headaches to both the module implementer and the consumer alike.
Explicitly avoid exposing methods that are meant to be private, such as a hypothetical _calculatePriceHistory method, which relies on a leading underscore to discourage direct access and signal that it should be regarded as private. Avoiding such methods has several benefits. It prevents test code from accessing private methods directly, resulting in tests that make assertions solely about the interface and that can later be referenced as documentation on how to use it. It prevents consumers from monkey-patching implementation details, leading to more transparent interfaces. And it often results in cleaner interfaces, because the interface is all there is: there’s no alternative way of interacting with the module through direct use of its internals.
JavaScript modules are of a revealing nature by default, making it easy for us to follow the revealing pattern of not giving away access to implementation details. Functions, objects, classes, and any other bindings we declare are private unless we explicitly decide to export them from the module.
When we expose only a thin interface, our implementation can change largely without having an impact on how consumers use the module, nor on the tests that cover the module. As a mental exercise, always be on the lookout for aspects of an interface that should be turned into implementation details and extricated from the interface itself.
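As a sketch of the revealing pattern, consider a hypothetical counter module (the `increment`, `count`, and `validate` names are made up for illustration). Here it’s implemented with a closure so the example is self-contained; in an ES module, the same effect is achieved by exporting only `increment` and `count`.

```javascript
// Revealing pattern sketch: private state and helpers live inside the
// closure, and only the intended interface is returned.
const counter = (() => {
  // private state: not reachable from outside
  let current = 0
  // private helper: consumers can't call or patch this
  function validate(step) {
    return Number.isInteger(step) && step > 0
  }
  // exposed interface
  function increment(step = 1) {
    if (validate(step)) {
      current += step
    }
    return current
  }
  function count() {
    return current
  }
  return { increment, count }
})()

counter.increment(3)
counter.count() // <- 3
```

Note that `counter.validate` is `undefined` from the outside: tests and consumers can only exercise the interface, which is exactly the constraint the revealing pattern is after.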
5.3.2 Object Factories
Even when using JavaScript modules and following the revealing pattern strictly, we might end up with unintentional sharing of state across our usage of a module. Incidental state might result in unexpected results from an interface: consumers don’t have a complete picture because other consumers are contributing changes to this shared state as well, sometimes making it hard to figure out what exactly is going on in an application.
If we were to move our functional event emitter code snippet, with onEvent and emitEvent, into a JavaScript module, we’d notice that the emitters map is now a lexical top-level binding for that module, meaning all of the module’s scope has access to emitters. This is what we’d want, because that way we can register event listeners in onEvent and fire them off in emitEvent. In most other situations, however, sharing persistent state across public interface methods is a recipe for unexpected bugs.
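The emitter snippet isn’t reproduced here, but a minimal sketch might look like the following. The onEvent and emitEvent names come from the text, while the internal shape of the emitters map (keyed by event type) is an assumption made for illustration; the original snippet may differ.

```javascript
// Module-level state: every consumer of this module shares the same map,
// which is desirable here because listeners and emissions must meet.
const emitters = new Map()

function onEvent(type, handler) {
  if (!emitters.has(type)) {
    emitters.set(type, [])
  }
  emitters.get(type).push(handler)
}

function emitEvent(type, ...args) {
  // fire every handler registered for this event type, if any
  const handlers = emitters.get(type) || []
  handlers.forEach(handler => handler(...args))
}

onEvent('greet', name => console.log(`Hi, ${ name }!`))
emitEvent('greet', 'Rwanda') // <- prints "Hi, Rwanda!"
```

In a module file, onEvent and emitEvent would be the exported bindings, and the emitters map would stay private, following the revealing pattern discussed earlier.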
Suppose we have a calculator module that can be used to make basic calculations through a stream of operations. Even if consumers were supposed to use it synchronously and flush state in one fell swoop, without giving way for a second consumer to taint the state and produce unexpected results, our module shouldn’t rely on consumer behavior to provide consistent results. The following contrived implementation relies on local shared state, and would need consumers to use the module strictly as intended, making any calls to add and multiply, and leaving calculate as the last method, meant to be called only once.
const operations = []
let state = 0
export function add(value) {
  operations.push(() => {
    state += value
  })
}
export function multiply(value) {
  operations.push(() => {
    state *= value
  })
}
export function calculate() {
  operations.forEach(op => op())
  return state
}
Here’s an example of how consuming the previous module could work.
import { add, multiply, calculate } from './calculator'
add(3)
add(4)
multiply(-2)
calculate() // <- -14
As soon as we tried to append operations in two places, things would start getting out of hand, with the operations array getting bits and pieces of unrelated computations, tainting our calculations.
// a.js
import { add, calculate } from './calculator'
add(3)
setTimeout(() => {
  add(4)
  calculate() // <- 14, an extra 7 because of b.js
}, 100)

// b.js
import { add, calculate } from './calculator'
add(2)
calculate() // <- 5, an extra 3 from a.js
A slightly better approach would get rid of the state variable, and instead pass the state around operation handlers, so that each operation knows the current state and applies any necessary changes to it. The calculate step would create a new initial state each time, and go from there.
const operations = []
export function add(value) {
  operations.push(state => state + value)
}
export function multiply(value) {
  operations.push(state => state * value)
}
export function calculate() {
  return operations.reduce((result, op) => op(result), 0)
}
This approach presents problems too, however. Even though the state is always reset to 0, we’re treating unrelated operations as if they were all part of a whole, which is still wrong.
// a.js
import { add, calculate } from './calculator'
add(3)
setTimeout(() => {
  add(4)
  calculate() // <- 9, an extra 2 from b.js
}, 100)

// b.js
import { add, calculate } from './calculator'
add(2)
calculate() // <- 5, an extra 3 from a.js
Clearly, our contrived module is poorly designed: its operations buffer should never be used to drive several unrelated calculations. We should instead expose a factory function that returns an object from its own self-contained scope, where all relevant state is shut off from the outside world. The methods on this object are equivalent to the exported interface of a plain JavaScript module, but state mutations are contained to instances that consumers create.
export function getCalculator() {
  const operations = []
  function add(value) {
    operations.push(state => state + value)
  }
  function multiply(value) {
    operations.push(state => state * value)
  }
  function calculate() {
    return operations.reduce((result, op) => op(result), 0)
  }
  return { add, multiply, calculate }
}
Using the calculator like this is just as straightforward, except that now we can do things asynchronously. Even if other consumers are also making computations of their own, each user will have their own state, preventing data corruption.
import { getCalculator } from './calculator'
const { add, multiply, calculate } = getCalculator()
add(3)
add(4)
multiply(-2)
calculate() // <- -14
Even with our two-file example, we wouldn’t have any problems anymore, since each file would have its own atomic calculator.
// a.js
import { getCalculator } from './calculator'
const { add, calculate } = getCalculator()
add(3)
setTimeout(() => {
  add(4)
  calculate() // <- 7
}, 100)

// b.js
import { getCalculator } from './calculator'
const { add, calculate } = getCalculator()
add(2)
calculate() // <- 2
As we just showed, even when using modern language constructs and JavaScript modules, it’s not too hard to create complications through shared state. Thus, we should always strive to contain mutable state as close to its consumers as possible.
5.3.3 Event Emission
We’ve already explored at length the pattern of registering event listeners associated with arbitrary plain JavaScript objects and firing events of any kind, triggering those listeners. Event handling is most useful when we want to have clearly delineated side effects.
In the browser, for instance, we can bind a click event to a specific DOM element. When the click event fires, we might issue an HTTP request, render a different page, start an animation, or play an audio file.
Events are a useful way of reporting progress whenever we’re dealing with a queue. While processing a queue, we could fire a progress event whenever an item is processed, allowing the UI or any other consumer to render and update a progress indicator or apply a partial unit of work relying on the data processed by the queue.
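As a sketch of that idea, here is a hypothetical processQueue helper; the name and the shape of its progress callback are assumptions for illustration, not from the original text.

```javascript
// Drain a queue of items through a worker function, reporting progress
// after each item so a consumer can update a progress indicator.
function processQueue(items, worker, onProgress) {
  items.forEach((item, index) => {
    const result = worker(item)
    // each progress report carries the item's result and completion counts
    onProgress({
      result,
      completed: index + 1,
      total: items.length
    })
  })
}

processQueue(
  [1, 2, 3],
  item => item * 2,
  ({ completed, total }) => console.log(`${ completed }/${ total } done`)
)
// <- prints "1/3 done", "2/3 done", "3/3 done"
```

In a real application the progress callback would typically be an event listener registered through an emitter, so several consumers could observe the same queue without the queue knowing about any of them.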
Events also offer a mechanism to provide hooks into the lifecycle of an object. For example, the Angular view rendering framework used event propagation to enable hierarchical communication across separate components. This allowed Angular codebases to keep components decoupled from one another while still being able to react to each other’s state changes and interact.
Having event listeners allowed a component to receive a message, perhaps process it by updating its display elements, and then maybe reply with an event of its own, allowing for rich interaction without necessarily having to introduce another module to act as an intermediary.
5.3.4 Message Passing and the Simplicity of JSON
When it comes to ServiceWorker, web workers, browser extensions, frames, API calls, or WebSocket integrations, we might run into issues if we don’t plan for robust data serialization ahead of time. This is a place where using classes to represent data can break down, because we need a way to serialize class instances into raw data (typically JSON) before sending it over the wire, and, crucially, the recipient needs to decode this JSON back into a class instance. It’s the second part where classes start to fail, since there isn’t a standardized way of reconstructing a class instance from JSON. For example:
class Person {
  constructor(name, address) {
    this.name = name
    this.address = address
  }
  greet() {
    console.log(`Hi! My name is ${ this.name }.`)
  }
}

const rwanda = new Person('Rwanda', '123 Main St')
Although we can easily serialize our rwanda instance with JSON.stringify(rwanda) and then send it over the wire, the code on the other end has no standard way of turning this JSON back into an instance of our Person class, which might have a lot more functionality than merely a greet function. The receiving end might have no business deserializing this data back into the class instance it originated from, but in some cases there’s merit to having an exact replica object back on the other end. For example, to reduce friction when passing messages between a website and a web worker, both sides should be dealing in the same data structure. In such scenarios, simple JavaScript objects are ideal.
JSON — now[3] a subset of the JavaScript grammar — was purpose-built for this use case, where we often have to serialize data, send it over the wire, and deserialize it on the other end. Plain JavaScript objects are a great way to store data in our applications, offer frictionless serialization out of the box, and lead to cleaner data structures because we can keep logic decoupled from the data.
When the language on both the sending and receiving ends is JavaScript, we can share a module with all the functionality that we need around the data structure. This way, we don’t have to worry about serialization, since we’re using plain JavaScript objects and can rely on JSON for the transport layer. We don’t have to concern ourselves with sharing functionality either, because we can rely on the JavaScript module system for that part.
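To illustrate, the earlier Person example could be reworked as a plain object plus a standalone greet function that both ends of the wire import from a shared module. This is only a sketch of the approach: here greet returns a string rather than logging, so its output is easy to inspect.

```javascript
// Shared module sketch: logic lives in a function that operates on plain
// data, instead of being attached to a class instance.
function greet(person) {
  return `Hi! My name is ${ person.name }.`
}

// plain data: no class, nothing to reconstruct on the other end
const rwanda = { name: 'Rwanda', address: '123 Main St' }

// serialize, "send over the wire", and deserialize
const payload = JSON.stringify(rwanda)
const received = JSON.parse(payload)

greet(received) // <- 'Hi! My name is Rwanda.'
```

The round trip through JSON loses nothing, because plain objects carry no behavior to begin with; the behavior comes from the shared module on each side.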
Armed with a foundation for writing solid modules based on your own reasoning, we now turn the page to operational concerns such as handling application secrets responsibly, making sure our dependencies don’t fail us, taking care of how we orchestrate build processes and continuous integration, and dealing with nuance in state management and the high-stakes decision-making around producing the right abstractions.
1. You can read a blog post I wrote about why template literals are better than strings at: https://mjavascript.com/out/template-literals. Practical Modern JavaScript (O’Reilly, 2017) is the first book in the Modular JavaScript series. You’re currently reading the second book of the same series.
2. Note also that, starting in Node.js v10.0.0, the native fs.promises interface can be used to access promise-based versions of the fs module’s methods.
3. Up until recently, JSON wasn’t — strictly speaking — a proper subset of ECMA-262. A recent proposal has amended the ECMAScript specification to consider bits of JSON that were previously invalid JavaScript to be valid JavaScript. Learn more at: https://mjavascript.com/out/json-subset.