5.1 Leveraging Modern JavaScript
When used judiciously, the latest JavaScript features can be of great help in reducing the amount of code whose sole purpose is to work around language limitations. This increases signal — the amount of valuable information that can be extracted from reading a piece of code — while eliminating boilerplate and repetition.
5.1.1 Template Literals
Before ES6, the JavaScript community came up with half a dozen ways of arriving at multi-line strings: from strings chained with \ escape characters or + concatenation operators, to using Array#join, or resorting to the string representation of comments in a function, all merely for multi-line support.
Further, inserting variables into a string isn’t possible, but that’s easily circumvented by concatenating them with one or more strings.
- 'Hello ' + name + ', I\'m Nicolás!'
Template literals arrived in ES6 and made multi-line strings a native feature of the language, without the need for any clever hacks in user-space.
Unlike strings, with template literals we can interpolate expressions using a streamlined syntax. They involve less escaping, too, thanks to using backticks instead of single or double quotation marks, which appear more frequently in English text.
- `Hello ${ name }, I'm Nicolás!`
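As a quick sketch of the multi-line case, a single template literal can span several lines and interpolate values at the same time. The markup below is made up for this example, and assumes a name variable is in scope.
- const page = `
-   <main>
-     <h1>Hello ${ name }!</h1>
-     <p>Welcome back.</p>
-   </main>
- `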
Besides these improvements, template literals also offer the possibility of tagged templates. You can prefix the template with a custom function that transforms the template’s output, enabling use cases like input sanitization, formatting, or anything else.
As an illustrative example, the following function could be used for the sanitization use case mentioned above. Any expressions interpolated into a template go through the insane function from a library by the same name, which strips out unsafe bits of HTML (tags, attributes, or whole trees) to keep user-provided strings honest.
- import insane from 'insane'
- function sanitize(template, ...expressions) {
-   // the template array has one more item than expressions, so reduce without
-   // a seed starts at template[0] and pairs each part with expressions[i - 1]
-   return template.reduce((accumulator, part, i) => {
-     return accumulator + insane(expressions[i - 1]) + part
-   })
- }
In the following example we embed a user-provided comment as an interpolated expression in a template literal, and the sanitize tag takes care of the rest.
- const comment = 'exploit time! <iframe src="http://evil.corp"></iframe>'
- const html = sanitize`<div>${ comment }</div>`
- console.log(html)
- // <- '<div>exploit time! </div>'
Whenever we need to compose a string using data, template literals are a terse alternative to string concatenation. When we want to avoid escaping single or double quotes, template literals can help. The same is true when we want to write multi-line strings.
In every other case, when there's no interpolation, escaping, or multi-line needs, the choice comes down to a mere matter of style. In the last chapter of Practical Modern JavaScript, "Practical Considerations", I advocated[1] in favor of using template literals in every case. This was for a few reasons, but here are the two most important ones: convenience, so that you don't have to convert a string back and forth between single-quoted string and template literal depending on its contents; and consistency, so that you don't have to stop and think about which kind of quotation mark (single, double, or backtick) to use each time. Template literals may take some time to get accustomed to: we've used single-quoted strings for a long time, while template literals have only been around for a comparatively short while. You or your team might prefer sticking with single-quoted strings, and that's perfectly fine too.
Note: When it comes to style choices, you'll rarely face problems if you let your team come to a consensus about the preferred style and later enforce that choice by way of a lint tool like ESLint. It's entirely valid to stick with single-quoted strings and only use template literals when deemed absolutely necessary, if that's what most of the team prefers. Using a tool like ESLint and a continuous integration job to enforce its rules means nobody has to perform the time-consuming job of keeping everyone in line with the house style. When tooling enforces style choices, discussions about those choices won't crop up as often in discussion threads while contributors are collaborating on units of work.
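As a sketch of what that enforcement could look like, ESLint's built-in quotes rule can be configured to require single-quoted strings while still allowing template literals. The exact options shown here are just one possible choice.
- // .eslintrc.js (one possible configuration, adjust to your team's consensus)
- module.exports = {
-   rules: {
-     // require single quotes, but don't flag template literals
-     quotes: ['error', 'single', { allowTemplateLiterals: true }]
-   }
- }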
It’s important to differentiate between purely stylistic choices, which tend to devolve into contentious, time-sinking discussions, and choices where there’s more ground to be covered in the everlasting battle against complexity. While the former may make a codebase subjectively easier to read, or more aesthetically pleasing, it is only through deliberate action that we keep complexity in check. Granted, a consistent style throughout a codebase can help contain complexity, but the exact style is unimportant as long as we enforce it consistently.
5.1.2 Destructuring, Rest, and Spread
The destructuring, rest, and spread features were introduced in ES6. These features accomplish a number of different things, which we’ll now discuss.
Destructuring helps us indicate the fields of an object that we’ll be using to compute the output of a function. In the following example, we destructure a few properties from a ticker variable, and then combine that with a ...details rest pattern containing every property of ticker that we haven’t explicitly named in our destructuring pattern.
- const { low, high, ask, ...details } = ticker
When we use destructuring methodically and near the top of our functions, or — even better — in the parameter list, we are making it obvious what the exact contract of our function is in terms of inputs.
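As a minimal sketch of destructuring in the parameter list, here's a getSpread helper invented for this example, with a made-up ticker shape:
- // only ask and bid are part of this function's contract
- function getSpread({ ask, bid }) {
-   return ask - bid
- }
- getSpread({ ask: 105, bid: 102, volume: 1300 })
- // <- 3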
Deep destructuring offers the ability to take this one step further, digging as deep as necessary into the structure of the object we’re accessing. Consider the following example, where we destructure the JSON response body with details about an apartment. When we have a destructuring statement like this near the top of a function that’s used to render a view, the aspects of the apartment listing that are needed to render it become self-evident at a glance. In addition, we avoid repetition when accessing property chains like response.contact.name or response.contact.phone.
- const {
-   title,
-   description,
-   askingPrice,
-   features: {
-     area,
-     bathrooms,
-     bedrooms,
-     amenities
-   },
-   contact: {
-     name,
-     phone,
-   }
- } = response
At times, a deeply destructured property name may not make sense outside of its context. For instance, we introduce name to our scope, but it’s the name of the contact for the listing, not to be confused with the name of the listing itself. We can clarify this by giving the contact’s name an alias, like contactName or responseContactName.
- const {
-   title,
-   description,
-   askingPrice,
-   features: {
-     area,
-     bathrooms,
-     bedrooms,
-     amenities
-   },
-   contact: {
-     name: responseContactName,
-     phone,
-   }
- } = response
When using : to alias, it can be difficult at first to remember whether the original name or the aliased name comes first. One helpful way to keep it straight is to mentally replace : with the word "as". That way, name: responseContactName would read as "name as responseContactName".
We can even have the same property listed twice, if we wanted to destructure some of its contents, while also maintaining access to the object itself. For example, if we wanted to destructure the contact object’s contents, like we do above, but also take a reference to the whole contact object, we can do the following:
- const {
-   title,
-   description,
-   askingPrice,
-   features: {
-     area,
-     bathrooms,
-     bedrooms,
-     amenities
-   },
-   contact: responseContact,
-   contact: {
-     name: responseContactName,
-     phone,
-   }
- } = response
Object spread helps us create a shallow copy of an object using a little native syntax. We can also combine object spread with our own properties, so that we create a copy that also overwrites the values in the original object we’re spreading.
- const faxCopy = { ...fax }
- const newCopy = { ...fax, date: new Date() }
This allows us to create slightly modified shallow copies of other objects. When dealing with discrete state management, this means we don’t need to resort to Object.assign method calls or utility libraries. While there’s nothing inherently wrong with Object.assign calls, the object spread ... abstraction is easier for us to internalize, mentally mapping its meaning back to Object.assign without us even realizing it, and so the code becomes easier to read because we’re dealing with less unabstracted knowledge.
Another benefit worth pointing out is that Object.assign() can cause accidents: if we forget to pass an empty object literal as the first argument for this use case, we end up mutating the target object. With object spread, there is no way to accidentally mutate anything, since the pattern always acts as if an empty object had been passed to Object.assign in the first position.
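To make the accident concrete, here's a brief sketch; the fax object's fields are made up for the example.
- const fax = { from: 'Nicolás', paper: 'A4' }
- // accident: fax itself is the target, so it gets mutated
- Object.assign(fax, { date: new Date() })
- // safe: the empty object literal receives the copied properties instead
- const assignedCopy = Object.assign({}, fax, { date: new Date() })
- // spread always copies onto a brand new object, so fax can't be mutated
- const spreadCopy = { ...fax, date: new Date() }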
5.1.3 Striving for Simple const Bindings
If we use const by default, then the need to use let or var can be ascribed to code that’s more complicated than it should be. Striving to avoid those kinds of bindings almost always leads to better and simpler code.
In section 4.2.4 we looked into the case where a let binding is assigned a default value, followed immediately by conditional statements that might change the contents of the variable binding.
- // …
- let type = 'contributor'
- if (user.administrator) {
-   type = 'administrator'
- } else if (user.roles.includes('edit_articles')) {
-   type = 'editor'
- }
- // …
Most reasons why we may need to use let or var bindings are variants of the above and can be resolved by extracting the assignments into a function where the returned value is then assigned to a const binding. This moves the complexity out of the way, and eliminates the need for looking ahead to see if the binding is reassigned at some point in the code flow later on.
- // …
- const type = getUserType(user)
- // …
- function getUserType(user) {
-   if (user.administrator) {
-     return 'administrator'
-   }
-   if (user.roles.includes('edit_articles')) {
-     return 'editor'
-   }
-   return 'contributor'
- }
A variant of this problem is when we repeatedly assign the result of an operation to the same binding, in order to split it into several lines.
- let values = [1, 2, 3, 4, 5]
- values = values.map(value => value * 2)
- values = values.filter(value => value > 5)
- // <- [6, 8, 10]
An alternative would be to avoid reassignment, and instead use chaining, as shown next.
- const finalValues = [1, 2, 3, 4, 5]
-   .map(value => value * 2)
-   .filter(value => value > 5)
- // <- [6, 8, 10]
A better approach would be to create new bindings every time, computing their values based on the previous binding, and picking up the benefits of using const in doing so: we can rest assured that the binding doesn’t change later in the flow.
- const initialValues = [1, 2, 3, 4, 5]
- const doubledValues = initialValues.map(value => value * 2)
- const finalValues = doubledValues.filter(value => value > 5)
- // <- [6, 8, 10]
Let’s move on to a more interesting topic: asynchronous code flows.
5.1.4 Navigating Callbacks, Promises, and Asynchronous Functions
JavaScript now offers several options when it comes to describing asynchronous algorithms: the plain callback pattern, promises, async functions, async iterators, async generators, plus any patterns offered by libraries consumed in our applications.
Each solution comes with its own set of strengths and weaknesses:

- Callbacks are typically a solid choice, but we often need to get libraries involved when we want to execute our work concurrently.
- Promises might be hard to understand at first, but they offer a few utilities like Promise#all for concurrent work (see the sketch after this list), yet they might be hard to debug under some circumstances.
- Async functions require a bit of understanding on top of being comfortable with promises, but they’re easier to debug and often result in simpler code, plus they can be interspersed with synchronous functions rather easily as well.
- Iterators and generators are powerful tools, but there aren’t all that many practical use cases for them, so we must consider whether we’re using them because they fit our needs or just because we can.
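As a minimal sketch of that concurrency utility, assuming made-up promise-returning fetchProfile and fetchPreferences functions and a userId in scope, Promise.all runs both tasks concurrently and hands us the results together:
- Promise
-   .all([fetchProfile(userId), fetchPreferences(userId)])
-   .then(([profile, preferences]) => {
-     console.log(profile.name, preferences.theme)
-   })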
It could be argued that callbacks are the simplest mechanism, although a similar case could be made for promises now that so much of the language is built around them. In any case, consistency should remain the primary driving force of how we decide which pattern to use. While it’s okay to mix and match a few different patterns, most of the time we should be using the same patterns again and again, so that our team can develop a sense of familiarity with the codebase, instead of having to take a guess whenever encountering an uncharted portion of the application.
Using promises and async functions inevitably involves casting callbacks into this pattern. In the following example we make up a delay function that returns promises which settle after a provided timeout.
- function delay(timeout) {
-   const resolver = resolve => {
-     setTimeout(() => {
-       resolve()
-     }, timeout)
-   }
-   return new Promise(resolver)
- }
- delay(2000).then(…)
A similar pattern would have to be used to consume Node.js functions that take an error-first callback as their last argument. Starting with Node.js v8.0.0, however, there’s a built-in utility that "promisifies" these callback-based functions so that they return promises.[2]
- import { promisify } from 'util'
- import { readFile } from 'fs'
- const readFilePromise = promisify(readFile)
- readFilePromise('./data.json', 'utf8').then(data => {
-   console.log(`Data: ${ data }`)
- })
There are libraries that could do the same for the client side, one such example being bluebird, or we can create our own promisify. In essence, promisify takes the function we want to use in promise-based flows and returns a different, "promisified", function. The new function returns a promise, calls the original function with all the provided arguments plus a callback of our own, and settles the promise in that callback after deciding whether it should be fulfilled or rejected.
- // promisify.js
- export default function promisify(fn) {
-   return (...rest) => {
-     return new Promise((resolve, reject) => {
-       // call the original function, appending our own error-first callback
-       fn(...rest, (err, result) => {
-         if (err) {
-           reject(err)
-           return
-         }
-         resolve(result)
-       })
-     })
-   }
- }
Using a promisify function, then, would be no different than the earlier example with readFile, except we’d be providing our own promisify implementation.
- import promisify from './promisify'
- import { readFile } from 'fs'
- const readFilePromise = promisify(readFile)
- readFilePromise('./data.json', 'utf8').then(data => {
-   console.log(`Data: ${ data }`)
- })
Casting promises back into a callback-based format is less involved, because we can add reactions to handle both the fulfillment and rejection results, and call back done passing in the corresponding result where appropriate.
- function unpromisify(p, done) {
-   p.then(
-     data => done(null, data),
-     error => done(error)
-   )
- }
- unpromisify(delay(2000), err => {
-   // …
- })
Lastly, when it comes to converting promises to async functions, the language acts as a native compatibility layer, boxing every expression we await on into promises, so there’s no need for any casting at the application level.
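As a brief sketch, the delay function defined earlier can be consumed from an async function without any explicit casting:
- async function run() {
-   // await unwraps the promise returned by delay
-   await delay(2000)
-   console.log('two seconds have passed')
- }
- run()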
We can apply our guidelines of what constitutes clear code to asynchronous code flows just as well, since there aren’t fundamental differences at play in the way we write these functions. Our focus should be on how these flows are connected together, regardless of whether they’re composed of callbacks, promises, or something else. When plumbing tasks together, one of the main sources of complexity is nesting. When several tasks are nested in a tree-like shape, we might end up with code that’s deeply nested. One of the best solutions to this readability problem is to break our flow into smaller trees, which would consequently be shallower. We’ll have to connect these trees together by adding a few extra function calls, but we’ll have removed significant complexity from the task of understanding the general flow of operations.
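As a rough sketch of that idea, assuming made-up promise-based helpers fetchUser, fetchOrders, and renderDashboard, a deeply nested chain can be split into a couple of flat, named flows that are then connected by a single call at the top level:
- async function loadDashboard(userId) {
-   const user = await fetchUser(userId)
-   const orders = await loadOrderHistory(user)
-   renderDashboard(user, orders)
- }
- // a smaller, shallower tree that the main flow plugs into
- async function loadOrderHistory(user) {
-   const orders = await fetchOrders(user.id)
-   return orders.filter(order => !order.cancelled)
- }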