3.1 Growing a Module
Small, single-purpose functions are the lifeblood of clean module design. Purpose-built functions scale well because they introduce little organizational complexity into the module they belong to, even when that module grows to 500 lines of code. Small functions are not necessarily less powerful than large functions, but their power lies in composition.
Suppose that instead of implementing a single function with 100 lines of code we break it up into 3 or more smaller functions. We might later be able to reuse one of those smaller functions somewhere else in our module, or it might prove a useful addition to its public interface.
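As a minimal sketch of the idea, with entirely hypothetical function names, a single report-processing routine could be split into three single-purpose functions that compose back into the original behavior:

```js
// Hypothetical report-processing routine, split into three single-purpose
// functions instead of one large one. All names here are illustrative.
function parseReport(csv) {
  return csv
    .split('\n')
    .filter(line => line.length > 0)
    .map(line => line.split(','))
}

function summarizeReport(rows) {
  const total = rows.reduce((sum, [, amount]) => sum + Number(amount), 0)
  return { count: rows.length, total }
}

function saveReport(summary, store) {
  store.set('latest-report', summary)
  return summary
}

// The composed pipeline still reads top to bottom, but parseReport and
// summarizeReport are now available for reuse elsewhere in the module.
function processReport(csv, store) {
  return saveReport(summarizeReport(parseReport(csv)), store)
}

processReport('socks,3\nplants,5', new Map()) // { count: 2, total: 8 }
```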
In this chapter we’ll discuss design considerations aimed at reducing complexity at the module level. While most of the concerns we’ll discuss here affect the way we write functions as well, it’s the next chapter that’s specifically devoted to the development of simple functions.
3.1.1 Composability and Scalability
Cleanly composed functions are at the heart of effective module design. Functions are the fundamental unit of our code. We could get away with writing the smallest possible number of functions required, namely the ones that are invoked by consumers or that need to be passed to other interfaces for them to consume, but that wouldn’t get us much in the way of maintainability.
We could rely solely on intuition to decide what deserves to be its own function and what is better left inlined as part of a larger body of code, but this might leave us with inconsistencies that depend on our frame of mind, as well as on how each member of a team thinks functions should be sliced. As we’ll see in the next chapter, pairing a few rules of thumb with our own intuition is an effective way of keeping functions simple and limiting their scope.
At the module level, it’s required that we implement features with the API surface in mind. When we plan out new functionality, we have to consider whether the abstraction is right for our consumers, how it might evolve and scale over time, and how narrowly or broadly it can support the use cases of its consumers.
When considering whether the abstraction is right, suppose we have a function that’s a draggable object factory for DOM elements. Draggable objects can be moved around and then dropped in a container, but consumers often have to impose different limitations on the conditions under which the object can be moved, some of which we’ll outline in the following list.
- Draggable elements must have a parent with a draggable-list class
- Draggable elements mustn’t have a draggable-frozen class
- Dragging must initiate from a child with a drag-handle class
- Elements may be dropped into containers with a draggable-dropzone class
- Elements may be dropped into containers with at most 6 children
- Elements may not be dropped into the container they’re being dragged from
- Elements must be sortable in the container they’re dragged from, but they can’t be dropped into other containers
We’ve now spent quite a bit of time thinking about use cases for a drag and drop library, so we’re well equipped to come up with an API that will satisfy most or maybe even every one of these use cases, without dramatically broadening our API surface.
Consider, in contrast, the situation if we were to go off and implement a way of checking off each use case in isolation, without taking into account similar use cases or cases which might arise later but aren’t an immediate need. We would end up with seven different ways of introducing specific restrictions on how elements are dragged and dropped. Since we designed their interfaces in isolation, each of these solutions is likely to be at least slightly different from the rest. Maybe they’re similar enough that each of them is an option flag, but the consumer still can’t help but wonder why we have seven different flags for such similar use cases, and they can’t shake the feeling that we’ve designed the interface poorly. Except there wasn’t much in the way of design: we mostly tacked requirement upon requirement onto our API surface as they came along, never daring to look at the road ahead and envision how the API might evolve. If we had designed the API with scalability in mind, we might’ve grouped many similar use cases under the same feature, and would’ve avoided an unnecessarily large API surface in the process.
Going back to the case where we do spend some time thinking ahead, and create a collection of similar requirements and use cases, we should be able to find a common denominator that’s suitable for most use cases. We’ll know when we have the right abstraction because it’ll cater to every requirement we have, and a few we didn’t even have to fulfill but which the abstraction satisfies anyhow. In the case of draggable elements, once we’ve taken all the requirements into account, we might choose to define a few options that impose restrictions based on a few CSS selectors, or we might introduce a callback where the user can determine whether an element can be dragged and another where they can determine whether the element can be dropped. These choices also depend on how heavily the API is going to be used, how flexible we want it to be, and how frequently we intend to make changes to it.
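As a rough sketch of what such a consolidated interface could look like, assuming purely hypothetical option names rather than any existing library’s API, a couple of selector-based defaults paired with callback overrides might cover the whole list:

```js
// Hypothetical draggable() factory: the restrictions listed earlier are
// grouped under a handful of options instead of seven unrelated flags.
// Option names and defaults are illustrative assumptions.
function draggable(el, options = {}) {
  const {
    handleSelector = '.drag-handle',
    canDrag = element => !element.closest('.draggable-frozen'),
    canDrop = (element, zone) =>
      zone.matches('.draggable-dropzone') && zone.children.length < 6
  } = options

  el.addEventListener('mousedown', event => {
    if (!event.target.closest(handleSelector)) return // must start on a handle
    if (!canDrag(el)) return                           // consumer veto for dragging
    // ...from here, track mouse movement and consult canDrop(el, zone) on release
  })
}
```

Most of the listed restrictions collapse into these three options, and the more situational ones, such as sortable-only containers, can be expressed by consumers through their own canDrag and canDrop implementations.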
Sometimes we won’t have the opportunity to think ahead: we might not be able to foresee all possible use cases, our forecasts may fail us, or requirements may change, pulling the rug out from under our feet. Granted, this is never the ideal situation to find ourselves in, but we certainly wouldn’t have been better off had we not paid attention to the use cases for our module in aggregate. On the other hand, extra requirements may fit within the bounds of an abstracted solution, provided the new use case is similar enough to what we expected when designing the abstraction.
Abstractions aren’t free, but they can shield portions of code from complexity. Naturally, we could boldly claim that an elegant interface such as fn => fn() solves all problems in computing: the consumer only needs to provide the right fn callback. The reality is we wouldn’t be doing anything but offloading the problem onto the consumer, who’d pay the cost of implementing the right solution themselves while still having to consume our API in the process.
When we’re weighing whether to offer an interface like CSS selectors or callbacks, we’re deciding how much we want to abstract, and how much we want to leave up to the consumer. When we choose to let the user provide CSS selectors, we keep the interface short, but the use cases will be limited as well. Consumers won’t be able, for example, to decide dynamically whether the element is draggable or not beyond what a CSS selector can offer. When we choose to let the user provide callbacks, we make it harder for them to use our interface, since they now have to provide bits and pieces of the implementation themselves, but that expense buys them great flexibility in how to decide what is draggable and what is not.
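To make the tradeoff concrete, here is how the two styles might look to a consumer of the hypothetical draggable factory sketched earlier, assuming an element matching .draggable-item exists on the page and saveInProgress is part of the consumer’s own application state:

```js
const element = document.querySelector('.draggable-item')
let saveInProgress = false

// Selector-based restriction: concise, but limited to what CSS can express.
draggable(element, {
  handleSelector: '.drag-handle'
})

// Callback-based restrictions: more code on the consumer's side, but any
// dynamic condition can be expressed, not just class membership.
draggable(element, {
  canDrag: el => !el.closest('.draggable-frozen') && !saveInProgress,
  canDrop: (el, zone) => zone.matches('.draggable-dropzone') && zone.children.length < 6
})
```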
As with most things in program design, API design is a constant tradeoff between simplicity and flexibility. For each particular case, it’s our responsibility to decide how flexible we want the interface to be, at the expense of simplicity, or how simple we want it to be, at the expense of flexibility. Going back to jQuery, it’s interesting to note how they always favor simplicity, by allowing you to provide as little information as needed for most of their API methods. Meanwhile, they avoid sacrificing flexibility by offering countless overloads for each of those methods. The complexity lies in their implementation, which balances arguments by figuring out whether they’re a NodeList, a DOM element, an array, a function, a selector, or something else (not to mention optional parameters) before even starting to fulfill the consumer’s goal when making an API call. Consumers observe some of that complexity at the seams, when sifting through documentation and finding out about all the different ways of accomplishing the same goals. And yet, despite all of jQuery’s internal complexity, code which consumes the jQuery API manages to stay ravishingly simple.
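As a vastly simplified sketch of that kind of argument balancing, in no way jQuery’s actual implementation and with a hypothetical onClick helper, consider a function that accepts a selector, a DOM element, a NodeList, or an array in the same parameter position:

```js
// Simplified sketch of argument "balancing": one public function accepts
// several input shapes and normalizes them before doing the real work.
function onClick(target, handler) {
  let elements
  if (typeof target === 'string') {
    elements = document.querySelectorAll(target)    // a CSS selector
  } else if (target instanceof NodeList || Array.isArray(target)) {
    elements = target                               // a list of elements
  } else {
    elements = [target]                             // a single DOM element
  }
  for (const el of elements) {
    el.addEventListener('click', handler)
  }
}

// The consumer-facing code stays simple regardless of the input type.
onClick('.buy-button', () => console.log('clicked a buy button'))
onClick(document.body, () => console.log('clicked the body'))
```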
3.1.2 Design for Today
Before we go off and start pondering the best ways of abstracting a feature we need to implement so that it caters to every single requirement that might come up in the future, it’s necessary to take a step back and consider simpler alternatives. A simple implementation means we pay smaller upfront costs, but it doesn’t necessarily mean that new requirements will result in breaking changes.
Interfaces don’t need to cater to every conceivable use case from the outset. As we’ve analyzed in chapter 2, sometimes we may get away with first implementing a solution for the simplest or most common use case, and then adding an options parameter through which newer use cases can be configured. As we get to more advanced use cases, we can make decisions as outlined in the previous section, choosing which use cases deserve to be grouped under an abstraction and which are too narrow for an abstraction to be worthwhile.
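A minimal sketch of that progression, using hypothetical names, might look like the following: the first iteration handles only the common case, while a later iteration folds new use cases into an options parameter whose defaults preserve the original behavior.

```js
const cats = ['Mittens', 'Bella', 'Oliver', 'Luna']

// First iteration: the simplest, most common use case only.
function listCats() {
  return cats.slice()
}

// Later iteration (given a different name here only so both versions fit in
// one sketch): newer use cases become options with sensible defaults, so
// existing call sites keep working unchanged.
function listCatsWithOptions({ limit = cats.length, uppercase = false } = {}) {
  const selection = cats.slice(0, limit)
  return uppercase ? selection.map(name => name.toUpperCase()) : selection
}

listCatsWithOptions()                             // ['Mittens', 'Bella', 'Oliver', 'Luna']
listCatsWithOptions({ limit: 2, uppercase: true }) // ['MITTENS', 'BELLA']
```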
Similarly, the interface could start off supporting only one way of receiving its inputs, and as use cases evolve we might bake polymorphism into the mix, accepting multiple input types in the same parameter position. Grandiose thinking may lead us to believe that, in order to be great, our interfaces must be able to handle every input type and be highly configurable with dozens of options. This might well be true for the most advanced users of our interface, but if we don’t take the time to let the interface evolve and mature as needed, we might code it into a corner that can only be repaired by writing a different component from the ground up, with a better-thought-out interface, and later replacing references to the old component with the new one.
A larger interface is rarely better than a smaller interface which accomplishes the job consumers need it to fulfill. Elegance is of the essence here: if we wish for our interface to remain small but we predict the consumer will eventually need to hook into different pieces of our component’s internal behavior so that they can react accordingly, we’re better off waiting until this requirement materializes than building a solution for a problem we don’t yet have.
Not only will we be focusing development hours on functionality that’s needed today, but we’ll also avoid creating complexity that can be dispensed with for the time being. It might be argued that the ability to react to internal events of a library won’t introduce a lot of complexity. Consider, however, the case where the requirement never materializes. We’d have burdened our component with increased complexity to satisfy functionality we never needed. Worse yet, consider the case where the requirement changes between the moment we’ve implemented a solution and when it’s actually needed. We’d now have functionality we never needed, which clashes with different functionality that we do need.
Suppose we not only need hooks to react to events, but also need those hooks to be able to transform internal state: how would the event hooks interface change then? Chances are, someone might’ve found a use for the event listeners we implemented earlier, and so we can’t dispose of them with ease. We might be forced to change the event listener API to support internal state transformations, which would result in a cringeworthy interface that’s bound to frustrate implementers and consumers alike.
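As a small sketch of that kind of cringeworthy evolution, consider a hypothetical shopping cart module whose purely reactive listeners end up sharing an interface with state-transforming hooks:

```js
// Hypothetical cart module whose event hooks were later bent into state
// transformations. Names and events are illustrative only; the event name
// is ignored in this sketch for brevity.
function createCart() {
  const items = []
  const listeners = []
  return {
    on(event, fn) {
      listeners.push(fn)
    },
    add(item) {
      // Once hooks are allowed to transform state, every return value becomes
      // meaningful, and the purely reactive listeners written earlier now live
      // in an interface that no longer just notifies them.
      for (const fn of listeners) {
        item = fn(item) || item
      }
      items.push(item)
      return item
    }
  }
}

const cart = createCart()
cart.on('item-added', item => console.log('added %s', item.name))         // reactive listener
cart.on('item-added', item => ({ ...item, price: item.price * 0.9 }))     // state-transforming hook
cart.add({ name: 'catnip', price: 10 })  // logs 'added catnip', stores price 9
```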
Falling into the trap of implementing features consumers don’t yet need might be easy at first, but it’ll cost us dearly in terms of complexity, maintainability, and wasted developer hours. The best code is no code at all: it means fewer bugs, less time spent writing code, less time writing documentation, and less time fielding support requests. Latch onto that mentality and strive to keep functionality to the absolute minimum that’s required.
3.1.3 Abstractions Evolve in Small Steps
It’s important to note that abstractions should evolve naturally, rather than have an implementation style forced upon us. When we’re unsure about whether to bundle a few use cases under an abstraction, the best option is often to wait and see whether more use cases would fall into the abstraction we’re considering. If we wait and the abstraction holds true for more and more use cases, we can go ahead and implement it. If the abstraction doesn’t hold, then we can be thankful we didn’t commit to it, since bending an abstraction to fit new use cases often breaks it, or causes more grief than the abstraction had originally set out to avoid on our behalf.
In a similar fashion to that of the last section, we should first wait until use cases emerge and then reconsider an abstraction when its benefits become clear. While developing unneeded functionality is little more than a waste of time, leveraging the wrong abstractions will kill or, at best, cripple our component’s interface. While good abstractions are a powerful tool that can reduce the complexity and volume of code we write, subjecting consumers to inappropriate abstractions might increase the amount of code they need to write and will forcibly increase complexity by having users bend to the will of the abstraction, causing frustration and eventual abandonment of the poorly abstracted component.
HTTP libraries are a great example of how the right abstraction for an interface depends entirely on the use cases its consumer has in mind. Plain GET calls can be serviced with callbacks or promises, but streaming requires an event-driven interface which allows the consumer to act as soon as the stream has portions of data ready for consumption. A typical GET request could be serviced by an event-driven interface as well, allowing the implementer to abstract every use case under an event-driven model. To the consumer, however, this model would feel a bit convoluted for the simplest case. Even when we’ve grouped every use case under a convenient abstraction, the consumer shouldn’t have to settle for get('/cats').on('data', gotCats) when their use case doesn’t involve streaming and they could be using a simpler get('/cats', gotCats) interface instead, one which wouldn’t need to handle error events separately either, relying instead on the Node.js convention where the first argument passed to callbacks is an error, or null when everything goes smoothly.
An HTTP library that’s primarily focused on streaming might go for the event-driven model in all cases, arguing that convenience methods, such as a callback-based interface, could be implemented on top of its primitive interface. This is acceptable: we’re focusing on the use case at hand and keeping our API surface as small as possible, while still allowing our library to be wrapped for higher-level consumption. If our library were instead primarily focused on the experience of leveraging its interface, we might go for the callback- or promise-based approach. When that library then has to support streaming, it might incorporate an event-driven interface. At that point we’d have to decide whether to expose that kind of interface solely for streaming purposes, or to make it available for commonplace scenarios as well. On the one hand, exposing it solely for the streaming use case keeps the API surface small. On the other, exposing it for every use case results in a more flexible and consistent API, which might be what consumers expect.
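A minimal sketch of the layering described above, with convenience callbacks built on top of an event-driven primitive, assuming Node.js and purely illustrative get and getCallback names rather than any specific library’s API, might look like this:

```js
const https = require('https')
const { EventEmitter } = require('events')

// Event-driven primitive: streaming consumers act on chunks as they arrive.
function get(path) {
  const emitter = new EventEmitter()
  https
    .get(`https://example.com${path}`, res => {
      res.on('data', chunk => emitter.emit('data', chunk))
      res.on('end', () => emitter.emit('end'))
    })
    .on('error', err => emitter.emit('error', err))
  return emitter
}

// Callback convenience wrapper, built on top of the primitive: error-first,
// buffers the whole response, and hides the event plumbing.
function getCallback(path, done) {
  const chunks = []
  get(path)
    .on('data', chunk => chunks.push(chunk))
    .on('error', err => done(err))
    .on('end', () => done(null, Buffer.concat(chunks)))
}

// Streaming use cases stick to the primitive interface, while the common
// case enjoys the simpler, error-first callback form.
get('/cats').on('data', chunk => process.stdout.write(chunk))
getCallback('/cats', (err, cats) => {
  if (err) return console.error(err)
  console.log('received %d bytes', cats.length)
})
```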
Context is of the utmost relevance here. When we’re developing an interface for an open-source or otherwise broadly available library, we might need to listen to a variety of folks who’ll be weighing in on how the API should be designed. Depending on our audience, they may prefer a smaller API surface or a more flexible interface. Over time, broadly available libraries tend to favor flexibility over simplicity, as the number of users grows and, with them, the number of use cases the library needs to support. When the component is being developed in the context of our day jobs, we might not need to cater to a broad audience. It may well be that we ourselves are the only ones who will be consuming the API, or maybe our team. It might also be that we belong to a UI platform team that serves the entire company, which would put us in a situation akin to the open-source case.
In any case, when we’re uncertain whether our interface will need to expose a certain surface area, it’s highly recommended that we don’t expose any of it until we are indeed certain. Keeping API surfaces as small as possible reduces the odds of presenting the consumer with multiple ways of accomplishing the same task. This is often undesirable, given that users will undoubtedly become confused and come knocking about which one is the best solution. There are a few answers. When the best solution is always the same, the other offerings probably don’t belong in our public interface. When the best solution depends on the use case, then we should be on the lookout for better abstractions which encapsulate those similar use cases under a single solution. If the use cases are different enough, then so should be the solutions offered by the interface, in which case consumers shouldn’t be faced with uncertainty: our interface would only offer a single solution for that particular use case.
3.1.4 Move Deliberately and Experiment
You might have heard the "Move Fast and Break Things" mantra from Facebook. It’s dangerous to take this mantra literally in terms of software development, which shouldn’t be hurried, nor frequently broken, let alone on purpose. The mantra is meant to be interpreted as an invitation to experiment, where the things we should be breaking are assumptions about how an application architecture should be laid out, how users behave, what advertisers want, and any other assumptions. Moving fast means quickly hashing out prototypes to test our newfound assumptions, seizing new markets in a timely fashion, avoiding engineering slowing to a crawl as teams and requirements grow in size and complexity, and constantly iterating on our products and codebases.
Taken literally, moving fast and breaking things is a dreadful way to go about software development. Any organization worth its salt would never encourage engineers to write code faster at the expense of product quality. Code should exist mostly because it has to, in order for the products it makes up to exist. The less complex the code we write, provided the product remains the same, the better.
The code that makes up a product should be covered by tests, minimizing the risk of bugs making their way to production. When we take "Move Fast and Break Things" literally, we are tempted to think testing is optional, since it slows us down and we need to move fast. Ironically, a product that isn’t covered by tests will be unable to move fast when bugs inevitably arise and slow engineering down.
A better mantra might be one that can be taken literally, such as "Move Deliberately and Experiment". This mantra carries the same sentiment as the Facebook mantra of "Move Fast and Break Things", but its true meaning isn’t meant to be decoded or interpreted. Experimentation is a key aspect of software design and development. We should constantly try out and validate new ideas, verifying whether they pose better solutions than the status quo. We could interpret "Move Fast and Break Things" as "A/B test[1] early and A/B test often", and "Move Deliberately and Experiment" can convey this meaning as well.
To move deliberately is to move with cause. Engineering tempo will rarely be guided by the development team’s desire to move faster; it is most often bound instead by release cycles and the complexity of the requirements needed to meet those releases. Of course, everyone wants engineering to move fast where possible, but interface design shouldn’t be hurried, regardless of whether the interface we’re dealing with is an architecture, a layer, a component, or a function. Internals aren’t as crucial to get right: as long as the interface holds, the internals can later be improved for performance or readability gains. This is not to advocate sloppily developed internals, but rather to encourage deliberately and carefully thought-out interface design.