Promise Limitations

Many of the details we’ll discuss in this section have already been alluded to in this chapter, but we’ll make sure to review them specifically here.

Sequence Error Handling

We covered Promise-flavored error handling in detail earlier in this chapter. The limitations of how Promises are designed — how they chain, specifically — create a very easy pitfall where an error in a Promise chain can accidentally be silently ignored.

But there’s something else to consider with Promise errors. Because a Promise chain is nothing more than its constituent Promises wired together, there’s no entity to refer to the entire chain as a single thing, which means there’s no external way to observe any errors that may occur.

If you construct a Promise chain that has no error handling in it, any error anywhere in the chain will propagate indefinitely down the chain, until observed (by registering a rejection handler at some step). So, in that specific case, having a reference to the last promise in the chain is enough (p in the following snippet), because you can register a rejection handler there, and it will be notified of any propagated errors:

// `foo(..)`, `STEP2(..)` and `STEP3(..)` are
// all promise-aware utilities

var p = foo( 42 )
.then( STEP2 )
.then( STEP3 );

Although it may seem sneakily confusing, p here doesn’t point to the first promise in the chain (the one from the foo(42) call), but instead to the last promise, the one that comes from the then(STEP3) call.

Also, no step in the promise chain is observably doing its own error handling. That means you could then register a rejection handler on p, and it would be notified if any errors occur anywhere in the chain:

p.catch( handleErrors );

But if any step of the chain in fact does its own error handling (perhaps hidden/abstracted away from what you can see), your handleErrors(..) won’t be notified. This may be what you want — it was, after all, a “handled rejection” — but it also may not be what you want. The complete lack of ability to be notified (of “already handled” rejection errors) is a limitation that restricts capabilities in some use cases.

It’s basically the same limitation that exists with a try..catch that can catch an exception and simply swallow it. So this isn’t a limitation unique to Promises, but it is something we might wish to have a workaround for.
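To see the analogy concretely, here’s a minimal sketch (not from the text above — `step()` is a hypothetical function) of a try..catch that observes an exception and then swallows it, leaving outside code with no way to know it ever happened:

```javascript
// `step()` handles its own error internally, so the
// failure is invisible to any caller
function step() {
	try {
		throw new Error( "hidden failure" );
	}
	catch (err) {
		// observed here, then silently discarded --
		// no outside code can ever be notified of it
	}
	return 42;
}

console.log( step() );	// 42 -- no trace of the error
```

A Promise step that registers its own rejection handler hides errors from the rest of the chain in exactly the same way.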

Unfortunately, many times there is no reference kept for the intermediate steps in a Promise-chain sequence, so without such references, you cannot attach error handlers to reliably observe the errors.
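When you *can* keep those references, it looks something like this hedged sketch — `STEP2` and `STEP3` here are illustrative stand-ins, not the utilities from earlier:

```javascript
var p1 = Promise.resolve( 42 );
var p2 = p1.then( function STEP2(v){ return v * 2; } );
var p3 = p2.then( function STEP3(v){ return v + 1; } );

// because we held onto `p2`, errors from the chain up
// through STEP2 can be observed at that point:
p2.catch( function(err){
	console.error( "error at or before STEP2:", err );
} );

// and `p3` still observes the whole chain:
p3.then(
	function(v){ console.log( v ); },	// 85
	function(err){ console.error( "somewhere in the chain:", err ); }
);
```

Of course, threading every intermediate promise through your code just to observe errors is exactly the kind of bookkeeping overhead that rarely gets done in practice.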

Single Value

Promises by definition only have a single fulfillment value or a single rejection reason. In simple examples, this isn’t that big of a deal, but in more sophisticated scenarios, you may find this limiting.

The typical advice is to construct a values wrapper (such as an object or array) to contain these multiple messages. This solution works, but it can be quite awkward and tedious to wrap and unwrap your messages with every single step of your Promise chain.

Splitting Values

Sometimes you can take this as a signal that you could/should decompose the problem into two or more Promises.

Imagine you have a utility foo(..) that produces two values (x and y) asynchronously:

function getY(x) {
	return new Promise( function(resolve,reject){
		setTimeout( function(){
			resolve( (3 * x) - 1 );
		}, 100 );
	} );
}

function foo(bar,baz) {
	var x = bar * baz;

	return getY( x )
	.then( function(y){
		// wrap both values into container
		return [x,y];
	} );
}

foo( 10, 20 )
.then( function(msgs){
	var x = msgs[0];
	var y = msgs[1];

	console.log( x, y );	// 200 599
} );

First, let’s rearrange what foo(..) returns so that we don’t have to wrap x and y into a single array value to transport through one Promise. Instead, we can wrap each value into its own promise:

function foo(bar,baz) {
	var x = bar * baz;

	// return both promises
	return [
		Promise.resolve( x ),
		getY( x )
	];
}

Promise.all(
	foo( 10, 20 )
)
.then( function(msgs){
	var x = msgs[0];
	var y = msgs[1];

	console.log( x, y );
} );

Is an array of promises really better than an array of values passed through a single promise? Syntactically, it’s not much of an improvement.

But this approach more closely embraces the Promise design theory. It’s now easier in the future to refactor to split the calculation of x and y into separate functions. It’s cleaner and more flexible to let the calling code decide how to orchestrate the two promises — using Promise.all([ .. ]) here, but certainly not the only option — rather than to abstract such details away inside of foo(..).
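For instance, that future refactoring might look something like the following sketch, where each value comes from its own dedicated function. These exact signatures are illustrative, not prescribed by the text, and for simplicity the sketch passes the already-known x value to getY(..) directly:

```javascript
// each value now has its own dedicated promise-producing function
function getX(bar,baz) {
	return Promise.resolve( bar * baz );
}

function getY(x) {
	return new Promise( function(resolve,reject){
		setTimeout( function(){
			resolve( (3 * x) - 1 );
		}, 100 );
	} );
}

// the calling code orchestrates the two promises itself
Promise.all( [ getX( 10, 20 ), getY( 200 ) ] )
.then( function(msgs){
	console.log( msgs[0], msgs[1] );	// 200 599
} );
```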

Unwrap/Spread Arguments

The var x = .. and var y = .. assignments are still awkward overhead. We can employ some functional trickery (hat tip to Reginald Braithwaite, @raganwald on Twitter) in a helper utility:

function spread(fn) {
	return Function.apply.bind( fn, null );
}

Promise.all(
	foo( 10, 20 )
)
.then(
	spread( function(x,y){
		console.log( x, y );	// 200 599
	} )
);

That’s a bit nicer! Of course, you could inline the functional magic to avoid the extra helper:

Promise.all(
	foo( 10, 20 )
)
.then( Function.apply.bind(
	function(x,y){
		console.log( x, y );	// 200 599
	},
	null
) );

These tricks may be neat, but ES6 has an even better answer for us: destructuring. The array destructuring assignment form looks like this:

Promise.all(
	foo( 10, 20 )
)
.then( function(msgs){
	var [x,y] = msgs;

	console.log( x, y );	// 200 599
} );

But best of all, ES6 offers the array parameter destructuring form:

Promise.all(
	foo( 10, 20 )
)
.then( function([x,y]){
	console.log( x, y );	// 200 599
} );

We’ve now embraced the one-value-per-Promise mantra, but kept our supporting boilerplate to a minimum!

Note: For more information on ES6 destructuring forms, see the ES6 & Beyond title of this series.

Single Resolution

One of the most intrinsic behaviors of Promises is that a Promise can only be resolved once (fulfillment or rejection). For many async use cases, you’re only retrieving a value once, so this works fine.

But there’s also a lot of async cases that fit into a different model — one that’s more akin to events and/or streams of data. It’s not clear on the surface how well Promises can fit into such use cases, if at all. Without a significant abstraction on top of Promises, they will completely fall short for handling multiple value resolution.

Imagine a scenario where you might want to fire off a sequence of async steps in response to a stimulus (like an event) that can in fact happen multiple times, like a button click.

This probably won’t work the way you want:

// `click(..)` binds the `"click"` event to a DOM element
// `request(..)` is the previously defined Promise-aware Ajax

var p = new Promise( function(resolve,reject){
	click( "#mybtn", resolve );
} );

p.then( function(evt){
	var btnID = evt.currentTarget.id;

	return request( "http://some.url.1/?id=" + btnID );
} )
.then( function(text){
	console.log( text );
} );

The behavior here only works if your application calls for the button to be clicked just once. If the button is clicked a second time, the p promise has already been resolved, so the second resolve(..) call would be ignored.
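You can verify the only-once behavior directly — a second resolve(..) call is simply ignored. Here’s a minimal stand-in for the click scenario, with the resolve function captured so we can call it twice:

```javascript
var resolveP;

var onlyOnce = new Promise( function(resolve,reject){
	// the executor runs synchronously, so `resolveP`
	// is assigned before the calls below
	resolveP = resolve;
} );

resolveP( "first click" );
resolveP( "second click" );	// silently ignored

onlyOnce.then( function(v){
	console.log( v );	// "first click"
} );
```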

Instead, you’d probably need to invert the paradigm, creating a whole new Promise chain for each event firing:

click( "#mybtn", function(evt){
	var btnID = evt.currentTarget.id;

	request( "http://some.url.1/?id=" + btnID )
	.then( function(text){
		console.log( text );
	} );
} );

This approach will work in that a whole new Promise sequence will be fired off for each "click" event on the button.

But beyond just the ugliness of having to define the entire Promise chain inside the event handler, this design in some respects violates the idea of separation of concerns/capabilities (SoC). You might very well want to define your event handler in a different place in your code from where you define the response to the event (the Promise chain). That’s pretty awkward to do in this pattern, without helper mechanisms.
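One way to ease that awkwardness is to define the response chain as a named function and have the handler merely invoke it. Here’s a hedged sketch — using a stand-in event system (`bindClick`/`fireClick`) rather than real DOM binding, with names that are purely illustrative:

```javascript
// stand-in event system (not the DOM)
var listeners = [];
function bindClick(handler) { listeners.push( handler ); }
function fireClick(evt) {
	listeners.forEach( function(h){ h( evt ); } );
}

var results = [];

// defined in one part of the code: the response to the event
function handleClickFlow(evt) {
	return Promise.resolve( "response for " + evt.id )
	.then( function(text){ results.push( text ); } );
}

// wired up elsewhere: a fresh Promise chain per event firing
bindClick( function(evt){ handleClickFlow( evt ); } );

fireClick( { id: "mybtn" } );
fireClick( { id: "mybtn" } );	// each firing gets its own chain
```

This restores some separation, but the glue between event and chain is still manual, which is exactly the missing helper mechanism the text is pointing at.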

Note: Another way of articulating this limitation is that it’d be nice if we could construct some sort of “observable” that we can subscribe a Promise chain to. There are libraries that have created these abstractions (such as RxJS — http://rxjs.codeplex.com/), but the abstractions can seem so heavy that you can’t even see the nature of Promises anymore. Such heavy abstraction brings important questions to mind such as whether (sans Promises) these mechanisms are as trustable as Promises themselves have been designed to be. We’ll revisit the “Observable” pattern in Appendix B.

Inertia

One concrete barrier to starting to use Promises in your own code is all the code that currently exists which is not already Promise-aware. If you have lots of callback-based code, it’s far easier to just keep coding in that same style.

“A code base in motion (with callbacks) will remain in motion (with callbacks) unless acted upon by a smart, Promises-aware developer.”

Promises offer a different paradigm, and as such, the approach to the code can be anywhere from just a little different to, in some cases, radically different. You have to be intentional about it, because Promises will not just naturally shake out from the same ol’ ways of doing code that have served you well thus far.

Consider a callback-based scenario like the following:

function foo(x,y,cb) {
	ajax(
		"http://some.url.1/?x=" + x + "&y=" + y,
		cb
	);
}

foo( 11, 31, function(err,text) {
	if (err) {
		console.error( err );
	}
	else {
		console.log( text );
	}
} );

Is it immediately obvious what the first steps are to convert this callback-based code to Promise-aware code? Depends on your experience. The more practice you have with it, the more natural it will feel. But certainly, Promises don’t just advertise on the label exactly how to do it — there’s no one-size-fits-all answer — so the responsibility is up to you.

As we’ve covered before, we definitely need an Ajax utility that is Promise-aware instead of callback-based, which we could call request(..). You can make your own, as we have already. But the overhead of having to manually define Promise-aware wrappers for every callback-based utility makes it less likely you’ll choose to refactor to Promise-aware coding at all.

Promises offer no direct answer to that limitation. Most Promise libraries do offer a helper, however. But even without a library, imagine a helper like this:

// polyfill-safe guard check
if (!Promise.wrap) {
	Promise.wrap = function(fn) {
		return function() {
			var args = [].slice.call( arguments );

			return new Promise( function(resolve,reject){
				fn.apply(
					null,
					args.concat( function(err,v){
						if (err) {
							reject( err );
						}
						else {
							resolve( v );
						}
					} )
				);
			} );
		};
	};
}

OK, that’s more than just a tiny trivial utility. However, although it may look a bit intimidating, it’s not as bad as you’d think. It takes a function that expects an error-first style callback as its last parameter, and returns a new one that automatically creates a Promise to return, and substitutes the callback for you, wired up to the Promise fulfillment/rejection.

Rather than waste too much time talking about how this Promise.wrap(..) helper works, let’s just look at how we use it:

var request = Promise.wrap( ajax );

request( "http://some.url.1/" )
.then( .. )
..

Wow, that was pretty easy!

Promise.wrap(..) does not produce a Promise. It produces a function that will produce Promises. In a sense, a Promise-producing function could be seen as a “Promise factory.” I propose “promisory” as the name for such a thing (“Promise” + “factory”).

The act of wrapping a callback-expecting function to be a Promise-aware function is sometimes referred to as “lifting” or “promisifying”. But there doesn’t seem to be a standard term for what to call the resultant function other than a “lifted function”, so I like “promisory” better as I think it’s more descriptive.

Note: Promisory isn’t a made-up term. It’s a real word, and its definition means to contain or convey a promise. That’s exactly what these functions are doing, so it turns out to be a pretty perfect terminology match!

So, Promise.wrap(ajax) produces an ajax(..) promisory we call request(..), and that promisory produces Promises for Ajax responses.

If all functions were already promisories, we wouldn’t need to make them ourselves, so the extra step is a tad bit of a shame. But at least the wrapping pattern is (usually) repeatable so we can put it into a Promise.wrap(..) helper as shown to aid our promise coding.
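To sanity-check the wrapping end-to-end, here’s a self-contained run against a fake callback-based utility — `fakeAjax(..)` is a stand-in for illustration, not a real Ajax API, and `promiseWrap(..)` is a local copy of the helper so the sketch runs on its own:

```javascript
// local copy of the `Promise.wrap(..)` helper from above
function promiseWrap(fn) {
	return function() {
		var args = [].slice.call( arguments );
		return new Promise( function(resolve,reject){
			fn.apply( null, args.concat( function(err,v){
				if (err) { reject( err ); }
				else { resolve( v ); }
			} ) );
		} );
	};
}

// a fake error-first-callback utility
function fakeAjax(url,cb) {
	setTimeout( function(){
		cb( null, "contents of " + url );
	}, 0 );
}

// `fakeRequest(..)` is now a promisory
var fakeRequest = promiseWrap( fakeAjax );

fakeRequest( "http://some.url.1/" )
.then( function(text){
	console.log( text );	// "contents of http://some.url.1/"
} );
```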

So back to our earlier example, we need a promisory for both ajax(..) and foo(..):

// make a promisory for `ajax(..)`
var request = Promise.wrap( ajax );

// refactor `foo(..)`, but keep it externally
// callback-based for compatibility with other
// parts of the code for now -- only use
// `request(..)`'s promise internally
function foo(x,y,cb) {
	request(
		"http://some.url.1/?x=" + x + "&y=" + y
	)
	.then(
		function fulfilled(text){
			cb( null, text );
		},
		cb
	);
}

// now, for this code's purposes, make a
// promisory for `foo(..)`
var betterFoo = Promise.wrap( foo );

// and use the promisory
betterFoo( 11, 31 )
.then(
	function fulfilled(text){
		console.log( text );
	},
	function rejected(err){
		console.error( err );
	}
);

Of course, while we’re refactoring foo(..) to use our new request(..) promisory, we could just make foo(..) a promisory itself, instead of remaining callback-based and needing to make and use the subsequent betterFoo(..) promisory. This decision just depends on whether foo(..) needs to stay callback-based compatible with other parts of the code base or not.

Consider:

// `foo(..)` is now also a promisory because it
// delegates to the `request(..)` promisory
function foo(x,y) {
	return request(
		"http://some.url.1/?x=" + x + "&y=" + y
	);
}

foo( 11, 31 )
.then( .. )
..

While ES6 Promises don’t natively ship with helpers for such promisory wrapping, most libraries provide them, or you can make your own. Either way, this particular limitation of Promises is addressable without too much pain (certainly compared to the pain of callback hell!).

Promise Uncancelable

Once you create a Promise and register a fulfillment and/or rejection handler for it, there’s nothing external you can do to stop that progression if something else happens to make that task moot.

Note: Many Promise abstraction libraries provide facilities to cancel Promises, but this is a terrible idea! Many developers wish Promises had natively been designed with external cancelation capability, but the problem is that it would let one consumer/observer of a Promise affect some other consumer’s ability to observe that same Promise. This violates the future-value’s trustability (external immutability), but moreover is the embodiment of the “action at a distance” anti-pattern (http://en.wikipedia.org/wiki/Action_at_a_distance_%28computer_programming%29). Regardless of how useful it seems, it will actually lead you straight back into the same nightmares as callbacks.

Consider our Promise timeout scenario from earlier:

var p = foo( 42 );

Promise.race( [
	p,
	timeoutPromise( 3000 )
] )
.then(
	doSomething,
	handleError
);

p.then( function(){
	// still happens even in the timeout case :(
} );

The “timeout” was external to the promise p, so p itself keeps going, which we probably don’t want.

One option is to invasively define your resolution callbacks:

var OK = true;

var p = foo( 42 );

Promise.race( [
	p,
	timeoutPromise( 3000 )
	.catch( function(err){
		OK = false;
		throw err;
	} )
] )
.then(
	doSomething,
	handleError
);

p.then( function(){
	if (OK) {
		// only happens if no timeout! :)
	}
} );

This is ugly. It works, but it’s far from ideal. Generally, you should try to avoid such scenarios.

But if you can’t, the ugliness of this solution should be a clue that cancelation is a functionality that belongs at a higher level of abstraction on top of Promises. I’d recommend you look to Promise abstraction libraries for assistance rather than hacking it yourself.

Note: My asynquence Promise abstraction library provides just such an abstraction and an abort() capability for the sequence, all of which will be discussed in Appendix A.

A single Promise is not really a flow-control mechanism (at least not in a very meaningful sense), which is exactly what cancelation refers to; that’s why Promise cancelation would feel awkward.

By contrast, a chain of Promises taken collectively together — what I like to call a “sequence” — is a flow control expression, and thus it’s appropriate for cancelation to be defined at that level of abstraction.

No individual Promise should be cancelable, but it’s sensible for a sequence to be cancelable, because you don’t pass around a sequence as a single immutable value like you do with a Promise.

Promise Performance

This particular limitation is both simple and complex.

Comparing how many pieces are moving with a basic callback-based async task chain versus a Promise chain, it’s clear Promises have a fair bit more going on, which means they are naturally at least a tiny bit slower. Think back to just the simple list of trust guarantees that Promises offer, as compared to the ad hoc solution code you’d have to layer on top of callbacks to achieve the same protections.

More work to do, more guards to protect, means that Promises are slower as compared to naked, untrustable callbacks. That much is obvious, and probably simple to wrap your brain around.

But how much slower? Well… that’s actually proving to be an incredibly difficult question to answer absolutely, across the board.

Frankly, it’s kind of an apples-to-oranges comparison, so it’s probably the wrong question to ask. You should actually compare whether an ad hoc callback system with all the same protections manually layered in is faster than a Promise implementation.

If Promises have a legitimate performance limitation, it’s more that they don’t really offer a line-item choice as to which trustability protections you want/need or not — you get them all, always.

Nevertheless, if we grant that a Promise is generally a little bit slower than its non-Promise, non-trustable callback equivalent — assuming there are places where you feel you can justify the lack of trustability — does that mean that Promises should be avoided across the board, as if your entire application is driven by nothing but must-be-utterly-the-fastest code possible?

Sanity check: if your code is legitimately like that, is JavaScript even the right language for such tasks? JavaScript can be optimized to run applications very performantly (see Chapter 5 and Chapter 6). But is obsessing over tiny performance tradeoffs with Promises, in light of all the benefits they offer, really appropriate?

Another subtle issue is that Promises make everything async, which means that some immediately (synchronously) complete steps still defer advancement of the next step to a Job (see Chapter 1). That means that it’s possible that a sequence of Promise tasks could complete ever-so-slightly slower than the same sequence wired up with callbacks.
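You can observe that deferral with an already-fulfilled Promise — the synchronous code that follows the handler registration always runs first:

```javascript
var order = [];

// `Promise.resolve(42)` is fulfilled immediately, but the
// `then(..)` handler is still deferred to a Job
Promise.resolve( 42 )
.then( function(v){
	order.push( "handler: " + v );
} );

order.push( "sync code" );

// a moment later, `order` is:
// [ "sync code", "handler: 42" ]
```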

Of course, the question here is this: are these potential slips in tiny fractions of performance worth all the other articulated benefits of Promises we’ve laid out across this chapter?

My take is that in virtually all cases where you might think Promise performance is slow enough to be concerned, it’s actually an anti-pattern to optimize away the benefits of Promise trustability and composability by avoiding them altogether.

Instead, you should default to using them across the code base, and then profile and analyze your application’s hot (critical) paths. Are Promises really a bottleneck, or are they just a theoretical slowdown? Only then, armed with actual valid benchmarks (see Chapter 6), is it responsible and prudent to factor out the Promises in just those identified critical areas.

Promises are a little slower, but in exchange you’re getting a lot of trustability, non-Zalgo predictability, and composability built in. Maybe the limitation is not actually their performance, but your lack of perception of their benefits?