Pre-ES6 Generators
You’re hopefully convinced now that generators are a very important addition to the async programming toolbox. But it’s a new syntax in ES6, which means you can’t just polyfill generators like you can Promises (which are just a new API). So what can we do to bring generators to our browser JS if we don’t have the luxury of ignoring pre-ES6 browsers?
For all new syntax extensions in ES6, there are tools — the most common term for them is transpilers, for trans-compilers — which can take your ES6 syntax and transform it into equivalent (but obviously uglier!) pre-ES6 code. So, generators can be transpiled into code that will have the same behavior but work in ES5 and below.
But how? The “magic” of yield doesn’t obviously sound like code that’s easy to transpile. We actually hinted at a solution in our earlier discussion of closure-based iterators.
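As a refresher, that hint was the observation that a closure can preserve state across calls, which lets a plain function hand back an iterator object manually. Here’s a minimal sketch of that idea (the makeCounter(..) name and counting logic are just for illustration, not from the earlier chapter):

```js
// a hand-rolled iterator: the closure preserves `count`
// across calls, much like a suspended generator scope
function makeCounter(limit) {
    var count = 0;

    return {
        next: function() {
            if (count < limit) {
                count++;
                return { done: false, value: count };
            }
            return { done: true, value: undefined };
        }
    };
}

var it = makeCounter( 2 );

it.next();  // { done: false, value: 1 }
it.next();  // { done: false, value: 2 }
it.next();  // { done: true, value: undefined }
```

The closed-over count variable plays the role that a generator’s suspended scope plays, which is exactly the trick we’ll lean on in the manual transformation below.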
Manual Transformation
Before we discuss the transpilers, let’s derive how manual transpilation would work in the case of generators. This isn’t just an academic exercise, because doing so will actually help further reinforce how they work.
Consider:
// `request(..)` is a Promise-aware Ajax utility

function *foo(url) {
    try {
        console.log( "requesting:", url );
        var val = yield request( url );
        console.log( val );
    }
    catch (err) {
        console.log( "Oops:", err );
        return false;
    }
}

var it = foo( "http://some.url.1" );
The first thing to observe is that we’ll still need a normal foo() function that can be called, and it will still need to return an iterator. So, let’s sketch out the non-generator transformation:
function foo(url) {
    // ..

    // make and return an iterator
    return {
        next: function(v) {
            // ..
        },
        throw: function(e) {
            // ..
        }
    };
}

var it = foo( "http://some.url.1" );
The next thing to observe is that a generator does its “magic” by suspending its scope/state, but we can emulate that with function closure (see the Scope & Closures title of this series). To understand how to write such code, we’ll first annotate different parts of our generator with state values:
// `request(..)` is a Promise-aware Ajax utility

function *foo(url) {
    // STATE *1*

    try {
        console.log( "requesting:", url );
        var TMP1 = request( url );

        // STATE *2*
        var val = yield TMP1;
        console.log( val );
    }
    catch (err) {
        // STATE *3*
        console.log( "Oops:", err );
        return false;
    }
}
Note: For more accurate illustration, we split up the val = yield request(..) statement into two parts, using the temporary TMP1 variable. request(..) happens in state *1*, and the assignment of its completion value to val happens in state *2*. We’ll get rid of that intermediate TMP1 when we convert the code to its non-generator equivalent.

In other words, *1* is the beginning state, *2* is the state if the request(..) succeeds, and *3* is the state if the request(..) fails. You can probably imagine how any extra yield steps would just be encoded as extra states.
Back to our transpiled generator, let’s define a variable state in the closure we can use to keep track of the state:
function foo(url) {
    // manage generator state
    var state;

    // ..
}
Now, let’s define an inner function called process(..) inside the closure which handles each state, using a switch statement:
// `request(..)` is a Promise-aware Ajax utility

function foo(url) {
    // manage generator state
    var state;

    // generator-wide variable declarations
    var val;

    function process(v) {
        switch (state) {
            case 1:
                console.log( "requesting:", url );
                return request( url );
            case 2:
                val = v;
                console.log( val );
                return;
            case 3:
                var err = v;
                console.log( "Oops:", err );
                return false;
        }
    }

    // ..
}
Each state in our generator is represented by its own case in the switch statement. process(..) will be called each time we need to process a new state. We’ll come back to how that works in just a moment.

For any generator-wide variable declarations (val), we move those to a var declaration outside of process(..) so they can survive multiple calls to process(..). But the “block scoped” err variable is only needed for the *3* state, so we leave it in place.
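As a quick side illustration of that distinction (the outer(..) and process() names here are contrived, not part of the transpilation itself), note how a closure variable survives across calls while a function-local variable is recreated on every call:

```js
function outer() {
    var total = 0;              // like `val`: lives in the closure, survives calls

    return function process() {
        var step = 1;           // like `err`: fresh on each call
        total += step;
        return total;
    };
}

var p = outer();

p();    // 1
p();    // 2 -- `total` persisted between the calls
```

That persistence is exactly why val must be hoisted out of process(..): it has to carry its value from one state-handling call to the next.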
In state *1*, instead of yield request(..), we did return request(..). In terminal state *2*, there was no explicit return, so we just do a return; which is the same as return undefined. In terminal state *3*, there was a return false, so we preserve that.
Now we need to define the code in the iterator functions so they call process(..) appropriately:
function foo(url) {
    // manage generator state
    var state;

    // generator-wide variable declarations
    var val;

    function process(v) {
        switch (state) {
            case 1:
                console.log( "requesting:", url );
                return request( url );
            case 2:
                val = v;
                console.log( val );
                return;
            case 3:
                var err = v;
                console.log( "Oops:", err );
                return false;
        }
    }

    // make and return an iterator
    return {
        next: function(v) {
            // initial state
            if (!state) {
                state = 1;
                return {
                    done: false,
                    value: process()
                };
            }
            // yield resumed successfully
            else if (state == 1) {
                state = 2;
                return {
                    done: true,
                    value: process( v )
                };
            }
            // generator already completed
            else {
                return {
                    done: true,
                    value: undefined
                };
            }
        },
        "throw": function(e) {
            // the only explicit error handling is in
            // state *1*
            if (state == 1) {
                state = 3;
                return {
                    done: true,
                    value: process( e )
                };
            }
            // otherwise, an error won't be handled,
            // so just throw it right back out
            else {
                throw e;
            }
        }
    };
}
How does this code work?
- The first call to the iterator’s next() would move the generator from the uninitialized state to state 1, and then call process() to handle that state. The return value from request(..), which is the promise for the Ajax response, is returned back as the value property from the next() call.
- If the Ajax request succeeds, the second call to next(..) should send in the Ajax response value, which moves our state to 2. process(..) is again called (this time with the passed-in Ajax response value), and the value property returned from next(..) will be undefined.
- However, if the Ajax request fails, throw(..) should be called with the error, which would move the state from 1 to 3 (instead of 2). Again process(..) is called, this time with the error value. That case returns false, which is set as the value property returned from the throw(..) call.
From the outside — that is, interacting only with the iterator — this normal foo(..) function works pretty much the same as the *foo(..) generator would have worked. So we’ve effectively “transpiled” our ES6 generator to pre-ES6 compatibility!

We could then manually instantiate our generator and control its iterator — calling var it = foo("..") and it.next(..) and such — or better, we could pass it to our previously defined run(..) utility as run(foo,"..").
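To see the hand-transpiled foo(..) actually run end to end, here’s a self-contained version with request(..) stubbed out to return a plain string synchronously (an assumption purely for demonstration; the real utility returns a promise):

```js
// stub: stands in for the Promise-aware Ajax utility
function request(url) {
    return "response for " + url;
}

function foo(url) {
    var state;
    var val;

    function process(v) {
        switch (state) {
            case 1:
                console.log( "requesting:", url );
                return request( url );
            case 2:
                val = v;
                console.log( val );
                return;
            case 3:
                var err = v;
                console.log( "Oops:", err );
                return false;
        }
    }

    return {
        next: function(v) {
            if (!state) {
                state = 1;
                return { done: false, value: process() };
            }
            else if (state == 1) {
                state = 2;
                return { done: true, value: process( v ) };
            }
            else {
                return { done: true, value: undefined };
            }
        },
        "throw": function(e) {
            if (state == 1) {
                state = 3;
                return { done: true, value: process( e ) };
            }
            else {
                throw e;
            }
        }
    };
}

// success path
var it = foo( "http://some.url.1" );
var res1 = it.next();               // { done: false, value: "response for http://some.url.1" }
var res2 = it.next( res1.value );   // { done: true, value: undefined }

// error path, on a fresh iterator
var it2 = foo( "http://some.url.1" );
it2.next();
it2.throw( "timeout" );             // { done: true, value: false }
```

Note that the throw("timeout") call exercises the catch-equivalent state *3*, producing false as the iterator result value, just as the bulleted walkthrough above describes.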
Automatic Transpilation
The preceding exercise of manually deriving a transformation of our ES6 generator to a pre-ES6 equivalent teaches us how generators work conceptually. But that transformation was really intricate and very non-portable to other generators in our code. It would be quite impractical to do this work by hand, and would completely negate all the benefit of generators.
But luckily, several tools already exist that can automatically convert ES6 generators to things like what we derived in the previous section. Not only do they do the heavy lifting work for us, but they also handle several complications that we glossed over.
One such tool is regenerator (https://facebook.github.io/regenerator/), from the smart folks at Facebook.
If we use regenerator to transpile our previous generator, here’s the code produced (at the time of this writing):
// `request(..)` is a Promise-aware Ajax utility

var foo = regeneratorRuntime.mark(function foo(url) {
    var val;

    return regeneratorRuntime.wrap(function foo$(context$1$0) {
        while (1) switch (context$1$0.prev = context$1$0.next) {
        case 0:
            context$1$0.prev = 0;
            console.log( "requesting:", url );
            context$1$0.next = 4;
            return request( url );
        case 4:
            val = context$1$0.sent;
            console.log( val );
            context$1$0.next = 12;
            break;
        case 8:
            context$1$0.prev = 8;
            context$1$0.t0 = context$1$0.catch(0);
            console.log("Oops:", context$1$0.t0);
            return context$1$0.abrupt("return", false);
        case 12:
        case "end":
            return context$1$0.stop();
        }
    }, foo, this, [[0, 8]]);
});
There are some obvious similarities here to our manual derivation, such as the switch / case statements, and we even see val pulled out of the closure just as we did.

Of course, one trade-off is that regenerator’s transpilation requires a helper library regeneratorRuntime that holds all the reusable logic for managing a general generator / iterator. A lot of that boilerplate looks different than our version, but even then, the concepts can be seen, like with context$1$0.next = 4 keeping track of the next state for the generator.
The main takeaway is that generators are not restricted to only being useful in ES6+ environments. Once you understand the concepts, you can employ them throughout your code, and use tools to transform the code to be compatible with older environments.
This is more work than just using a Promise API polyfill for pre-ES6 Promises, but the effort is totally worth it, because generators are so much better at expressing async flow control in a reason-able, sensible, synchronous-looking, sequential fashion.
Once you get hooked on generators, you’ll never want to go back to the hell of async spaghetti callbacks!