Feature Testing
What is a feature test? It’s a test that you run to determine if a feature is available or not. Sometimes, the test is not just for existence, but for conformance to specified behavior — features can exist but be buggy.
This is a meta programming technique: you test the environment your program runs in to determine how your program should behave.
The most common use of feature tests in JS is checking for the existence of an API and, if it's not present, defining a polyfill (see Chapter 1). For example:
if (!Number.isNaN) {
	Number.isNaN = function(x) {
		return x !== x;
	};
}
The if statement in this snippet is meta programming: we’re probing our program and its runtime environment to determine if and how we should proceed.
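Per the earlier point that features can exist but be buggy, you can extend such an existence check into a (partial) conformance check. Here's a sketch of that idea; the buggy-implementation scenario it guards against is hypothetical, purely to illustrate the pattern:

if (!Number.isNaN || Number.isNaN( "foo" ) !== false) {
	// Replace the API if it's missing, or if it coerces non-number values
	// (the spec says Number.isNaN(..) must return false for anything that
	// isn't of type "number").
	Number.isNaN = function(x) {
		return typeof x == "number" && x !== x;
	};
}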
But what about testing for features that involve new syntax?
You might try something like:
try {
	a = () => {};
	ARROW_FUNCS_ENABLED = true;
}
catch (err) {
	ARROW_FUNCS_ENABLED = false;
}
Unfortunately, this doesn’t work, because our JS programs are compiled. The engine will choke on the () => {} syntax if it doesn’t already support ES6 arrow functions. Having a syntax error in your program prevents it from running at all, which in turn prevents your program from responding differently depending on whether the feature is supported or not.
To meta program with feature tests around syntax-related features, we need a way to insulate the test from the initial compile step our program runs through. For instance, if we could store the code for the test in a string, then the JS engine wouldn’t by default try to compile the contents of that string until we asked it to.
Did your mind just jump to using eval(..)?
Not so fast. See the Scope & Closures title of this series for why eval(..) is a bad idea. But there’s another option with fewer downsides: the Function(..) constructor.
Consider:
try {
	new Function( "( () => {} )" );
	ARROW_FUNCS_ENABLED = true;
}
catch (err) {
	ARROW_FUNCS_ENABLED = false;
}
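If you're testing several bits of syntax, the same try..catch plus Function(..) dance can be wrapped in a small helper. For example (the helper name and the particular test strings here are just illustrative, not from any specific library):

// Illustrative helper: returns true if the given source text compiles
// in the current engine, false if compilation throws (typically a SyntaxError).
function syntaxSupported(code) {
	try {
		new Function( code );
		return true;
	}
	catch (err) {
		return false;
	}
}

var FEATURE_TESTS = {
	arrowFuncs: syntaxSupported( "( () => {} )" ),
	letConst: syntaxSupported( "let a = 2; const b = 3;" ),
	templateStrings: syntaxSupported( "`${ 2 + 3 }`" )
};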
OK, so now we’re meta programming by determining if a feature like arrow functions can compile in the current engine or not. You might then wonder, what would we do with this information?
With existence checks for APIs, and defining fallback API polyfills, there’s a clear path for what to do with either test success or failure. But what can we do with the information that we get from ARROW_FUNCS_ENABLED being true or false?
Because the syntax can’t appear in a file if the engine doesn’t support that feature, you can’t just have different functions defined in the file with and without the syntax in question.
What you can do is use the test to determine which of a set of JS files you should load. For example, if you had a set of these feature tests in a bootstrapper for your JS application, it could then test the environment to determine if your ES6 code can be loaded and run directly, or if you need to load a transpiled version of your code (see Chapter 1).
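As a rough illustration, such a bootstrapper might do something like the following sketch (the file names and the script-injection approach are placeholders for whatever your build process actually produces):

// Pick which bundle to load based on the feature test(s) run earlier
// in this bootstrapper.
var scriptToLoad = ARROW_FUNCS_ENABLED ?
	"es6-app.js" :			// ES6-authored code, runs natively
	"transpiled-app.js";	// transpiled (pre-ES6 compatible) build

var scriptEl = document.createElement( "script" );
scriptEl.src = scriptToLoad;
document.head.appendChild( scriptEl );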
This technique is called split delivery.
It recognizes the reality that your ES6-authored JS programs will sometimes be able to run entirely “natively” in ES6+ browsers, but at other times will need transpilation to run in pre-ES6 browsers. If you always load and use the transpiled code, even in new ES6-compliant environments, you’re running suboptimal code at least some of the time. This is not ideal.
Split delivery is more complicated and sophisticated, but it represents a more mature and robust approach to bridging the gap between the code you write and the feature support in browsers your programs must run in.
FeatureTests.io
Defining feature tests for all of the ES6+ syntax, as well as the semantic behaviors, is a daunting task you probably don’t want to tackle yourself. Because these tests require dynamic compilation (new Function(..)), there’s some unfortunate performance cost.
Moreover, running these tests every single time your app runs is probably wasteful, as on average a user’s browser updates only once every several weeks at most, and even then, new features aren’t necessarily showing up with every update.
Finally, managing the list of feature tests that apply to your specific code base (rarely will your programs use the entirety of ES6) is unwieldy and error-prone.
The https://featuretests.io feature-tests-as-a-service offers solutions to these frustrations.
You can load the service’s library into your page, and it loads the latest test definitions and runs all the feature tests. It does so using background processing with Web Workers, if possible, to reduce the performance overhead. It also uses LocalStorage persistence to cache the results in a way that can be shared across all sites you visit which use the service, which drastically reduces how often the tests need to run on each browser instance.
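To give a rough sense of just the caching part of that, here's a simplified sketch of the general idea; it is not the service's actual implementation, the storage key and one-week expiry are made up, and plain localStorage is per-origin, so it doesn't achieve the cross-site sharing described above:

// Simplified illustration: cache feature-test results so they don't
// need to re-run on every page load.
var CACHE_KEY = "feature-test-results";
var ONE_WEEK = 7 * 24 * 60 * 60 * 1000;

function getCachedResults() {
	try {
		var cached = JSON.parse( localStorage.getItem( CACHE_KEY ) );
		if (cached && (Date.now() - cached.timestamp) < ONE_WEEK) {
			return cached.results;
		}
	}
	catch (err) {
		// ignore storage/parse errors and fall through to re-run tests
	}
	return null;
}

function cacheResults(results) {
	localStorage.setItem( CACHE_KEY, JSON.stringify( {
		timestamp: Date.now(),
		results: results
	} ) );
}

var testResults = getCachedResults();
if (!testResults) {
	// re-run the feature tests only when the cache is missing or stale
	testResults = { arrowFuncs: ARROW_FUNCS_ENABLED /* , .. */ };
	cacheResults( testResults );
}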
You get runtime feature tests in each of your users’ browsers, and you can use those test results dynamically to serve users the most appropriate code (no more, no less) for their environments.
Moreover, the service provides tools and APIs to scan your files to determine what features you need, so you can fully automate your split delivery build processes.
FeatureTests.io makes it practical to use feature tests for all parts of ES6 and beyond to make sure that only the best code is ever loaded and run for any given environment.