Recommendations

This document contains a set of recommendations for using Fastify.

Use A Reverse Proxy

Node.js is an early adopter of frameworks shipping with an easy-to-use web server within the standard library. Previously, with languages like PHP or Python, one would need either a web server with specific support for the language or the ability to set up some sort of CGI gateway that works with the language. With Node.js, one can write an application that directly handles HTTP requests. As a result, the temptation is to write applications that handle requests for multiple domains, listen on multiple ports (i.e. HTTP and HTTPS), and then expose these applications directly to the Internet to handle requests.

The Fastify team strongly considers this to be an anti-pattern and extremely bad practice:

  1. It adds unnecessary complexity to the application by diluting its focus.
  2. It prevents horizontal scalability.

See Why should I use a Reverse Proxy if Node.js is Production Ready? for a more thorough discussion of why one should opt to use a reverse proxy.

For a concrete example, consider the situation where:

  1. The app needs multiple instances to handle load.
  2. The app needs TLS termination.
  3. The app needs to redirect HTTP requests to HTTPS.
  4. The app needs to serve multiple domains.
  5. The app needs to serve static resources, e.g. jpeg files.

There are many reverse proxy solutions available, and your environment may dictate the solution to use, e.g. AWS or GCP. Given the above, we could use HAProxy or Nginx to solve these requirements:

HAProxy

```
# The global section defines base HAProxy (engine) instance configuration.
global
  log /dev/log syslog
  maxconn 4096
  chroot /var/lib/haproxy
  user haproxy
  group haproxy

  # Set some baseline TLS options.
  tune.ssl.default-dh-param 2048
  ssl-default-bind-options no-sslv3 no-tlsv10 no-tlsv11
  ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS
  ssl-default-server-options no-sslv3 no-tlsv10 no-tlsv11
  ssl-default-server-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS

# Each defaults section defines options that will apply to each subsequent
# subsection until another defaults section is encountered.
defaults
  log global
  mode http
  option httplog
  option dontlognull
  retries 3
  option redispatch
  # The following option makes haproxy close connections to backend servers
  # instead of keeping them open. This can alleviate unexpected connection
  # reset errors in the Node process.
  option http-server-close
  maxconn 2000
  timeout connect 5000
  timeout client 50000
  timeout server 50000

  # Enable content compression for specific content types.
  compression algo gzip
  compression type text/html text/plain text/css application/javascript

# A "frontend" section defines a public listener, i.e. an "http server"
# as far as clients are concerned.
frontend proxy
  # The IP address here would be the _public_ IP address of the server.
  # Here, we use a private address as an example.
  bind 10.0.0.10:80
  # This redirect rule will redirect all traffic that is not TLS traffic
  # to the same incoming request URL on the HTTPS port.
  redirect scheme https code 308 if !{ ssl_fc }
  # Technically this use_backend directive is useless since we are simply
  # redirecting all traffic to this frontend to the HTTPS frontend. It is
  # merely included here for completeness' sake.
  use_backend default-server

# This frontend defines our primary, TLS only, listener. It is here where
# we will define the TLS certificates to expose and how to direct incoming
# requests.
frontend proxy-ssl
  # The `/etc/haproxy/certs` directory in this example contains a set of
  # certificate PEM files that are named for the domains the certificates are
  # issued for. When HAProxy starts, it will read this directory, load all of
  # the certificates it finds here, and use SNI matching to apply the correct
  # certificate to the connection.
  bind 10.0.0.10:443 ssl crt /etc/haproxy/certs

  # Here we define rule pairs to handle static resources. Any incoming request
  # that has a path starting with `/static`, e.g.
  # `https://one.example.com/static/foo.jpeg`, will be redirected to the
  # static resources server.
  acl is_static path -i -m beg /static
  use_backend static-backend if is_static

  # Here we define rule pairs to direct requests to appropriate Node.js
  # servers based on the requested domain. The `acl` line is used to match
  # the incoming hostname and define a boolean indicating if it is a match.
  # The `use_backend` line is used to direct the traffic if the boolean is
  # true.
  acl example1 hdr_sub(Host) one.example.com
  use_backend example1-backend if example1

  acl example2 hdr_sub(Host) two.example.com
  use_backend example2-backend if example2

  # Finally, we have a fallback redirect if none of the requested hosts
  # match the above rules.
  default_backend default-server

# A "backend" is used to tell HAProxy where to request information for the
# proxied request. These sections are where we will define where our Node.js
# apps live and any other servers for things like static assets.
backend default-server
  # In this example we are defaulting unmatched domain requests to a single
  # backend server for all requests. Notice that the backend server does not
  # have to be serving TLS requests. This is called "TLS termination": the TLS
  # connection is "terminated" at the reverse proxy.
  # It is possible to also proxy to backend servers that are themselves serving
  # requests over TLS, but that is outside the scope of this example.
  server server1 10.10.10.2:80

# This backend configuration will serve requests for `https://one.example.com`
# by proxying requests to three backend servers in a round-robin manner.
backend example1-backend
  server example1-1 10.10.11.2:80
  server example1-2 10.10.11.3:80
  server example1-3 10.10.11.4:80

# This one serves requests for `https://two.example.com`
backend example2-backend
  server example2-1 10.10.12.2:80
  server example2-2 10.10.12.3:80
  server example2-3 10.10.12.4:80

# This backend handles the static resources requests.
backend static-backend
  server static-server1 10.10.9.2:80
```

Nginx

```
# This upstream block groups 3 servers into one named backend fastify_app
# with 2 primary servers distributed via round-robin
# and one backup which is used when the first 2 are not reachable.
# This also assumes your fastify servers are listening on port 80.
# more info: https://nginx.org/en/docs/http/ngx_http_upstream_module.html
upstream fastify_app {
  server 10.10.11.1:80;
  server 10.10.11.2:80;
  server 10.10.11.3:80 backup;
}

# This server block asks NGINX to respond with a redirect for any
# incoming request on port 80 (typically plain HTTP), sending the
# client to the same request URL but with HTTPS as the protocol.
# This block is optional, and usually used if you are handling
# SSL termination in NGINX, like in the example here.
server {
  # default_server is a special parameter to ask NGINX
  # to set this server block to the default for this address/port
  # which in this case is any address and port 80.
  listen 80 default_server;
  listen [::]:80 default_server;

  # With a server_name directive you can also ask NGINX to
  # use this server block only with matching server name(s)
  # listen 80;
  # listen [::]:80;
  # server_name example.tld;

  # This matches all paths from the request and responds with
  # the redirect mentioned above.
  location / {
    return 301 https://$host$request_uri;
  }
}

# This server block asks NGINX to respond to requests from
# port 443 with SSL enabled and accept HTTP/2 connections.
# This is where the request is then proxied to the fastify_app
# server group defined above.
server {
  # This listen directive asks NGINX to accept requests
  # coming to any address, port 443, with SSL, and HTTP/2
  # if possible.
  listen 443 ssl http2 default_server;
  listen [::]:443 ssl http2 default_server;

  # With a server_name directive you can also ask NGINX to
  # use this server block only with matching server name(s)
  # listen 443 ssl http2;
  # listen [::]:443 ssl http2;
  # server_name example.tld;

  # Your SSL/TLS certificate (chain) and secret key in the PEM format
  ssl_certificate /path/to/fullchain.pem;
  ssl_certificate_key /path/to/private.pem;

  # A generic best practice baseline based on
  # https://ssl-config.mozilla.org/
  ssl_session_timeout 1d;
  ssl_session_cache shared:FastifyApp:10m;
  ssl_session_tickets off;

  # This tells NGINX to only accept TLS 1.3, which should be fine
  # with most modern browsers including IE 11 with certain updates.
  # If you want to support older browsers you might need to add
  # additional fallback protocols.
  ssl_protocols TLSv1.3;
  ssl_prefer_server_ciphers off;

  # This adds a header that tells browsers to only ever use HTTPS
  # with this server.
  add_header Strict-Transport-Security "max-age=63072000" always;

  # The following directives are only necessary if you want to
  # enable OCSP Stapling.
  ssl_stapling on;
  ssl_stapling_verify on;
  ssl_trusted_certificate /path/to/chain.pem;

  # Custom nameserver to resolve upstream server names
  # resolver 127.0.0.1;

  # This section matches all paths and proxies it to the backend server
  # group specified above. Note the additional headers that forward
  # information about the original request. You might want to set
  # trustProxy to the address of your NGINX server so the X-Forwarded
  # fields are used by fastify.
  location / {
    # more info: https://nginx.org/en/docs/http/ngx_http_proxy_module.html
    proxy_http_version 1.1;
    proxy_cache_bypass $http_upgrade;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;

    # This is the directive that proxies requests to the specified server.
    # If you are using an upstream group, then you do not need to specify a port.
    # If you are directly proxying to a server e.g.
    # proxy_pass http://127.0.0.1:3000 then specify a port.
    proxy_pass http://fastify_app;
  }
}
```

Kubernetes

The readinessProbe uses (by default) the pod IP as the hostname. Fastify listens on 127.0.0.1 by default. The probe will not be able to reach the application in this case. To make it work, the application must listen on 0.0.0.0 or specify a custom hostname in the readinessProbe.httpGet spec, as per the following example:

```yaml
readinessProbe:
  httpGet:
    path: /health
    port: 4000
  initialDelaySeconds: 30
  periodSeconds: 30
  timeoutSeconds: 3
  successThreshold: 1
  failureThreshold: 5
```

Capacity Planning For Production

To rightsize the production environment for your Fastify application, it is highly recommended that you perform your own measurements against different configurations of the environment, which may use real CPU cores, virtual CPU cores (vCPU), or even fractional vCPU cores. We will use the term vCPU throughout this recommendation to represent any CPU type.

Tools such as k6 or autocannon can be used for conducting the necessary performance tests.
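For instance, a quick load test with autocannon might look like the following; the URL, connection count, and duration are illustrative and should be adapted to your own setup:

```shell
# 100 concurrent connections for 30 seconds against a local instance
npx autocannon -c 100 -d 30 http://localhost:3000/
```

Comparing the reported requests per second and latency percentiles across environment configurations is what informs the sizing decision.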

That said, you may also consider the following as a rule of thumb:

  • To have the lowest possible latency, 2 vCPU are recommended per app instance (e.g., a k8s pod). The second vCPU will mostly be used by the garbage collector (GC) and libuv threadpool. This will minimize the latency for your users, as well as the memory usage, as the GC will be run more frequently. Also, the main thread won’t have to stop to let the GC run.

  • To optimize for throughput (handling the largest possible number of requests per second per available vCPU), consider using fewer vCPUs per app instance. It is totally fine to run Node.js applications with 1 vCPU.

  • You may experiment with an even smaller vCPU allocation, which may provide even better throughput in certain use cases. There are reports of API gateway solutions working well with 100m-200m vCPU in Kubernetes.
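
In Kubernetes, such fractional allocations are expressed as resource requests and limits on the container. A sketch, where the values are purely illustrative and not recommendations:

```yaml
# Illustrative only: request a fractional vCPU (200m = 0.2 vCPU)
# for an app instance optimized for per-vCPU throughput.
resources:
  requests:
    cpu: 200m
    memory: 256Mi
  limits:
    cpu: 500m
    memory: 512Mi
```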

See Node’s Event Loop From the Inside Out to understand the workings of Node.js in greater detail and make a better determination about what your specific application needs.

Running Multiple Instances

There are several use-cases where running multiple Fastify apps on the same server might be considered. A common example would be exposing metrics endpoints on a separate port, to prevent public access, when using a reverse proxy or an ingress firewall is not an option.

It is perfectly fine to spin up several Fastify instances within the same Node.js process and run them concurrently, even in high load systems. Each Fastify instance only generates as much load as the traffic it receives, plus the memory used for that Fastify instance.