- Hardening
- :beginner: Always keep Nginx up-to-date
- :beginner: Run as an unprivileged user
- :beginner: Disable unnecessary modules
- :beginner: Protect sensitive resources
- :beginner: Hide Nginx version number
- :beginner: Hide Nginx server signature
- :beginner: Hide upstream proxy headers
- :beginner: Force all connections over TLS
- :beginner: Use only the latest supported OpenSSL version
- :beginner: Use min. 2048-bit private keys
- :beginner: Keep only TLS 1.3 and TLS 1.2
- :beginner: Use only strong ciphers
- :beginner: Use more secure ECDH Curve
- :beginner: Use strong Key Exchange with Perfect Forward Secrecy
- :beginner: Prevent Replay Attacks on Zero Round-Trip Time
- :beginner: Defend against the BEAST attack
- :beginner: Mitigation of CRIME/BREACH attacks
- :beginner: HTTP Strict Transport Security
- :beginner: Reduce XSS risks (Content-Security-Policy)
- :beginner: Control the behaviour of the Referer header (Referrer-Policy)
- :beginner: Provide clickjacking protection (X-Frame-Options)
- :beginner: Prevent some categories of XSS attacks (X-XSS-Protection)
- :beginner: Prevent Sniff Mimetype middleware (X-Content-Type-Options)
- :beginner: Deny the use of browser features (Feature-Policy)
- :beginner: Reject unsafe HTTP methods
- :beginner: Prevent caching of sensitive data
- :beginner: Control Buffer Overflow attacks
- :beginner: Mitigating Slow HTTP DoS attacks (Closing Slow Connections)
Hardening
In this chapter I will talk about some of the Nginx hardening approaches and security standards.
:beginner: Always keep Nginx up-to-date
Rationale
Nginx is very secure and stable, but vulnerabilities in the main binary itself do pop up from time to time. That is the main reason to keep Nginx up-to-date as diligently as you can.
A very safe way to plan updates is to apply them as soon as a new stable version is released, but for me the most common way to handle Nginx updates is to wait a few weeks after the stable release.
Before updating/upgrading Nginx, remember to test the change in a testing environment first.
Most modern GNU/Linux distros will not push the latest version of Nginx into their default package lists, so you may want to consider installing it from source.
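A minimal sketch of a routine check and package-based upgrade; the commands assume a Debian/Ubuntu-like system with Nginx installed from packages, so adjust for your distribution:
## Check the currently installed version:
nginx -v
## Upgrade from packages (Debian/Ubuntu example):
apt-get update && apt-get install --only-upgrade nginx
## Always validate the configuration before reloading the new binary:
nginx -t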
:beginner: Run as an unprivileged user
Rationale
Changing the process owner name alone makes no real difference to security. However, the principle of least privilege states that an entity should be given no more permission than necessary to accomplish its goals within a given system. This way only the master process runs as root.
This is the default Nginx behaviour, but remember to check it.
Example
## Edit nginx.conf:
user nginx;
## Set the owner and group of the root (app, default) directory:
chown -R nginx:nginx /var/www/domain.com
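To verify the effect, check the process owners; only the master process should run as root (a quick sanity check, output format varies between systems):
ps -eo user,pid,comm | grep nginx
## Expected: the master process owned by root, worker processes owned by nginx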
:beginner: Disable unnecessary modules
Rationale
It is recommended to disable any modules which are not required as this will minimise the risk of any potential attacks by limiting the operations allowed by the web server.
The best way to unload unused modules is to use the `configure` options during installation. If a module is statically linked, you have to re-compile Nginx to remove it. Use only high quality modules and remember that:
Unfortunately, many third‑party modules use blocking calls, and users (and sometimes even the developers of the modules) aren’t aware of the drawbacks. Blocking operations can ruin Nginx performance and must be avoided at all costs.
Example
## 1) During installation:
./configure --without-http_autoindex_module
## 2) Comment modules in the configuration file e.g. modules.conf:
## load_module /usr/share/nginx/modules/ndk_http_module.so;
## load_module /usr/share/nginx/modules/ngx_http_auth_pam_module.so;
## load_module /usr/share/nginx/modules/ngx_http_cache_purge_module.so;
## load_module /usr/share/nginx/modules/ngx_http_dav_ext_module.so;
load_module /usr/share/nginx/modules/ngx_http_echo_module.so;
## load_module /usr/share/nginx/modules/ngx_http_fancyindex_module.so;
load_module /usr/share/nginx/modules/ngx_http_geoip_module.so;
load_module /usr/share/nginx/modules/ngx_http_headers_more_filter_module.so;
## load_module /usr/share/nginx/modules/ngx_http_image_filter_module.so;
## load_module /usr/share/nginx/modules/ngx_http_lua_module.so;
load_module /usr/share/nginx/modules/ngx_http_perl_module.so;
## load_module /usr/share/nginx/modules/ngx_mail_module.so;
## load_module /usr/share/nginx/modules/ngx_nchan_module.so;
## load_module /usr/share/nginx/modules/ngx_stream_module.so;
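To see which modules were compiled into your binary (static modules appear in the configure arguments), a simple check:
## List compile-time options and filter for modules:
nginx -V 2>&1 | tr ' ' '\n' | grep _module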
:beginner: Protect sensitive resources
Rationale
Hidden directories and files should never be web accessible; sometimes critical data is published during application deployment. If you use a version control system you should definitely drop access to critical hidden directories like `.git` or `.svn` to prevent exposing the source code of your application. Sensitive resources contain items that abusers can use to fully recreate the source code used by the site and look for bugs, vulnerabilities, and exposed passwords.
Example
if ($request_uri ~ "/\.git") {
return 403;
}
## or
location ~ /\.git {
deny all;
}
## or
location ~* ^.*(\.(?:git|svn|htaccess))$ {
return 403;
}
## or block all hidden directories/files except .well-known
location ~ /\.(?!well-known\/) {
deny all;
}
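A quick way to verify the rule works; domain.com and the path are placeholders from the examples above:
curl -I https://domain.com/.git/config
## Expected: HTTP/1.1 403 Forbidden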
:beginner: Hide Nginx version number
Rationale
Disclosing the version of Nginx running can be undesirable, particularly in environments sensitive to information disclosure.
But the “Official Apache Documentation (Apache Core Features)” (yep, it’s not a joke…) says:
Setting ServerTokens to less than minimal is not recommended because it makes it more difficult to debug interoperational problems. Also note that disabling the Server: header does nothing at all to make your server more secure. The idea of “security through obscurity” is a myth and leads to a false sense of safety.
Example
server_tokens off;
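To confirm the change, inspect the Server header before and after (domain.com is a placeholder; the version shown is just a sample):
curl -sI https://domain.com/ | grep -i "^server"
## Before: Server: nginx/1.16.0
## After:  Server: nginx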
:beginner: Hide Nginx server signature
Rationale
One of the easiest first steps to undertake is to prevent the web server from advertising the software it runs via the Server header. Certainly, there are several reasons why you would want to change the server header: security, redundant systems, load balancers, etc.
In my opinion there is no real reason or need to show this much information about your server. It is easy to look up particular vulnerabilities once you know the version number.
You should compile Nginx from source with the `ngx_headers_more` module to use the `more_set_headers` directive, or apply a nginx-remove-server-header.patch.
Example
more_set_headers "Server: Unknown";
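A sketch of building Nginx with the module as a dynamic module; the source path is an assumption for illustration:
## Build headers-more as a dynamic module:
./configure --add-dynamic-module=/usr/local/src/headers-more-nginx-module
make && make install
## Then load it, e.g. in nginx.conf:
## load_module modules/ngx_http_headers_more_filter_module.so;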
External resources
- Shhh… don’t let your response headers talk too loudly
- How to change (hide) the Nginx Server Signature?
:beginner: Hide upstream proxy headers
Rationale
Securing a server goes far beyond not showing what’s running, but I think “less is more” applies here.
When Nginx is used to proxy requests to an upstream server (such as a PHP-FPM instance), it can be beneficial to hide certain headers sent in the upstream response (e.g. the version of PHP running).
Example
proxy_hide_header X-Powered-By;
proxy_hide_header X-AspNetMvc-Version;
proxy_hide_header X-AspNet-Version;
proxy_hide_header X-Drupal-Cache;
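If the backend is reached over FastCGI rather than HTTP proxying (e.g. PHP-FPM via fastcgi_pass), the equivalent directive is fastcgi_hide_header:
fastcgi_hide_header X-Powered-By;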
:beginner: Force all connections over TLS
Rationale
TLS provides two main services: it validates, for the user, the identity of the server they are connecting to, and it protects the transmission of sensitive information between the user and the server.
In my opinion you should always use HTTPS instead of HTTP to protect your website, even if it doesn’t handle sensitive communications. The application can have many sensitive places that should be protected.
Always put login pages, registration forms, all subsequent authenticated pages, contact forms, and payment details forms behind HTTPS to prevent injection and sniffing. They must be accessed only over TLS to ensure your traffic is secure.
If a page is available over TLS, it must be composed completely of content which is transmitted over TLS. Requesting subresources using the insecure HTTP protocol weakens the security of the entire page and of the HTTPS protocol. Modern browsers should block or report all active mixed content delivered via HTTP on such pages by default.
Also remember to implement the HTTP Strict Transport Security (HSTS).
We now have the first free and open CA - Let’s Encrypt - so generating and implementing certificates has never been easier. It was created to provide free and easy-to-use TLS and SSL certificates.
Example
Force all traffic to use TLS:
server {
listen 10.240.20.2:80;
server_name domain.com;
return 301 https://$host$request_uri;
}
server {
listen 10.240.20.2:443 ssl;
server_name domain.com;
...
}
Force e.g. the login page to use TLS:
server {
listen 10.240.20.2:80;
server_name domain.com;
...
location ^~ /login {
return 301 https://domain.com$request_uri;
}
}
:beginner: Use only the latest supported OpenSSL version
Rationale
Before you start, see the Release Strategy Policies and Changelog on the OpenSSL website.
Criteria for choosing an OpenSSL version can vary; it all depends on your use case.
The latest versions of the major OpenSSL branches are (this may change):
- the next version of OpenSSL will be 3.0.0
- version 1.1.1 will be supported until 2023-09-11 (LTS)
- last minor version: 1.1.1c (May 23, 2019)
- version 1.1.0 will be supported until 2019-09-11
- last minor version: 1.1.0k (May 28, 2019)
- version 1.0.2 will be supported until 2019-12-31 (LTS)
- last minor version: 1.0.2s (May 28, 2019)
- any other versions are no longer supported
In my opinion the only safe approach is to rely on an up-to-date and still-supported version of OpenSSL. What’s more, I recommend sticking to the latest versions (e.g. 1.1.1).
If your system repositories do not have the newest OpenSSL, you can compile it from source (see the OpenSSL sub-section).
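To check what is installed system-wide and which OpenSSL version Nginx was built against:
openssl version -a
nginx -V 2>&1 | grep -i openssl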
:beginner: Use min. 2048-bit private keys
Rationale
Advisories recommend 2048-bit keys for now. Security experts project that 2048 bits will be sufficient for commercial use until around the year 2030 (as per NIST).
The latest version of FIPS-186 also says the U.S. Federal Government may generate (and use) digital signatures with 1024, 2048, or 3072-bit key lengths.
Generally there is no compelling reason to choose 4096-bit keys over 2048-bit, provided you use sane expiration intervals.
If you want to get A+ with 100% scores on SSL Labs (for Key Exchange) you should definitely use 4096-bit private keys; that is essentially the only reason to use them.
Longer keys take more time to generate and require more CPU and power when used for encrypting and decrypting; the SSL handshake at the start of each connection will also be slower. There is also a small impact on the client side (e.g. browsers).
You can test this on your server with `openssl speed rsa`, but remember: OpenSSL speed tests show differences in raw algorithm speed, while in real life most CPU time is spent on asymmetric algorithms during the SSL handshake. On the other hand, modern processors are capable of executing at least 1k RSA 1024-bit signs per second on a single core, so this isn’t usually an issue.
An alternative solution is an ECC Certificate Signing Request (CSR) - `ECDSA` certificates contain an `ECC` public key. `ECC` keys are better than `RSA & DSA` keys in that the `ECC` algorithm is harder to break.
The “SSL/TLS Deployment Best Practices” book says:
The cryptographic handshake, which is used to establish secure connections, is an operation whose cost is highly influenced by private key size. Using a key that is too short is insecure, but using a key that is too long will result in “too much” security and slow operation. For most web sites, using RSA keys stronger than 2048 bits and ECDSA keys stronger than 256 bits is a waste of CPU power and might impair user experience. Similarly, there is little benefit to increasing the strength of the ephemeral key exchange beyond 2048 bits for DHE and 256 bits for ECDHE.
Konstantin Ryabitsev (Reddit):
Generally speaking, if we ever find ourselves in a world where 2048-bit keys are no longer good enough, it won’t be because of improvements in brute-force capabilities of current computers, but because RSA will be made obsolete as a technology due to revolutionary computing advances. If that ever happens, 3072 or 4096 bits won’t make much of a difference anyway. This is why anything above 2048 bits is generally regarded as a sort of feel-good hedging theatre.
My recommendation:
Use 2048-bit keys instead of 4096-bit keys at this moment.
Example
### Example (RSA):
( _fd="domain.com.key" ; _len="2048" ; openssl genrsa -out ${_fd} ${_len} )
## Let's Encrypt:
certbot certonly -d domain.com -d www.domain.com --rsa-key-size 2048
### Example (ECC):
## _curve: prime256v1, secp521r1, secp384r1
( _fd="domain.com.key" ; _fd_csr="domain.com.csr" ; _curve="prime256v1" ; \
openssl ecparam -out ${_fd} -name ${_curve} -genkey ; \
openssl req -new -key ${_fd} -out ${_fd_csr} -sha256 )
## Let's Encrypt (from above):
certbot --csr ${_fd_csr} -[other-args]
For `x25519`:
( _fd="private.key" ; _curve="x25519" ; \
openssl genpkey -algorithm ${_curve} -out ${_fd} )
:arrow_right: ssllabs score: 100%
( _fd="domain.com.key" ; _len="2048" ; openssl genrsa -out ${_fd} ${_len} )
## Let's Encrypt:
certbot certonly -d domain.com -d www.domain.com
:arrow_right: ssllabs score: 90%
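To verify the size of a generated private key (file names taken from the examples above):
## RSA key - the first line reports the modulus size:
openssl rsa -noout -text -in domain.com.key | head -n 1
## ECC key:
openssl ec -noout -text -in domain.com.key | grep "Private-Key"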
External resources
- Key Management Guidelines by NIST
- Recommendation for Transitioning the Use of Cryptographic Algorithms and Key Lengths
- FIPS PUB 186-4 - Digital Signature Standard (DSS) [pdf]
- Cryptographic Key Length Recommendations
- So you’re making an RSA key for an HTTPS certificate. What key size do you use?
- RSA Key Sizes: 2048 or 4096 bits?
- Create a self-signed ECC certificate
:beginner: Keep only TLS 1.3 and TLS 1.2
Rationale
It is recommended to run TLS 1.2/1.3 and fully disable SSLv2, SSLv3, TLS 1.0 and TLS 1.1, which have protocol weaknesses and use older cipher suites (they do not provide any modern cipher modes).
TLS 1.0 and TLS 1.1 must not be used (see Deprecating TLSv1.0 and TLSv1.1); they were superseded by TLS 1.2, which has now itself been superseded by TLS 1.3. They are also actively being deprecated in accordance with guidance from government agencies (e.g. NIST SP 800-52r2) and industry consortia such as the Payment Card Industry Association (PCI) [PCI-TLS1].
TLS 1.2 and TLS 1.3 are both without known security issues, and only these versions provide modern cryptographic algorithms. TLS 1.3 is a new TLS version that will power a faster and more secure web for the next few years. What’s more, TLS 1.3 comes without a ton of legacy baggage (it was removed): renegotiation, compression, and many legacy algorithms such as `DSA`, `RC4`, `SHA1`, `MD5`, and `CBC` MAC-then-Encrypt ciphers. The TLS 1.0 and TLS 1.1 protocols will be removed from browsers at the beginning of 2020.
TLS 1.2 does require careful configuration to ensure obsolete cipher suites with identified vulnerabilities are not used in conjunction with it. TLS 1.3 removes the need to make these decisions, and also improves on TLS 1.2’s security, privacy and performance.
Before enabling a specific protocol version, you should check which ciphers are supported by that protocol. So if you turn on both TLS 1.2 and TLS 1.3, remember to configure correct (and strong) ciphers to handle both; otherwise, no TLS handshake will succeed.
I think the best way to deploy a secure configuration is: enable TLS 1.2 without any `CBC` ciphers (that is safe enough); TLS 1.3 alone is safer because of its handshake improvements and the exclusion of everything that became obsolete since TLS 1.2 came out. If you tell Nginx to use TLS 1.3, it will use TLS 1.3 only where it is available. Nginx has supported TLS 1.3 since version 1.13.0 (released in April 2017), when built against OpenSSL 1.1.1 or later.
For TLS 1.3, think about using `ssl_early_data` to allow TLS 1.3 0-RTT handshakes.
My recommendation:
Use only TLSv1.3 and TLSv1.2.
Example
TLS 1.3 + 1.2:
ssl_protocols TLSv1.3 TLSv1.2;
TLS 1.2:
ssl_protocols TLSv1.2;
:arrow_right: ssllabs score: 100%
TLS 1.3 + 1.2 + 1.1:
ssl_protocols TLSv1.3 TLSv1.2 TLSv1.1;
TLS 1.2 + 1.1:
ssl_protocols TLSv1.2 TLSv1.1;
:arrow_right: ssllabs score: 95%
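To verify which protocol versions the server actually negotiates, you can probe it with openssl s_client; the handshake should fail for a disabled protocol (domain.com is a placeholder, and -tls1_3 requires OpenSSL 1.1.1):
openssl s_client -connect domain.com:443 -tls1_2 < /dev/null
openssl s_client -connect domain.com:443 -tls1_3 < /dev/null
## Should fail if TLS 1.1 is disabled:
openssl s_client -connect domain.com:443 -tls1_1 < /dev/null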
External resources
- The Transport Layer Security (TLS) Protocol Version 1.2
- The Transport Layer Security (TLS) Protocol Version 1.3
- TLS1.2 - Every byte explained and reproduced
- TLS1.3 - Every byte explained and reproduced
- TLS1.3 - OpenSSLWiki
- TLS v1.2 handshake overview
- An Overview of TLS 1.3 - Faster and More Secure
- A Detailed Look at RFC 8446 (a.k.a. TLS 1.3)
- Differences between TLS 1.2 and TLS 1.3
- TLS 1.3 in a nutshell
- TLS 1.3 is here to stay
- How to enable TLS 1.3 on Nginx
- How to deploy modern TLS in 2019?
- Deploying TLS 1.3: the great, the good and the bad
- Downgrade Attack on TLS 1.3 and Vulnerabilities in Major TLS Libraries
- Phase two of our TLS 1.0 and 1.1 deprecation plan
- Deprecating TLS 1.0 and 1.1 - Enhancing Security for Everyone
- TLS/SSL Explained – Examples of a TLS Vulnerability and Attack, Final Part
- This POODLE bites: exploiting the SSL 3.0 fallback
- Are You Ready for 30 June 2018? Saying Goodbye to SSL/early TLS
- Deprecating TLSv1.0 and TLSv1.1
:beginner: Use only strong ciphers
Rationale
This parameter changes quite often; the recommended configuration for today may be out of date tomorrow.
To check the ciphers supported by OpenSSL on your server: `openssl ciphers -s -v`, `openssl ciphers -s -v ECDHE` or `openssl ciphers -s -v DHE`.
For more security use only strong, non-vulnerable cipher suites. Place `ECDHE` and `DHE` suites at the top of your list. The order is important: because `ECDHE` suites are faster, you want to use them whenever clients support them. Ephemeral `DHE/ECDHE` are recommended and support Perfect Forward Secrecy.
For backward compatibility with older software components you have to use less restrictive ciphers. Not only do you have to enable at least one special `AES128` cipher for HTTP/2 support (per RFC 7540: TLS 1.2 Cipher Suites), you also have to allow `prime256` elliptic curves, which reduces the score for key exchange by another 10% even if a secure server-preferred order is set.
Also, modern cipher suites (e.g. from the Mozilla recommendations) suffer from compatibility troubles, mainly because they drop `SHA-1`. Be careful if you want to use ciphers with `HMAC-SHA-1` - there’s a perfectly good explanation why.
If you want to get A+ with 100% scores on SSL Labs (for Cipher Strength) you should definitely disable `128-bit` ciphers; that is the main reason not to use them.
In my opinion `128-bit` symmetric encryption is no less secure. Moreover, these ciphers are about 30% faster and still secure. For example, TLS 1.3 uses `TLS_AES_128_GCM_SHA256 (0x1301)` (for TLS-compliant applications).
It is not possible to control the TLS 1.3 cipher suites without client support for the new API for TLS 1.3 ciphers. Nginx isn’t able to influence that, so at this moment they are always on (even if you disable a potentially weak cipher in Nginx). On the other hand, the ciphers in TLSv1.3 have been restricted to only a handful of completely secure ciphers by leading crypto experts.
For TLS 1.2 you should consider disabling weak ciphers without forward secrecy, like ciphers based on the `CBC` algorithm. Using them also reduces the final grade because they don’t use ephemeral keys. In my opinion you should use ciphers with `AEAD` encryption (TLS 1.3 supports only these suites) because they don’t have any known weaknesses.
Recently, new vulnerabilities like Zombie POODLE, GOLDENDOODLE, 0-Length OpenSSL and Sleeping POODLE were published for websites that use `CBC` (Cipher Block Chaining) block cipher modes. These vulnerabilities are applicable only if the server uses TLS 1.2, TLS 1.1 or TLS 1.0 with `CBC` cipher modes. Look at the Zombie POODLE, GOLDENDOODLE, & How TLSv1.3 Can Save Us All presentation from Black Hat Asia 2019.
Disable cipher suites that use RSA key exchange (all ciphers that start with `TLS_RSA_WITH_*`) because they are vulnerable to the ROBOT attack. Not all servers that support RSA key exchange are vulnerable, but it is recommended to disable RSA key exchange ciphers anyway, as they do not provide forward secrecy.
You should also absolutely disable weak ciphers regardless of the TLS version you use: those with `DSS`, `DSA`, `DES/3DES`, `RC4`, `MD5`, `SHA1`, `null` or anon in the name.
We have a nice online tool for testing the compatibility of cipher suites with user agents: CryptCheck. I think it will be very helpful for you.
My recommendation:
Use only TLSv1.3 and TLSv1.2 with the cipher suites below:
ssl_ciphers "TLS13-CHACHA20-POLY1305-SHA256:TLS13-AES-256-GCM-SHA384:TLS13-AES-128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES128-GCM-SHA256";
Example
Cipher suites for TLS 1.3:
ssl_ciphers "TLS13-CHACHA20-POLY1305-SHA256:TLS13-AES-256-GCM-SHA384";
Cipher suites for TLS 1.2:
ssl_ciphers "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES256-SHA384";
:arrow_right: ssllabs score: 100%
Cipher suites for TLS 1.3:
ssl_ciphers "TLS13-CHACHA20-POLY1305-SHA256:TLS13-AES-256-GCM-SHA384:TLS13-AES-128-GCM-SHA256";
Cipher suites for TLS 1.2:
## 1)
ssl_ciphers "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES256-SHA384";
## 2)
ssl_ciphers "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES128-GCM-SHA256";
## 3)
ssl_ciphers "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256";
## 4)
ssl_ciphers "EECDH+CHACHA20:EDH+AESGCM:AES256+EECDH:AES256+EDH";
Cipher suites for TLS 1.1 + 1.2:
## 1)
ssl_ciphers "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES128-GCM-SHA256";
## 2)
ssl_ciphers "ECDHE-ECDSA-CHACHA20-POLY1305:ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:!AES256-GCM-SHA256:!AES256-GCM-SHA128:!aNULL:!MD5";
:arrow_right: ssllabs score: 90%
This will also give a baseline for comparison with Mozilla SSL Configuration Generator:
- Modern profile with OpenSSL 1.1.0b (TLSv1.2)
ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256';
- Intermediate profile with OpenSSL 1.1.0b (TLSv1, TLSv1.1 and TLSv1.2)
ssl_ciphers 'ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS';
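To see exactly which suites a given cipher string expands to on your OpenSSL build (using one of the TLS 1.2 strings from above), run it through openssl ciphers:
openssl ciphers -v 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES256-SHA384'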
External resources
- RFC 7525 - TLS Recommendations
- TLS Cipher Suites
- SSL/TLS: How to choose your cipher suite
- HTTP/2 and ECDSA Cipher Suites
- Which SSL/TLS Protocol Versions and Cipher Suites Should I Use?
- Recommendations for a cipher string by OWASP
- Recommendations for TLS/SSL Cipher Hardening by Acunetix
- Mozilla’s Modern compatibility suite
- Why use Ephemeral Diffie-Hellman
- Cipher Suite Breakdown
- Zombie POODLE and GOLDENDOODLE Vulnerabilities
- OpenSSL IANA Mapping
- Goodbye TLS_RSA
:beginner: Use more secure ECDH Curve
Rationale
In my opinion your main source of knowledge should be The SafeCurves web site. This site reports security assessments of various specific curves.
For an SSL server certificate, an “elliptic curve” certificate will be used only with digital signatures (the `ECDSA` algorithm). Nginx provides the `ssl_ecdh_curve` directive to specify a curve for `ECDHE` ciphers.
`x25519` is a more secure option (it also meets the SafeCurves requirements) but is slightly less compatible. To maximise interoperability with existing browsers and servers, stick to the `P-256` (`prime256v1`) and `P-384` (`secp384r1`) curves. Of course there are tons of different opinions about the `P-256` and `P-384` curves.
NSA Suite B says that the NSA uses the curves `P-256` and `P-384` (in OpenSSL, they are designated as, respectively, `prime256v1` and `secp384r1`). There is nothing wrong with `P-521`, except that it is, in practice, useless. Arguably, `P-384` is also useless, because the more efficient `P-256` curve already provides security that cannot be broken through accumulation of computing power.
Bernstein and Lange believe that the NIST curves are not optimal and that there are better (more secure) curves that work just as fast, e.g. `x25519`.
Keep an eye also on this:
Secure implementations of the standard curves are theoretically possible but very hard.
The SafeCurves say:
`NIST P-224`, `NIST P-256` and `NIST P-384` are UNSAFE.
Of the curves described here, only `x25519` meets all SafeCurves requirements.
I think you can use `P-256` to minimise trouble. If you feel that your manhood is threatened by using a 256-bit curve where a 384-bit curve is available, then use `P-384`: it will increase your computational and network costs.
If you use TLS 1.3 you should enable the `prime256v1` signature algorithm. Without this, SSL Labs reports the `TLS_AES_128_GCM_SHA256 (0x1301)` signature as weak.
If you do not set `ssl_ecdh_curve`, then Nginx will use its default settings, e.g. Chrome will prefer `x25519`, but this is not recommended because you cannot control the default settings (which seem to be `P-256`) from Nginx.
Explicitly setting `ssl_ecdh_curve X25519:prime256v1:secp521r1:secp384r1;` decreases the Key Exchange score on SSL Labs.
Definitely do not use the `secp112r1`, `secp112r2`, `secp128r1`, `secp128r2`, `secp160k1`, `secp160r1`, `secp160r2` or `secp192k1` curves. They have too small a size for security applications according to the NIST recommendation.
My recommendation:
Use only TLSv1.3 and TLSv1.2 and only strong ciphers, with the following curves:
ssl_ecdh_curve X25519:secp521r1:secp384r1:prime256v1;
Example
Curves for TLS 1.2:
ssl_ecdh_curve secp521r1:secp384r1:prime256v1;
:arrow_right: ssllabs score: 100%
## Alternative (this one doesn’t affect compatibility, by the way; it’s just a question of the preferred order).
## This setup downgrades the Key Exchange score but is recommended for TLS 1.2 + 1.3:
ssl_ecdh_curve X25519:secp521r1:secp384r1:prime256v1;
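To check which curve a connection actually uses, restrict the client side with the -curves option of openssl s_client (OpenSSL 1.1.x; domain.com is a placeholder) and look for the "Server Temp Key" line:
openssl s_client -connect domain.com:443 -curves X25519 < /dev/null | grep "Temp Key"
openssl s_client -connect domain.com:443 -curves prime256v1 < /dev/null | grep "Temp Key"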
External resources
- Elliptic Curves for Security
- Standards for Efficient Cryptography Group
- SafeCurves: choosing safe curves for elliptic-curve cryptography
- A note on high-security general-purpose elliptic curves
- P-521 is pretty nice prime
- Safe ECC curves for HTTPS are coming sooner than you think
- Cryptographic Key Length Recommendations
- Testing for Weak SSL/TLS Ciphers, Insufficient Transport Layer Protection (OTG-CRYPST-001)
- Elliptic Curve performance: NIST vs Brainpool
- Which elliptic curve should I use?
- Elliptic Curve Cryptography for those who are afraid of maths
:beginner: Use strong Key Exchange with Perfect Forward Secrecy
Rationale
To use signature-based authentication you need some kind of DH exchange (fixed or ephemeral/temporary) to exchange the session key. If you use it, Nginx will use the default Ephemeral Diffie-Hellman (`DHE`) parameters to define how to perform the Diffie-Hellman (DH) key exchange. This uses a weak key (by default: `1024 bit`) that gets lower scores.
You should always use the Elliptic Curve Diffie-Hellman Ephemeral (`ECDHE`). Due to increasing concern about pervasive surveillance, key exchanges that provide Forward Secrecy are recommended; see for example RFC 7525.
For greater compatibility, but still with security in the key exchange, you should prefer the last E (ephemeral) over the first EC (elliptic curve). The recommended order is: `ECDHE` > `DHE` (with min. `2048 bit` size) > `ECDH`. With this, if the initial handshake fails, another handshake will be initiated using `DHE`.
`DHE` is slower than `ECDHE`. If you are concerned about performance, prioritize `ECDHE-ECDSA` over `DHE`. OWASP estimates that the TLS handshake with `DHE` hinders the CPU by a factor of 2.4 compared to `ECDHE`.
Diffie-Hellman requires some set-up parameters to begin with. Parameters from `ssl_dhparam` (which are generated with `openssl dhparam ...`) define how OpenSSL performs the Diffie-Hellman (DH) key exchange. They include a field prime `p` and a generator `g`. The point of being able to customise these parameters is to allow everyone to use their own; this can be used to protect against the Logjam attack.
Modern clients prefer `ECDHE` over the other variants, and if your Nginx accepts this preference then the handshake will not use the DH parameters at all, since it will do an `ECDHE` key exchange rather than a `DHE` one. Thus, if no plain `DH/DHE` ciphers are configured on your server but only elliptic curve DH (e.g. `ECDHE`), you don’t need to set your own `ssl_dhparam` directive. Enabling `DHE` requires us to take care of our DH primes (a.k.a. `dhparams`) and to trust in `DHE`.
Elliptic curve Diffie-Hellman is a modified Diffie-Hellman exchange which uses elliptic curve cryptography instead of the traditional RSA-style large primes. Since `ECDH` is based on curves, not primes, the traditional DH parameters will not do you any good.
Cipher suites using `DHE` key exchange in OpenSSL require `tmp_DH` parameters, which the `ssl_dhparam` directive provides. The same is true for `DH_anon` key exchange, but in practice nobody uses those. The OpenSSL wiki page for Diffie-Hellman Parameters says: to use perfect forward secrecy cipher suites, you must set up Diffie-Hellman parameters (on the server side). Look also at SSL_CTX_set_tmp_dh_callback.
If you use `ECDH/ECDHE` key exchange, please see the Use more secure ECDH Curve rule.
The default key size in OpenSSL is `1024 bits` - it’s vulnerable and breakable. For the best security configuration use your own DH group (min. `2048 bit`) or use the known-safe pre-defined DH groups (recommended) from Mozilla.
`2048 bit` is generally expected to be safe and is already very far into the “cannot break it” zone. However, years ago people expected 1024 bit to be safe, so if you are after long-term resistance you would go up to `4096 bit` (for both RSA keys and DH parameters). It’s also important if you want to get 100% on the Key Exchange portion of the SSL Labs test.
You should remember that a `4096 bit` modulus will make DH computations slower and won’t actually improve security.
There is a good explanation about the recommended size of DH parameters:
Current recommendations from various bodies (including NIST) call for a `2048-bit` modulus for DH. Known DH-breaking algorithms would have a cost so ludicrously high that they could not be run to completion with known Earth-based technology. See this site for pointers on that subject.
You don’t want to overdo the size because the computational usage cost rises relatively sharply with prime size (somewhere between quadratic and cubic, depending on some implementation details) but a `2048-bit` DH ought to be fine (a basic low-end PC can do several hundreds of `2048-bit` DH per second).
Look also at this answer by Matt Palmer:
Indeed, characterising `2048 bit` DH parameters as “weak as hell” is quite misleading. There are no known feasible cryptographic attacks against arbitrary strong 2048-bit DH groups. To protect against future disclosure of a session key due to breaking DH, sure, you want your DH parameters to be as long as is practical, but since `1024 bit` DH is only just getting feasible, `2048 bits` should be OK for most purposes for a while yet.
My recommendation:
If you use only TLS 1.3, `ssl_dhparam` is not required (not used). Likewise, if you use `ECDHE/ECDH`, `ssl_dhparam` is not required (not used). If you use `DHE/DH`, `ssl_dhparam` with DH parameters is required (min. `2048 bit`). By default no parameters are set, and therefore `DHE` ciphers will not be used.
Example
## To generate DH parameters:
openssl dhparam -out /etc/nginx/ssl/dhparam_4096.pem 4096
## To produce "DSA-like" DH parameters:
openssl dhparam -dsaparam -out /etc/nginx/ssl/dhparam_4096.pem 4096
## Nginx configuration only for DH/DHE:
ssl_dhparam /etc/nginx/ssl/dhparam_4096.pem;
:arrow_right: ssllabs score: 100%
## To generate DH parameters:
openssl dhparam -out /etc/nginx/ssl/dhparam_2048.pem 2048
## To produce "DSA-like" DH parameters:
openssl dhparam -dsaparam -out /etc/nginx/ssl/dhparam_2048.pem 2048
## Nginx configuration only for DH/DHE:
ssl_dhparam /etc/nginx/ssl/dhparam_2048.pem;
:arrow_right: ssllabs score: 90%
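Instead of generating your own group, you can use one of the pre-defined DH groups recommended above; the URL below is where the Mozilla generator publishes its ffdhe2048 group (RFC 7919) at the time of writing:
## Download the pre-defined ffdhe2048 group:
curl -o /etc/nginx/ssl/ffdhe2048.pem https://ssl-config.mozilla.org/ffdhe2048.txt
## And point Nginx at it:
ssl_dhparam /etc/nginx/ssl/ffdhe2048.pem;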
External resources
- Weak Diffie-Hellman and the Logjam Attack
- Guide to Deploying Diffie-Hellman for TLS
- Pre-defined DHE groups
- Instructs OpenSSL to produce “DSA-like” DH parameters
- OpenSSL generate different types of self signed certificate
- Public Diffie-Hellman Parameter Service/Tool
- Vincent Bernat’s SSL/TLS & Perfect Forward Secrecy
- RSA and ECDSA performance
- SSL/TLS: How to choose your cipher suite
- Diffie-Hellman and its TLS/SSL usage
:beginner: Prevent Replay Attacks on Zero Round-Trip Time
Rationale
This rule is only important for TLS 1.3. By default, enabling TLS 1.3 will not enable 0-RTT support. Above all, you should be fully aware of all the potential exposure factors and related risks before you use this option.
0-RTT handshakes are part of the replacement for TLS Session Resumption and were inspired by the QUIC protocol.
0-RTT creates a significant security risk. With 0-RTT, a threat actor can intercept an encrypted client message and resend it to the server, tricking the server into improperly extending trust to the threat actor and thus potentially granting the threat actor access to sensitive data.
On the other hand, including 0-RTT (Zero Round Trip Time Resumption) results in a significant increase in efficiency and connection times. TLS 1.3 has a faster handshake that completes in 1-RTT. Additionally, it has a particular session resumption mode where, under certain conditions, it is possible to send data to the server on the first flight (0-RTT).
For example, Cloudflare only supports 0-RTT for GET requests with no query parameters, in an attempt to limit the attack surface. Moreover, in order to help identify connection resumption attempts, they relay this information to the origin by adding an extra header to 0-RTT requests. This header uniquely identifies the request, so if one gets repeated, the origin will know it’s a replay attack (the application needs to track values received from it and reject duplicates on non-idempotent endpoints).
To protect against such attacks at the application layer, the `$ssl_early_data` variable should be used. You’ll also need to ensure that the `Early-Data` header is passed to your application. `$ssl_early_data` returns 1 if TLS 1.3 early data is used and the handshake is not complete. However, as part of the upgrade, you should disable 0-RTT until you can audit your application for this class of vulnerability.
In order to send early data, client and server must support PSK exchange mode (session tickets).
In addition, I would like to recommend this great discussion about TLS 1.3 and 0-RTT.
If you are unsure whether to enable 0-RTT, look at what Cloudflare says about it:
Generally speaking, 0-RTT is safe for most web sites and applications. If your web application does strange things and you’re concerned about its replay safety, consider not using 0-RTT until you can be certain that there are no negative effects. […] TLS 1.3 is a big step forward for web performance and security. By combining TLS 1.3 with 0-RTT, the performance gains are even more dramatic.
Example
Test 0-RTT with OpenSSL:
## 1)
_host="example.com"
cat > req.in << __EOF__
HEAD / HTTP/1.1
Host: $_host
Connection: close
__EOF__
## or:
## echo -e "GET / HTTP/1.1\r\nHost: $_host\r\nConnection: close\r\n\r\n" > req.in
openssl s_client -connect ${_host}:443 -tls1_3 -sess_out session.pem -ign_eof < req.in
openssl s_client -connect ${_host}:443 -tls1_3 -sess_in session.pem -early_data req.in
## 2)
python -m sslyze --early_data "$_host"
Enable 0-RTT with the `$ssl_early_data` variable:
server {
...
ssl_protocols TLSv1.2 TLSv1.3;
## To enable 0-RTT (TLS 1.3):
ssl_early_data on;
location / {
proxy_pass http://backend_x20;
## Protect against replay attacks at the application layer:
proxy_set_header Early-Data $ssl_early_data;
}
...
}
External resources
- Security Review of TLS1.3 0-RTT
- Introducing Zero Round Trip Time Resumption (0-RTT)
- What Application Developers Need To Know About TLS Early Data (0RTT)
- Replay Attacks on Zero Round-Trip Time: The Case of the TLS 1.3 Handshake Candidates
- 0-RTT and Anti-Replay
- Using Early Data in HTTP (2017)
- Using Early Data in HTTP (2018)
- 0-RTT Handshakes
:beginner: Defend against the BEAST attack
Rationale
Generally the BEAST attack relies on a weakness in the way `CBC` mode is used in SSL/TLS. More specifically, to successfully perform the BEAST attack, there are some conditions which need to be met:
- a vulnerable version of SSL must be used, with a block cipher (`CBC` in particular)
- JavaScript or a Java applet injection must be possible, in the same origin as the web site
- data sniffing of the network connection must be possible
To prevent possible BEAST attacks you should enable server-side protection, which causes the server’s ciphers to be preferred over the client’s, and completely exclude TLS 1.0 from your protocol stack.
Example
ssl_prefer_server_ciphers on;
External resources
- An Illustrated Guide to the BEAST Attack
- Is BEAST still a threat?
- Beat the BEAST with TLS 1.1/1.2 and More
- Use only strong ciphers (from this handbook)
:beginner: Mitigation of CRIME/BREACH attacks
Rationale
Disable HTTP compression, or compress only content that contains zero sensitive data.
You should probably never use TLS compression. Some user agents (at least Chrome) will disable it anyway. Disabling SSL/TLS compression stops the attack very effectively. A deployment of HTTP/2 over TLS 1.2 must disable TLS compression (please see RFC 7540: 9.2. Use of TLS Features).
CRIME exploits SSL/TLS compression, which has been disabled since nginx 1.3.2. BREACH exploits HTTP compression.
Some attacks are possible (e.g. the real BREACH attack is a complicated one) because gzip (HTTP compression, not TLS compression) is enabled on SSL requests. In most cases, the best action is to simply disable gzip for SSL.
Compression is not the only requirement for the attack, so using it does not mean that the attack will succeed. Generally you should consider whether an accidental performance drop on HTTPS sites is better than HTTPS sites being accidentally vulnerable.
You shouldn’t use HTTP compression on private responses when using TLS.
I would prioritise security over performance, but compression can be (I think) okay for publicly available static content like CSS or JS, and for HTML content with zero sensitive info (like an “About Us” page).
Remember: by default, Nginx doesn’t compress image files using its per-request gzip module.
The gzip static module is better, for two reasons:
- you don’t have to gzip for each request
- you can use a higher gzip level
You should put `gzip_static on;` inside the blocks that configure static files, but if you’re only running one site, it’s safe to just put it in the http block.
Example
## Disable dynamic HTTP compression:
gzip off;
## Enable dynamic HTTP compression for specific location context:
location / {
gzip on;
...
}
## Enable static gzip compression:
location ^~ /assets/ {
gzip_static on;
...
}
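gzip_static serves pre-compressed .gz files from disk, so they have to exist next to the originals; a sketch of pre-compressing assets (the paths are examples, and -k requires GNU gzip 1.6+):
## Create foo.css.gz next to foo.css, keeping the originals:
gzip -k -9 /var/www/domain.com/assets/*.css /var/www/domain.com/assets/*.js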
External resources
- Is HTTP compression safe?
- HTTP compression continues to put encrypted communications at risk
- SSL/TLS attacks: Part 2 – CRIME Attack
- Defending against the BREACH Attack
- To avoid BREACH, can we use gzip on non-token responses?
- Don’t Worry About BREACH
- Module ngx_http_gzip_static_module
- Offline Compression with Nginx
:beginner: HTTP Strict Transport Security
Rationale
Generally HSTS is a way for websites to tell browsers that the connection should only ever be encrypted. This prevents MITM attacks, downgrade attacks, sending plain text cookies and session ids.
The header indicates for how long a browser should unconditionally refuse to take part in unsecured HTTP connection for a specific domain.
When a browser knows that a domain has enabled HSTS, it does two things:
- always uses an `https://` connection, even when clicking on an `http://` link or after typing a domain into the location bar without specifying a protocol
- removes the ability for users to click through warnings about invalid certificates
I recommend setting the `max-age` to a big value like `31536000` (12 months) or `63072000` (24 months).
The strongest protection is to ensure that all requested resources use only TLS with a well-formed HSTS header. Qualys recommends providing an HSTS header on all HTTPS resources in the target domain
It is advisable to assign the max-age directive a value greater than `10368000` seconds (120 days) and ideally `31536000` (one year). Websites should aim to ramp the max-age value up to ensure heightened security for a long duration for the current domain and/or subdomains.
RFC 6797, section 14.4 advocates that a web application must aim to add the `includeSubDomains` directive in the policy definition whenever possible. The directive’s presence ensures the HSTS policy is applied to the domain of the issuing host and all of its subdomains, e.g. `example.com` and `www.example.com`.
The application should never send an HSTS header over a plaintext HTTP connection, as doing so makes the connection vulnerable to SSL stripping attacks.
It is not recommended to provide an HSTS policy via the http-equiv attribute of a meta tag. According to HSTS RFC 6797, user agents don’t heed the `http-equiv="Strict-Transport-Security"` attribute on `<meta>` elements in the received content.
To meet the HSTS preload list standard, a root domain needs to return a `Strict-Transport-Security` header that includes both the `includeSubDomains` and `preload` directives and has a minimum `max-age` of one year. Your site must also serve a valid SSL certificate on the root domain and all subdomains, as well as redirect all HTTP requests to HTTPS on the same host.
You had better be pretty sure that your website is indeed all HTTPS before you turn this on, because HSTS adds complexity to your rollback strategy. Google recommends enabling HSTS this way:
- Roll out your HTTPS pages without HSTS first
- Start sending HSTS headers with a short `max-age` (see the sketch below the example). Monitor your traffic both from users and other clients, and also dependents’ performance, such as ads
- Slowly increase the HSTS `max-age`
- If HSTS doesn’t affect your users and search engines negatively, you can, if you wish, ask for your site to be added to the HSTS preload list used by most major browsers
Example
add_header Strict-Transport-Security "max-age=63072000; includeSubdomains" always;
:arrow_right: ssllabs score: A+
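Following the staged rollout advice above, a first deployment might use a deliberately short max-age before ramping up; one day is just an example value:
add_header Strict-Transport-Security "max-age=86400" always;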
External resources
- Strict-Transport-Security
- Security HTTP Headers - Strict-Transport-Security
- HTTP Strict Transport Security
- HTTP Strict Transport Security Cheat Sheet
- HSTS Cheat Sheet
- HSTS Preload and Subdomains
- HTTP Strict Transport Security (HSTS) and Nginx
- Is HSTS as a proper substitute for HTTP-to-HTTPS redirects?
- How to configure HSTS on www and other subdomains
- HSTS: Is includeSubDomains on main domain sufficient?
- The HSTS preload list eligibility
- Check HSTS preload status and eligibility
- HSTS Deployment Recommendations
- How does HSTS handle mixed content?
:beginner: Reduce XSS risks (Content-Security-Policy)
Rationale
CSP reduces the risk and impact of XSS attacks in modern browsers.
Whitelisting known-good resource origins, refusing to execute potentially dangerous inline scripts, and banning the use of eval are all effective mechanisms for mitigating cross-site scripting attacks.
The inclusion of CSP policies significantly impedes successful XSS attacks, UI Redressing (Clickjacking), malicious use of frames or CSS injections.
CSP is a good defence-in-depth measure to make exploitation of an accidental lapse in that less likely.
The default policy that starts building a header is: block everything. By modifying the CSP value, the programmer loosens restrictions for specific groups of resources (e.g. separately for scripts, images, etc.).
Before enabling this header you should discuss it with your developers. They will probably have to update the application to remove any inline scripts and styles, and make some additional modifications there.
Strict policies will significantly increase security, and higher code quality will reduce the overall number of errors. CSP can never replace secure code - new restrictions help reduce the effects of attacks (such as XSS), but they are not mechanisms to prevent them!
You should always validate your CSP before implementing it: CSP Evaluator and Content Security Policy (CSP) Validator.
To generate a policy: https://report-uri.com/home/generate. Remember, however, that these types of tools may become outdated or have errors.
Example
## This policy allows images, scripts, AJAX, and CSS from the same origin, and does not allow any other resources to load:
add_header Content-Security-Policy "default-src 'none'; script-src 'self'; connect-src 'self'; img-src 'self'; style-src 'self';" always;
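Before enforcing a policy, you can deploy it in report-only mode so the browser logs violations without blocking anything; the /csp-report endpoint is a placeholder you would have to implement:
add_header Content-Security-Policy-Report-Only "default-src 'none'; script-src 'self'; connect-src 'self'; img-src 'self'; style-src 'self'; report-uri /csp-report;" always;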
External resources
- Content Security Policy (CSP) Quick Reference Guide
- Content Security Policy Cheat Sheet – OWASP
- Content Security Policy – OWASP
- Content Security Policy - An Introduction - Scott Helme
- CSP Cheat Sheet - Scott Helme
- Security HTTP Headers - Content-Security-Policy
- CSP Evaluator
- Content Security Policy (CSP) Validator
- Can I Use CSP
- CSP Is Dead, Long Live CSP!
:beginner: Control the behaviour of the Referer header (Referrer-Policy)
Rationale
Determines what referrer information (sent via the Referer header) is included with requests.
Example
add_header Referrer-Policy "no-referrer";
:beginner: Provide clickjacking protection (X-Frame-Options)
Rationale
Helps to protect your visitors against clickjacking attacks. It is recommended that you use the `X-Frame-Options` header on pages which should not be allowed to be rendered inside a frame.
Example
add_header X-Frame-Options "SAMEORIGIN" always;
External resources
- HTTP Header Field X-Frame-Options
- Clickjacking Defense Cheat Sheet
- Security HTTP Headers - X-Frame-Options
- X-Frame-Options - Scott Helme
:beginner: Prevent some categories of XSS attacks (X-XSS-Protection)
Rationale
Enable the cross-site scripting (XSS) filter built into modern web browsers.
Example
add_header X-XSS-Protection "1; mode=block" always;
External resources
- XSS (Cross Site Scripting) Prevention Cheat Sheet
- DOM based XSS Prevention Cheat Sheet
- X-XSS-Protection HTTP Header
- Security HTTP Headers - X-XSS-Protection
:beginner: Prevent Sniff Mimetype middleware (X-Content-Type-Options)
Rationale
It prevents the browser from doing MIME-type sniffing (prevents MIME-based attacks).
Example
add_header X-Content-Type-Options "nosniff" always;
External resources
- X-Content-Type-Options HTTP Header
- Security HTTP Headers - X-Content-Type-Options
- X-Content-Type-Options - Scott Helme
:beginner: Deny the use of browser features (Feature-Policy)
Rationale
This header protects your site from third parties using APIs that have security and privacy implications, and also from your own team adding outdated APIs or poorly optimised images.
Example
add_header Feature-Policy "geolocation 'none'; midi 'none'; notifications 'none'; push 'none'; sync-xhr 'none'; microphone 'none'; camera 'none'; magnetometer 'none'; gyroscope 'none'; speaker 'none'; vibrate 'none'; fullscreen 'none'; payment 'none'; usb 'none';";
:beginner: Reject unsafe HTTP methods
Rationale
This concerns the set of methods supported by a resource. An ordinary web server supports the `HEAD`, `GET` and `POST` methods to retrieve static and dynamic content. Other methods (e.g. `OPTIONS`, `TRACE`) should not be supported on public web servers, as they increase the attack surface.
Example
add_header Allow "GET, POST, HEAD" always;
if ($request_method !~ ^(GET|POST|HEAD)$) {
return 405;
}
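A quick check that a disallowed method is rejected (TRACE used as an example; domain.com is a placeholder):
curl -sI -X TRACE https://domain.com/ | head -n 1
## Expected: HTTP/1.1 405 Not Allowed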
:beginner: Prevent caching of sensitive data
Rationale
This policy should be implemented by the application architect, however, I know from experience that this does not always happen.
Don’t cache or persist sensitive data. As browsers have different default behaviours for caching HTTPS content, pages containing sensitive information should include a `Cache-Control` header to ensure that the contents are not cached.
One option is to add anti-caching headers to relevant HTTP/1.1 and HTTP/2 responses, e.g. `Cache-Control: no-cache, no-store` and `Expires: 0`.
To cover the various browser implementations, the full set of headers to prevent content being cached is:
Cache-Control: no-cache, no-store, private, must-revalidate, max-age=0, no-transform
Pragma: no-cache
Expires: 0
Example
location /api {
expires 0;
add_header Cache-Control "no-cache, no-store";
}
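To emit the full set of anti-caching headers listed above from Nginx, a sketch (adjust the location to your sensitive endpoints):
location /api {
  add_header Cache-Control "no-cache, no-store, private, must-revalidate, max-age=0, no-transform" always;
  add_header Pragma "no-cache" always;
  add_header Expires "0" always;
}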
External resources
- RFC 2616 - Hypertext Transfer Protocol (HTTP/1.1): Standards Track
- RFC 7234 - Hypertext Transfer Protocol (HTTP/1.1): Caching
- HTTP Cache Headers - A Complete Guide
- Caching best practices & max-age gotchas
- Increasing Application Performance with HTTP Cache Headers
- HTTP Caching
:beginner: Control Buffer Overflow attacks
Rationale
Buffer overflow attacks are made possible by writing data to a buffer and exceeding that buffer’s boundary, overwriting memory fragments of the process. To prevent this in Nginx we can set buffer size limitations for all clients.
Example
client_body_buffer_size 100k;
client_header_buffer_size 1k;
client_max_body_size 100k;
large_client_header_buffers 2 1k;
:beginner: Mitigating Slow HTTP DoS attacks (Closing Slow Connections)
Rationale
Close connections that are writing data too infrequently, which can represent an attempt to keep connections open as long as possible (thus reducing the server’s ability to accept new connections).
Example
client_body_timeout 10s;
client_header_timeout 10s;
keepalive_timeout 5s 5s;
send_timeout 10s;
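A related directive worth considering alongside the timeouts above (an addition, not part of the original example): it tells Nginx to reset timed-out connections, immediately freeing the memory associated with them:
reset_timedout_connection on;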