How to Secure Nginx Web Server in a Few Simple Steps. Configure Nginx as a Safe Reverse Proxy!

Idego • Jul 17


In 2004 Igor Sysoev released NGINX, a web server written in C, an imperative language valued for its performance. It is remarkable software that can also be used as a reverse proxy, load balancer, mail proxy, and HTTP cache.

It is used by more than one third of all websites on the internet, including 38.2% of the top 10,000 websites, and its popularity grows every year. That popularity brings with it a host of security-related issues that one should address.

A popular configuration is that NGINX is an entry point between the clients and your application.

Since it will be the point of direct contact with potential threats, security measures should be taken into consideration.

After all, no matter how complicated or important the problem your application solves, if someone can break it, it's worthless.

Our configuration is simple: clients connect to NGINX, and NGINX proxies their requests to the application hidden behind it.

We will present some compelling techniques used to increase the security of an application hidden behind an NGINX web server.

Reconnaissance hindering 

One of the first steps in gaining access to a system is gathering information about it. We can make that easy for an attacker, or not.

server_tokens on | off; 

Enables or disables emitting the nginx version in error messages and in the “Server” response header field.

proxy_hide_header X-Powered-By;

The X-Powered-By header is often included by default in responses constructed by a particular scripting technology, so it tells an attacker exactly what kind of server generated the response. Beyond hiding it, you can even provide misleading information, for example a different technology than your sensitive service is actually using, to throw off attackers.
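Putting the two directives together, a minimal sketch of a reconnaissance-hardened configuration might look like the following. The backend address is an assumption, and the commented-out line showing how to replace the Server header's value (rather than just hiding the version) requires the third-party headers-more module, so it is an assumption about your build:

```nginx
http {
    # Hide the nginx version from the Server header and error pages.
    server_tokens off;

    server {
        listen 80;

        location / {
            # Strip headers that identify the backend technology.
            proxy_hide_header X-Powered-By;
            proxy_hide_header X-AspNet-Version;

            # Requires the third-party headers-more module (assumption):
            # more_set_headers "Server: Apache";

            proxy_pass http://127.0.0.1:8080;  # assumed backend address
        }
    }
}
```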

Unfortunately, it won’t be enough.

Many web servers use custom error pages for popular codes: 404, 403, etc. But what if someone is able to force our sensitive service to return something else?

curl --header 'X-Forwarded-For: TOO LOOOONG….' http://our.nginx.server

It may be that the maximum header buffer size of our backend is smaller than the buffer size of the proxy, in which case NGINX will count this request as valid and pass it down to the upstream.

For example, if our sensitive service is an Apache web server, it might return its default error page for 400 (Bad Request), containing sensitive details:

< HTTP/1.1 400 Bad Request
< Server: nginx/1.10.0 (Ubuntu)
< Date: Mon, 24 Oct 2016 18:33:31 GMT
< Content-Type: text/html; charset=iso-8859-1
< Content-Length: 389
< Connection: keep-alive

<title>400 Bad Request</title>
<h1>Bad Request</h1>
<p>Your browser sent a request that this server could not understand.<br />
Size of a request header field exceeds server limit.<br />
</p>
<address>Apache/2.4.18 (Ubuntu) Server at Port 80</address>


Solution? Ensure that the header buffer size on your proxy is smaller than on your backend, so that NGINX rejects oversized requests itself and the backend's fingerprint never leaks.

large_client_header_buffers number size;

A request header field cannot exceed the size of one buffer, or the 400 (Bad Request) error is returned to the client. By default, the buffer size is 8K bytes.
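As a sketch, if the backend accepts header fields up to its own 8K default, the proxy can be told to reject them earlier. The numbers below are assumptions you should match against your actual backend limits:

```nginx
http {
    # Reject any single request header larger than 4k at the proxy,
    # before it can reach a backend whose own limit is 8k.
    large_client_header_buffers 4 4k;
}
```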

Wait… there's more.

Misleading programs that identify operating systems

Another method is to modify the behaviour of our NGINX server's TCP/IP stack so that it behaves like a different operating system than it actually is.

This may be achieved by patching the kernel with code from the Linux IP Personality project. Reading its documentation, here's what you can change:

  • TCP Initial Sequence Number (ISN)
  • TCP initial window size
  • TCP options (their types, values and order in the packet)
  • IP ID numbers
  • answers to some pathological TCP packets
  • answers to some UDP packets

Once you manage to install the patch, you need to configure your kernel. You can do it using one of the make xconfig, make menuconfig or make config commands.

You will be prompted with an interface where you can configure the IP Personality module. After saving the configuration you should be able to compile the kernel and the modules, and finally install them.

Now you can restart your operating system with the new kernel. You will also have to download the patch for the iptables command from the Netfilter project, prepared for the IP Personality project, and compile it against the freshly downloaded module.

It actually sounds harder than it is in reality; both the IP Personality and Netfilter projects have easy, step-by-step configuration guides. What you finally end up with is the ability to use a predefined system personality in your iptables command. For example, on your Ubuntu system you can use:

iptables -t mangle -A PREROUTING -d <server-ip> -j PERS \
  --tweak dst --local --conf /etc/personalities/macos9.conf

And now, using nmap -O yourdomain, what you get is:

22/tcp  open  ssh
25/tcp  open  smtp
111/tcp open  rpcbind
Running: Apple Mac OS 9.X
OS Details: Apple Mac OS 9 – 9.1

Tuning up your nginx.conf

NGINX has some neat configuration options that can increase your security from the perspective of Public Key Infrastructure. After all, the internet is built on trust.

But what if we start to trust a sinister impostor who just pretends to be trustworthy? Here are some directives that will increase our chances of trusting an actual, valid entity.

First of all, let's generate strong Diffie-Hellman key-exchange parameters.

openssl dhparam -out /etc/nginx/ssl/dhparam.pem 4096

And put them inside your nginx.conf:

ssl_dhparam /etc/nginx/ssl/dhparam.pem;

Disable SSLv3, which is enabled by default and is now considered deprecated due to discovered vulnerabilities such as POODLE.

ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

During the SSL handshake, ensure that NGINX uses its own ciphers instead of those preferred by the client. This guards against the possibility of a protocol downgrade attack or a BEAST attack:

 ssl_prefer_server_ciphers on;

Enable the OCSP stapling mechanism for your server, which makes your server responsible for checking whether the requested domain's SSL certificate is valid and not revoked. This also improves performance during the TLS handshake, because the client does not need an extra connection to an OCSP server.

ssl_stapling on;

ssl_stapling_verify on;
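Putting these directives together, a minimal server-block sketch could look like the following. The domain, certificate paths, and resolver address are assumptions; note that OCSP stapling needs a DNS resolver, and ssl_stapling_verify needs the CA chain in ssl_trusted_certificate:

```nginx
server {
    listen 443 ssl;
    server_name example.com;  # assumed domain

    ssl_certificate         /etc/nginx/ssl/fullchain.pem;  # assumed paths
    ssl_certificate_key     /etc/nginx/ssl/privkey.pem;
    ssl_trusted_certificate /etc/nginx/ssl/chain.pem;

    ssl_dhparam /etc/nginx/ssl/dhparam.pem;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;

    # OCSP stapling needs a resolver to reach the OCSP responder.
    ssl_stapling on;
    ssl_stapling_verify on;
    resolver 8.8.8.8 valid=300s;
}
```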

Turn on the Web Application Firewall

Web application firewall (WAF), by inspecting HTTP traffic, can prevent attacks stemming from web application security flaws, such as SQL injection, cross-site scripting (XSS), file inclusion, and security misconfigurations.

It uses rules that are applied to each request based on predefined patterns; those rules are said to cover 99% of known vulnerability patterns, and they determine whether the request should be forwarded or blocked. A great way to implement this is the NAXSI (Nginx Anti XSS & SQL Injection) third-party module. You will have to build NGINX from source with NAXSI as a dynamic module.
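Once the module is built, wiring it up is a matter of a few directives. A sketch, assuming NAXSI was compiled as a dynamic module and its core rules were installed under /etc/nginx (the paths and the backend address are assumptions):

```nginx
# Load the dynamically built NAXSI module (top-level context).
load_module modules/ngx_http_naxsi_module.so;

http {
    # NAXSI's core pattern rules, shipped with the module's sources.
    include /etc/nginx/naxsi_core.rules;

    server {
        location / {
            SecRulesEnabled;             # enable NAXSI in this location
            DeniedUrl "/RequestDenied";  # where blocked requests are sent
            CheckRule "$SQL >= 8" BLOCK; # block on high SQL-injection score
            CheckRule "$XSS >= 8" BLOCK; # block on high XSS score
            proxy_pass http://127.0.0.1:8080;  # assumed backend
        }
        location /RequestDenied {
            return 403;
        }
    }
}
```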


Securing your operating system

Assuming that pre-connection attacks can be mitigated entirely is doubtful at best. We can go a step further and make it difficult for attackers to damage our system even after they have managed to gain access.

One of the ways to achieve this is by limiting the options for a mounted partition.

You can use the mount command, or create a record in /etc/fstab:

/dev/sda3 /nginx ext3 defaults,nodev,noexec,nosuid 1 2

The options that we are interested in are:

  • nodev – specifies that the filesystem cannot contain special device files.
  • noexec – binaries on the mounted filesystem cannot be executed, which blocks one route an attacker might use to switch to the root user.
  • nosuid – the filesystem cannot have binaries with the SUID bit set, which runs an executable with the permissions of its owner. Again, this might be exploited to gain unauthorized root access.

Speaking of SUID, there is a chance that your server contains files with this bit set. A poorly written SUID program can make it relatively easy to escalate a user's privileges, so it is worth rethinking whether all programs with SUID actually need it. You can find all such programs (including SGID ones) using the find command:

find / \( -perm -4000 -o -perm -2000 \) -type f -exec ls -la {} \;
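A small demonstration of what the -perm predicate matches, using a throwaway file in a temporary directory so no real binaries are involved (the file name is, of course, arbitrary):

```shell
# Create a scratch directory and a file, set the SUID bit on it,
# then locate it with the same -perm predicate used above.
tmpdir=$(mktemp -d)
touch "$tmpdir/suid-demo"
chmod 4755 "$tmpdir/suid-demo"       # 4xxx = SUID bit set
find "$tmpdir" -perm -4000 -type f   # prints the demo file's path
rm -rf "$tmpdir"                     # clean up
```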

Limiting NGINX to sandbox environment

Most UNIX-like systems provide a chroot command, which may be used to change the root directory of a process and all its child processes, thus restricting their access to the new root directory only. You have to be careful, though, and pay particular attention to the permissions of the files and libraries used.

Dynamically linked libraries must also be copied into the chroot environment for the binary to operate. It is crucial that the chroot environment contains no files with the SUID or SGID bit set, because an attacker who gains root access inside it can use them to escape.
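Preparing such a jail mostly means copying the binary and every shared library it links against. A minimal sketch on Linux, using /bin/sh as a stand-in for your nginx binary and a temporary directory as a stand-in for the jail path (both are assumptions; the actual chroot call itself requires root):

```shell
# Build a tiny chroot skeleton and copy in a binary plus its libraries.
jail=$(mktemp -d)    # stand-in for e.g. /var/nginx-jail
bin=/bin/sh          # stand-in for your nginx binary

mkdir -p "$jail/bin"
cp "$bin" "$jail/bin/"

# ldd lists the shared libraries the binary needs; copy each one,
# preserving its directory layout inside the jail.
for lib in $(ldd "$bin" | grep -o '/[^ ]*'); do
    mkdir -p "$jail$(dirname "$lib")"
    cp "$lib" "$jail$(dirname "$lib")/"
done

ls -R "$jail"        # inspect the resulting skeleton
rm -rf "$jail"       # clean up the demo
```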


Hardening security with NGINX is a task whose complexity depends largely on the forethought of the administrator. It is reasonable to tune security to fit current needs, and to adjust it according to the significance of the task the server is used for. There is nothing wrong with getting a little paranoid: these security measures exist for a reason, otherwise they would not have been developed.

Remember to keep your operating system up to date! This may not sound like much, but it may actually be the single most effective security-oriented action you can perform.

Securing our sensitive service with NGINX is not only about building trust and hindering reconnaissance; mitigating DDoS attacks through load balancing, scalability, and reliability of our infrastructure is equally important, but that's another story.
