Nginx 1 Web Server Implementation Cookbook

Over 100 recipes to master using the Nginx HTTP server and reverse proxy

Introduction

Nginx is most commonly deployed as a reverse proxy in front of other sites. A reverse proxy is a type of proxy server that retrieves resources for a client from one or more backend servers; these resources are then returned to the client as though they originated from the proxy server itself.

Due to its event-driven architecture and C codebase, it consumes significantly less CPU and memory than many better-known alternatives. This article deals with using Nginx as a reverse proxy in various common scenarios. We will look at how to put it in front of a Rails application, how to set up load balancing, and how to add a caching layer with Nginx, which can potentially improve the performance of an existing site without any codebase changes.

 

Using Nginx as a simple reverse proxy

Nginx in its simplest form can be used as a reverse proxy for any site, acting as an intermediary layer for security, load distribution, caching, and compression. In effect, it can potentially improve the overall experience for the end user without any change to the application source code, by distributing incoming requests across multiple backend servers and by caching both static and dynamic content.


How to do it…

You will first need to define proxy.conf, which will later be included in the main configuration of the reverse proxy that we are setting up:

proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
client_max_body_size 10m;
client_body_buffer_size 128k;
proxy_connect_timeout 90;
proxy_send_timeout 90;
proxy_read_timeout 90;
proxy_buffers 32 4k;

To use Nginx as a reverse proxy for a site running on a local port of the server, the following configuration will suffice:

server {
    listen 80;
    server_name example1.com;
    access_log /var/www/example1.com/log/nginx.access.log;
    error_log /var/www/example1.com/log/nginx_error.log debug;

    location / {
        include proxy.conf;
        proxy_pass http://127.0.0.1:8080;
    }
}
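A relative include such as the one above is resolved against the Nginx configuration prefix, typically the directory that holds nginx.conf (often /etc/nginx on packaged installs, though this depends on how Nginx was built). After placing proxy.conf there, test and reload the configuration:

# Copy proxy.conf next to the main configuration file, then verify and reload
sudo cp proxy.conf /etc/nginx/proxy.conf
sudo nginx -t
sudo nginx -s reload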

How it works…

In this recipe, Nginx simply acts as a proxy for the defined backend server, which is running on port 8080 and can be any HTTP web application. Later in this article, more advanced recipes will look at how to define multiple backend servers and how to set them up to respond to requests.
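To check that the proxy is doing its job, you can request the site through Nginx and compare it with the backend answering directly; the host name and port below are the ones used in the example above:

# Through Nginx on port 80 (the Host header must match server_name)
curl -i -H "Host: example1.com" http://127.0.0.1/

# Directly against the backend application on port 8080
curl -i http://127.0.0.1:8080/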

 

Setting up a Rails site using Nginx as a reverse proxy

In this recipe, we will set up a working Rails site and put Nginx on top of the application. This assumes that the reader has some knowledge of Rails and Thin. There are other ways of running Nginx and Rails as well, such as using Phusion Passenger.


How to do it…

This requires you to install Thin first, then configure it for your application, and finally configure Nginx.

  • If you already have RubyGems installed, the following command will install Thin; otherwise you will need to install it from source:
    sudo gem install thin
  • Now generate the Thin configuration. This will create a configuration file in the /etc/thin directory:
    sudo thin config -C /etc/thin/myapp.yml -c /var/rails/myapp
    --servers 5 -e production
  • Now you can start the Thin service. The start-up command varies depending on your operating system; a typical invocation is shown after this list.
  • Assuming that you have Nginx installed, you will need to add the following to the configuration file:

    upstream thin_cluster {
        server unix:/tmp/thin.0.sock;
        server unix:/tmp/thin.1.sock;
        server unix:/tmp/thin.2.sock;
        server unix:/tmp/thin.3.sock;
        server unix:/tmp/thin.4.sock;
    }

    server {
        listen 80;
        server_name www.example1.com;
        root /var/www.example1.com/public;

        location / {
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            # Serve static files directly; hand everything else to the Thin cluster
            try_files $uri $uri/index.html $uri.html @thin;
        }

        # Named locations must be defined at the server level
        location @thin {
            include proxy.conf;
            proxy_pass http://thin_cluster;
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
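As referenced in step 3, the exact start-up command depends on your platform. A minimal sketch, assuming the configuration file generated above, is to drive Thin directly from its own command-line interface; note that the upstream block expects Thin to listen on UNIX sockets, so the YAML file needs a socket entry (for example socket: /tmp/thin.sock) rather than a TCP port.

# Start all servers defined in the generated configuration
sudo thin start -C /etc/thin/myapp.yml

# Stop or restart them the same way
sudo thin stop -C /etc/thin/myapp.yml
sudo thin restart -C /etc/thin/myapp.yml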

How it works…

This is a fairly simple Rails stack, where we configure and run five upstream Thin servers that communicate with Nginx over UNIX socket connections.

The try_files directive ensures that Nginx serves static files directly, while all dynamic requests fall through to the Rails backend via the @thin location. You can also see how the proxy headers are set to ensure that the client IP is forwarded correctly to the Rails application. Many applications need access to the client IP to show geo-located information, and logging this IP can be useful in identifying whether geography is a factor when the site is not working properly for specific clients.

 

Setting up correct reverse proxy timeouts

In this recipe we will set up correct reverse proxy timeouts, which determine what your users experience when the backend application is unable to respond to a request.

In such cases, it is advisable to set up sensible timeout pages so that the user understands that further refreshing may only aggravate the issues on the web application.

How to do it…

You will first need to set up proxy.conf which will later be included in the configuration:

proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
client_max_body_size 10m;
client_body_buffer_size 128k;
proxy_connect_timeout 90;
proxy_send_timeout 90;
proxy_read_timeout 90;
proxy_buffers 32 4k;

Reverse proxy timeouts are fairly simple directives that we need to set in the Nginx configuration, as in the following example:

server {
    listen 80;
    server_name example1.com;
    access_log /var/www/example1.com/log/nginx.access.log;
    error_log /var/www/example1.com/log/nginx_error.log debug;

    #set your default location
    location / {
        include proxy.conf;
        proxy_read_timeout 120;
        proxy_connect_timeout 120;
        proxy_pass http://127.0.0.1:8080;
    }
}

How it works…

In the preceding configuration we have overridden two of the directives from proxy.conf at the location level and raised them to 120 seconds: proxy_connect_timeout, which limits how long Nginx waits while establishing a connection to the backend, and proxy_read_timeout, which limits how long Nginx waits between successive reads of the backend's response.
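The introduction to this recipe mentions showing the user a sensible page when the backend times out. One way to do that, as a sketch rather than part of the original configuration (the /timeout.html page and its root directory are assumptions), is to map the 504 Gateway Timeout that Nginx generates when these timeouts expire to a static page:

location / {
    include proxy.conf;
    proxy_read_timeout 120;
    proxy_connect_timeout 120;
    proxy_pass http://127.0.0.1:8080;
    # Shown when the backend fails to respond within the timeouts above
    error_page 504 /timeout.html;
}

location = /timeout.html {
    # Static "please try again later" page served by Nginx itself
    root /var/www/example1.com/static;
}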


 

Setting up caching on the reverse proxy

In a setup where Nginx acts as the layer between the client and the backend web application, caching is one of the clear benefits to be gained. In this recipe, we will look at setting up caching for any site for which Nginx is acting as a reverse proxy. Due to its extremely small footprint and modular architecture, Nginx has become quite the Swiss Army knife of the modern web stack.


How to do it…

This example configuration shows how we can use caching when utilizing Nginx as a reverse proxy web server:

http {
    proxy_cache_path /var/www/cache levels=1:2 keys_zone=my-cache:8m
                     max_size=1000m inactive=600m;
    proxy_temp_path /var/www/cache/tmp;

    server {
        listen 80;
        server_name example1.com;
        access_log /var/www/example1.com/log/nginx.access.log;
        error_log /var/www/example1.com/log/nginx_error.log debug;

        #set your default location
        location / {
            include proxy.conf;
            proxy_pass http://127.0.0.1:8080/;
            proxy_cache my-cache;
            proxy_cache_valid 200 302 60m;
            proxy_cache_valid 404 1m;
        }
    }
}

How it works…

This configuration implements a simple cache with a maximum size of 1000MB; it keeps pages with HTTP 200 and 302 responses in the cache for 60 minutes and HTTP 404 responses for 1 minute.

The initial proxy_cache_path directive sets up the on-disk cache path and the shared memory zone; the directives inside the location then enable the cache and define how long each response code is kept.

It is also possible to set up more than one cache zone and use different ones for different locations.
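To confirm that the cache is actually being used (a quick check that is not part of the original recipe), you can expose the built-in $upstream_cache_status variable as a response header in the cached location:

location / {
    include proxy.conf;
    proxy_pass http://127.0.0.1:8080/;
    proxy_cache my-cache;
    proxy_cache_valid 200 302 60m;
    proxy_cache_valid 404 1m;
    # Reports MISS, HIT, EXPIRED, and so on for each response
    add_header X-Proxy-Cache $upstream_cache_status;
}

Requesting the same cacheable URL twice, for example with curl -I, should then show X-Proxy-Cache: MISS on the first response and X-Proxy-Cache: HIT on the second.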

There’s more…

This was a relatively small demonstration of what can be achieved with the caching side of the proxy module. There are several more directives that can be really useful in optimizing the cache and making your stack faster and more efficient; a few of them are sketched below.
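As an illustration only (the values and the cookie name below are assumptions, not taken from the recipe), the following shows how some of the other standard proxy cache directives slot into the location defined above:

location / {
    include proxy.conf;
    proxy_pass http://127.0.0.1:8080/;
    proxy_cache my-cache;
    proxy_cache_valid 200 302 60m;
    proxy_cache_valid 404 1m;
    # Build the cache key explicitly (this is close to the default)
    proxy_cache_key "$scheme$host$request_uri";
    # Only cache a URL after it has been requested at least twice
    proxy_cache_min_uses 2;
    # Serve stale content if the backend errors out or times out
    proxy_cache_use_stale error timeout http_500 http_503;
    # Skip the cache for logged-in users (hypothetical cookie name)
    proxy_cache_bypass $cookie_logged_in;
}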


 
