

NGINX’s architecture

NGINX consists of a single master process and multiple worker processes. Each of these is single-threaded and designed to handle thousands of connections simultaneously. The worker process is where most of the action takes place, as this is the component that handles client requests. NGINX makes use of the operating system’s event mechanism to respond quickly to these requests.

The NGINX master process is responsible for reading the configuration, handling sockets, spawning workers, opening log files, and compiling embedded Perl scripts. The master process is the one that responds to administrative requests via signals.

The NGINX worker process runs in a tight event loop to handle incoming connections. Each NGINX module is built into the worker, so that any request processing, filtering, handling of proxy connections, and much more is done within the worker process. Due to this worker model, the operating system can handle each process separately and schedule the processes to run optimally on each processor core. If there are any processes that would block a worker, such as disk I/O, more workers than cores can be configured to handle the load.

There are also a small number of helper processes that the NGINX master process spawns to handle dedicated tasks. Among these are the cache loader and cache manager processes. The cache loader is responsible for preparing the metadata for worker processes to use the cache. The cache manager process is responsible for checking cache items and expiring invalid ones.

NGINX is built in a modular fashion. The master process provides the foundation upon which each module may perform its function. Each protocol and handler is implemented as its own module. The individual modules are chained together into a pipeline to handle connections and process requests. After a request is handled, it is then passed on to a series of filters, in which the response is processed. One of these filters is responsible for processing subrequests, one of NGINX’s most powerful features.

Subrequests are how NGINX can return the results of a request that differs from the URI that the client sent. Depending on the configuration, they may be multiply nested and call other subrequests. Filters can collect the responses from multiple subrequests and combine them into one response to the client. The response is then finalized and sent to the client. Along the way, multiple modules come into play. See http://www.aosabook.org/en/nginx.html for a detailed explanation of NGINX internals.

We will be exploring the http module and a few helper modules in the remainder of this article.

The HTTP core module

The http module is NGINX’s central module, which handles all interactions with clients over HTTP. We will have a look at the directives in the rest of this section, again divided by type.

The server

The server directive starts a new context. We have already seen examples of its usage throughout the book so far. One aspect that has not yet been examined in-depth is the concept of a default server.

A default server in NGINX means that it is the first server defined in a particular configuration with the same listen IP address and port as another server. A default server may also be denoted by the default_server parameter to the listen directive.
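As a sketch (the names and the documentation-range address are hypothetical), the default_server parameter makes the choice explicit instead of relying on definition order:

```nginx
# this server answers any request on 192.0.2.1:80 whose Host header
# matches no other server_name, regardless of where it appears in the file
server {
    listen 192.0.2.1:80 default_server;
    server_name catchall.example.com;
}

server {
    listen 192.0.2.1:80;
    server_name www.example.com;
}
```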

The default server is useful to define a set of common directives that will then be reused for subsequent servers listening on the same IP address and port:

server {
    listen 127.0.0.1:80;
    server_name default.example.com;
    server_name_in_redirect on;
}

server {
    listen 127.0.0.1:80;
    server_name www.example.com;
}


In this example, the www.example.com server will have the server_name_in_redirect directive set to on, just as the default.example.com server does. Note that this configuration would also work if both servers had no listen directive, since they would still both match the same IP address and port number (that of the default value for listen, which is *:80). Inheritance, though, is not guaranteed: only a few directives are inherited, and which ones they are changes over time.

A better use for the default server is to handle any request that comes in on that IP address and port, and does not have a Host header. If you do not want the default server to handle requests without a Host header, it is possible to define an empty server_name directive. This server will then match those requests.

server {
    server_name "";
}


The following table summarizes the directives relating to server:

Table: HTTP server directives

Directive Explanation
port_in_redirect Determines whether or not the port will be specified in a redirect issued by NGINX.
server Creates a new configuration context, defining a virtual host. The listen directive specifies the IP address(es) and port(s); the server_name directive lists the Host header values that this context matches.
server_name Configures the names that a virtual host may respond to.
server_name_in_redirect Activates using the first value of the server_name directive in any redirect issued by NGINX within this context.
server_tokens Disables sending the NGINX version string in error messages and the Server response header (default value is on).

Logging

NGINX has a very flexible logging model. Each level of configuration may have an access log. In addition, more than one access log may be specified per level, each with a different log_format. The log_format directive allows you to specify exactly what will be logged, and needs to be defined within the http section.

The path to the log file itself may contain variables, so that you can build a dynamic configuration. The following example describes how this can be put into practice:

http {
    log_format vhost '$host $remote_addr - $remote_user [$time_local] '
                     '"$request" $status $body_bytes_sent '
                     '"$http_referer" "$http_user_agent"';

    log_format downloads '$time_iso8601 $host $remote_addr '
                         '"$request" $status $body_bytes_sent $request_time';

    open_log_file_cache max=1000 inactive=60s;
    access_log logs/access.log;

    server {
        server_name ~^(www\.)?(.+)$;
        access_log logs/combined.log vhost;
        access_log logs/$2/access.log;

        location /downloads {
            access_log logs/downloads.log downloads;
        }
    }
}


The following table describes the directives used in the preceding code:

Table: HTTP logging directives

Directive Explanation
access_log Describes where and how access logs are to be written. The first parameter is a path to the file where the logs are to be stored. Variables may be used in constructing the path. The special value off disables the access log. An optional second parameter indicates the log_format that will be used to write the logs. If no second parameter is configured, the predefined combined format is used. An optional third parameter indicates the size of the buffer if write buffering should be used to record the logs. If write buffering is used, this size cannot exceed the size of the atomic disk write for that filesystem. If this third parameter is gzip, then the buffered logs will be compressed on-the-fly, provided that the nginx binary was built with the zlib library. A final flush parameter indicates the maximum length of time buffered log data may remain in memory before being flushed to disk.
log_format Specifies which fields should appear in the log file and what format they should take. See the next table for a description of the log-specific variables.
log_not_found Disables reporting of 404 errors in the error log (default value is on).
log_subrequest Enables logging of subrequests in the access log (default value is off).
open_log_file_cache Stores a cache of open file descriptors for access_log files whose paths contain variables. The parameters used are:

  • max: The maximum number of file descriptors present in the cache
  • inactive: NGINX will wait this amount of time for something to be written to this log before its file descriptor is closed
  • min_uses: The file descriptor has to be used this amount of times within the inactive period in order to remain open
  • valid: NGINX will check this often to see if the file descriptor still matches a file with the same name
  • off: Disables the cache
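Putting those parameters together, a hypothetical cache tuned for many per-vhost log files might look like this (all values here are illustrative, not recommendations):

```nginx
# keep at most 2000 descriptors open; close one after 20s without a
# write, unless it was used at least twice in that window; re-check
# every 60s that the descriptor still matches a file of the same name
open_log_file_cache max=2000 inactive=20s min_uses=2 valid=60s;
```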

In the following example, log entries will be compressed at a gzip level of 4. The buffer size is the default of 64 KB and will be flushed to disk at least every minute.

access_log /var/log/nginx/access.log.gz combined gzip=4 flush=1m;


Note that when specifying gzip, the log_format parameter is not optional. The default combined log_format is constructed like this:

log_format combined '$remote_addr - $remote_user [$time_local] '
                    '"$request" $status $body_bytes_sent '
                    '"$http_referer" "$http_user_agent"';


As you can see, line breaks may be used to improve readability. They do not affect the log_format itself. Any variables may be used in the log_format directive. The variables in the following table which are marked with an asterisk (*) are specific to logging and may only be used in the log_format directive. The others may be used elsewhere in the configuration as well.

Table: Log format variables

Variable Name Value
$body_bytes_sent The number of bytes sent to the client, excluding the response header.
$bytes_sent The number of bytes sent to the client.
$connection A serial number, used to identify unique connections.
$connection_requests The number of requests made through a particular connection.
$msec The time in seconds, with millisecond resolution.
$pipe * Indicates if the request was pipelined (p) or not (.).
$request_length * The length of the request, including the HTTP method, URI, HTTP protocol, header, and request body.
$request_time The request processing time, with millisecond resolution, from the first byte received from the client to the last byte sent to the client.
$status The response status.
$time_iso8601 * Local time in ISO8601 format.
$time_local * Local time in common log format (%d/%b/%Y:%H:%M:%S %z).
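To illustrate, a hypothetical timing-oriented log_format could combine several of the variables above (the format name and field order are arbitrary):

```nginx
# one line per request: client, total processing time, pipelining flag,
# connection serial number and request count, and bytes on the wire
log_format timing '$remote_addr $request_time $pipe '
                  '$connection $connection_requests $bytes_sent';
```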

In this section, we have focused solely on access_log and how that can be configured. You can also configure NGINX to log errors.
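Error logging is configured separately, with the error_log directive; a minimal sketch (the path shown is an assumption) looks like this:

```nginx
# record messages of severity warn and above; recognized levels range
# from debug (most verbose) through crit, alert, and emerg
error_log logs/error.log warn;
```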

Finding files

In order for NGINX to respond to a request, it passes it to a content handler, determined by the configuration of the location directive. The unconditional content handlers are tried first: perl, proxy_pass, flv, mp4, and so on. If none of these is a match, the request is passed to one of the following, in order: random index, index, autoindex, gzip_static, static. Requests with a trailing slash are handled by one of the index handlers. If gzip is not activated, then the static module handles the request.

How these modules find the appropriate file or directory on the filesystem is determined by a combination of certain directives. The root directive is best defined in a default server directive, or at least outside of a specific location directive, so that it will be valid for the whole server:

server {
    root /home/customer/html;

    location / {
        index index.html index.htm;
    }

    location /downloads {
        autoindex on;
    }
}


In the preceding example, any files to be served are found under the root /home/customer/html. If the client entered just the domain name, NGINX will try to serve index.html. If that file does not exist, then NGINX will serve index.htm. When a user enters the /downloads URI in their browser, they will be presented with a directory listing in HTML format. This makes it easy for users to access sites hosting software that they would like to download. NGINX will automatically rewrite the URI of a directory so that the trailing slash is present, and then issue an HTTP redirect.

NGINX appends the URI to the root to find the file to deliver to the client. If this file does not exist, the client receives a 404 Not Found error message. If you don't want the error message to be returned to the client, one alternative is to try to deliver a file from different filesystem locations, falling back to a generic page if none of those options are available. The try_files directive can be used as follows:

location / {
    try_files $uri $uri/ backups/$uri /generic-not-found.html;
}


As a security precaution, NGINX can check the path to a file it’s about to deliver, and if part of the path to the file contains a symbolic link, it returns an error message to the client:

server {
    root /home/customer/html;
    disable_symlinks if_not_owner from=$document_root;
}


In the preceding example, NGINX will return a “Permission Denied” error if a symlink is found after /home/customer/html, and that symlink and the file it points to do not both belong to the same user ID.

The following table summarizes these directives:

Table: HTTP file-path directives

Directive Explanation
disable_symlinks Determines if NGINX should perform a symbolic link check on the path to a file before delivering it to the client. The following parameters are recognized:

  • off : Disables checking for symlinks (default)
  • on: If any part of a path is a symlink, access is denied
  • if_not_owner: If any part of a path contains a symlink in which the link and the referent have different owners, access to the file is denied
  • from=part: When specified, the path up to part is not checked for symlinks, everything afterward is according to either the on or if_not_owner parameter
root Sets the path to the document root. Files are found by appending the URI to the value of this directive.
try_files Tests the existence of files given as parameters. If none of the previous files are found, the last entry is used as a fallback, so ensure that this path or named location exists, or is set to return a status code indicated by =<status code>.
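As a brief sketch of that last form, the fallback can be a status code rather than a file:

```nginx
# if neither the file nor a directory of that name exists,
# answer 404 directly instead of serving a fallback page
location / {
    try_files $uri $uri/ =404;
}
```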

Name resolution

If logical names instead of IP addresses are used in an upstream or *_pass directive, NGINX will by default use the operating system's resolver to get the IP address, which is what it really needs to connect to that server. This will happen only once, the first time the upstream is requested, and won't work at all if a variable is used in the *_pass directive. It is possible, though, to configure a separate resolver for NGINX to use. By doing this, you can override the TTL returned by DNS, as well as use variables in the *_pass directives.

server {
    resolver 192.168.100.2 valid=300s;
}


Table: Name resolution directives

Directive Explanation
resolver Configures one or more name servers to be used to resolve upstream server names into IP addresses. An optional valid parameter overrides the TTL of the domain name record.

In order to get NGINX to resolve an IP address anew, place the logical name into a variable. When NGINX resolves that variable, it implicitly makes a DNS look-up to find the IP address. For this to work, a resolver directive must be configured:

server {
    resolver 192.168.100.2;

    location / {
        set $backend upstream.example.com;
        proxy_pass http://$backend;
    }
}


Of course, by relying on DNS to find an upstream, you are dependent on the resolver always being available. When the resolver is not reachable, a gateway error occurs. In order to make the client wait time as short as possible, the resolver_timeout parameter should be set low. The gateway error can then be handled by an error_page designed for that purpose.

server {
    resolver 192.168.100.2;
    resolver_timeout 3s;
    error_page 504 /gateway-timeout.html;

    location / {
        proxy_pass http://upstream.example.com;
    }
}


Client interaction

There are a number of ways in which NGINX can interact with clients. This can range from attributes of the connection itself (IP address, timeouts, keepalive, and so on) to content negotiation headers. The directives listed in the following table describe how to set various headers and response codes to get the clients to request the correct page or serve up that page from its own cache:

Table: HTTP client interaction directives

Directive Explanation
default_type Sets the default MIME type of a response. This comes into play if the MIME type of the file cannot be matched to one of those specified by the types directive.
error_page Defines a URI to be served when an error level response code is encountered. Adding an = parameter allows the response code to be changed. If the argument to this parameter is left empty, the response code will be taken from the URI, which must in this case be served by an upstream server of some sort.
etag Disables automatically generating the ETag response header for static resources (default is on).
if_modified_since Controls how the modification time of a response is compared to the value of the If-Modified-Since request header:

  • off: The If-Modified-Since header is ignored
  • exact: An exact match is made (default)
  • before: The modification time of the response is less than or equal to the value of the If-Modified-Since header
ignore_invalid_headers Disables ignoring headers with invalid names (default is on). A valid name is composed of ASCII letters, numbers, the hyphen, and possibly the underscore (controlled by the underscores_in_headers directive).
merge_slashes Disables the removal of multiple slashes. The default value of on means that NGINX will compress two or more / characters into one.
recursive_error_pages Enables doing more than one redirect using the error_page directive (default is off).
types Sets up a map of MIME types to file name extensions. NGINX ships with a conf/mime.types file that contains most MIME type mappings. Using include to load this file should be sufficient for most purposes.
underscores_in_headers Enables the use of the underscore character in client request headers. If left at the default value off , evaluation of such headers is subject to the value of the ignore_invalid_headers directive.
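For example, a minimal, hypothetical types map (in practice, including the shipped conf/mime.types file is preferable) might look like this:

```nginx
# map a few extensions to MIME types; anything unmatched falls back
# to the default_type value
types {
    text/html html htm;
    image/png png;
}
default_type application/octet-stream;
```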

The error_page directive is one of NGINX’s most flexible. Using this directive, we may serve any page when an error condition presents. This page could be on the local machine, but could also be a dynamic page produced by an application server, and could even be a page on a completely different site.

http {
    # a generic error page to handle any server-level errors
    error_page 500 501 502 503 504 share/examples/nginx/50x.html;

    server {
        server_name www.example.com;
        root /home/customer/html;

        # for any files not found, the page located at
        # /home/customer/html/404.html will be delivered
        error_page 404 /404.html;

        location / {
            # any server-level errors for this host will be directed
            # to a custom application handler
            error_page 500 501 502 503 504 = @error_handler;
        }

        location /microsite {
            # for any non-existent files under the /microsite URI,
            # the client will be shown a foreign page
            error_page 404 http://microsite.example.com/404.html;
        }

        # the named location containing the custom error handler
        location @error_handler {
            # we set the default type here to ensure the browser
            # displays the error page correctly
            default_type text/html;
            proxy_pass http://127.0.0.1:8080;
        }
    }
}


Using limits to prevent abuse

We build and host websites because we want users to visit them. We want our websites to always be available for legitimate access. This means that we may have to take measures to limit access to abusive users. We may define "abusive" to mean anything from one request per second to a number of connections from the same IP address. Abuse can also take the form of a DDoS (distributed denial-of-service) attack, where bots running on multiple machines around the world all try to access the site as many times as possible at the same time. In this section, we will explore methods to counter each type of abuse to ensure that our websites are available.

First, let’s take a look at the different configuration directives that will help us achieve our goal:

Table: HTTP limits directives

Directive Explanation
limit_conn Specifies a shared memory zone (configured with limit_conn_zone) and the maximum number of connections that are allowed per key value.
limit_conn_log_level When NGINX limits a connection due to the limit_conn directive, this directive specifies at which log level that limitation is reported.
limit_conn_zone Specifies the key to be limited in limit_conn as the first parameter. The second parameter, zone, indicates the name of the shared memory zone used to store the key and current number of connections per key and the size of that zone (name:size).
limit_rate Limits the rate (in bytes per second) at which clients can download content. The rate limit works on a connection level, meaning that a single client could increase their throughput by opening multiple connections.
limit_rate_after Starts the limit_rate after this number of bytes have been transferred.
limit_req Sets a limit with bursting capability on the number of requests for a specific key in a shared memory store (configured with limit_req_zone). The burst can be specified with the second parameter. If there shouldn’t be a delay in between requests up to the burst, a third parameter nodelay needs to be configured.
limit_req_log_level When NGINX limits the number of requests due to the limit_req directive, this directive specifies at which log level that limitation is reported. A delay is logged at a level one less than the one indicated here.
limit_req_zone Specifies the key to be limited in limit_req as the first parameter. The second parameter, zone, indicates the name of the shared memory zone used to store the key and current number of requests per key and the size of that zone ( name:size). The third parameter, rate, configures the number of requests per second (r/s) or per minute (r/m) before the limit is imposed.
max_ranges Sets the maximum number of ranges allowed in a byte-range request. Specifying 0 disables byte-range support.
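As a sketch of the last directive in the table, byte-range support can be capped or disabled per location (the location name is hypothetical):

```nginx
# allow at most one range per request against large archives;
# specifying 0 instead would disable byte-range support entirely
location /archives {
    max_ranges 1;
}
```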

Here we limit access to 10 connections per unique IP address. This should be enough for normal browsing, as modern browsers open two to three connections per host. Keep in mind, though, that any users behind a proxy will all appear to come from the same address. So observe the logs for error code 503 (Service Unavailable), meaning that this limit has come into effect:

http {
    limit_conn_zone $binary_remote_addr zone=connections:10m;
    limit_conn_log_level notice;

    server {
        limit_conn connections 10;
    }
}


Limiting access based on a rate looks almost the same, but works a bit differently. When limiting how many pages per unit of time a user may request, NGINX will insert a delay after the first page request, up to a burst. This may or may not be what you want, so NGINX offers the possibility to remove this delay with the nodelay parameter:

http {
    limit_req_zone $binary_remote_addr zone=requests:10m rate=1r/s;
    limit_req_log_level warn;

    server {
        limit_req zone=requests burst=10 nodelay;
    }
}


Using $binary_remote_addr

We use the $binary_remote_addr variable in the preceding example to keep the per-key state as small as possible. The state stored for each key takes 32 bytes on 32-bit platforms and 64 bytes on 64-bit platforms, so the 10m zone we configured previously is capable of holding up to 320,000 states on 32-bit platforms or 160,000 states on 64-bit platforms.

We can also limit the bandwidth per client. This way we can ensure that a few clients don’t take up all the available bandwidth. One caveat, though: the limit_rate directive works on a connection basis. A single client that is allowed to open multiple connections will still be able to get around this limit:

location /downloads {
    limit_rate 500k;
}


Alternatively, we can allow a kind of bursting to freely download smaller files, but make sure that larger ones are limited:

location /downloads {
    limit_rate_after 1m;
    limit_rate 500k;
}


Combining these different rate limitations enables us to create a configuration that is very flexible as to how and where clients are limited:

http {
    limit_conn_zone $binary_remote_addr zone=ips:10m;
    limit_conn_zone $server_name zone=servers:10m;
    limit_conn_zone $binary_remote_addr zone=downloads:10m;
    limit_req_zone $binary_remote_addr zone=requests:10m rate=1r/s;
    limit_conn_log_level notice;
    limit_req_log_level warn;
    reset_timedout_connection on;

    server {
        # these limits apply to the whole virtual server
        limit_conn ips 10;

        # only 1000 simultaneous connections to the same server_name
        limit_conn servers 1000;

        location /search {
            # here we want only the /search URL to be rate-limited
            limit_req zone=requests burst=3 nodelay;
        }

        location /downloads {
            # using limit_conn to ensure that each client is
            # bandwidth-limited, with no getting around it
            limit_conn downloads 1;
            limit_rate_after 1m;
            limit_rate 500k;
        }
    }
}


Restricting access

In the previous section, we explored ways to limit abusive access to websites running under NGINX. Now we will take a look at ways to restrict access to a whole website or certain parts of it. Access restriction can take two forms here: restricting to a certain set of IP addresses, or restricting to a certain set of users. These two methods can also be combined to satisfy requirements that some users can access the website either from a certain set of IP addresses or if they are able to authenticate with a valid username and password.

The following directives will help us achieve these goals:

Table: HTTP access module directives

Directive Explanation
allow Allows access from this IP address, network, or all.
auth_basic Enables authentication using HTTP Basic Authentication. The parameter string is used as the realm name. If the special value off is used, this indicates that the auth_basic value of the parent configuration level is negated.
auth_basic_user_file Indicates the location of a file of username:password:comment tuples used to authenticate users. The password field needs to be encrypted with the crypt algorithm. The comment field is optional.
deny Denies access from this IP address, network, or all.
satisfy Allows access if all or any of the preceding directives grant access. The default value all indicates that a user must come from a specific network address and enter the correct password.

To restrict access to clients coming from a certain set of IP addresses, the allow and deny directives can be used as follows:

location /stats {
    allow 127.0.0.1;
    deny all;
}


This configuration will allow access to the /stats URI from the localhost only.

To restrict access to authenticated users, the auth_basic and auth_basic_user_file directives are used as follows:

server {
    server_name restricted.example.com;
    auth_basic "restricted";
    auth_basic_user_file conf/htpasswd;
}


Any user wanting to access restricted.example.com would need to provide credentials matching those in the htpasswd file located in the conf directory of NGINX’s root. The entries in the htpasswd file can be generated using any available tool that uses the standard UNIX crypt() function. For example, the following Ruby script will generate a file of the appropriate format:

#!/usr/bin/env ruby

# set up the command-line options
require 'optparse'

OptionParser.new do |o|
  o.on('-f FILE') { |file| $file = file }
  o.on('-u', '--username USER') { |u| $user = u }
  o.on('-p', '--password PASS') { |p| $pass = p }
  o.on('-c', '--comment COMM (optional)') { |c| $comm = c }
  o.on('-h') { puts o; exit }
  o.parse!
  if $user.nil? or $pass.nil?
    puts o
    exit
  end
end

# initialize an array of ASCII characters to be used for the salt
ascii = ('a'..'z').to_a + ('A'..'Z').to_a + ('0'..'9').to_a + [".", "/"]

$lines = []

begin
  # read in the current http auth file
  File.open($file) do |f|
    f.each_line { |l| $lines << l }
  end
rescue Errno::ENOENT
  # if the file doesn't exist (first use), start with an empty list
  $lines = []
end

# remove any existing entry for this user, since it is the one we're editing
$lines.reject! { |line| line =~ /^#{$user}:/ }

# generate a crypt()ed password with a random two-character salt
pass = $pass.crypt(ascii[rand(64)] + ascii[rand(64)])

# append the new entry, with the comment field if one was given
if $comm
  $lines << "#{$user}:#{pass}:#{$comm}\n"
else
  $lines << "#{$user}:#{pass}\n"
end

# write out the new file, truncating and creating it as necessary
File.open($file, File::RDWR | File::CREAT | File::TRUNC) do |f|
  $lines.each { |l| f << l }
end


Save this file as http_auth_basic.rb and give it a filename (-f), a username (-u), and a password (-p), and it will generate entries appropriate for use in NGINX's auth_basic_user_file directive:

$ ./http_auth_basic.rb -f htpasswd -u testuser -p 123456


To handle scenarios where a username and password should only be entered if not coming from a certain set of IP addresses, NGINX has the satisfy directive. The any parameter is used here for this either/or scenario:

server {
    server_name intranet.example.com;

    location / {
        auth_basic "intranet: please login";
        auth_basic_user_file conf/htpasswd-intranet;
        allow 192.168.40.0/24;
        allow 192.168.50.0/24;
        deny all;
        satisfy any;
    }
}


If, instead, the requirements are for a configuration in which the user must come from a certain IP address and provide authentication, the all parameter is the default. So, we omit the satisfy directive itself and include only allow, deny, auth_basic, and auth_basic_user_file:

server {
    server_name stage.example.com;

    location / {
        auth_basic "staging server";
        auth_basic_user_file conf/htpasswd-stage;
        allow 192.168.40.0/24;
        allow 192.168.50.0/24;
        deny all;
    }
}


Streaming media files

NGINX is capable of serving certain video media types. The flv and mp4 modules, included in the base distribution, can perform what is called pseudo-streaming. This means that NGINX will seek to a certain location in the video file, as indicated by the start request parameter.

In order to use the pseudo-streaming capabilities, the corresponding module needs to be included at compile time: --with-http_flv_module for Flash Video (FLV) files and/or --with-http_mp4_module for H.264/AAC files. The following directives will then become available for configuration:

Table: HTTP streaming directives

Directive Explanation
flv Activates the flv module for this location.
mp4 Activates the mp4 module for this location.
mp4_buffer_size Sets the initial buffer size for delivering MP4 files.
mp4_max_buffer_size Sets the maximum size of the buffer used to process MP4 metadata.

Activating FLV pseudo-streaming for a location is as simple as just including the flv keyword:

location /videos {
    flv;
}


There are more options for MP4 pseudo-streaming, as the H.264 format includes metadata that needs to be parsed. Seeking is available once the “moov atom” has been parsed by the player. So to optimize performance, ensure that the metadata is at the beginning of the file. If an error message such as the following shows up in the logs, the mp4_max_buffer_size needs to be increased:

mp4 moov atom is too large


mp4_max_buffer_size can be increased as follows:

location /videos {
    mp4;
    mp4_buffer_size 1m;
    mp4_max_buffer_size 20m;
}


Predefined variables

NGINX makes constructing configurations based on the values of variables easy. Not only can you instantiate your own variables by using the set or map directives, but there are also predefined variables used within NGINX. They are optimized for quick evaluation and the values are cached for the lifetime of a request. You can use any of them as a key in an if statement, or pass them on to a proxy. A number of them may prove useful if you define your own log file format. If you try to redefine any of them, though, you will get an error message as follows:

<timestamp> [emerg] <master pid>#0: the duplicate "<variable_name>" variable in <path-to-configuration-file>:<line-number>


They are also not made for macro expansion in the configuration—they are mostly used at run time.
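As a short illustration (the variable name introduced with set here is hypothetical), predefined variables can feed your own variables or be passed on to a proxy:

```nginx
# $host and $remote_addr are predefined; $client_tag is our own,
# built from them with the set directive
server {
    location / {
        set $client_tag "$host-$remote_addr";
        proxy_set_header X-Client-Tag $client_tag;
        proxy_pass http://127.0.0.1:8080;
    }
}
```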

Summary

In this article, we have explored a number of directives used to make NGINX serve files over HTTP. Not only does the http module provide this functionality, but there are also a number of helper modules that are essential to the normal operation of NGINX. These helper modules are enabled by default. Combining the directives of these various modules enables us to build a configuration that meets our needs. We explored how NGINX finds files based on the URI requested. We examined how different directives control how the HTTP server interacts with the client, and how the error_page directive can be used to serve a number of needs. Limiting access based on bandwidth usage, request rate, and number of connections is all possible.

We saw, too, how we can restrict access based on either IP address or through requiring authentication. We explored how to use NGINX’s logging capabilities to capture just the information we want. Pseudo-streaming was examined briefly, as well. NGINX provides us with a number of variables that we can use to construct our configurations.
