How to Optimize Nginx for High Traffic

Managing a website is all fun and games until, before you know it, you have thousands of visitors a day, which is why you need to make sure your website keeps up. You need to optimize Nginx for high traffic, especially for a WordPress installation. There are plenty of ways to achieve this so that your users are happy with your website delivering content instantly.

Now, if you haven’t already, read the official Nginx Tuning Documentation. You can probably get good results just following that.

Working with High Traffic Websites

Before you dive into this guide and start copy-pasting configurations to “optimize” your server, first assess its current situation. If your server is handling 2,000–3,000 requests/second, it is time to start thinking about multiple servers and load balancing. Depending on what you are serving, you can squeeze more out of a single machine, but at those rates you are serving something important (or at least high-traffic), so you want redundancy in addition to the ability to absorb momentary load spikes.
You should seriously start considering a load balancing infrastructure.

If you are using Apache, you are better off using our similar guide for optimizing Apache.


About

NGINX is well known as a high‑performance load balancer, cache, and web server, powering over 40% of the busiest websites in the world. For most use cases, default NGINX and Linux settings work well, but achieving optimal performance sometimes requires a bit of tuning. This blog post discusses some of the NGINX and Linux settings to consider when tuning a system.

A basic understanding of the NGINX architecture and configuration concepts is assumed. This post does not attempt to duplicate the NGINX documentation, but provides an overview of the various options and links to the relevant documentation.

TCP vs Unix sockets

For better performance, you can switch from TCP sockets to Unix domain sockets when the backend runs on the same host.

upstream backend {
    # Unix domain socket
    server unix:/var/run/php56-fpm.sock;

    # TCP socket (alternative)
    # server 127.0.0.1:8080;
}

That said, depending on the cloud instance’s I/O speed, TCP sockets can sometimes scale better than Unix sockets, so benchmark both options before committing.
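If you serve PHP (for example a WordPress install) through the upstream above, you would reference it from a location block with fastcgi_pass. A minimal sketch (the server block, root path, and index shown here are illustrative assumptions, not from this guide):

```nginx
server {
    listen 80;
    root /var/www/html;
    index index.php;

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        # Route PHP requests to the "backend" upstream defined above
        fastcgi_pass backend;
    }
}
```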

Adjusting worker_processes

The Nginx master and worker process architecture is explained as follows: “nginx has one master process and several worker processes. The main purpose of the master process is to read and evaluate configuration, and maintain worker processes. Worker processes do actual processing of requests. nginx employs event-based model and OS-dependent mechanisms to efficiently distribute requests among worker processes.”

In other words, worker_processes tells Nginx how many worker processes to spawn, typically one per CPU core, so that it can handle concurrent requests efficiently. The default Nginx configuration path is: /etc/nginx/nginx.conf

To find out how many processors you have in your web server, run the following command.

grep processor /proc/cpuinfo | wc -l

In this case, say the output is 1, indicating a single-core CPU.

It is common practice to run one worker process per core, so here you should set worker_processes to 1 in the Nginx configuration file for the best performance. On recent Nginx versions you can also set worker_processes auto; to have Nginx detect the core count automatically.

worker_processes 1;

Adjusting worker_rlimit_nofile

worker_rlimit_nofile 10240;

Changes the limit on the maximum number of open files (RLIMIT_NOFILE) for worker processes. Used to increase the limit without restarting the main process.
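Putting these together, the top of your nginx.conf (the main context) might look like the following sketch; the values are the illustrative ones used in this guide, not universal recommendations:

```nginx
# Main (top-level) context of /etc/nginx/nginx.conf
worker_processes auto;       # or an explicit count, e.g. 1 per CPU core
worker_rlimit_nofile 10240;  # raise the per-worker open-file limit
```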

Events Handling (under events section)

  1. use epoll;
    epoll is the most efficient connection-processing method on Linux 2.6+; see the Nginx epoll documentation.
  2. multi_accept on; (default: off)
    If multi_accept is disabled, a worker process will accept one new connection at a time. Otherwise, a worker process will accept all new connections immediately.
  3. worker_connections 4096; (default is: 512)
    The maximum number of connections that each worker process can handle simultaneously. Increase this number if you need to. Testing required to determine optimal value.
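Combined, the events settings above would look like this in nginx.conf:

```nginx
events {
    use epoll;                # efficient event method on Linux
    multi_accept on;          # accept all pending connections at once
    worker_connections 4096;  # per-worker connection ceiling
}
```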

Timeouts (all of these go under the ‘http’ section)

  1. keepalive_timeout 10; (default is: 75 seconds)
    This should be as close to your average response time as possible. Set it higher and you waste server resources, potentially affecting performance significantly. Set it lower and you fail to use keep-alives on most of your requests, slowing down the client. We assume that on a fast system, most requests return in under ~5 seconds.
  2. keepalive_requests 1024; (default is: 100)
  3. client_header_timeout 10; (default is: 60 seconds)
    Defines a timeout for reading client request header. If a client does not transmit the entire header within this time, the 408 (Request Time-out) error is returned to the client.
  4. client_body_timeout 10; (default is: 60 seconds)
  5. send_timeout 10; (default is: 60 seconds)
    Sets a timeout for transmitting a response to the client. The timeout is set only between two successive write operations, not for the transmission of the whole response. If the client does not receive anything within this time, the connection is closed.
  6. sendfile on; (default: off)
    Sendfile optimizes serving large files, such as images and videos. When the setting is ‘on’ Nginx will use the kernel sendfile support instead of using its own resources.
  7. tcp_nopush on; (default: off)
    See the tcp_nopush documentation. When enabled together with sendfile, Nginx sends the response header and the beginning of a file in one packet.
  8. tcp_nodelay on; (default: on)
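For reference, here are the timeout and TCP settings from this section collected into one http block, using the values discussed above:

```nginx
http {
    keepalive_timeout 10;       # close idle keep-alive connections quickly
    keepalive_requests 1024;    # requests allowed per keep-alive connection
    client_header_timeout 10;   # seconds to receive the full request header
    client_body_timeout 10;     # seconds between body read operations
    send_timeout 10;            # seconds between write operations to client
    sendfile on;                # use kernel sendfile for static files
    tcp_nopush on;              # send headers and file start in one packet
    tcp_nodelay on;             # disable Nagle's algorithm for keep-alives
}
```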

GZipping Output

  1. gzip on;
  2. gzip_vary on;
  3. gzip_comp_level 2;
  4. gzip_buffers 4 8k;
  5. gzip_min_length 1024;
  6. gzip_proxied expired no-cache no-store private auth;
  7. gzip_types text/plain application/javascript application/x-javascript text/xml text/css application/json;
  8. gzip_disable "MSIE [1-6]\.";
    If, for some reason, you still need to support very old Internet Explorer versions.
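Collected into one snippet, the gzip settings above would sit in the http context like this:

```nginx
http {
    gzip on;
    gzip_vary on;               # add "Vary: Accept-Encoding" response header
    gzip_comp_level 2;          # low compression level: good speed trade-off
    gzip_buffers 4 8k;
    gzip_min_length 1024;       # skip responses smaller than 1 KB
    gzip_proxied expired no-cache no-store private auth;
    gzip_types text/plain application/javascript application/x-javascript
               text/xml text/css application/json;
    gzip_disable "MSIE [1-6]\.";
}
```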

Conclusion

To get the full potential of any web server, we need to tune it according to its actual resource usage. At the end of the day, we need to keep track of the server by monitoring resource utilization and tweaking the configuration as needed to make it more efficient. With this tuning and optimization, Nginx can be a lightweight, scalable, and high-performance web server.
