Considering empowering your WordPress site with Nginx and a Redis cache? Wondering how many visitors your site can handle? Keep reading!
This article presents a performance benchmark of the Nginx web server combined with a Redis object-cache server running a WordPress site.
We hope this review is useful to you, especially if you are considering moving your site to a VPS or cloud server.
This is a simulation, not a live production test; we extrapolate from the measured results to project real-world performance.
WordOps is employed to deploy a complete LEMP stack plus Redis. LEMP stands for Linux, Nginx, MySQL/MariaDB, and PHP.
This test seeks to answer these questions:
- How does a Hetzner EPYC cloud instance perform?
- How do Nginx and Redis Cache perform when hosting a WordPress blog?
- How well do Nginx and Redis handle web traffic?
- How many visitors can a WordPress site serve with Nginx and Redis?
Benchmark Test Parameters
We used the following configuration and server specs for this benchmark test:
A Hetzner EPYC cloud instance was deployed for this test. The specs are as follows:
- CPU: 2 Core AMD EPYC
- RAM: 2GB
- Bandwidth: 20TB
- Storage: 40GB SSD
- Price: €4.05/month
- Provider: Hetzner.
Server specs and performance:
Webserver, MySQL, and PHP:
For the web server, we use Nginx, a free, high-performance web server, cache, and reverse proxy.
Thanks to WordOps, LEMP deployment could not be easier.
Configs we use:
- Linux: Ubuntu 20.04 LTS
- Webserver: Nginx v1.18.0
- MySQL server by MariaDB v10.3.23
- PHP v7.3 (php-fpm)
- Brotli activated
- DNS server by third-party DNS hosting (Cloudflare)
WordPress is the most popular publishing platform. We use it for obvious reasons: it is easy to set up and free. Here are the details:
- WP version 5.5
- 46 posts in total (including Hello World)
- All articles have images (except Hello World)
- Premium theme: Contentberg.
- 14 active plugins (Autoptimize, Redis Object Cache, Really Simple SSL, WP Cerber, etc).
- Domain: speedy.monster (registered at Namesilo)
- Page tested: Main index page (home page).
Website caching is done by two plugins: Autoptimize and Redis Object Cache.
A. Autoptimize configs:
- CSS optimized, aggregated
- HTML code optimized
- Save aggregated script/CSS as static files? Yes
- Lazy-load images? No
- Remove emojis? Yes
- Remove query strings from static resources? Yes
- Combine and preload Google Fonts in head? Yes
B. Redis Cache configs:
The Redis Object Cache plugin offers little to customize beyond letting us flush the cache; its job is to connect WordPress to the Redis server in the backend.
The parameters used to benchmark the performance of WordOps Nginx server are:
- Page load speed – Tools: Pingdom and GTMetrix.
- Time To First Byte (TTFB) – Tools: GTMetrix and KeyCDN
- Number of clients/visitors – Tool: Loader.io free plan
We include screenshots for every benchmark test. Do not hesitate to click an image to view it larger.
1. Page Load Speed
Pingdom Speed Test
Pingdom ranks our site as Grade A with 93 points. The tested page loads in 581 ms with a total of 40 requests.
The test was done from Pingdom’s UK server.
We chose UK location since GTMetrix also has this location.
Although one Pingdom sub-score comes out as grade B, loading in 581 ms is still fast performance.
GTmetrix Website Speed Test
Settings we used to run the page-load test in GTmetrix:
- Browser: Google Chrome (Desktop)
- Test server region: London, UK
- Connection mode: off/default
- Adblock: off
- Onload: off
- Video: off
Page load speed result:
The index page of our test site loads in 1.0 second. Considering its heavy content, that speed is lightning fast.
The RUM Speed Index is 519. Speed Index is a page-load performance metric that shows how quickly the page is visibly populated; the lower it is, the better.
As seen in the response headers above, content is served over HTTP/2 with Brotli compression.
The x-srcache-fetch-status header indicates whether the page was served from the Redis cache.
The GTmetrix score above was measured with the WordPress site served from the Redis cache (cache status: HIT).
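You can check this header yourself. Below is a minimal sketch using only the Python standard library; the function name is ours, and the header name comes from Nginx's srcache module as described above:

```python
from urllib.request import urlopen

def redis_cache_status(url):
    """Return the value of the x-srcache-fetch-status response header
    ('HIT', 'MISS', ...), or 'absent' if the site does not set it."""
    with urlopen(url) as resp:
        return resp.headers.get("x-srcache-fetch-status", "absent")

# Example (speedy.monster is the test site from this article):
# print(redis_cache_status("https://speedy.monster/"))
```

Request the same page twice: the first hit may report MISS while the cache warms up, and subsequent requests should report HIT.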
KeyCDN Tools: Website Speed Test
KeyCDN's page-load speed test shows a different result from GTmetrix's.
The page needs only 705.38 ms, which is still fast and under one second, although the grade is B.
2. Time To First Byte
Time To First Byte (TTFB) is the total time spent receiving the first byte of the response after it has been requested.
TTFB also looks satisfying here.
While whole-page load time depends on both server performance and the site's content, TTFB is influenced mostly by server performance.
The formula is: TTFB = redirect duration + connection duration + backend duration.
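The formula is a simple sum. In the sketch below, the 0/40/75 ms breakdown is hypothetical, chosen only so that the parts add up to the 115 ms TTFB that GTmetrix reports for our site:

```python
def ttfb_ms(redirect_ms, connection_ms, backend_ms):
    """TTFB = redirect duration + connection duration + backend duration."""
    return redirect_ms + connection_ms + backend_ms

# Hypothetical breakdown that sums to the measured 115 ms:
print(ttfb_ms(0, 40, 75))  # 115
```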
GTmetrix Page Load Timings:
It takes only 115 ms for the server to deliver the first byte of the website.
KeyCDN Performance Test:
We use KeyCDN Tools to test the access speed of the home page (the main index.php).
The table depicts different performance across different locations.
The fastest TTFB is 31.77 ms (Frankfurt, DE) while the slowest one is 998.56 ms (Sydney, Australia).
3. Stress Load Test (Client Requests)
This test simulates a number of clients connecting to the web server simultaneously over a defined period of time.
We use the free service provided by Loader.io by SendGrid Labs.
The free plan allows us to have a maximum of 10,000 clients per test and 1-minute test duration.
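In spirit, such a stress test just fires many concurrent HTTP requests and measures response times. Here is a toy stand-in sketch (the function and its parameters are ours, not Loader.io's API, and it uses one thread pool instead of thousands of geographically distributed clients):

```python
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

def load_test(url, clients=50, workers=10):
    """Fire `clients` GET requests across a thread pool and return the
    list of status codes plus the average response time in milliseconds."""
    def timed_get(_):
        start = time.perf_counter()
        with urlopen(url) as resp:
            resp.read()
            status = resp.status
        return status, (time.perf_counter() - start) * 1000  # ms

    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(timed_get, range(clients)))
    avg_ms = sum(ms for _, ms in results) / len(results)
    return [s for s, _ in results], avg_ms
```

Anything other than a stream of 200s, or a climbing average, is the kind of degradation the Loader.io graphs below visualize at a much larger scale.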
Test #1: 10,000 clients over 1 minute
Clients are distributed evenly throughout the test duration. The test answers this question: how does my server perform when 10,000 users connect over the course of 1 minute?
Serving 10,000 clients within 1 minute is no problem for Nginx and Redis.
The average response time is 96 ms, with no timeouts, no network errors, and no server errors.
The graph shows approximately 200 clients connecting to the server at the same time.
How heavy is this for the server?
The monitoring graph in the WordOps dashboard shows how the server behaves in handling such traffic.
As can be seen above, the CPU barely hits 10% load. Serving 10,000 visitors per minute does not stress the CPU at all.
Let’s do the math:
Given 10,000 visitors per minute, we can project how many visitors that would be per day.
One day equals 1,440 minutes. Multiply that by 10,000 and we get 14,400,000 visitors/day, or roughly 430 million visitors a month.
Call it a rough estimate. Even if real-world performance handled only half that number, it would still be very impressive for a server running Nginx and Redis Cache.
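The projection above as a tiny script. Keep in mind this is a straight-line extrapolation from a one-minute synthetic test, not a capacity guarantee:

```python
# Straight-line projection from the 1-minute Loader.io result.
visitors_per_minute = 10_000
minutes_per_day = 24 * 60                  # 1,440
visitors_per_day = visitors_per_minute * minutes_per_day
visitors_per_month = visitors_per_day * 30

print(f"{visitors_per_day:,} visitors/day")      # 14,400,000 visitors/day
print(f"{visitors_per_month:,} visitors/month")  # 432,000,000 visitors/month
```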
Test #2: Clients per second over 1 minute
This test simulates a number of clients connecting to the server every second.
The test answers questions like: how does my server perform when X users connect every second over a 1-minute period?
That means new users try to access your site every second.
Thus, we can find the approximate maximum number of visitors the server can handle.
Let’s start from 100 clients/sec and see how the server performs.
What can we see here?
Average response time tells us how fast or slow the server responds to clients’ requests; the lower, the better.
This indicator does not increase significantly, and the margin is still negligible when serving 500 clients/second.
At this point, CPU utilization is still below 25%.
With the Redis cache in action, serving 500 visitors consumes around 40% of RAM.
Let’s increase the number of client requests and see how the server behaves.
Now, what do we have here?
Nginx and Redis can serve 1,000 clients/second in normal operation without a single glitch or slowdown.
Increasing the load a little further, to 1,500 clients/second, makes the server a little slower.
It still serves web pages to visitors normally, but with increased load times.
Given the maximum response time of 6,512 ms (around 6.5 seconds), the slowdown may still be barely noticeable to real-world users.
At this point, CPU utilization may reach 70%–80%.
Push the load higher, to 2,000 clients/second, and you will start seeing some timeouts. That translates to glitches or inaccessible pages for some of your site visitors.
On top of that, CPU usage peaks at its 100% limit.
Summing this up: driving 1,000 visitors/second to a WordPress site hosted on Nginx, PHP-FPM, and Redis Cache is the safest line.
Getting occasional traffic spike? That would not be a problem.
A traffic flood from Reddit, Twitter, or Facebook will land safely as well, within the 1,500 clients/second limit.
In the worst case, handling 2,000 clients/second still makes sense, at the cost of losing about 20% of your total visitors (within a minute).
At least your web server does not go down under a traffic spike.
These numbers might sound small to you, but we haven’t done the math yet.
Take the highest rate at which this server still handles client requests without any slowdown: 1,000 clients/second.
Multiply it by 60 seconds and that equals 60,000 clients per minute. One day is 1,440 minutes.
60,000 × 1,440 = 86,400,000
Unbelievable! You can have 86.4 million unique visitors per day.
That equals roughly 2.6 billion UV/month.
Does that sound impossible?
OK, let’s say your site has heavy content and a heavy theme, so you can achieve only one quarter of that number. That is still about 21.6 million UV/day.
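The same extrapolation for the per-second results, including the conservative quarter scenario. Again, these are projections from a one-minute synthetic test, not measured daily traffic:

```python
clients_per_second = 1_000                 # the no-slowdown rate from Test #2
seconds_per_day = 60 * 60 * 24             # 86,400
uv_per_day = clients_per_second * seconds_per_day
uv_per_month = uv_per_day * 30
conservative_uv_per_day = uv_per_day // 4  # heavy content/theme scenario

print(f"{uv_per_day:,} UV/day")            # 86,400,000 UV/day
print(f"{uv_per_month:,} UV/month")        # 2,592,000,000 UV/month
print(f"{conservative_uv_per_day:,} UV/day (conservative)")  # 21,600,000
```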
The combination of the Nginx web server, PHP-FPM, MariaDB, and Redis Cache can handle millions of unique visitors per day on a WordPress site hosted on an AMD EPYC cloud server (thanks, Hetzner).
So, if you ask how many visitors an Nginx+Redis VPS can handle, the answer is: millions, assuming you use the same configuration we did.
Using Nginx+Redis setup to host a WordPress site allows you to have:
1,000 clients/second (in a normal production situation)
1,500 clients/second (safe for a traffic spike within a day)
2,000 clients/second (safe for a traffic spike of a few minutes)
Unless you rent a dedicated server or a cloud server with a dedicated CPU, you should not serve 2,000 clients/second continuously for an hour.
Have any questions? Write them down below.