In a series of benchmarking tests, LiteSpeed’s HTTP/3 implementation outperformed the NGINX/Quiche combination.
Overall, LiteSpeed transferred resources faster, scaled better, and used less CPU and memory in the process, beating NGINX on each of these metrics by a factor of two or more. NGINX was unable to complete some of the tests, and in others it reached just below TCP-level speed.
Below, we’ll delve deep into the setup process and the range of benchmarks used.
LiteSpeed vs NGINX HTTP/3: Why it Matters
HTTP/3 is a new web protocol, succeeding Google QUIC and HTTP/2. With the IETF QUIC Working Group approaching a finalized version of the drafts, it’s clear that the nascent HTTP/3 implementations are becoming more mature and some are beginning to see production use.
LiteSpeed was the first to ship HTTP/3 support in LiteSpeed Web Server, which has supported QUIC since 2017 and has seen steady improvements since. That foundation is what the HTTP/3 support is built on.
Recently, Cloudflare released an HTTP/3 patch for NGINX and encouraged users to start experimenting.
What is Cloudflare’s Patch for Quiche?
Quiche is Cloudflare’s HTTP/3 and QUIC library, written in Rust, a relatively new high-level language. The library exposes a C API, which is how NGINX uses it.
Benchmarking Setup
The Platform
Both the servers and the load tool ran on the same VM: an Ubuntu 14 machine with a 20-core Intel Xeon E7-4870 and 32 GB of RAM. netem was used to vary the bandwidth and RTT.
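For illustration, one of the tested conditions (100 mbps bandwidth, 100 ms RTT) could be emulated with commands along the following lines. This is only a sketch: the article does not reproduce the actual commands, and the interface name and per-direction delay are assumptions (with client and server on the same VM, traffic may run over the loopback interface, where both directions pass through the same qdisc).

# Hypothetical netem setup for one test condition (100 mbps, 100 ms RTT).
# With both directions traversing the same loopback qdisc, a 50 ms delay
# is applied each way, giving roughly a 100 ms round trip.
tc qdisc add dev lo root netem rate 100mbit delay 50ms limit 1000

# Change the emulated conditions between runs:
tc qdisc change dev lo root netem rate 100mbit delay 10ms limit 1000

# Remove the emulation when finished:
tc qdisc del dev lo root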
The Servers
We used OpenLiteSpeed, the open-source edition of LiteSpeed Web Server (specifically, version 1.6.4).
For NGINX, we used version 1.16.1 with Cloudflare’s Quiche patch.
OpenLiteSpeed and NGINX were each configured to use a single worker process, and NGINX’s maximum-requests setting was raised to 10,000 so that we could issue 1,000,000 requests over 100 connections.
# OpenLiteSpeed
httpdWorkers 1

# nginx
worker_processes 1;

http {
    server {
        access_log off;
        http3_max_requests 10000;
    }
}
The Website
The site was a simple collection of static files: a 163-byte index file plus files of various sizes (1MB, 10MB, 100MB, 1GB).
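The article does not describe how these files were generated; commands along these lines would produce an equivalent set (the document-root path and file names are hypothetical).

# Hypothetical commands to create the static test files.
dd if=/dev/urandom of=/var/www/html/1MB.bin bs=1M count=1
dd if=/dev/urandom of=/var/www/html/10MB.bin bs=1M count=10
dd if=/dev/urandom of=/var/www/html/100MB.bin bs=1M count=100
dd if=/dev/urandom of=/var/www/html/1GB.bin bs=1M count=1024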
The Load Tool
Load was generated with h2load built with HTTP/3 support, which is easy to do using the supplied Dockerfile.
NGINX vs LiteSpeed: Benchmark Testing
For each number below (requests per second or resource fetch time), we ran the LiteSpeed or NGINX test three times and took the median.

LiteSpeed vs NGINX: Small Page Fetching
This test fetched the 163-byte index page in a number of ways, under several network conditions. Key h2load options (an example invocation is sketched after this list):
- -n: total number of requests to send
- -c: number of connections
- -m: number of concurrent requests per connection
- -t: number of h2load threads
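For illustration, the 1,000,000-request run over 100 connections mentioned above might be invoked roughly as follows. This is a sketch only: the per-connection concurrency (-m), thread count (-t), URL, and the HTTP/3 draft token passed to --npn-list are assumptions, since the article does not list the exact command lines.

# Hypothetical h2load invocation for the 1,000,000-request test.
# The -m and -t values, the URL, and the h3-23 draft token are
# illustrative assumptions.
h2load -n 1000000 -c 100 -m 100 -t 4 --npn-list=h3-23 https://127.0.0.1:443/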
| Network conditions | LiteSpeed | NGINX |
| 100 mbps, 100 ms RTT | 925 reqs/sec | 880 reqs/sec |
| 100 mbps, 20 ms RTT | 3910 reqs/sec | 2900 reqs/sec |
| 100 mbps, 10 ms RTT | 6400 reqs/sec | 4150 reqs/sec |

| Network conditions | LiteSpeed | NGINX |
| 100 mbps, 100 ms RTT | 995 reqs/sec | 975 reqs/sec |
| 100 mbps, 20 ms RTT | 4675 reqs/sec | 4510 reqs/sec |
| 100 mbps, 10 ms RTT | 8410 reqs/sec | 7100 reqs/sec |

| Network conditions | LiteSpeed | NGINX |
| 100 mbps, 100 ms RTT | 9100 reqs/sec | 7380 reqs/sec |
| 100 mbps, 20 ms RTT | 24550 reqs/sec | 5790 reqs/sec * |
| 100 mbps, 10 ms RTT | 25120 reqs/sec | 6730 reqs/sec * |

| Network conditions | LiteSpeed | NGINX |
| 200 mbps, 10 ms RTT | 29140 reqs/sec | 7120 reqs/sec * |
Single file fetching
Here, we fetched a single file under various network conditions and measured how long it took to download.

10MB

| Network conditions | LiteSpeed | NGINX |
| 10 mbps, 100 ms RTT | 9.6 sec | 10.9 sec |
| 10 mbps, 20 ms RTT | 9.6 sec | 10.5 sec |
| 10 mbps, 10 ms RTT | 9.4 sec | 10.8 sec |
100MB

| Network conditions | LiteSpeed | NGINX |
| 100 mbps, 100 ms RTT | 11.9 sec | 39 sec * |
| 100 mbps, 20 ms RTT | 9.2 sec | 38 sec * |
| 100 mbps, 10 ms RTT | 8.9 sec | 29 sec |
NGINX vs LiteSpeed: Shallow Queue
NGINX struggled at high bandwidth, so we decided to see how it would fare at low bandwidth instead, this time with a shallow queue: we used netem’s limit parameter and set it to seven packets (a sketch of the corresponding netem command follows the table).

Single 10MB file fetching

| Network conditions | netem limit | LiteSpeed | NGINX |
| 5 mbps, 20 ms RTT | 1000 * | 19.5 sec | 23.1 sec |
| 5 mbps, 20 ms RTT | 7 | 29.5 sec | 47.9 sec |
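For reference, the shallow-queue condition could be emulated with something like the command below; as with the earlier sketch, the interface and exact parameters are assumptions rather than the commands used in the test.

# Hypothetical netem configuration for the shallow-queue test:
# 5 mbps, ~20 ms RTT, and a queue limited to 7 packets.
tc qdisc change dev lo root netem rate 5mbit delay 10ms limit 7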
Conclusion: LiteSpeed Outperforms NGINX
Across numerous LiteSpeed vs NGINX benchmarking tests, LiteSpeed performed to a higher standard than NGINX.
It transferred files faster and used less CPU and memory. NGINX never reached TCP-level throughput at low bandwidth, and at high bandwidth its throughput was a fraction of LiteSpeed’s.
NGINX’s HTTP/3 is not yet ready for production use: its performance is weak and it consumes more memory and CPU.
We’re not surprised by this result, frankly. QUIC and HTTP/3 are complicated protocols, and new implementations will find it difficult to match LiteSpeed’s performance.
NGINX is likely to show improvement in years to come, and we’ll be interested in running further benchmark tests when that time comes. But LiteSpeed HTTP/3 can’t be beaten in the meantime.