
LiteSpeed vs NGINX

In a series of benchmarking tests, LiteSpeed’s HTTP/3 implementation demonstrated higher performance than the NGINX/Quiche hybrid.

Overall, LiteSpeed transferred resources faster, scaled better, and used less CPU and memory in the process, beating NGINX in each of these metrics by a factor of two or more. NGINX was unable to complete certain tests, and in others it achieved just below TCP speed.

Below, we’ll delve deep into the setup process and the range of benchmarks used.

  1. LiteSpeed vs NGINX: Why it Matters
  2. What is Cloudflare’s Patch for Quiche?
  3. Benchmarking Setup
  4. NGINX vs LiteSpeed: Benchmark Testing
  5. LiteSpeed Found to be More Impressive than NGINX

LiteSpeed vs NGINX HTTP/3: Why it Matters

HTTP/3 is a new web protocol, succeeding Google QUIC and HTTP/2. With the IETF QUIC Working Group approaching a finalized version of the drafts, it’s clear that the nascent HTTP/3 implementations are becoming more mature and some are beginning to see production use.

LiteSpeed was the first to ship HTTP/3 support, in LiteSpeed Web Server, which has supported QUIC since 2017 and has been steadily improved since. It is on this foundation that its HTTP/3 support stands.

Recently, Cloudflare launched a special HTTP/3 NGINX patch and encouraged users to start experimenting.

What is Cloudflare’s Patch for Quiche?

Quiche is Cloudflare’s HTTP/3 and QUIC library, written in Rust, a relatively new high-level language. The library also provides a C API, which is how NGINX uses it.

Benchmarking Setup

The Platform

Both the servers and the load tool ran on the same VM: an Ubuntu-14 machine with a 20-core Intel Xeon E7-4870 and 32GB of RAM. netem was used to modify the bandwidth and RTT.
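As a rough illustration, conditions like these could be emulated with netem along the following lines (a minimal sketch: the interface name and the handling of one-way vs round-trip delay are assumptions, as the article does not show its exact commands):

# Hypothetical netem setup for one set of conditions (eth0 is a placeholder interface)
tc qdisc add dev eth0 root netem rate 100mbit delay 100ms
# Between runs, the same qdisc can be adjusted, e.g. to a 20 ms delay
tc qdisc change dev eth0 root netem rate 100mbit delay 20ms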

The Servers

We used OpenLiteSpeed, the open-source edition of LiteSpeed Web Server (specifically, version 1.6.4).

For NGINX, we used version 1.16.1 with Cloudflare’s Quiche patch.

OpenLiteSpeed and NGINX were each configured to use a single worker process, and NGINX’s maximum-requests setting was raised to 10,000 so that we could issue 1,000,000 requests over 100 connections.

# OpenLiteSpeed
httpdWorkers 1                    # single worker process
# nginx
worker_processes 1;               # single worker process
http {
    server {
        access_log off;           # keep logging out of the measurement
        http3_max_requests 10000; # raise the per-connection request cap from the 1000 default
    }
}

The Website

This comprised a simple collection of static files: a 163-byte index file plus files of various sizes (1MB, 10MB, 100MB, 1GB).

The Load Tool

Load was generated with h2load with HTTP/3 support, which is easy to build using the supplied Dockerfile.

 


NGINX vs LiteSpeed: Benchmark Testing

Each number below (requests per second, or the time to fetch a resource) is the median of three test runs.

LiteSpeed vs NGINX: small page fetching

This testing involved fetching the 163-byte index page in numerous ways, under a range of network conditions. Key h2load options (an example invocation follows the list):
  • -n: Total number of requests to send
  • -c: Number of connections
  • -m: Number of concurrent requests per connection
  • -t: Number of h2load threads
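As an illustration, the first run below could be launched with something like this (a sketch: the URL is a placeholder, and the option used to select HTTP/3 depends on how the h2load build was configured):

h2load -n 10000 -c 100 https://test-server.example/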
-n 10000 -c 100

Network conditions      LiteSpeed        NGINX
100 mbps, 100 ms RTT    925 reqs/sec     880 reqs/sec
100 mbps, 20 ms RTT     3910 reqs/sec    2900 reqs/sec
100 mbps, 10 ms RTT     6400 reqs/sec    4150 reqs/sec
-n 100000 -c 100 -t 10

This is a longer run: each connection now sends 1,000 requests.

Network conditions      LiteSpeed        NGINX
100 mbps, 100 ms RTT    995 reqs/sec     975 reqs/sec
100 mbps, 20 ms RTT     4675 reqs/sec    4510 reqs/sec
100 mbps, 10 ms RTT     8410 reqs/sec    7100 reqs/sec
At 100 ms RTT, LiteSpeed is somewhat faster; at 20 ms and 10 ms, its advantage grows substantially.

-n 100000 -c 100 -m 10 -t 10
Network conditions      LiteSpeed         NGINX
100 mbps, 100 ms RTT    9100 reqs/sec     7380 reqs/sec
100 mbps, 20 ms RTT     24550 reqs/sec    5790 reqs/sec *
100 mbps, 10 ms RTT     25120 reqs/sec    6730 reqs/sec *
* High variance

Interestingly, NGINX used 100 percent CPU in this test, while OpenLiteSpeed used only around 45 percent. This is why NGINX’s numbers fail to improve as the RTT drops. Even at 20 and 10 ms RTT, OpenLiteSpeed still does not reach 100 percent CPU.

-n 1000000 -c 100 -m 10 -t 10

We had to raise NGINX’s http3_max_requests parameter to 10000 (from its default of 1000) so that it could issue more than 1000 requests per connection.
Network conditions      LiteSpeed         NGINX
200 mbps, 10 ms RTT     29140 reqs/sec    7120 reqs/sec *
* High variance

At this point, LiteSpeed was the one using 100 percent CPU. During this test, NGINX allocated more than 1GB of memory, while LiteSpeed stayed below 28MB. That works out to more than four times the performance for roughly 1/37th of the memory.

Single file fetching

In this scenario, we fetched a single file under various network conditions and measured how long the download took.

10MB
Network conditions     LiteSpeed    NGINX
10 mbps, 100 ms RTT    9.6 sec      10.9 sec
10 mbps, 20 ms RTT     9.6 sec      10.5 sec
10 mbps, 10 ms RTT     9.4 sec      10.8 sec
It’s clear that NGINX is a little slower here. It also used three to four times more CPU than LiteSpeed in these tests.

100MB
Network conditions      LiteSpeed    NGINX
100 mbps, 100 ms RTT    11.9 sec     39 sec *
100 mbps, 20 ms RTT     9.2 sec      38 sec *
100 mbps, 10 ms RTT     8.9 sec      29 sec
* High variance

In all three of these benchmarks, NGINX used 100 percent CPU, which is very likely the cause of its weaker performance.

1GB

We also attempted to download a 1GB file from NGINX at 1gbps, but gave up waiting for the download to complete. We expect the contrast between LiteSpeed and NGINX would have been even more pronounced at this speed.

NGINX vs LiteSpeed: Shallow Queue

NGINX struggled with high bandwidth, so we decided to see how it would fare with low bandwidth instead, this time with a shallow queue: netem’s limit parameter set to seven.

Single 10MB file fetching
Network conditions    limit     LiteSpeed    NGINX
5 mbps, 20 ms RTT     1000 *    19.5 sec     23.1 sec
5 mbps, 20 ms RTT     7         29.5 sec     47.9 sec
* netem default

Introducing a shallow queue on the path increased LiteSpeed’s download time by around 50 percent, while NGINX’s more than doubled. Either way, LiteSpeed was substantially faster than NGINX in both situations.
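For reference, a shallow queue like this could be emulated roughly as follows (a sketch: the interface name is a placeholder, and the article does not show its exact commands):

# Hypothetical: 5 mbps / 20 ms shaping with netem's packet queue capped at 7 (default is 1000)
tc qdisc change dev eth0 root netem rate 5mbit delay 20ms limit 7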

Conclusion – LiteSpeed Found to be More Impressive than NGINX

In all, we ran numerous LiteSpeed vs NGINX benchmarks, and LiteSpeed consistently outperformed NGINX.

LiteSpeed transferred files faster and used less CPU and memory. At low bandwidth, NGINX never reached TCP-level throughput; at high bandwidth, its throughput was a fraction of LiteSpeed’s.

NGINX’s HTTP/3 is not yet ready for production use: its performance is weak, and it consumes more memory and CPU.

We’re not surprised by this result, frankly. QUIC and HTTP/3 are complicated protocols, and new implementations will find it difficult to match LiteSpeed’s performance.

NGINX is likely to show improvement in years to come, and we’ll be interested in running further benchmark tests when that time comes. But LiteSpeed HTTP/3 can’t be beaten in the meantime.

4 Comments

  1. I have been using a VPS with nginx in the Webuzo control panel. It did not add LiteSpeed yet! I am still satisfied with nginx and Webuzo!

  2. Could this be mostly related to the single worker process being used?

    • You do realize that if you enable more workers in Nginx, the same number of workers would have to be enabled in LiteSpeed for it to be a fair testing environment. And, well, you get the picture: Nginx just isn’t going to beat LiteSpeed. The version they tested here is the “FREE” version of LiteSpeed, which is actually limited compared to, say, Enterprise LiteSpeed, which would totally kill Nginx hands down without even breaking a sweat.

  3. Very interesting article. It would be great to see a comparison for the eCommerce market with LiteSpeed free and enterprise, Apache, and NGINX in both directions.
