Benchmark
The primary objective of this benchmarking is to evaluate the performance limits of recursive name server implementations across different deployment scenarios.
The charts shown below represent the latest measurements of the most widely used name servers. Each server is configured according to its official documentation. Note that the absolute values shown in the graphs are not directly comparable: a resolver's performance depends heavily on the state of the Internet and the environment in which it operates. Benchmarking is fairly time-consuming, so if you are interested in supporting further research and development, sponsorship is encouraged.
Specifications
| Machine | Dell PowerEdge R6515 |
|---|---|
| CPU | AMD EPYC 7702P |
| RAM | 8 × 8 GB DDR4 |
| Network Card | Intel X710 10GbE |
| Certificate Signing Algorithm | ECDSA with SHA-256 (P-256/secp256r1/prime256v1) |
Environment setup
Our testing setup uses two physical servers linked by a switch with 10GbE network interfaces. The first server fires pre-recorded DNS queries at a controlled rate toward the second server, which runs the name server software under test. We monitor the responses on the querying server, using a tool called DNS Shotgun to send queries and measure performance. All measurements run for 120 seconds.
Client Maximums
This table shows the maximum number of clients each resolver could sustain, measured over a 120-second window. To qualify, resolvers had to reach a semi-stable response rate of approximately 100% within that period. Because no single number can precisely capture these limits, treat the values as orientational rather than exact.
The Absolute columns show the raw client count each resolver could handle per transport. The Loss to UDP columns normalize against UDP (UDP = 1), so a value of 2.0 means the transport supported half as many clients as UDP.
Compare results within the same environment only. Absolute numbers vary by hardware, CPU, TLS configuration, and workload. Encrypted transports (DoT, DoH) carry additional overhead from TLS handshakes and HTTP framing. Results are informative, not definitive.
| Resolver | UDP (absolute) | DoT (absolute) | DoH-GET (absolute) | DoH-POST (absolute) | UDP (loss) | DoT (loss) | DoH-GET (loss) | DoH-POST (loss) |
|---|---|---|---|---|---|---|---|---|
| Knot Resolver 6.2.0 | 175K | 100K | 80K | 80K | 1 | 1.75 | 2.19 | 2.19 |
| Knot Resolver 5.7.6 | 200K | 110K | 90K | 90K | 1 | 1.81 | 2.22 | 2.22 |
| Bind 9.21.17 (dev) | 125K | 70K | 60K | 50K | 1 | 1.79 | 2.08 | 2.5 |
| Bind 9.20.18 | 225K | 100K | 80K | 80K | 1 | 2.25 | 2.81 | 2.81 |
| Unbound 1.24.2 | >300K | 110K | 90K | 90K | 1 | >2.72 | >3.33 | >3.33 |
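The Loss to UDP ratios can be reproduced directly from the absolute counts. A minimal Python sketch, with the Knot Resolver 6.2.0 row hand-copied from the table as input:

```python
def loss_to_udp(clients):
    """Ratio of a resolver's UDP client maximum to each transport's
    maximum: 1.0 for UDP itself, 2.0 means half as many clients."""
    udp = clients["UDP"]
    return {transport: udp / count for transport, count in clients.items()}

# Client maximums in thousands, copied from the table above.
knot_620 = {"UDP": 175, "DoT": 100, "DoH-GET": 80, "DoH-POST": 80}
for transport, ratio in loss_to_udp(knot_620).items():
    # e.g. DoH-GET yields 2.1875, which the table rounds to 2.19
    print(transport, ratio)
```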
Disclaimer
Unfortunately, we were not able to make the development build of Bind run smoothly with UDP. We believe this is unlikely to be an error on our side: the server configuration was identical across all resolver measurements, and the Bind package was installed directly from the ISC repository. Since it is an experimental build, some instability is also to be expected. That said, we are human and mistakes can happen; if you notice something we missed or know of an error we made, please contact us so we can address it.
As many factors can affect measurement results (hardware, operating system, configuration, zone data, human error, etc.), the results provided here are informative only.
Response Rate Benchmark
The response rate measures how many DNS queries per second a recursive name server can successfully handle under sustained load. In our setup, the querying server sends captured DNS queries at a steady, controlled pace over a 120-second period, while the tested server processes and responds to them.
We track the proportion of queries that receive any response, regardless of the return code. The charts, however, show the average percentage of responses with a NOERROR return code, which gives the clearest picture of successful resolution when the server is under pressure.
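The distinction between the two metrics can be illustrated with a small sketch (the helper and the sample data are hypothetical, not part of the measurement tooling): each sent query either receives an rcode or is lost, and the two rates are computed over the same total.

```python
def response_metrics(rcodes):
    """Summarize a run. `rcodes` holds one entry per query sent:
    an rcode string for answered queries, or None for lost ones."""
    total = len(rcodes)
    answered = sum(1 for rc in rcodes if rc is not None)
    noerror = sum(1 for rc in rcodes if rc == "NOERROR")
    return {
        "response_rate": 100 * answered / total,  # any answer at all
        "noerror_rate": 100 * noerror / total,    # what the charts plot
    }

# Hypothetical run: 6 answered (5 NOERROR, 1 SERVFAIL), 2 dropped.
sample = ["NOERROR"] * 5 + ["SERVFAIL"] + [None] * 2
print(response_metrics(sample))
# {'response_rate': 75.0, 'noerror_rate': 62.5}
```

A SERVFAIL thus counts toward the response rate but not toward the NOERROR rate shown in the charts.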
Latency Benchmark
Latency measures how long it takes the tested name server to respond to individual DNS queries. Using the same 10 GbE direct connection and DNS shotgun replay setup, we record response times for each query over a 120-second run.
Rather than reporting a single average value, we visualize the full distribution of response times using a logarithmic percentile histogram. Both axes of this graph use a logarithmic scale. The y-axis shows response times in milliseconds, while the x-axis shows the "slowest percentage."
Here's how to read it: the "slowest percentage" tells you what proportion of responses were slower than a given response time. For example, if the graph shows 50 ms at the 10% mark, that means 10% of responses took longer than 50 ms to answer. This approach lets you see both typical performance and the slower tail-end responses in a single view.
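The reading rule above can be expressed as a percentile lookup: the "slowest percentage" p corresponds to the (100 − p)th percentile of the response-time distribution. A toy illustration (not the actual histogram code used for the charts):

```python
def slowest_percentage_threshold(latencies_ms, percent):
    """Return a latency such that roughly `percent`% of responses
    were at least that slow, i.e. the (100 - percent)th percentile."""
    ordered = sorted(latencies_ms)
    # Index of the first response falling into the slowest `percent`%.
    cutoff = int(len(ordered) * (100 - percent) / 100)
    return ordered[min(cutoff, len(ordered) - 1)]

# Hypothetical run: 90 fast responses at 5 ms, 10 slow ones at 50 ms.
sample = [5.0] * 90 + [50.0] * 10
print(slowest_percentage_threshold(sample, 10))  # the 10% mark: 50.0 ms
```

On the chart, this sample would plot 5 ms across most of the x-axis and jump to 50 ms at the 10% mark, making the slow tail visible alongside the typical case.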