diff --git a/howto/benchmark.md b/howto/benchmark.md
index f5a759d89e4ed0541ce6933e8c377f70faf4e150..6258c51f36e9083814c6208598f407cd739583ad 100644
--- a/howto/benchmark.md
+++ b/howto/benchmark.md
@@ -79,42 +79,6 @@ Then running the benchmark is as simple as:
 
     ./go/bin/bombardier --duration=2m --latencies https://blog.torproject.org/
 
-Baseline benchmark, from cache02:
-
-    anarcat@cache-02:~$ ./go/bin/bombardier --duration=2m --latencies https://blog.torproject.org/ -c 100
-    Bombarding https://blog.torproject.org:443/ for 2m0s using 100 connection(s)
-    [================================================================================================================================================================] 2m0s
-    Done!
-    Statistics        Avg      Stdev        Max
-      Reqs/sec      2796.01     716.69    6891.48
-      Latency       35.96ms    22.59ms      1.02s
-      Latency Distribution
-         50%    33.07ms
-         75%    40.06ms
-         90%    47.91ms
-         95%    54.66ms
-         99%    75.69ms
-      HTTP codes:
-        1xx - 0, 2xx - 333646, 3xx - 0, 4xx - 0, 5xx - 0
-        others - 0
-      Throughput:   144.79MB/s
-
-This is strangely much higher, in terms of throughput, and faster, in
-terms of latency, than testing against our own servers. Different
-avenues were explored to explain that disparity with our servers:
-
- * jumbo frames? nope, both connexions see packets larger than 1500
-   bytes
- * protocol differences? nope, both go over IPv6 and (probably) HTTP/2
-   (at least not over UDP)
- * different link speeds
-
-The last theory is currently the only one standing. Indeed, 144.79MB/s
-should not be possible on regular gigabit ethernet (GigE), as it is
-actually *more* than 1000Mbit/s (1158.32Mbit/s). Sometimes the above
-benchmark even gives 152MB/s (1222Mbit/s), way beyond what a regular
-GigE link should be able to provide.
-
 ## wrk
 
 Note that wrk works similarly to `bombardier`, sampled above, and has
diff --git a/howto/cache.md b/howto/cache.md
index 03aff172e005dc41e7f966282cec4b77b2149e83..b3245501a3017c3e0a85d2c00353562d343fe26d 100644
--- a/howto/cache.md
+++ b/howto/cache.md
@@ -433,6 +433,44 @@ See [#32239](https://bugs.torproject.org/32239) for a followup on the launch pro
 
 See the [benchmark procedures](howto/benchmark).
 
+### Baseline benchmark
+
+Baseline benchmark of the actual blog site, from `cache02`:
+
+    anarcat@cache-02:~$ ./go/bin/bombardier --duration=2m --latencies https://blog.torproject.org/ -c 100
+    Bombarding https://blog.torproject.org:443/ for 2m0s using 100 connection(s)
+    [================================================================================================================================================================] 2m0s
+    Done!
+    Statistics        Avg      Stdev        Max
+      Reqs/sec      2796.01     716.69    6891.48
+      Latency       35.96ms    22.59ms      1.02s
+      Latency Distribution
+         50%    33.07ms
+         75%    40.06ms
+         90%    47.91ms
+         95%    54.66ms
+         99%    75.69ms
+      HTTP codes:
+        1xx - 0, 2xx - 333646, 3xx - 0, 4xx - 0, 5xx - 0
+        others - 0
+      Throughput:   144.79MB/s
+
+This is strangely much higher, in terms of throughput, and faster, in
+terms of latency, than testing against our own servers. Different
+avenues were explored to explain that disparity with our servers:
+
+ * jumbo frames? nope, both connexions see packets larger than 1500
+   bytes
+ * protocol differences? nope, both go over IPv6 and (probably) HTTP/2
+   (at least not over UDP)
+ * different link speeds
+
+The last theory is currently the only one standing. Indeed, 144.79MB/s
+should not be possible on regular gigabit ethernet (GigE), as it is
+actually *more* than 1000Mbit/s (1158.32Mbit/s). Sometimes the above
+benchmark even gives 152MB/s (1222Mbit/s), way beyond what a regular
+GigE link should be able to provide.
+
 ## Alternatives considered
 
 Four alternatives were seriously considered:
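
The link-speed argument in the moved section rests on a simple MB/s to
Mbit/s conversion; a minimal sketch of that arithmetic, assuming
`bombardier` reports decimal megabytes (10^6 bytes, 8 bits per byte) —
the `mbytes_to_mbits` helper is illustrative, not part of the tooling:

```python
# Sanity-check the throughput figures against nominal GigE capacity.
# Assumption: 1 MB = 10^6 bytes (decimal), as most tools report.

GIGE_MBITS = 1000  # nominal gigabit ethernet capacity, ignoring framing overhead


def mbytes_to_mbits(mb_per_s: float) -> float:
    """Convert a throughput in MB/s to Mbit/s (8 bits per byte)."""
    return mb_per_s * 8


throughput = mbytes_to_mbits(144.79)
print(f"{throughput:.2f} Mbit/s")                 # 1158.32 Mbit/s
print("exceeds GigE:", throughput > GIGE_MBITS)   # True
```

This confirms the figure quoted above: 144.79 MB/s is 1158.32 Mbit/s,
already beyond the 1000 Mbit/s a single GigE link can carry even before
ethernet and TCP/IP framing overhead is subtracted.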