Unverified Commit 37f04c56 authored by anarcat's avatar anarcat

move the cache benchmark baseline back to the cache page

that is where it belongs. also clarify wtf it is
parent 670aaf32
@@ -79,42 +79,6 @@ Then running the benchmark is as simple as:
./go/bin/bombardier --duration=2m --latencies https://blog.torproject.org/
Baseline benchmark, from cache02:
anarcat@cache-02:~$ ./go/bin/bombardier --duration=2m --latencies https://blog.torproject.org/ -c 100
Bombarding https://blog.torproject.org:443/ for 2m0s using 100 connection(s)
[================================================================================================================================================================] 2m0s
Done!
Statistics Avg Stdev Max
Reqs/sec 2796.01 716.69 6891.48
Latency 35.96ms 22.59ms 1.02s
Latency Distribution
50% 33.07ms
75% 40.06ms
90% 47.91ms
95% 54.66ms
99% 75.69ms
HTTP codes:
1xx - 0, 2xx - 333646, 3xx - 0, 4xx - 0, 5xx - 0
others - 0
Throughput: 144.79MB/s
This is strangely both higher in throughput and lower in latency than
testing against our own servers. Several avenues were explored to
explain that disparity with our servers:
* jumbo frames? nope, both connections see packets larger than 1500
  bytes
* protocol differences? nope, both go over IPv6 and (probably) HTTP/2
  (at least neither goes over UDP)
* different link speeds
The last theory is currently the only one standing. Indeed, 144.79MB/s
should not be possible on regular gigabit ethernet (GigE), as it is
actually *more* than 1000Mbit/s (1158.32Mbit/s). Sometimes the above
benchmark even gives 152MB/s (over 1200Mbit/s), way beyond what a
regular GigE link should be able to provide.
## wrk
Note that wrk works similarly to `bombardier`, sampled above, and has
......
@@ -433,6 +433,44 @@ See [#32239](https://bugs.torproject.org/32239) for a followup on the launch pro
See the [benchmark procedures](howto/benchmark).
### Baseline benchmark
Baseline benchmark of the actual blog site, from `cache02`:
anarcat@cache-02:~$ ./go/bin/bombardier --duration=2m --latencies https://blog.torproject.org/ -c 100
Bombarding https://blog.torproject.org:443/ for 2m0s using 100 connection(s)
[================================================================================================================================================================] 2m0s
Done!
Statistics Avg Stdev Max
Reqs/sec 2796.01 716.69 6891.48
Latency 35.96ms 22.59ms 1.02s
Latency Distribution
50% 33.07ms
75% 40.06ms
90% 47.91ms
95% 54.66ms
99% 75.69ms
HTTP codes:
1xx - 0, 2xx - 333646, 3xx - 0, 4xx - 0, 5xx - 0
others - 0
Throughput: 144.79MB/s
This is strangely both higher in throughput and lower in latency than
testing against our own servers. Several avenues were explored to
explain that disparity with our servers:
* jumbo frames? nope, both connections see packets larger than 1500
  bytes
* protocol differences? nope, both go over IPv6 and (probably) HTTP/2
  (at least neither goes over UDP)
* different link speeds
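For reference, the jumbo-frame check amounts to looking at the
interface MTU and at actual packet sizes on the wire. A minimal
sketch, where the `ip link` output line is a made-up sample (on a live
host one would run `ip link show` and `tcpdump` directly; the
interface name `eth0` is an assumption):

```shell
# Sample "ip link" output line (illustrative, not captured from cache-02):
sample='2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP'

# Extract the MTU and decide whether jumbo frames are even possible:
mtu=$(echo "$sample" | grep -o 'mtu [0-9]*' | awk '{print $2}')
if [ "$mtu" -gt 1500 ]; then
    echo "jumbo frames possible (mtu $mtu)"
else
    echo "standard MTU ($mtu)"
fi

# On a live host, packets over 1500 bytes can be spotted with:
#   tcpdump -i eth0 'greater 1500'
```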
The last theory is currently the only one standing. Indeed, 144.79MB/s
should not be possible on regular gigabit ethernet (GigE), as it is
actually *more* than 1000Mbit/s (1158.32Mbit/s). Sometimes the above
benchmark even gives 152MB/s (over 1200Mbit/s), way beyond what a
regular GigE link should be able to provide.
## Alternatives considered
Four alternatives were seriously considered:
......