Commit 03e0047c authored by anarcat

move benchmark procedures to a subpage

parent 3367f74a
@@ -5,6 +5,7 @@ various procedures not necessarily associated with a specific service.
 <!-- update with `ls -d howto/*.md | sed 's/.md$//;s/\(.*\)/ * [\1](howto\/\1)/'` -->
+ * [benchmark](howto/benchmark)
  * [build_and_upload_debs](howto/build_and_upload_debs)
  * [create-a-new-user](howto/create-a-new-user)
  * [cumin](howto/cumin)
...

howto/benchmark.md (new file):
This will require a test VM (or two?) to hit the caches.
TODO: migrate this to a separate howto?

### Common procedure

1. punch a hole in the firewall to allow cache-02 to access cache-01:

        iptables -I INPUT -s 78.47.61.104 -j ACCEPT
        ip6tables -I INPUT -s 2a01:4f8:c010:25ff::1 -j ACCEPT

2. point the blog to cache-01 on cache-02 in `/etc/hosts`:

        116.202.120.172 blog.torproject.org
        2a01:4f8:fff0:4f:266:37ff:fe26:d6e1 blog.torproject.org

3. disable Puppet:

        puppet agent --disable 'benchmarking requires /etc/hosts override'

4. launch the benchmark (a consolidated sketch of steps 1-3 follows this list)
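
Put together, the setup looks roughly like the following sketch. It assumes
step 1 runs on cache-01 (the server under test) and steps 2-3 run on cache-02
(the host generating the load); the heredoc and the cleanup at the end are
conveniences not spelled out in the numbered steps above.

```
# on cache-01: let the load generator (cache-02) through the firewall
iptables -I INPUT -s 78.47.61.104 -j ACCEPT
ip6tables -I INPUT -s 2a01:4f8:c010:25ff::1 -j ACCEPT

# on cache-02: point the blog at cache-01 directly, bypassing DNS,
# and keep Puppet from reverting the override mid-benchmark
cat >> /etc/hosts <<EOF
116.202.120.172 blog.torproject.org
2a01:4f8:fff0:4f:266:37ff:fe26:d6e1 blog.torproject.org
EOF
puppet agent --disable 'benchmarking requires /etc/hosts override'

# ... run the benchmark of choice (siege, ab, bombardier; see below) ...

# once done: drop the override and re-enable Puppet (cleanup, not in the
# original procedure)
sed -i '/blog\.torproject\.org/d' /etc/hosts
puppet agent --enable
```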
### Siege

Siege configuration sample:

```
verbose = false
fullurl = true
concurrent = 100
time = 2M
url = http://www.example.com/
delay = 1
internet = false
benchmark = true
```

The following might also be required, although it may only work with Varnish:

```
proxy-host = 209.44.112.101
proxy-port = 80
```

An alternative is to hack `/etc/hosts` instead.
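
For reference, an invocation matching the configuration above might look like
the sketch below. The settings can also live in siege's configuration file
(typically `~/.siege/siege.conf`); exact flag names may vary between siege
versions.

```
# 100 concurrent clients for 2 minutes, in benchmark mode (no delay between requests)
siege --concurrent=100 --time=2M --benchmark https://blog.torproject.org/
```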

### apachebench

Classic commandline:

    ab2 -n 1000 -c 100 -X cache01.torproject.org https://example.com/

`-X` doesn't work with ATS either, so `/etc/hosts` was hacked instead.
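
A sketch of that `/etc/hosts` workaround, reusing the addresses from the
common procedure above (the blog is just the example target here):

```
# make the hostname resolve to the cache under test, then drop -X entirely
echo '116.202.120.172 blog.torproject.org' >> /etc/hosts
ab2 -n 1000 -c 100 https://blog.torproject.org/
```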

### bombardier

Unfortunately, the [bombardier package in Debian](https://tracker.debian.org/pkg/bombardier) is *not* the HTTP
benchmarking tool but a commandline game. The benchmarking tool can still be
installed on Debian with:

    export GOPATH=$HOME/go
    apt install golang
    go get -v github.com/codesenberg/bombardier

Then running the benchmark is as simple as:

    ./go/bin/bombardier --duration=2m --latencies https://blog.torproject.org/

Baseline benchmark, from cache-02:

    anarcat@cache-02:~$ ./go/bin/bombardier --duration=2m --latencies https://blog.torproject.org/ -c 100
    Bombarding https://blog.torproject.org:443/ for 2m0s using 100 connection(s)
    [================================================================================================================================================================] 2m0s
    Done!
    Statistics        Avg      Stdev        Max
      Reqs/sec     2796.01     716.69    6891.48
      Latency       35.96ms    22.59ms      1.02s
      Latency Distribution
         50%    33.07ms
         75%    40.06ms
         90%    47.91ms
         95%    54.66ms
         99%    75.69ms
      HTTP codes:
        1xx - 0, 2xx - 333646, 3xx - 0, 4xx - 0, 5xx - 0
        others - 0
      Throughput:   144.79MB/s

This is strangely much higher in throughput, and lower in latency, than
tests against our own servers. Different avenues were explored to explain
that disparity:

* jumbo frames? nope, both connections see packets larger than 1500 bytes
* protocol differences? nope, both go over IPv6 and (probably) HTTP/2
  (at least not over UDP)
* different link speeds?

The last theory is currently the only one standing. Indeed, 144.79MB/s
should not be possible on regular gigabit Ethernet (GigE), as it is
actually *more* than 1000Mbit/s (1158.32Mbit/s). Sometimes the above
benchmark even gives 152MB/s (1222Mbit/s), way beyond what a regular
GigE link should be able to provide.
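
A quick way to double-check that conversion (8 bits per byte, so MB/s times 8
gives Mbit/s):

```
$ echo '144.79 * 8' | bc
1158.32
```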

### Other tools

Siege has trouble going above ~100 concurrent clients because of its
design (and ulimit) limitations. Its interactive features are also
limited; here's a set of interesting alternatives:

* [ali](https://github.com/nakabonne/ali) - golang, HTTP/2, real-time graph, duration, not in
  Debian, mouse support, unsearchable name
* [bombardier](https://github.com/codesenberg/bombardier) - golang, HTTP/2, better performance than siege in
  my (2017) tests, not in Debian
* [boom](https://github.com/tarekziade/boom) - Python rewrite of apachebench, supports duration,
  HTTP/2, not in Debian, unsearchable name
* [go-wrk](https://github.com/adjust/go-wrk/) - golang rewrite of wrk with HTTPS, had performance
  issues in my first tests (2017), [no duration target](https://github.com/adjust/go-wrk/issues/2), not in
  Debian
* [hey](https://github.com/rakyll/hey) - golang rewrite of apachebench, similar to boom, not in
  Debian ([ITP #943596](https://bugs.debian.org/943596)), unsearchable name
* [Jmeter](https://jmeter.apache.org/) - interactive behavior, can replay recorded sessions
  from browsers
* [k6.io](https://k6.io/) - commandline JMeter replacement with an accompanying
  "cloud" Software-as-a-Service
* [Locust](https://locust.io/) - distributed, can model login and interactive
  behavior, not in Debian
* [Tsung](http://tsung.erlang-projects.org/1/01/about/) - multi-protocol, distributed, Erlang
* [wrk](https://github.com/wg/wrk/) - multithreaded, epoll, Lua scriptable, no HTTPS, only in
  Debian unstable

@@ -431,136 +431,7 @@ See [#32239](https://bugs.torproject.org/32239) for a followup on the launch pro
 ## Benchmarking procedures
-Will require a test VM (or two?) to hit the caches.
+See the [benchmark procedures](howto/benchmark).
[...the rest of the removed lines are identical to the howto/benchmark content added above and are elided here...]
 ## Alternatives considered
...