From 37f04c567c18c4a589dd5e3dcd04983257690b9a Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Antoine=20Beaupr=C3=A9?= <anarcat@debian.org>
Date: Wed, 17 Mar 2021 15:58:48 -0400
Subject: [PATCH] move the cache benchmark baseline back to the cache page

that is where it belongs. also clarify what it is
---
 howto/benchmark.md | 36 ------------------------------------
 howto/cache.md     | 38 ++++++++++++++++++++++++++++++++++++++
 2 files changed, 38 insertions(+), 36 deletions(-)

diff --git a/howto/benchmark.md b/howto/benchmark.md
index f5a759d89..6258c51f3 100644
--- a/howto/benchmark.md
+++ b/howto/benchmark.md
@@ -79,42 +79,6 @@ Then running the benchmark is as simple as:
 
     ./go/bin/bombardier --duration=2m --latencies https://blog.torproject.org/
 
-Baseline benchmark, from cache02:
-
-    anarcat@cache-02:~$ ./go/bin/bombardier --duration=2m --latencies https://blog.torproject.org/  -c 100
-    Bombarding https://blog.torproject.org:443/ for 2m0s using 100 connection(s)
-    [================================================================================================================================================================] 2m0s
-    Done!
-    Statistics        Avg      Stdev        Max
-      Reqs/sec      2796.01     716.69    6891.48
-      Latency       35.96ms    22.59ms      1.02s
-      Latency Distribution
-         50%    33.07ms
-         75%    40.06ms
-         90%    47.91ms
-         95%    54.66ms
-         99%    75.69ms
-      HTTP codes:
-        1xx - 0, 2xx - 333646, 3xx - 0, 4xx - 0, 5xx - 0
-        others - 0
-      Throughput:   144.79MB/s
-
-This is strangely much higher, in terms of throughput, and faster, in
-terms of latency, than testing against our own servers. Different
-avenues were explored to explain that disparity with our servers:
-
- * jumbo frames? nope, both connexions see packets larger than 1500
-   bytes
- * protocol differences? nope, both go over IPv6 and (probably) HTTP/2
-   (at least not over UDP)
- * different link speeds
-
-The last theory is currently the only one standing. Indeed, 144.79MB/s
-should not be possible on regular gigabit ethernet (GigE), as it is
-actually *more* than 1000Mbit/s (1158.32Mbit/s). Sometimes the above
-benchmark even gives 152MB/s (1222Mbit/s), way beyond what a regular
-GigE link should be able to provide.
-
 ## wrk
 
 Note that wrk works similarly to `bombardier`, sampled above, and has
diff --git a/howto/cache.md b/howto/cache.md
index 03aff172e..b3245501a 100644
--- a/howto/cache.md
+++ b/howto/cache.md
@@ -433,6 +433,44 @@ See [#32239](https://bugs.torproject.org/32239) for a followup on the launch pro
 
 See the [benchmark procedures](howto/benchmark).
 
+### Baseline benchmark
+
+Baseline benchmark of the actual blog site, run from `cache-02`:
+
+    anarcat@cache-02:~$ ./go/bin/bombardier --duration=2m --latencies https://blog.torproject.org/  -c 100
+    Bombarding https://blog.torproject.org:443/ for 2m0s using 100 connection(s)
+    [================================================================================================================================================================] 2m0s
+    Done!
+    Statistics        Avg      Stdev        Max
+      Reqs/sec      2796.01     716.69    6891.48
+      Latency       35.96ms    22.59ms      1.02s
+      Latency Distribution
+         50%    33.07ms
+         75%    40.06ms
+         90%    47.91ms
+         95%    54.66ms
+         99%    75.69ms
+      HTTP codes:
+        1xx - 0, 2xx - 333646, 3xx - 0, 4xx - 0, 5xx - 0
+        others - 0
+      Throughput:   144.79MB/s
+
+This shows surprisingly higher throughput and lower latency than tests
+against our own servers. Several avenues were explored to explain that
+disparity:
+
+ * jumbo frames? nope, both connections see packets larger than 1500
+   bytes (see the checks sketched after this list)
+ * protocol differences? nope, both go over IPv6 and (probably) HTTP/2
+   (in any case, neither goes over UDP, so not HTTP/3)
+ * different link speeds
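+
+If those first two checks need to be redone, here is a rough sketch,
+assuming the interface is called `eth0` and that `tcpdump` and a recent
+`curl` (7.50+) are available on the cache node:
+
+    # look for on-the-wire packets larger than 1500 bytes (jumbo frames)
+    sudo tcpdump -ni eth0 -c 10 'greater 1500'
+    # confirm the interface MTU
+    ip link show dev eth0 | grep -o 'mtu [0-9]*'
+    # check which HTTP version is actually negotiated with the blog
+    curl -sI -o /dev/null -w '%{http_version}\n' https://blog.torproject.org/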
+
+The last theory is currently the only one standing. Indeed, 144.79MB/s
+should not be possible on regular gigabit Ethernet (GigE), as it is
+actually *more* than 1000Mbit/s (1158.32Mbit/s). Sometimes the above
+benchmark even reports 152MB/s (1222Mbit/s), well beyond what a regular
+GigE link can provide.
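+
+As a sanity check on that arithmetic (assuming `bombardier` reports
+decimal megabytes, which the conversion above implies), the MB/s figure
+converts to Mbit/s like this:
+
+    # Mbit/s = MB/s * 8; compare with the nominal 1000Mbit/s of a GigE link
+    $ echo '144.79 * 8' | bc
+    1158.32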
+
 ## Alternatives considered
 
 Four alternatives were seriously considered:
-- 
GitLab