arti-bench: add concurrency, write benchmark results out to JSON
We now conduct benchmark tests with multiple concurrent streams by
default; the number of streams is configurable by passing -p to
arti-bench.
Currently, these results just get "flattened" for the purposes of
statistical analysis (that is, results_raw contains each connection's
timing summary across all benchmark runs). This might be something we
wish to change in future.
The stats summary now also records "best" and "worst" values for each metric, to give a rough idea of the range of values encountered.
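The flattening and the best/worst tracking described above can be
sketched roughly as follows. This is an illustrative sketch only: the
function and variable names are hypothetical, not arti-bench's actual
API.

```rust
/// Hypothetical summary of flattened benchmark timings (units arbitrary).
/// Per-connection timings from every run are pooled into one sample set,
/// then "best" (minimum) and "worst" (maximum) are taken over the pool.
fn summarize(runs: &[Vec<f64>]) -> (f64, f64, f64) {
    // Flatten: each connection's timing from each run joins a single pool.
    let flat: Vec<f64> = runs.iter().flatten().copied().collect();
    let best = flat.iter().copied().fold(f64::INFINITY, f64::min);
    let worst = flat.iter().copied().fold(f64::NEG_INFINITY, f64::max);
    let mean = flat.iter().sum::<f64>() / flat.len() as f64;
    (best, worst, mean)
}

fn main() {
    // Two benchmark runs, each with two concurrent connections.
    let runs = vec![vec![12.0, 15.0], vec![11.0, 20.0]];
    let (best, worst, mean) = summarize(&runs);
    println!("best={best} worst={worst} mean={mean}");
}
```

Note that because the pool mixes all runs together, "best" and "worst"
describe individual connections rather than whole runs, which is what
gives a rough idea of the range encountered.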
Additionally, we now support writing the benchmark results out to a JSON file. A future commit may integrate this with CI, so that we have benchmark results for every commit as a build artefact.
(Some documentation was also fixed.)
Part of #292.