Trac issues
https://gitlab.torproject.org/legacy/trac/-/issues

https://gitlab.torproject.org/legacy/trac/-/issues/1944
Set up a Torperf to a hidden service
Roger Dingledine, 2020-06-13T14:08:46Z

It would be great to get an ongoing torperf run that's accessing a hidden service, so we can track performance of those over time also.
The simple part here is setting up a hidden service that's probably not the performance bottleneck, and setting up a torperf install as usual.
The complex part here is that the Tor client will cache the hidden service descriptor, and maybe even the rendezvous circuit (in the case of the every-five-minute 50kb fetch), skewing our results.

https://gitlab.torproject.org/legacy/trac/-/issues/2543
Create graphs of #1919 torperfs
Mike Perry, 2020-06-13T17:49:45Z

It would be great if we had some sort of output of the 15 torperf runs that are running somewhere. Right now, afaik, all that data will just go into a hole until someone decides to dig it up.
In my ideal world, we'd have the ability to see all the graphs on https://metrics.torproject.org/performance.html for each one of these runs, right there live on the website.
If this is too much work to be done in a reasonable amount of time, I'd settle for having just the timing graphs for each of the 15 runs in a directory I can access somewhere.

Assignee: Mike Perry

https://gitlab.torproject.org/legacy/trac/-/issues/2554
Write script to reconstruct hidden service time components from controller events
Roger Dingledine, 2020-06-13T00:04:27Z

When we get #1944 going, we're going to start wondering what the breakdown of "time before connected cell" is.
Does torperf get its statistics of time breakdown just from the socks port, or does it learn things over the control port too, or what? We should figure out what time components we want to track, and then figure out how to export them so torperf can track them for posterity.
We may also find that we want to remember the circuits for the other components of the rendezvous, similar to the extension we're considering for #2551.

Milestone: Tor: 0.2.9.x-final; Assignee: David Goulet <dgoulet@torproject.org>

https://gitlab.torproject.org/legacy/trac/-/issues/2565
Redesign Python parts in Torperf
Karsten Loesing, 2020-06-13T01:39:12Z

We're stuffing more and more functionality into Torperf. See #2551 for outputting circuit build times and #2554 for adding hidden service time components.
I think it's time to consider a redesign of the Python parts. Having a Python script for writing controller events to disk, a Python script for influencing guard selection, and a cronjob for actually making the requests is already hackish. Adding circuit build times and hidden-service events stretches this even more.
We could have a single Python program that reads from a configuration file what to do, starts Tor, connects to its control port, periodically calls the C program to make requests, and collects all the data we want. This would also make it much easier to set up Torperf.
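That single-program design could start from a sketch like the following (section and option names here are invented for illustration; the real config format is still to be decided):

```python
# Hypothetical sketch of the single-program entry point: one config
# file describes all Torperf runs; each section would become one tor
# instance plus a request schedule. All option names are invented.
import configparser

DEFAULTS = {"filesize": "51200", "interval": "300",
            "socks_port": "9050", "control_port": "9051"}

def parse_runs(config_text):
    """Return one dict per configured Torperf run."""
    cp = configparser.ConfigParser(defaults=DEFAULTS)
    cp.read_string(config_text)
    runs = []
    for name in cp.sections():
        sec = cp[name]
        runs.append({
            "name": name,
            "filesize": sec.getint("filesize"),    # bytes to fetch
            "interval": sec.getint("interval"),    # seconds between requests
            "socks_port": sec.getint("socks_port"),
            "control_port": sec.getint("control_port"),
        })
    return runs
```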
Sebastian, Mike: Does this make sense?

https://gitlab.torproject.org/legacy/trac/-/issues/2672
Fix bugs/issues with consolidate_stats
Mike Perry, 2020-06-13T00:07:00Z

Quoting karsten:
I think there's a bug in this script: Whenever we skip a line in a .data file, because that line represents a failure, we might get out of sync with the .extradata file and stop writing any data to .mergedata. You should be able to reproduce this bug with the Torperf 50KB run (https://metrics.torproject.org/data/torperf-50kb.data https://metrics.torproject.org/data/torperf-50kb.extradata). The last line in the result has CIRC_ID=4384. If I delete the line in .extradata starting with CIRC_ID=4397, the result has more entries than before. I think the fix is to distinguish between absolute slack of up to 1 second and a time difference of, say, more than 1 minute.
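A sketch of the timestamp-based matching described above, with hypothetical (timestamp, payload) tuples standing in for parsed .data and .extradata lines:

```python
SLACK = 1.0  # max clock difference (seconds) for a .data/.extradata match

def merge(data_records, extra_records, slack=SLACK):
    """Pair .data records with .extradata records by timestamp instead
    of by position, so a skipped .data line can no longer shift every
    following pair by one. Both inputs are sorted (timestamp, payload)
    tuples; the record shapes here are invented for illustration."""
    merged, i = [], 0
    for ts, payload in data_records:
        # Skip .extradata records that belong to earlier (skipped or
        # failed) measurements instead of silently pairing them.
        while i < len(extra_records) and extra_records[i][0] < ts - slack:
            i += 1
        if i < len(extra_records) and abs(extra_records[i][0] - ts) <= slack:
            merged.append((payload, extra_records[i][1]))
            i += 1
        else:
            merged.append((payload, None))  # no matching .extradata record
    return merged
```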
Also, is there a way to include timed out runs in the .mergedata, too? We do include failures, so by including timeouts, we wouldn't have to parse the original files for timeouts/failures anymore. This could be a new ticket, I'd just like to know whether it's possible.

Assignee: Mike Perry

https://gitlab.torproject.org/legacy/trac/-/issues/2687
Write Python version of filter.R to parse Torperf's new .mergedata format
Karsten Loesing, 2020-06-13T00:17:35Z

Once we have completed the improved `consolidate_stats.py` script in #2672, we should update our `filter.R` script to process the new `.mergedata` format instead of `.data` files. We could start with parsing the 20 fields in `.mergedata` coming from `.data` files. Once we have that under control, we might extend `filter.R` to parse new fields coming from `.extradata` files. See [my comment to #2618](https://trac.torproject.org/projects/tor/ticket/2618#comment:8) for an implementation idea.

https://gitlab.torproject.org/legacy/trac/-/issues/2690
Compare circuit build timeouts to Torperf completion times
Karsten Loesing, 2012-03-13T15:38:34Z

This is a follow-up ticket to #2586 which was part of our March 5 Torperf iteration.
I'm leaving priority at normal, because we might better answer this question by playing with the cbtquantile consensus parameter. But we still want to answer this question sometime.

https://gitlab.torproject.org/legacy/trac/-/issues/2766
Enforce using a fresh circuit for a new Torperf run
Karsten Loesing, 2017-04-28T15:41:33Z

Our current approach to enforcing a fresh circuit for each new Torperf run is to set `MaxCircuitDirtiness` to something smaller than the interval at which we start new Torperf runs. But whenever we forget to update this setting, Tor won't really tell us that something's wrong. We only learn that when merging `.data` and `.extradata` files.
We should switch to sending a NEWNYM signal and stop messing with `MaxCircuitDirtiness`. Below is a patch that Mike wrote, but that doesn't really fit in entrycons.py, because not all Torperf runs use that script. Maybe we should apply this patch when we merge the various Python scripts in #2565?
```
commit 6e79ca00fb923d90e63bf10f0e3913ef6641eb5c
Author: Mike Perry <mikeperry-git@fscked.org>
Date:   Wed Mar 16 04:46:09 2011 -0700

    Fix entrycons.py to send a NEWNYM signal after every stream.

diff --git a/entrycons.py b/entrycons.py
index d1c8f34..41678fd 100755
--- a/entrycons.py
+++ b/entrycons.py
@@ -16,6 +16,11 @@ class EntryTracker(TorCtl.ConsensusTracker):
     self.speed = speed
     self.set_entries()

+  def stream_status_event(self, event):
+    # Every time a stream closes, send a NEWNYM signal
+    if event.status == "CLOSED" or event.status == "FAILED":
+      self.c.send_signal("NEWNYM")
+
   def new_consensus_event(self, n):
     TorCtl.ConsensusTracker.new_consensus_event(self, n)
     TorUtil.plog("INFO", "New consensus arrived. Rejoice!")
@@ -120,7 +125,7 @@ def main():
   conn = TorCtl.connect(HOST, port)
-  conn.set_events(["NEWCONSENSUS", "NEWDESC", "NS", "GUARD"])
+  conn.set_events(["STREAM", "NEWCONSENSUS", "NEWDESC", "NS", "GUARD"])
   conn.set_option("StrictEntryNodes", "1")
   conn.set_option("UseEntryNodes", "1")
```

https://gitlab.torproject.org/legacy/trac/-/issues/2769
Experiments and Data for Tor Performance Tech Report
Mike Perry, 2020-06-13T17:46:49Z

Karsten and I intend to write a tech report covering the use of TorPerf and other metrics to measure tor network performance, and to determine whether certain performance tweaks result in improved or degraded performance. This is the parent ticket for this effort.
[[TicketQuery(parent=#2769,format=table,col=owner|priority|summary|points|actualpoints,order=priority)]]

Assignee: Mike Perry

https://gitlab.torproject.org/legacy/trac/-/issues/2770
Perform experiments to determine optimal circuit build timeout cutoff
Mike Perry, 2011-04-05T00:54:35Z

We need to perform a series of experiments to determine the optimal cbtquantile cutoff value that we're comfortable with using on the real network. Something like 50, 60, 70, and 80. We then need to observe the behavior of the torperf clients.

Assignee: Mike Perry

https://gitlab.torproject.org/legacy/trac/-/issues/2771
Reproduce CircWindow and EWMA experiments w/ new torperfs
Mike Perry, 2012-03-08T19:27:50Z

We should reproduce the circwindow and ewma experiments with the new torperf runs. Or, we should make sure we have good results lying around from when we last tested this.

Assignee: Mike Perry

https://gitlab.torproject.org/legacy/trac/-/issues/2774
Write down an outline of the performance tech report
Karsten Loesing, 2011-04-01T23:24:00Z

#2769 lists the experiments and evaluations that we want to run and put into the performance tech report. At the same time we should start writing the tech report.
This ticket is about writing a rough draft of the report containing the motivation, structure, and maybe the expected conclusions. We should use this draft to collect our ideas and early results in a single document.
Turning the draft into something readable will be a new ticket.

Assignee: Mike Perry

https://gitlab.torproject.org/legacy/trac/-/issues/2859
Transform perf tech report outline into latex draft
Mike Perry, 2011-04-18T13:23:31Z

The submission deadline is Apr 25th, so we should have a passable latex draft by the end of this iteration. We also should try to get in a few more experiments, but I will write a new ticket for that.

Assignee: Mike Perry

https://gitlab.torproject.org/legacy/trac/-/issues/2868
Fix weird torperf results due to bad guard choices
Mike Perry, 2017-12-06T04:45:21Z

Entrycons.py crashed, causing us to get bad results for the cbt experiments. It turns out that we weren't properly waiting for all descriptors to arrive. We need to delay setting guards until we have something like 99% of all descriptors in the current consensus.

Assignee: Mike Perry

https://gitlab.torproject.org/legacy/trac/-/issues/2958
Work with Karsten on TorPerf Experiments
Mike Perry, 2011-05-03T06:37:19Z

This ticket is to catalog my portion of the work with karsten on #2769. I plan to help with a few experiments:
- Determine new guard cutoff: 2pts
- Determine optimal CircWindow: 6pts
- Throttle clients network-wide: 4pts
- Disable EWMA: 1pt
- Extra luck point: 1pt

Assignee: Mike Perry

https://gitlab.torproject.org/legacy/trac/-/issues/2959
Get TorPerf Tech Report into submission-shape
Mike Perry, 2013-12-12T14:38:17Z

We have more writing and back and forth to go on the Torperf draft.

Assignee: Mike Perry

https://gitlab.torproject.org/legacy/trac/-/issues/3036
Tweak Torperf's .mergedata format and make it the new default
Karsten Loesing, 2020-06-13T01:54:02Z

Right now, we have three Torperf data formats: the .data files containing the output of trivsocks-client.c, the .extradata files containing the output of the Python script attached to Tor's control port, and the .mergedata files containing the consolidation of the two formats.
I'd like to tweak the .mergedata format to make it easier to process, and I want to make it the new default Torperf output format.
Here's what I'd like to change:
- Every data point in the new .mergedata format should contain the meta data that is necessary to generate Torperf graphs. This meta data contains the file size, the source (moria, siv, ferrinii, etc.), and possibly a custom guard choice and/or custom circuit build timeout. I could imagine adding these meta data as `FILESIZE=51200, SOURCE=ferrinii, GUARDS=slowratio, CBT=75`.
One motivation for this change is to remove the dependency on the filename, which is how we currently encode these meta data, e.g., `slowratio75cbt-50kb.mergedata`.
Also, I'd like to be able to concatenate multiple Torperf files and have a single file for a) the standard Torperf runs of a given month and b) the Torperf runs from a given experiment. This makes it easier for people to download and process our Torperf data.
- We should combine the SEC and USEC fields and simply write timestamps as floats with a precision of, say, two decimal places, like we do in `LAUNCH=1302523261.18`. For example, `STARTSEC=1302523501 STARTUSEC=703442` would become `START=1302523501.70`. This saves a lot of bytes and maybe even a few CPU cycles when parsing the single fields of a data point.
- When measuring hidden service performance as in #1944, we should add custom fields for the various hidden service substeps, e.g., `START_RENDCIRC`, `GOT_INTROCIRC`, etc.
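The SEC/USEC merge in the second point above is straightforward; a sketch:

```python
def combine_timestamp(sec, usec):
    """Combine separate SEC/USEC fields into a single timestamp with
    two decimal places, so that e.g. STARTSEC=1302523501
    STARTUSEC=703442 becomes START=1302523501.70."""
    return "%.2f" % (sec + usec / 1e6)
```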
What do you think? Do these changes make sense? If so, here are the next steps:
- The first step in this endeavor is to wait for the results of #2687 where we try to implement an efficient .mergedata parser in R.
- The next step would be to change `consolidate_stats.py` to add the new meta data fields and combine SEC and USEC fields for us.
- As soon as we have the new .mergedata format, I'll update metrics-db to aggregate the various Torperf files and prepare them for the metrics website. I'll also update metrics-web to parse the .mergedata format instead of the .data format. And of course, I'll update the [Overview of Statistical Data in the Tor Network](https://metrics.torproject.org/papers/data-2011-03-14.pdf) to describe the new format.
- Once we start working on #2565, we might want to dump the .data and .extradata formats entirely and have Torperf only output the .mergedata format.

Assignee: Karsten Loesing

https://gitlab.torproject.org/legacy/trac/-/issues/3089
Graph data from perconnbwrate throttling experiments
Mike Perry, 2011-05-18T11:04:04Z

I have been experimenting with perconnbwrate and we've seen some results in terms of reduced load on Olaf's servers. I need to see if our slow guard CBT runs have improved, follow up with olaf, and maybe run one more test.

Assignee: Mike Perry

https://gitlab.torproject.org/legacy/trac/-/issues/3279
Write a Torperf skeleton in Python that parses Torperf's configuration files
Karsten Loesing, 2020-06-13T00:17:32Z

In our attempt to redesign Torperf, a first step could be to write a minimal Python module that parses Torperf's new configuration file format described [here](https://trac.torproject.org/projects/tor/ticket/2565#comment:9). This Torperf module wouldn't perform any requests, but only print out the configuration for each Torperf and exit.
The new code should go into a branch on top of current master. We'll keep it in that branch until it's a full replacement of the current Torperf scripts.
Assigning to tomb upon request.

Assignee: Thomas Benjamin

https://gitlab.torproject.org/legacy/trac/-/issues/3280
Make the new Python Torperf write torrc's
Karsten Loesing, 2020-06-13T00:17:33Z

Once #3279 is complete, we should extend the new Torperf to interact with its tor processes, except talking to the control port which should come later. In this ticket, Torperf should create any missing Tor data directories and torrc files under the path given in `DataDirectory` [here](https://trac.torproject.org/projects/tor/ticket/2565#comment:9). Some logic is necessary to auto-generate data directory names, increment port numbers, etc. Also, there are some torrc config options that don't show up in Torperf's config file, but that need to go into the torrc files, e.g., `MaxCircuitDirtiness 1 minute`, `UseEntryGuards 0`, and `RunAsDaemon 1`. See `measurements-HOWTO` for details.
Assigning to tomb upon request.

Assignee: Thomas Benjamin

https://gitlab.torproject.org/legacy/trac/-/issues/3281
Enable the new Python Torperf to start and stop tor processes
Karsten Loesing, 2020-06-13T00:17:34Z

After #3280 is done, the new Torperf should also be able to start and stop its tor processes. Starting the processes should happen automatically after reading the configuration file. Further, the new Torperf should have a config option or switch to stop all running tor processes. Torperf could look at the tor.pid files in the directories under `DataDirectory` and send these processes a kill signal.
Assigning to tomb upon request.

Assignee: Thomas Benjamin

https://gitlab.torproject.org/legacy/trac/-/issues/3282
Extend the new Python Torperf to talk to tor's control port
Karsten Loesing, 2020-06-13T00:17:34Z

After finishing #3281, we should extend Torperf to talk to the control ports of the tor processes it started before. Torperf should use TorCtl for this task similar to `extra_stats.py` and `entrycons.py`. In fact, we can probably re-use most of the code from these two files.
Connecting to tor's control port has two purposes that have to be performed by the new Torperf: First, Torperf may have to influence tor's guard node selection algorithm depending on its configuration, and second, Torperf will have to process control port events and write them to a local .extradata file.
Assigning to tomb upon request.

Assignee: Thomas Benjamin

https://gitlab.torproject.org/legacy/trac/-/issues/3283
Make the new Torperf call trivsocks-client to make requests and time out runs that take too long
Karsten Loesing, 2020-06-13T00:17:35Z

Once #3282 is done, Torperf should call trivsocks-client, which makes requests over Tor and which was previously invoked by cron. The output should be appended to local .data files. For debugging purposes, it's okay to use the `timeout` script first. But eventually, Torperf should be able to time out runs that take too long itself.
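The built-in timeout could be sketched like this (the return convention is invented; the argv would be the trivsocks-client command line):

```python
import subprocess

def timed_request(argv, timeout_seconds):
    """Run one measurement command and kill it if it exceeds the
    per-run timeout, instead of relying on the external `timeout`
    wrapper script. Returns (returncode, stdout, timed_out)."""
    try:
        completed = subprocess.run(argv, capture_output=True,
                                   timeout=timeout_seconds)
        return completed.returncode, completed.stdout, False
    except subprocess.TimeoutExpired:
        # Record the run as a timeout rather than a failure.
        return None, b"", True
```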
Assigning to tomb upon request.

Assignee: Thomas Benjamin

https://gitlab.torproject.org/legacy/trac/-/issues/3284
Make the new Torperf generate the new .mergedata output directly
Karsten Loesing, 2013-12-12T14:43:28Z

Once we have finished #3283, we should get rid of the crazy separation of .data and .extradata files and just output a single .mergedata file ourselves. This output format could be like the one described in #3036. Implementing this ticket means rewriting most of the logic in `consolidate_stats.py`, but the merging takes place live while measurements are running as opposed to offline. The output of all Torperf runs can go into the same .mergedata file; results can be attributed to a specific Torperf run by the contained meta data keys like FILESIZE, SOURCE, GUARDS, and CBT.
Once this ticket is implemented, we should consider closing #3036, too.
Assigning to tomb upon request.

Assignee: Thomas Benjamin

https://gitlab.torproject.org/legacy/trac/-/issues/3285
Write Python version of filter.R to parse Torperf's new .mergedata format
Karsten Loesing, 2013-12-12T14:43:41Z

This is the successor of #2687 where we first tried to write an R script that parses the new .mergedata format. We found that this is not trivial in R, because R is slow for this kind of thing. This ticket is about writing a new Python script to convert .mergedata files into CSV which can then be processed by R.
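A sketch of such a converter, assuming .mergedata lines are space-separated KEY=VALUE pairs as in the #3036 examples:

```python
import csv
import io

def mergedata_to_csv(lines, fields):
    """Convert .mergedata lines (assumed to be space-separated
    KEY=VALUE pairs) into CSV with the requested columns, filling
    missing keys with NA so R can read the result directly."""
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(fields)
    for line in lines:
        record = dict(pair.split("=", 1) for pair in line.split())
        writer.writerow([record.get(f, "NA") for f in fields])
    return out.getvalue()
```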
Assigning to tomb upon request.

Assignee: Thomas Benjamin

https://gitlab.torproject.org/legacy/trac/-/issues/3405
torperf svn still looks active
Roger Dingledine, 2012-03-07T09:13:07Z

https://svn.torproject.org/svn/torperf/trunk/ looks like it's the real torperf repository. It's only if you happen to read the README that you learn otherwise.
I think we should branch it (or tag it -- whichever it is we're supposed to do in svn) as it was before the move to git, and then drop everything from trunk but the README.

Assignee: Karsten Loesing

https://gitlab.torproject.org/legacy/trac/-/issues/3830
upgrade tor versions on deployed torperfs
Roger Dingledine, 2020-06-13T00:28:23Z

moria's torperf is running 0.2.2.22-alpha. I assume the other torperfs are similarly outdated.
Is there any reason to stick with that version, or should we upgrade them?
Should we consider some sort of automated upgrade plan, so we don't get into this position again later?

Assignee: Linus Nordberg <linus@torproject.org>

https://gitlab.torproject.org/legacy/trac/-/issues/3831
siv torperf is down since june
Roger Dingledine, 2020-06-13T17:49:58Z

https://metrics.torproject.org/performance.html?graph=torperf&start=2011-05-29&end=2011-08-27&source=siv&filesize=50kb&dpi=72#torperf
What happened? Perhaps it crashed? See also #3830.

Assignee: Karsten Loesing

https://gitlab.torproject.org/legacy/trac/-/issues/4695
scumbag tor
cypherpunks, 2011-12-11T15:48:09Z

y u no make tor fast?
y u make tor slow?
https://metrics.torproject.org/performance.html makes keanu sad and forever alone
most interesting guy in the world says, 'I don't always use tor, but when I do, I plan to lose an entire day to browse one site'
u mad bro?

Assignee: Karsten Loesing

https://gitlab.torproject.org/legacy/trac/-/issues/5632
Make Torperf log timestamps after every 10% of received bytes
Karsten Loesing, 2012-04-19T08:13:17Z

Rob suggests adding timestamps between receiving the first byte and completing the request. He suggests timestamps for every completed 10% or 20% of received bytes.
We can easily add timestamps to Torperf's .data files. I wrote a patch to trivsocks-client.c that adds timestamps for 10, 20, ..., 90% of read bytes. I'll push the new branch in a minute when this ticket has a number.
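The logic of that patch, sketched here in Python (the actual patch is to trivsocks-client.c, in C):

```python
import time

def read_with_decile_timestamps(total_bytes, chunks):
    """Record a timestamp the first time 10%, 20%, ..., 90% of the
    expected bytes have been read. `chunks` stands in for successive
    socket reads; returns a dict mapping decile -> timestamp."""
    marks = {}
    received = 0
    for chunk in chunks:
        received += len(chunk)
        for decile in range(10, 100, 10):
            if decile not in marks and received >= total_bytes * decile // 100:
                marks[decile] = time.time()
    return marks
```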
Once this patch is reviewed and merged, we'll want to see how metrics-db and metrics-web can handle the extended .data files. I don't expect any major problems.

Assignee: Karsten Loesing

https://gitlab.torproject.org/legacy/trac/-/issues/7010
Add upload measurement to torperf/trivsocks-client.c
Jacob Appelbaum, 2017-04-28T15:41:33Z

We should consider a way to test uploading of data over Tor.

https://gitlab.torproject.org/legacy/trac/-/issues/7168
Get Torperf results to be more realistic
Karsten Loesing, 2020-06-13T18:13:04Z

We should try to get Torperf results to be more realistic. Web pages are more like 320KB on average and contain multiple components, so our Torperf results are artificial.
We might need to do the redesign of Python parts (#2565) before extending Torperf towards a more complex download model. Setting up Torperf is already complex enough, and it's getting worse with every hack.

https://gitlab.torproject.org/legacy/trac/-/issues/7516
Track down Will Scott's torperf-like scripts, make them public if needed, and do a trial deployment somewhere
Roger Dingledine, 2020-06-13T01:45:33Z

Will Scott says he's
```
using the code to actively look at the performance impact of
proxies on web page load time. Essentially it's a wrapper around
http://phantomjs.org/ with some aggregation and reporting added.
```
He adds that there are two design things that ought to get figured out
```
Where the monitoring should live. I have servers I can use to get a system
working at UW. At some point in a few years I'll graduate, and my
experience is that things which get left behind decay pretty fast, so I'm
somewhat hesitant to go that route.
How to get a stable / meaningful measurements. We need enough aggregation
across both the circuit and the destination domain to dampen individual
server issues and be able to say something about tor as a whole. Are there
other factors I'm missing that aggregation + setting up a new circuit
before each measurement won't be able to overcome?
```

https://gitlab.torproject.org/legacy/trac/-/issues/7517
Devise and deploy the canonical median web page
Roger Dingledine, 2017-04-28T15:41:33Z

Drew Dean and many others tell us that the typical web page these days is 320KBytes with multiple components.
Great -- why do they say this? How many is 'multiple'?
Once we *do* have some answer, what's the rate of change over time of this "average" web page? (I.e., will anything we pick now be substantially wrong in six months, or will it last us for a few years?)
We should take some guesses, based on whatever data we think appropriate, and set up this median web page. Then we can use it as the target for the #7516 scripts once they're online.

https://gitlab.torproject.org/legacy/trac/-/issues/8662
Make Torperf log circuit failures
Karsten Loesing, 2017-04-28T15:41:33Z

Mike, Rob, and I briefly discussed at the dev meeting that we could make Torperf log circuit failures by adding REASON and REMOTE_REASON from CIRC events to Torperf's output whenever a request fails.
I can see how Torperf learns about circuit failures, but I'm yet unsure how to include this information in its output. Torperf outputs exactly one line per measurement. But if a circuit fails before attaching Torperf's measurement stream to it, we'll never learn about that circuit in the context of this measurement. And I assume this isn't just about circuits failing after they got a stream attached, right?
Here are a few options:
* We don't include circuit failure information in Torperf's results, but log CIRC events locally to a separate file. These CIRC event log files won't be archived or made publicly available. This may work if we're only looking for changes in CIRC failures within the next few weeks or so.
* We add a new field to Torperf's output collecting all CIRC failures since the last measurement. This is somewhat similar to the QUANTILE and TIMEOUT fields that contain values from the last received BUILDTIMEOUT_SET event.
* We give up the one-output-line-per-measurement format and add further information to Torperf's output. This is a rather big data format change though. If we were to do this, we should also move QUANTILE and TIMEOUT to their own lines.
Thoughts?
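For what it's worth, the second option could encode the collected failures like this (the field name and encoding are invented for illustration):

```python
def circ_failures_field(failures):
    """Collapse all CIRC failures seen since the last measurement into
    one output field. `failures` is a list of (circ_id, reason,
    remote_reason) tuples taken from CIRC events; remote_reason may be
    None when the failure was local."""
    parts = ["%s:%s/%s" % (circ_id, reason, remote_reason or "")
             for circ_id, reason, remote_reason in failures]
    return "CIRC_FAILURES=" + ",".join(parts)
```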