The main goal of this is to determine whether obfs4 bridges are being blocked due to bridge IP enumeration, or if there is something blockable about the obfs4 protocol.
These tests will use new, private (unpublished) obfs4 IP addresses that have not been used for censorship circumvention prior to these tests.
The outcome should be a script that users we reach out to in censored regions can run, from which we can collect metrics about their ability to connect as well as bandwidth measurements. Before we send out the script we should figure out:
Whether we have all necessary metrics on the bridge side to verify if obfs4 is working and whether it is being throttled
How we are going to collect the client-side measurement data
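One possible shape for the client-side collection is a script that emits one CSV line per test run, which the user can send back to us. This is only a sketch of the logging side; the field names (`site`, `bridge`, `connect_ok`, `connect_ms`, `mbps`) and the sample values are illustrative assumptions, not the actual bridgetest implementation:

```shell
#!/bin/sh
# Sketch: format one client-side measurement as a CSV line.
# Fields: UTC timestamp, site, bridge, connect_ok (0/1),
# connect time in ms, measured bandwidth in Mbit/s.
log_result() {
    printf '%s,%s,%s,%s,%s,%s\n' "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
        "$1" "$2" "$3" "$4" "$5"
}

# In a real script these values would come from attempting an obfs4
# connection and timing a download; here they are placeholders.
log_result probe1 bridgeA 1 850 4.2
```

A CSV-per-run format would let us aggregate results from many users with the same `makecsv`/`graph.R` style pipeline already used for the probe sites.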
Trac changes:
Summary: "Add any necessary metrics to verify if obfs4 is working or not" to "Write reachability tests to verify if obfs4 is working or not"
Description: "Ensure that we have all necessary metrics to verify if obfs4 is working for users. TODO: Determine what these metrics should be, where they should be added, methods for aggregating them and analyzing them." to the description above
Status: new to assigned
Type: defect to task
Owner: N/A to cohosh
[SITENAME] is an arbitrary identifier for the probe site that you have to choose.

Add to crontab to run hourly tests: `0 */1 * * * cd ~/kz && ./bridgetest.sh [SITENAME]`

Generate a CSV file from logs: `find log -name '*.log' | sort | ./makecsv > bridgetest.csv`

Make a graph: `Rscript graph.R bridgetest.csv`
And I've added a large (~100M) file download to check for throttling. This might be too large, but no matter the size I'd suggest running this test perhaps 4x a day as opposed to every hour to reduce load on the bridges and the probe sites. This commit adds the file download: https://github.com/cohosh/bridgetest/commit/dcb9daaf41c2898b714291d012e3b06449016ee5
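For a rough sense of the load involved, here is the arithmetic behind running the ~100 MB download 4x a day instead of hourly (per probe site, assuming the full file is fetched on every run):

```shell
# Daily transfer per probe site for a 100 MB test file.
hourly=$((100 * 24))    # hourly runs
four_daily=$((100 * 4)) # 4 runs per day
echo "hourly: ${hourly} MB/day, 4x daily: ${four_daily} MB/day"
```

So hourly runs would cost each bridge 2400 MB/day per probe site, versus 400 MB/day at 4x daily.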
Putting this in needs_review to move things along... one thing I can think of that we might want is more granular large file download information as opposed to just "the time it takes to download the entire file". We can of course get this from tcpdump if we can capture on the probe site.
> And I've added a large (~100M) file download to check for throttling.
Does this mean that there are multiple 100 MB pcaps being produced every day? That could be a lot of data to manage. Or are you not doing full packet capture for this part?
> > And I've added a large (~100M) file download to check for throttling.
>
> Does this mean that there are multiple 100 MB pcaps being produced every day? That could be a lot of data to manage. Or are you not doing full packet capture for this part?
I'm not planning on doing a full packet capture unless the overall results look suspicious, and then I will turn packet capture on to investigate more closely. Perhaps with a smaller file.
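If capture does get turned on, the pcap size can be kept manageable even with the large file, since for throttling analysis the packet timing matters more than the payload. A sketch of such an invocation (the interface name, port, and file names are placeholders for the actual probe setup, not something bridgetest does today):

```shell
# Keep only the first 96 bytes of each packet (-s 96), write rotating
# 100 MB capture files (-C 100), and keep at most 10 of them (-W 10).
# eth0 and port 443 are placeholders for the probe's interface and the
# bridge's obfs4 port.
tcpdump -i eth0 -s 96 -C 100 -W 10 -w obfs4probe.pcap 'tcp port 443'
```

Truncating to headers keeps timing and TCP behavior visible while shrinking a full-download capture considerably.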