Tor issues
https://gitlab.torproject.org/tpo/core/tor/-/issues
Feed updated: 2020-07-30T00:58:13Z


#27907: PrivCount config and version spec
https://gitlab.torproject.org/tpo/core/tor/-/issues/27907
Author: teor | Updated: 2020-07-30T00:58:13Z

Specify how:
* PrivCount is configured
* we transition between PrivCount versions


#27906: PrivCount noise specification
https://gitlab.torproject.org/tpo/core/tor/-/issues/27906
Author: teor | Updated: 2020-07-30T00:58:13Z

Let's write a spec for PrivCount noise:
* how noise is determined for each statistic
* how much noise each relay adds
* how to interpret the final result
* how to avoid obvious pitfalls


#23061: crypto_rand_double() should produce all possible outputs on platforms with 32-bit int
https://gitlab.torproject.org/tpo/core/tor/-/issues/23061
Author: teor | Updated: 2021-08-23T15:10:24Z

On 32-bit platforms, crypto_rand_double() only produces 1 in every 2 million possible values between 0 and 1.
This happens because:
* crypto_rand_double() divides a random unsigned int by UINT_MAX
* an unsigned int on 32-bit platforms is 32 bits
* the mantissa of a double is 53 bits
So crypto_rand_double() doesn't fill the lower 21 bits with random values.
This makes the rep_hist_format_hs_stats() noise more predictable on 32-bit platforms.
This fix shouldn't affect the unit tests, because they pass on 64-bit.


#22422: Add noise to PaddingStatistics
https://gitlab.torproject.org/tpo/core/tor/-/issues/22422
Author: teor | Updated: 2020-07-30T01:02:31Z

It's safer to publish statistics if they have noise added.
Even though we round the totals, that's not enough to ensure privacy for a certain amount of user activity without added noise.
We need to fix this before 0.3.1 becomes stable.


#7509: Publish and use circuit success rates in extrainfo descriptors
https://gitlab.torproject.org/tpo/core/tor/-/issues/7509
Author: Mike Perry | Updated: 2020-07-30T01:00:42Z

arma suggests we publish CREATE cell success rates in the extrainfo descriptors. We want to use these values to measure the actual rate of client circuit success network-wide, given our current path selection weights.
In this simple case, a graph traversal computation would do the trick, but ideally we want to do it in a way that is liar-resistant. Does this mean we should publish information on our observed peers' rates of CREATE success instead?
Perhaps this can be modeled as an eigenvalue problem, a-la eigenspeed (legacy/trac#5464). Since we're computing only a single scalar value for the whole network at the end as opposed to a vector of weights, there might be a simplification we could deploy that reduces the amount of stuff we need to shove into extrainfo.
Either way, an extrainfo-based approach may end up being simpler to implement than a centralized scanner for reliably measuring circuit failure (see legacy/trac#7281).
I'm not sure I trust a fully self-reported scheme more without some kind of liar resistance, but it might end up that doing the graph traversal already bakes in as much liar resistance as you'd get from having each node report on its peers. It might even be possible to prove this, but something tells me empirical simulation is as close as we're going to get.