Anti-censorship issues: https://gitlab.torproject.org/groups/tpo/anti-censorship/-/issues

**Can get stuck sometimes** (WofWca, 2023-02-04)
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake-webext/-/issues/69

Error/timeout handling of `ProxyPair` and related stuff looks poor to me and I think it needs to be revisited. Namely:
* `flush` checks `webrtcIsReady()` before sending a message to the client. If it's `false` and `r2cSchedule` has messages, it will [`setTimeout` to `flush()` again](https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake-webext/-/blob/724281b06a122506f17a472ec20e5a34f02439ae/proxypair.js#L239-241). This can make an infinite loop (see [comment](https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake-webext/-/issues/30#note_2821668))
* [`peerConnOpen`](https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake-webext/-/blob/724281b06a122506f17a472ec20e5a34f02439ae/proxypair.js#L259-261) checks if `this.pc.connectionState !== 'closed'`, but state can also be `'failed'` and `'disconnected'`.
* `channel.onerror` is not handled (I suppose a timeout would run anyway, but then it looks like things need to be renamed or something).
* In [`close()`](https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake-webext/-/blob/724281b06a122506f17a472ec20e5a34f02439ae/proxypair.js#L200-208) we check for things like `peerConnOpen()` before doing `pc.close()`, but its state could still be `'new'` or `'connecting'`, so it won't get closed in that case.
* When creating `ProxyPair`, we [create a timeout](https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake-webext/-/blob/a8b7508ab0587276faec1a0290732c4bea8c5362/snowflake.js#L90-92) that's supposed to run if we fail to create a connection. If we succeed, then [another timeout](https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake-webext/-/blob/4a178c81f02fe6d1bb4804ffefcac37d893e1942/proxypair.js#L163-167) is supposed to take over. But it's not obvious and it looks very fragile, like it can be broken by an unrelated change.
* If the connection gets closed within `config.datachannelTimeout` (20 s) of being opened, the `datachannelTimeout` callback still gets executed.
* There are listeners (e.g. in `proxypair.js`) like `.onopen` and `.ondatachannel` that don't take into account the fact that they can be fired several times.
A suggestion for the `ProxyPair.close()` issues: only call `close()` in one place in the program (perhaps outside the `ProxyPair` class). `ProxyPair.receiveOffer` should return a `Promise` that resolves when we have started serving the client successfully, and rejects otherwise. The resolve value of that `Promise` should be another `Promise` that is fulfilled when we have finished serving the client.
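A rough sketch of that suggestion, with hypothetical names (`connection`, `onOpen`, `onClose`, and `onFail` stand in for the real WebRTC events; this is not the actual proxypair.js code):

```javascript
// `receiveOffer` resolves once serving has started; its resolution value
// is a second promise that settles when serving has finished. Callers can
// then do cleanup (the single close() call site) in exactly one place.
function receiveOffer(connection) {
  return new Promise((resolveStarted, rejectStarted) => {
    connection.onOpen = () => {
      const finished = new Promise((resolveFinished) => {
        connection.onClose = resolveFinished;
      });
      resolveStarted(finished);
    };
    connection.onFail = rejectStarted;
  });
}
```

A caller would `await receiveOffer(conn)` to learn that serving started, then `await` the returned promise to learn that serving ended, and close resources only at that one point.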
Related: !54, #19

**perf: proxy: don't wait for WebRTC to establish before connecting to server** (WofWca, 2022-11-14)
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40228

Is there a benefit in this waiting (other than when WebRTC fails and we spare doing the connection for nothing)?
We can start connecting as soon as we get the client's offer. This can make bootstrapping a little faster. Same goes for the extension.
* https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/blob/97dea533da7b6b3b2b1dfbffe7dca3a8350fab0b/proxy/lib/snowflake.go#L328
* https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake-webext/-/blob/44bb28a15a4ebb5193e5d72b84d9259de7ea633d/proxypair.js#L140-142

**perf: reuse WebRTC certificates between connections** (WofWca, 2022-11-15)
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake-webext/-/issues/72

Generating certificates takes a while, and by default (at least in browsers) they're generated for each new `RTCPeerConnection`. In Firefox, generating 1000 certificates takes 4 seconds (4 ms per certificate) and 100% of one CPU core for me:
```js
(async () => {
  const promises = [];
  for (let i = 0; i < 1000; i++) {
    promises.push(RTCPeerConnection.generateCertificate({ name: "ECDSA", namedCurve: "P-256" }));
  }
  await Promise.all(promises);
  console.log('done');
})()
```
I don't think this affects bootstrapping performance much, since in a good implementation they're generated in parallel. It's only a matter of not hogging the device's resources.
Applies to the web extension as well.
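One possible mitigation, sketched below, is to generate a certificate once and reuse it for every subsequent `RTCPeerConnection` via the standard `certificates` constructor option. The `makeCertificateProvider` helper and its caching policy are hypothetical, not the extension's actual code; the generator function is injected so the caching logic itself can run outside a browser.

```javascript
// Hypothetical caching helper: in a browser, `generate` would be
// RTCPeerConnection.generateCertificate.bind(RTCPeerConnection).
function makeCertificateProvider(generate) {
  let cached = null;
  return () => {
    // Generate at most once; later calls reuse the same promise.
    if (cached === null) {
      cached = generate({ name: "ECDSA", namedCurve: "P-256" });
    }
    return cached;
  };
}

// Browser usage (sketch):
// const getCert = makeCertificateProvider(
//   RTCPeerConnection.generateCertificate.bind(RTCPeerConnection));
// const pc = new RTCPeerConnection({ certificates: [await getCert()] });
```

Note that real `RTCCertificate` objects expire (see their `expires` attribute), so a long-lived proxy would also need to regenerate before expiry.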
Not sure how it affects privacy and security, but I don't think it should be a problem, at least for proxies.

**Add an option to disable the icon badge** (LaughingMan, 2022-11-15)
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake-webext/-/issues/73

The recently committed https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake-webext/-/commit/b4743eb1c7c48019411e2c26b4e6e31ded836d66 added an icon badge showing the number of clients. There are a number of extensions showing counters in this way, so nothing unusual there. What is unusual is that Snowflake is the only extension where I've been unable to find a setting to turn the counter off. I hate such badges with a passion, so that's a problem.
Would you kindly add a setting for that?

**Signaling through TURN** (WofWca, 2023-08-24)
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40234

This one is an epic.

I was thinking about https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/22945#note_2823413 and #40164 and came up with an interesting idea.
How about we do signaling through a WebRTC peer connection itself? In order to avoid [leaking the peers' private data](https://w3c.github.io/webrtc-pc/#revealing-ip-addresses), let's establish the peer connection through a TURN relay initially (with the help of `iceTransportPolicy: "relay"` WebRTC option), then set `iceTransportPolicy` to `"all"` (enabling STUN and true P2P) and `restartIce()` and continue signaling (ICE trickling).
Where do we get a TURN server, you might ask? Let's host it along with the broker, I say. Of course we'll probably need some gatekeeping for it (like limiting bandwidth, connection duration, only allowing peers that have communicated with the broker, rotating passwords) so that it doesn't get overloaded by outsiders. Conveniently, [Pion also offers a powerful TURN library](https://github.com/pion/turn) ([example](https://github.com/pion/turn/blob/v2.0.8/examples/turn-server/simple/main.go)).
Biggest problem - looks like the client has to tunnel the TURN traffic through a domain-fronting HTTPS (WSS?) tunnel (or some other censorship-resistant thing (#25594 )?) because the TURN server might be blocked. I'm not sure how hard it is to achieve, but here's [an example of traffic manipulation in Pion](https://github.com/pion/webrtc/blob/v3.1.47/examples/ice-single-port/main.go), so I guess it shouldn't be super hard.
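Concretely, the relay-first handshake could be driven by two configurations for the same `RTCPeerConnection`, switching `iceTransportPolicy` from `"relay"` to `"all"` before restarting ICE. This is a hedged sketch of the standard WebRTC API; the TURN URL and credentials are placeholders, not a real deployment:

```javascript
// Phase 1 ("initial"): relay-only, so no host/STUN candidates leak before
// the peers consent. Phase 2 ("upgraded"): allow everything, then restart
// ICE so true P2P candidates are gathered and trickled over the existing
// relayed channel.
function iceConfigFor(phase) {
  return {
    iceServers: [{
      urls: "turn:turn.example.org:3478",   // placeholder TURN server
      username: "ephemeral-user",            // placeholder credentials
      credential: "ephemeral-pass",
    }],
    iceTransportPolicy: phase === "initial" ? "relay" : "all",
  };
}

// Browser usage (sketch; whether setConfiguration may change
// iceTransportPolicy mid-session needs verifying per browser):
// const pc = new RTCPeerConnection(iceConfigFor("initial"));
// ...establish the relayed connection, exchange consent...
// pc.setConfiguration(iceConfigFor("upgraded"));
// pc.restartIce();
```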
So
Pros:
* Solves #22945
* Practically solves #40164
* Solves the verification part of #40165 because it's not two different connections, it's the same one.
* Makes the broker more future-proof because it doesn't have to process data that the proxy and the client want to exchange, it simply passes it along.
* Can allow faster bootstrapping by relaying (non-signaling) proxy-to-client data initially, before true P2P has been established.
* (maybe, need to verify) better DPI resistance due to handshake being performed in a secure (domain-fronted) channel (see https://gitlab.torproject.org/tpo/anti-censorship/censorship-analysis/-/issues/40030).
Cons:
* The broker codebase needs a major overhaul.
[Some chat logs](https://matrix.to/#/!hNphRlWKcRVXnwAWJy:matrix.org/$EWgGZ38YotRK9zhqpMjBJo98wQo1HFapnLRXqzsBSCg?via=matrix.org&via=nitro.chat&via=systemli.org) (nothing particularly important).

**Add "Donate" link to the popup** (WofWca, 2023-04-25)
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake-webext/-/issues/77

Like this maybe:

![image](/uploads/7928d63db7365ba0fb2e4abe09456ae6/image.png)

**Analysis of speed deficiency of Snowflake in China, 2023 Q1** (shelikhoo, 2024-03-21)
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40251

We are currently observing an increase in Snowflake bootstrap failures. This ticket documents our investigation of this incident.
As we can observe from the vantage point test [result](https://gitlab.torproject.org/tpo/anti-censorship/connectivity-measurement/bridgestatus/-/blob/39c4d2a143c2ce43ffb1cbf39bf18f26d7ba49c7/recentResult_cn), the bootstrap percentage is often above 10 but below 100 as a result of poor connection speed.

In order to measure the packet loss rate at the vantage points, a few [scripts](https://gist.github.com/xiaokangwang/14ac48ef9fc2ce8dd04f92ed9c0928de) are used to calculate the packet loss rate from a packet capture file; here is the result:
```
snowflake-probe-0-eth0.pcap:TOTAL 3027, RECV 2702, LOSS RATE .107
snowflake-probe-1-eth0.pcap:TOTAL 3406, RECV 3169, LOSS RATE .069
snowflake-probe-2-eth0.pcap:TOTAL 2896, RECV 2294, LOSS RATE .207
snowflake-probe-3-eth0.pcap:TOTAL 2883, RECV 2652, LOSS RATE .080
snowflake-probe-4-eth0.pcap:TOTAL 2696, RECV 2514, LOSS RATE .067
snowflake-probe-5-eth0.pcap:TOTAL 847, RECV 669, LOSS RATE .210
snowflake-probe-6-eth0.pcap:TOTAL 1855, RECV 1692, LOSS RATE .087
snowflake-probe-7-eth0.pcap:TOTAL 76, RECV 284, LOSS RATE -2.736 (invalid, more than one dtls connection)
snowflake-probe-8-eth0.pcap:TOTAL 1577, RECV 1255, LOSS RATE .204
snowflake-probe-9-eth0.pcap:TOTAL 1449, RECV 1166, LOSS RATE .195
```
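The loss rate column is just (TOTAL - RECV) / TOTAL, truncated to three decimals. A tiny helper (illustrative only, not part of the linked scripts) reproduces the table's numbers:

```javascript
// Fraction of sent packets that never arrived, truncated (not rounded)
// to three decimal places, matching the table's formatting.
function lossRate(total, recv) {
  return Math.trunc(((total - recv) / total) * 1000) / 1000;
}

// Probe 0 from the table: 3027 sent, 2702 received.
console.log(lossRate(3027, 2702)); // → 0.107
```

Note how probe 7's negative value falls out of the same formula when RECV exceeds TOTAL, which is why that row is marked invalid.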
As we can see, snowflake's bootstrap percentage is regularly impacted by the packet loss rate. We can either make snowflake more resistant to packet loss or improve the matching process to reduce packet loss.

Assignee: shelikhoo

**Use unreliable and unordered WebRTC data channels** (David Fifield, 2024-03-21)
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40352

@shelikhoo:

Actually, here are some observations from me related to https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40243:
Snowflake is currently using network resources in such a suboptimal way that I think it would make sense to also consider a protocol-level change to how KCP interacts with WebRTC before considering adding forward error correction. This would take the form of enabling WebRTC's unreliable mode and making the necessary changes to get it to work.
Right now, KCP packets are sent over the WebRTC data channel in a reliable way that delivers all packets in order and retransmits any lost message repeatedly. However, KCP also retransmits its packets itself, which, as a result, queues all those retransmitted packets somewhere, such as in WebRTC's buffer.
This means lost packets get retransmitted more than once, in different protocols, and retransmits & timeouts get compounded. More retransmits result in more lost packets and more retransmission, which eventually leads to [connection meltdown](https://openvpn.net/faq/what-is-tcp-meltdown/) <- please read.
Back pressure like the one introduced in https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/merge_requests/144 only moves the problem, and blocks KCP's send in an unexpected way, since KCP doesn't expect send to block, as it usually runs over UDP.
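On the browser/webext side, switching to unreliable, unordered delivery is mostly a matter of the `RTCDataChannelInit` options passed when the channel is created (a sketch of the standard WebRTC API, not the actual Snowflake change; the Go proxy would need Pion's corresponding settings):

```javascript
// With ordered: false and maxRetransmits: 0, SCTP delivers messages
// unordered and never retransmits them, leaving loss recovery entirely
// to KCP and avoiding the compounded-retransmission meltdown above.
function unreliableChannelOptions() {
  return { ordered: false, maxRetransmits: 0 };
}

// Browser usage (sketch; the channel label is illustrative):
// const channel = pc.createDataChannel("snowflake", unreliableChannelOptions());
```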
See also: https://lists.torproject.org/pipermail/anti-censorship-team/2023-March/000286.html
(@dcf split this issue off from #40251 to separate the analysis of speed in China from the proposed remedy.)

Assignee: shelikhoo

**Nil Pointer Crash when Initializing Snowflake Proxy** (bim, 2023-03-07)
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40254

https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/blob/main/proxy/lib/snowflake.go#L568
Line 568 ought to be moved below line 589: if the event dispatcher isn't set, the proxy will crash. I came across this while bumping snowflake to the latest release in Orbot via our IPtProxy wrapper library.
https://github.com/tladesignz/IPtProxy/issues/39
For now, we simply init'd our own event dispatcher instance to sidestep the crash.

**Standalone Snowflake proxy for Microsoft Windows** (Rahim Rollins, 2023-03-07)
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40256

> If you would like to run a command-line version of the Snowflake proxy on your **desktop** or server, see our guide for running a Snowflake standalone proxy.

[The "Standalone Snowflake proxy" page](https://community.torproject.org/relay/setup/snowflake/standalone/) provides instructions for installing and configuring the CLI version of the Snowflake proxy on Debian, Fedora, Arch Linux, FreeBSD and Ubuntu. However, most users (working on Windows) would also be able to help other users bypass censorship without having to keep a browser running; right now that is impossible for them. There isn't even an instruction page for such volunteers, unlike for users of the operating systems listed above.

**GetTor is not replying to emails** (Gus, 2023-07-24)
https://gitlab.torproject.org/tpo/anti-censorship/rdsys/-/issues/129

Users from Iran reported that GetTor is not replying to them. I have tried myself and I didn't get a reply either.

Assignee: meskio

**proxy: option to set IPv4 and IPv6 bind addresses** (benny, 2023-03-28)
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40263

feature request:
On a system with multiple IPv4 & IPv6 addresses and services, it would be very helpful to be able to set one (or more) IPv4 and one (or more) IPv6 addresses for the TCP/UDP sockets used by the proxy.

Unlike --outbound-address, this should not be a priority hint; it should be a fixed IP assignment.

**Improve bug discovery process** (itchyonion, 2023-06-20)
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40267

Creating this issue as a follow-up to the meeting on March 16th, 2023 about the **snowflake-server buffer reuse bug postmortem, https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40260**
(The title of this ticket could be improved as well. Feel free to do so)
> The harm to users was minor, but incidents like this are a good opportunity to reflect on our process, to make similar things less likely in the future.
>
> The bug (#40199) might have been caught, but was not, at multiple points:
> - Code understanding and review by the initial committer
> - Code review on the merge request
> - Automated tests / CI
> - End user reports or logs
> - Logs or instrumentation at the bridge
>
> **Which of these processes, if any, should we change, to decrease the chance of mistakes?**
>
> **Brainstorming during the meeting:**
>
> - Initial merge request should have included a test to prove the assumption that buffers were not reused.
> - The reviewer might have requested that such a test be added.
> - Any such anomalies, if detected at the client, should be logged in such a way that they show up in the tor log.
> - dcf's private branch that logs KCP's internal error counters:
> - https://gitlab.torproject.org/dcf/snowflake/-/commit/9f43843b59b9753686be836f2c55f209ba29c1e9
> - https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40262#note_2886018
> - The fix this week made the "KCPInErrors" counter go to zero:
> - https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40262#note_2886032
> - We should log whenever KCPInErrors is non-zero, at least.
> - We are missing integration testing as part of CI. We have unit testing, but nothing where all the pieces are working together as in production.
> - shelikhoo's setup for distributed snowflake server testing: https://github.com/xiaokangwang/snowflake-mu-docker/blob/master/docker-compose.yaml
> - Should we have another, more verbose log level (debug/trace) so that it takes less effort to debug things in general? (No need to modify code and then rebuild, like hazae41 did in https://hackerone.com/reports/1880610.)

**Snowflake web badge needs 3rd-party cookies to run, which are disabled in most of today's browsers** (solids, 2023-05-16)
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake-webext/-/issues/84

Today most browsers disable 3rd-party cookies by default; in Safari the setting is called "Prevent cross-site tracking", in Chromium-based browsers it's "Block third-party cookies". For example, the badge on the website [relay.love](https://relay.love) shows "Cookies are not enabled." in my browser, and it's not possible to run it without re-enabling 3rd-party cookies, which would let tracking websites sneak into my privacy.

**Research about designing an armored bridge line sharing URL format** (shelikhoo, 2024-03-04)
https://gitlab.torproject.org/tpo/anti-censorship/team/-/issues/126

Tor's bridge line format is well suited for professional developers and power users on desktop environments. However, for other users the current bridge line format does not work so well, because:
1. The bridge line contains white space and other special characters that could make it hard to copy and paste correctly.
2. When a bridge line is corrupted, the client software can neither detect nor correct this. The corrupted bridge line fails silently, which confuses users.
3. Users try to edit the bridge line without understanding how it works internally. This results in a mismatch between how the user expects a bridge line to work and how it actually works.
This ticket tracks the research and discussion about creating a new bridge line format specialized in sharing to address the issues mentioned.
Let's have some initial discussion before I write the full spec and a reference implementation.
## Goals and non-goals
This armored bridge line format will try to:
1. auto-detect and, where possible, auto-correct errors that occur during transmission. Give the user explicit feedback when the bridge line is corrupted and avoid silent errors.
2. improve its operating system integration, allowing the user to click on the armored bridge line and be redirected to a bridge line recipient application.
3. avoid any characters or design that could make it harder to transmit the bridge line correctly.
4. intuitively signal to users that they should not modify the shared bridge line.
It won't:
1. try to replace the current bridge line format. The armored format is used only for sharing; the original bridge line format will still be accepted by all Tor applications and shown to users by default, and will still be the way bridge configurations are represented.
2. prevent users from editing bridge lines. Users still will be able to edit the bridge line once it is decoded from armored format.
3. prevent the bridge line from being censored or detected by an authority.
## Expected Usage Context
This armored bridge line design will be used exclusively for sharing.
Specifically:
1. On Tor Browser, there will be a share bridge line button, when clicked, an armored bridge line will be converted from an ordinary bridge line, and shown to the user as plain text and QR code.
2. The user support team will share an armored bridge line, generated from Tor Browser or a command line tool, with users requesting a bridge when appropriate.
3. Users can share armored bridge lines with each other.
4. Tor client implementations MAY support armored bridge line input. It is optional since this design is targeted toward ordinary users, and Tor Browser already supports converting bridge lines between different forms with command line tools. Advanced users can just use command line tools to convert bridge lines between its different formats.
## Internal design (for discussion)
The two-way conversion between the armored bridge line and the ordinary bridge line is a sequence of reversible transform steps. Some of them are optional (under discussion) and may or may not be included in the final design. There are no dynamic or skippable steps in the final version of the design.
### Compression (optional)
A compression step, like 7-bit UTF-8 packing, can be used to reduce the length of the final URL string.
It would, however, make the conversion more complex to implement.
### All or none transform (optional)
An all-or-none transform (AONT) like [SAEP+](http://crypto.stanford.edu/~dabo/abstracts/saep.html) can make sure the final output is completely random-looking and polymorphic, without any resemblance to the underlying data.
This ensures:
1. Data are covered by a checksum (see the SAEP+ design), so any corruption will be detected.
2. Because the data are encoded differently each time, if the final output contains a censored keyword, the user can just try again.
3. There will be fewer observable patterns in the final URL, discouraging users from attempting to modify or interpret it. Users will need a supporting application to process the armored bridge line.
4. (less of a concern for the Tor ecosystem) It prevents client implementations from ignoring the checksum and processing the line anyway.
This is a complex transform step.
### Checksum (if the all-or-none transform step is not used) (optional)
Use CRC64 or SHA-3 to generate a checksum to detect corruption.
This step should be skipped if AONT step was used.
### Forward error correction (optional)
Split the data into chunks and use Reed-Solomon coding to encode the data and generate recovery shards.
When the bridge line is corrupted, forward error correction attempts to repair the content directly, without asking the user to try again. This non-interactive repair makes it easier for the user to get the bridge line working, without asking for and waiting on assistance. Some environments, like bad email clients or line breakers, corrupt the content every time they process it, so retrying by itself won't work and will frustrate users.
This is a complex transform step.
### URL-safe base64 encoding without padding + concatenation
URL-safe base64 encoding without padding converts the binary result of the previous steps into a URL-safe string. If there is more than one shard of content, the shards are concatenated with the ~ symbol, which is URL-safe and not used by URL-safe base64.
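A minimal sketch of this step (Node's `base64url` encoding is exactly unpadded URL-safe base64; the `~`-joined shard framing follows the text above):

```javascript
// Encode binary shards as unpadded URL-safe base64, joined with "~".
function encodeShards(shards) {
  return shards.map((s) => s.toString("base64url")).join("~");
}

// Reverse the framing: split on "~" and decode each shard.
function decodeShards(encoded) {
  return encoded.split("~").map((s) => Buffer.from(s, "base64url"));
}
```

The output never contains `+`, `/`, or `=`, so it survives being embedded in a URL fragment or query string unescaped.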
### URL Prefix
The final string will be prefixed with either `bridgeprefix:?` or `https://bridgeprefix/#` so that it can be clicked and redirected to a Tor client application by the operating system.

Assignee: shelikhoo. Milestone: 2024-03-08.

**refactor: use Pion's `SetIPFilter` instead of our `StripLocalAddresses`** (WofWca, 2023-06-29)
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40271

[`SetIPFilter`](https://pkg.go.dev/github.com/pion/webrtc/v3#SettingEngine.SetIPFilter) was [recently added](https://github.com/pion/webrtc/pull/2316) to Pion.
I'm not sure if this gives actual benefits, but to me it seems better if Pio...[`SetIPFilter`](https://pkg.go.dev/github.com/pion/webrtc/v3#SettingEngine.SetIPFilter) was [recently added](https://github.com/pion/webrtc/pull/2316) to Pion.
I'm not sure if this gives actual benefits, but to me it seems better if Pion filtered out addresses by itself, instead of us only stripping them off when sending the offer/answer (see 0fae4ee8ea487c3b4384217e193e5b9a9088e7de, 1867f89562fb25bf9a3c2172a7b6f0a198c81adb, https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/19026). It _might_ also solve [the problem (if it's a problem)](https://forum.torproject.net/t/ubuntu-snowflake-standalone-proxy-tries-to-access-private-lan/7485?u=wofwca) where, if a client sends local addresses in its offer, the proxy will try to connect to those local addresses.
Also, while we're at it, we need to check whether it can be a proper way to solve https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40108.

**Rewrite Lox handling of new resources** (onyinyang, 2024-02-15)
https://gitlab.torproject.org/tpo/anti-censorship/lox/-/issues/5

At present, the Lox distributor handles `new` resources by parsing them into BridgeLines, grouping them into threes (MAX_BRIDGES_PER_BUCKET) and then placing them into an open invitation bucket. Any additional bridges that don't group into three are stored temporarily in the LoxServerContext until enough bridges to make up a bucket come along. Eventually, this needs to be improved to address several things:
* [ ] Smarter bridge groupings:
* What factors determine whether a set of bridges are grouped into a bucket?
* Should the distributor determine those factors or should rdsys pre-sort bridges into buckets?
* [ ] Decide on an appropriate open invitation to hot spare bucket ratio
* Some proportion of buckets *must* be hot spare buckets so that there are a pool of buckets for users to migrate to
* [ ] Ensure access to open-entry invitations
* Can we prevent sock-puppets from clogging the open-entry pathway?
* Can/should we limit which open-entry bridges are distributed to which users?
* [ ] Determine when a bridge is "blocked"?
* In reality, bridges are blocked by locale which Lox does not consider
* Issue being tracked [here](https://gitlab.torproject.org/tpo/anti-censorship/censorship-analysis/-/issues/40035)
* [ ] Improve Lox Parameters for Tor Usecase
* Lox's parameters such as the time to level up, number of invitations distributed, etc. are untested in the real world
    * Are there better parameters for Tor's use case?

**Write a spec of the assignments.log file format** (meskio, 2023-06-12)
https://gitlab.torproject.org/tpo/anti-censorship/rdsys/-/issues/165

**Gettor: distribute TB in bitbucket.org** (meskio, 2024-02-27)
https://gitlab.torproject.org/tpo/anti-censorship/rdsys/-/issues/169

It looks like bitbucket is not blocked in some places where others are.

**Implement Metrics Reporting for Lox** (onyinyang, 2023-10-31)
https://gitlab.torproject.org/tpo/anti-censorship/lox/-/issues/24

From the [Lox Roadmap](https://gitlab.torproject.org/tpo/anti-censorship/lox-rs/-/wikis/Lox-Roadmap) we want to include strategic reporting of metrics in our Lox deployment so that we are able to determine the effectiveness of Lox. The minimum metrics to measure are the following:
- [x] Prometheus metrics for counts of how often each library function is called from distributor
- [ ] How many bridges are in each rank
- [ ] Blockages from deployed bridgestrap instance
- [x] Remaining capacity (or if/when we run out of bridges to hand out to open inv)
Discussion and development of these and additional metrics to include in the initial deployment will be tracked in this issue.

Assignee: onyinyang