Anti-censorship issues
https://gitlab.torproject.org/groups/tpo/anti-censorship/-/issues

extend PT args size limit
https://gitlab.torproject.org/tpo/anti-censorship/team/-/issues/104 (meskio, 2023-08-11)

Currently the maximum length of arguments on a bridge line is 510 bytes, as those are passed in the username and password fields of the SOCKS5 connection. We have already hit this limit with snowflake (https://gitlab.torproject.org/tpo/applications/tor-browser-build/-/issues/40665).
One proposal would be to define a SOCKS5 METHOD (section 3 of [RFC 1928](https://www.rfc-editor.org/rfc/rfc1928)) different from 'username/password' (0x02) for it. Some years ago this was discussed, and it was proposed to use 0x80 (*RESERVED FOR PRIVATE METHODS*): https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/trac/-/issues/10671#note_2604090
There is a PT spec around that proposes using 0x9 (undefined in the SOCKS5 RFC): https://github.com/Pluggable-Transports/Pluggable-Transports-spec/blob/main/releases/PTSpecV3.0/Pluggable%20Transport%20Specification%20v3.0%20-%20Dispatcher%20IPC%20Interface%20v3.0.md#14-pluggable-pt-client-per-connection-arguments
But AFAIK there is no implementation of either of those, so I guess we are free to define whatever we find most useful here. Any better proposals?

Can get stuck sometimes
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake-webext/-/issues/69 (WofWca, 2023-02-04)

Error/timeout handling of `ProxyPair` and related stuff looks poor to me and I think it needs to be revisited. Namely:
* `flush` checks `webrtcIsReady()` before sending a message to the client. If it's `false` and `r2cSchedule` has messages, it will [`setTimeout` to `flush()` again](https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake-webext/-/blob/724281b06a122506f17a472ec20e5a34f02439ae/proxypair.js#L239-241). This can make an infinite loop (see [comment](https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake-webext/-/issues/30#note_2821668))
* [`peerConnOpen`](https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake-webext/-/blob/724281b06a122506f17a472ec20e5a34f02439ae/proxypair.js#L259-261) checks if `this.pc.connectionState !== 'closed'`, but state can also be `'failed'` and `'disconnected'`.
* `channel.onerror` is not handled (I suppose a timeout would run anyway, but then it looks like things need to be renamed or something).
* In [`close()`](https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake-webext/-/blob/724281b06a122506f17a472ec20e5a34f02439ae/proxypair.js#L200-208) we check for things like `peerConnOpen()` before doing `pc.close()`, but its state could still be `'new'` or `'connecting'`, so it won't get closed in that case.
* When creating `ProxyPair`, we [create a timeout](https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake-webext/-/blob/a8b7508ab0587276faec1a0290732c4bea8c5362/snowflake.js#L90-92) that's supposed to run if we fail to create a connection. If we succeed, then [another timeout](https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake-webext/-/blob/4a178c81f02fe6d1bb4804ffefcac37d893e1942/proxypair.js#L163-167) is supposed to take over. But it's not obvious and it looks very fragile, like it can be broken by an unrelated change.
* If a connection gets closed within `config.datachannelTimeout` (20s) after being opened, the `datachannelTimeout` callback would still get executed.
* There are listeners (e.g. in `proxypair.js`) like `.onopen` and `.ondatachannel` that don't account for the fact that they can fire several times.

A suggestion for the `ProxyPair.close()` issues: only call `close()` in one place in the program (perhaps outside of the `ProxyPair` class). `ProxyPair.receiveOffer` should return a `Promise` that resolves when we have started serving the client successfully, and rejects otherwise. The resolved value of that `Promise` should be another `Promise` that is fulfilled when we have finished serving the client.
Related: !54, #19

add 'settings' and 'telegram' to BridgeDB requests by distributor metrics page
https://gitlab.torproject.org/tpo/anti-censorship/team/-/issues/103 (meskio, 2022-11-03)

Can we add the missing distributors here?

https://metrics.torproject.org/bridgedb-distributor.html

That data comes from the CollecTor metrics produced by BridgeDB; right now the settings and telegram distributors produce metrics only for Prometheus, not for CollecTor. Should we start producing data for CollecTor?

future of builtin bridges
https://gitlab.torproject.org/tpo/anti-censorship/team/-/issues/102 (meskio, 2024-02-27)

We've been removing obfs4 builtin bridges (#44, #98) without replacing them. Talking with others about it, we started questioning the value of builtin bridges, and whether it makes sense to keep them around and find more bridge operators for them.
I guess they were created because getting bridges (before moat) was hard, and in many cases builtin bridges are enough to overcome restricted networks (like corporate firewalls). With connect assist, getting bridges is trivial, and easier than configuring builtin bridges.

Another reason to use builtin bridges is that they are operated by trusted members of our community and are hopefully more stable than a random bridge. But we solve that by providing multiple bridges in each request to circumvention settings.

Currently there are two situations in Tor Browser where builtin bridges are used:

* The circumvention settings API configures builtin bridges when we don't know anything about the country of the IP address. I expect this mostly to be used behind corporate firewalls, where builtin bridges work fine, but the bridges from circumvention settings should work just as well.
* When users enable them manually. I'm not sure when this will happen. I expect that if circumvention settings is not reachable (our domain front is blocked), builtin bridges will not work either.

I don't think we can stop using builtin bridges from one day to the next, as TB is not the only user of them. But if we don't see a need for them, we can slowly move in the direction of deprecating them.
What kind of use cases might people have for builtin bridges that I'm missing? Is there any original reason for them that I don't know about?

Inactive connections with 510 second duration
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40222 (Vort, 2023-06-28)

After commit 1bc54948 (!108), I started to notice lots of connections in the logs with similar durations (510s-512s), which were closed due to inactivity.
I see several possible reasons for such behaviour:
1. All such connections come from non-standard clients.
2. Timeout value from !108 is too small.
3. Lots of connections broke because of a high rate of UDP losses in my network.
```bash
$ grep -c "over 51[012]" log_crop2.txt && grep -c "Traffic throughput" log_crop2.txt
52
459
```
If connections are closed because of problems with my network (3), then, most likely, nothing should be done on the snowflake side.
If this is a sign of an attack or other bot activity (1), then it may need further investigation.
And, most importantly, if all of them are legit and my network is good enough for snowflake (2), it means losing >11% of user connections just because of a bug, which I think is not acceptable.
```bash
$ grep "over 51[012]" log_crop2.txt | tail -20
2022/10/20 09:20:16 Traffic throughput (up|down): 1 MB|45 KB -- (209 OnMessages, 2641 Sends, over 510 seconds)
2022/10/20 09:49:23 Traffic throughput (up|down): 7 MB|367 KB -- (750 OnMessages, 6452 Sends, over 510 seconds)
2022/10/20 10:18:56 Traffic throughput (up|down): 184 KB|12 KB -- (53 OnMessages, 1236 Sends, over 510 seconds)
2022/10/20 11:05:02 Traffic throughput (up|down): 1 MB|26 KB -- (193 OnMessages, 1156 Sends, over 510 seconds)
2022/10/20 11:11:46 Traffic throughput (up|down): 51 KB|16 KB -- (93 OnMessages, 853 Sends, over 510 seconds)
2022/10/20 11:52:27 Traffic throughput (up|down): 11 MB|180 KB -- (428 OnMessages, 11799 Sends, over 510 seconds)
2022/10/20 12:16:46 Traffic throughput (up|down): 48 KB|13 KB -- (169 OnMessages, 334 Sends, over 510 seconds)
2022/10/20 12:30:28 Traffic throughput (up|down): 3 MB|353 KB -- (1053 OnMessages, 4517 Sends, over 510 seconds)
2022/10/20 12:31:58 Traffic throughput (up|down): 4 MB|47 KB -- (500 OnMessages, 3478 Sends, over 511 seconds)
2022/10/20 12:39:16 Traffic throughput (up|down): 27 KB|28 KB -- (58 OnMessages, 297 Sends, over 510 seconds)
2022/10/20 12:56:22 Traffic throughput (up|down): 1 MB|12 KB -- (72 OnMessages, 1206 Sends, over 510 seconds)
2022/10/20 13:16:58 Traffic throughput (up|down): 29 KB|5 KB -- (28 OnMessages, 224 Sends, over 510 seconds)
2022/10/20 13:29:10 Traffic throughput (up|down): 159 KB|4 KB -- (17 OnMessages, 1997 Sends, over 510 seconds)
2022/10/20 13:30:19 Traffic throughput (up|down): 244 KB|199 KB -- (444 OnMessages, 1257 Sends, over 510 seconds)
2022/10/20 13:52:30 Traffic throughput (up|down): 259 KB|77 KB -- (236 OnMessages, 1230 Sends, over 510 seconds)
2022/10/20 13:58:37 Traffic throughput (up|down): 33 KB|7 KB -- (39 OnMessages, 760 Sends, over 511 seconds)
2022/10/20 14:00:25 Traffic throughput (up|down): 23 KB|15 KB -- (89 OnMessages, 398 Sends, over 510 seconds)
2022/10/20 14:51:09 Traffic throughput (up|down): 102 KB|30 KB -- (344 OnMessages, 538 Sends, over 511 seconds)
2022/10/20 15:06:10 Traffic throughput (up|down): 132 KB|31 KB -- (87 OnMessages, 305 Sends, over 510 seconds)
2022/10/20 15:10:26 Traffic throughput (up|down): 161 KB|111 KB -- (194 OnMessages, 1157 Sends, over 510 seconds)
```

Upgrade telebot library to version 3
https://gitlab.torproject.org/tpo/anti-censorship/rdsys/-/issues/130 (meskio, 2022-11-03)

We use [telebot](https://github.com/tucnak/telebot) in our [bridges bot](https://gitlab.torproject.org/tpo/anti-censorship/rdsys/-/blob/main/doc/telegram.md). But we are using version 2 of the library; let's update it to version 3.

Remove the pollInterval loop from SignalingServer.pollOffer in the standalone proxy
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40210 (David Fifield, 2022-10-18)

The constant `pollInterval` is used in two places in the proxy code: in an [outer loop](https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/blob/main/proxy/lib/snowflake.go#L636-648) that starts sessions, and in an [inner loop](https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/blob/main/proxy/lib/snowflake.go#L206-243) for each session that repeatedly polls.
The design does not make sense: there should be only one place where the `pollInterval` is enforced.
The inner loop, in the `SignalingServer.pollOffer` function,
is barely even a loop at all:
all its internal code paths lead to a `return`.
(The error case for `s.Post` lacks a `return`,
but an empty `resp` will cause the following `DecodePollResponse` check to return.)
The inner loop should be replaced with just one iteration of straight-line code.
I did some digging to find out how the code got to be this way.
Originally `pollInterval` only appeared in the inner loop.
The outer loop would run as soon as the inner loop returned,
so the speed of the whole process depended on the delay being enforced by the inner loop.
There was a bug (#40055) where the inner loop could return
without enforcing any delay (the same `s.Post` case I mentioned above).
The fix in !51 was to add a delay to the outer loop as well.
Looking at the code now, it appears that the right place for the delay all along
was in the outer loop,
and it should not be in the inner loop at all.
Before 7a0428e3b11ba437f27d09b1a9ad0fa820e54d24, the inner loop worked differently:
it was as if the `s.Post` error case had a `continue` rather than a `return`.
That is why there is a loop at all in the inner loop.
7a0428e3b11ba437f27d09b1a9ad0fa820e54d24 inadvertently changed it so that the inner loop
returned immediately, rather than enforcing a delay and running the loop again,
which was the actual cause of #40055, I think.

pyrogram.errors.exceptions.bad_request_400.QueryIdInvalid: Telegram says: [400 QUERY_ID_INVALID] - The callback query id is invalid (caused by "messages.SetBotCallbackAnswer")
https://gitlab.torproject.org/tpo/anti-censorship/gettor-project/OnionSproutsBot/-/issues/44 (meskio, 2022-10-14)

Looking at the logs I found:
```
Oct 10 15:55:42 telegram-bot-01 osbtg[516]: NetworkTask stopped
Oct 10 15:55:42 telegram-bot-01 osbtg[516]: PingTask stopped
Oct 10 15:55:42 telegram-bot-01 osbtg[516]: Disconnected
Oct 10 15:55:42 telegram-bot-01 osbtg[516]: Session stopped
Oct 10 15:55:42 telegram-bot-01 osbtg[516]: Connecting...
Oct 10 15:55:42 telegram-bot-01 osbtg[516]: Connected! Production DC4 - IPv4 - TCPAbridgedO
Oct 10 15:55:42 telegram-bot-01 osbtg[516]: NetworkTask started
Oct 10 15:55:42 telegram-bot-01 osbtg[516]: Session initialized: Layer 146
Oct 10 15:55:42 telegram-bot-01 osbtg[516]: Device: CPython 3.9.2 - Pyrogram 2.0.55
Oct 10 15:55:42 telegram-bot-01 osbtg[516]: System: Linux 5.10.0-18-amd64 (EN)
Oct 10 15:55:42 telegram-bot-01 osbtg[516]: Session started
Oct 10 15:55:42 telegram-bot-01 osbtg[516]: PingTask started
Oct 10 15:55:43 telegram-bot-01 osbtg[516]: Telegram says: [400 QUERY_ID_INVALID] - The callback query id is invalid (caused by "messages.SetBotCallbackAnswer")
Oct 10 15:55:43 telegram-bot-01 osbtg[516]: Traceback (most recent call last):
Oct 10 15:55:43 telegram-bot-01 osbtg[516]: File "/home/telegrambot/.local/lib/python3.9/site-packages/Pyrogram-2.0.55-py3.9.egg/pyrogram/dispatcher.py", line 240, in handler_worker
Oct 10 15:55:43 telegram-bot-01 osbtg[516]: await handler.callback(self.client, *args)
Oct 10 15:55:43 telegram-bot-01 osbtg[516]: File "/home/telegrambot/venv/lib/python3.9/site-packages/OnionSproutsBot-1.1.0-py3.9.egg/OnionSproutsBot/bot.py", line 117, in welcome_command
Oct 10 15:55:43 telegram-bot-01 osbtg[516]: await client.answer_callback_query(
Oct 10 15:55:43 telegram-bot-01 osbtg[516]: File "/home/telegrambot/.local/lib/python3.9/site-packages/Pyrogram-2.0.55-py3.9.egg/pyrogram/methods/bots/answer_callback_query.py", line 71, in answer_callback_query
Oct 10 15:55:43 telegram-bot-01 osbtg[516]: return await self.invoke(
Oct 10 15:55:43 telegram-bot-01 osbtg[516]: File "/home/telegrambot/.local/lib/python3.9/site-packages/Pyrogram-2.0.55-py3.9.egg/pyrogram/methods/advanced/invoke.py", line 77, in invoke
Oct 10 15:55:43 telegram-bot-01 osbtg[516]: r = await self.session.invoke(
Oct 10 15:55:43 telegram-bot-01 osbtg[516]: File "/home/telegrambot/.local/lib/python3.9/site-packages/Pyrogram-2.0.55-py3.9.egg/pyrogram/session/session.py", line 361, in invoke
Oct 10 15:55:43 telegram-bot-01 osbtg[516]: return await self.send(query, timeout=timeout)
Oct 10 15:55:43 telegram-bot-01 osbtg[516]: File "/home/telegrambot/.local/lib/python3.9/site-packages/Pyrogram-2.0.55-py3.9.egg/pyrogram/session/session.py", line 331, in send
Oct 10 15:55:43 telegram-bot-01 osbtg[516]: RPCError.raise_it(result, type(data))
Oct 10 15:55:43 telegram-bot-01 osbtg[516]: File "/home/telegrambot/.local/lib/python3.9/site-packages/Pyrogram-2.0.55-py3.9.egg/pyrogram/errors/rpc_error.py", line 91, in raise_it
Oct 10 15:55:43 telegram-bot-01 osbtg[516]: raise getattr(
Oct 10 15:55:43 telegram-bot-01 osbtg[516]: pyrogram.errors.exceptions.bad_request_400.QueryIdInvalid: Telegram says: [400 QUERY_ID_INVALID] - The callback query id is invalid (caused by "messages.SetBotCallbackAnswer")
Oct 10 15:55:43 telegram-bot-01 osbtg[516]: Telegram says: [400 QUERY_ID_INVALID] - The callback query id is invalid (caused by "messages.SetBotCallbackAnswer")
Oct 10 15:55:43 telegram-bot-01 osbtg[516]: Traceback (most recent call last):
Oct 10 15:55:43 telegram-bot-01 osbtg[516]: File "/home/telegrambot/.local/lib/python3.9/site-packages/Pyrogram-2.0.55-py3.9.egg/pyrogram/dispatcher.py", line 240, in handler_worker
Oct 10 15:55:43 telegram-bot-01 osbtg[516]: await handler.callback(self.client, *args)
Oct 10 15:55:43 telegram-bot-01 osbtg[516]: File "/home/telegrambot/venv/lib/python3.9/site-packages/OnionSproutsBot-1.1.0-py3.9.egg/OnionSproutsBot/bot.py", line 183, in send_mirrors
Oct 10 15:55:43 telegram-bot-01 osbtg[516]: await client.answer_callback_query(
Oct 10 15:55:43 telegram-bot-01 osbtg[516]: File "/home/telegrambot/.local/lib/python3.9/site-packages/Pyrogram-2.0.55-py3.9.egg/pyrogram/methods/bots/answer_callback_query.py", line 71, in answer_callback_query
Oct 10 15:55:43 telegram-bot-01 osbtg[516]: return await self.invoke(
Oct 10 15:55:43 telegram-bot-01 osbtg[516]: File "/home/telegrambot/.local/lib/python3.9/site-packages/Pyrogram-2.0.55-py3.9.egg/pyrogram/methods/advanced/invoke.py", line 77, in invoke
Oct 10 15:55:43 telegram-bot-01 osbtg[516]: r = await self.session.invoke(
Oct 10 15:55:43 telegram-bot-01 osbtg[516]: File "/home/telegrambot/.local/lib/python3.9/site-packages/Pyrogram-2.0.55-py3.9.egg/pyrogram/session/session.py", line 361, in invoke
Oct 10 15:55:43 telegram-bot-01 osbtg[516]: return await self.send(query, timeout=timeout)
Oct 10 15:55:43 telegram-bot-01 osbtg[516]: File "/home/telegrambot/.local/lib/python3.9/site-packages/Pyrogram-2.0.55-py3.9.egg/pyrogram/session/session.py", line 331, in send
Oct 10 15:55:43 telegram-bot-01 osbtg[516]: RPCError.raise_it(result, type(data))
Oct 10 15:55:43 telegram-bot-01 osbtg[516]: File "/home/telegrambot/.local/lib/python3.9/site-packages/Pyrogram-2.0.55-py3.9.egg/pyrogram/errors/rpc_error.py", line 91, in raise_it
Oct 10 15:55:43 telegram-bot-01 osbtg[516]: raise getattr(
Oct 10 15:55:43 telegram-bot-01 osbtg[516]: pyrogram.errors.exceptions.bad_request_400.QueryIdInvalid: Telegram says: [400 QUERY_ID_INVALID] - The callback query id is invalid (caused by "messages.SetBotCallbackAnswer")
```
It doesn't seem to affect any other connections, nor have I seen how to reproduce it myself, but it looks like a similar traceback appears in the log 14 times in the last week.

Brainstorm and analyze heuristics to guess that a bridge might be offline or blocked
https://gitlab.torproject.org/tpo/anti-censorship/censorship-analysis/-/issues/40035 (Roger Dingledine, 2024-02-22)

In the upcoming "subscription model" plan (https://gitlab.torproject.org/tpo/anti-censorship/team/-/issues/42), we envision several use cases. Here are the first three:
* (1) bridge moves to a new IP address
* (2) bridge goes offline
* (3) bridge gets blocked
Case 1 is the easiest: if the bridge is at a new IP address, we know this because we have a newer bridge descriptor for it. So if a client comes asking for a replacement, we just give them a new bridge line based on this new bridge descriptor.
For case 2, we want to give users a deterministic replacement -- but only if the bridge is actually offline. So we need some scanning mechanism to discover and/or verify which bridges have gone offline, and it should learn an answer quickly enough to be relevant for the subscription model style replacement.
For case 3, we also want to give users a deterministic replacement, but it has to come from the "dynamic bridge pool" subset, and also we only want to offer a replacement if we believe the bridge is actually blocked. Case 3 is also fun because we don't want to test a given bridge from in-country until we hit a threshold of suspicion that it is blocked.
This umbrella ticket aims to collect ideas for (a) what information sources we can use to decide that a given bridge is worth testing now, and (b) architectures for active scanning that go well with these three use cases plus the information sources from (a).
Potential data sources:
- Usage metrics (https://gitlab.torproject.org/tpo/anti-censorship/rdsys/-/issues/112)
- Client reports
- Reported by tor process (https://gitlab.torproject.org/tpo/core/arti/-/issues/717)
- Reported by Tor Browser
- Measurement probes
- Indirect, external scanning (e.g., spooky scan, censored planet)
- Scans from within the censored region (e.g., OONI, our own censorship probe)
Consumers of this information:
- Subscription model for bridge distribution (https://gitlab.torproject.org/tpo/anti-censorship/team/-/issues/42)
- Reputation-based bridge distribution (https://gitlab.torproject.org/tpo/anti-censorship/lox/lox-overview/-/issues/5)

GetTor is not replying to emails
https://gitlab.torproject.org/tpo/anti-censorship/rdsys/-/issues/129 (Gus, 2023-07-24)

Users from Iran reported that GetTor is not replying to them. I have tried myself and I didn't get a reply either.

Add microsoft onedrive provider to gettor
https://gitlab.torproject.org/tpo/anti-censorship/rdsys/-/issues/128 (meskio, 2022-10-05)

The [gettor updater](https://gitlab.torproject.org/tpo/anti-censorship/rdsys/-/blob/main/doc/gettor.md) could also upload Tor Browser to [Microsoft OneDrive](https://www.microsoft.com/en-us/microsoft-365/onedrive), as it already does for other providers: https://gitlab.torproject.org/tpo/anti-censorship/rdsys/-/tree/main/pkg/presentation/updaters/gettor
The OneDrive free plan is only 5 GB, which is too small for our current needs, but this requirement might be reduced in the near future. Or we might consider paying for the service, as OneDrive is probably reachable in most places.

Add a prometheus exporter to gettor updater
https://gitlab.torproject.org/tpo/anti-censorship/rdsys/-/issues/125 (meskio, 2022-10-05)

Let's produce some metrics on the gettor updater for the latest TB version we have uploaded per platform and provider.

Some inspiration can be taken from the prometheus exporter in gettor: https://gitlab.torproject.org/tpo/anti-censorship/rdsys/-/blob/main/pkg/usecases/distributors/gettor/gettor.go

Add a prometheus exporter to moat distributor
https://gitlab.torproject.org/tpo/anti-censorship/rdsys/-/issues/124 (meskio, 2024-03-21)

Let's collect prometheus metrics on the Circumvention Settings [moat distributor](https://gitlab.torproject.org/tpo/anti-censorship/rdsys/-/blob/main/doc/moat.md). We might want to collect metrics for:
* Requests to settings with country and *valid shim token* as labels
* Requests to other API endpoints with the endpoint as label (settings, defaults, map, builtin)
Some inspiration can be taken from the prometheus exporter in gettor: https://gitlab.torproject.org/tpo/anti-censorship/rdsys/-/blob/main/pkg/usecases/distributors/gettor/gettor.go

pion errors don't go into the log
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40204 (Roger Dingledine, 2022-12-03)

My snowflake proxy tells me, I guess on either stdout or stderr,
```
sctp ERROR: 2022/10/03 13:47:32 [0xc002986380] stream 1 not found)
sctp ERROR: 2022/10/03 13:47:32 [0xc002986380] stream 1 not found)
sctp ERROR: 2022/10/03 13:47:32 [0xc002986380] stream 1 not found)
sctp ERROR: 2022/10/03 13:47:32 [0xc002986380] stream 1 not found)
sctp ERROR: 2022/10/03 13:47:32 [0xc002986380] stream 1 not found)
sctp ERROR: 2022/10/03 13:47:32 [0xc002986380] stream 1 not found)
sctp ERROR: 2022/10/03 13:47:32 [0xc002986380] stream 1 not found)
sctp ERROR: 2022/10/03 13:47:32 [0xc002986380] stream 1 not found)
```
but I am using -log, and these lines don't show up in the log. It is unexpected that "error" category messages would be the ones that are transient and not captured for posterity.
(Also, the timestamps in the log seem to be UTC, and the timestamps on my stdout/stderr appear to be in the local timezone. Not sure if that merits a separate ticket -- let me know if yes and I can open it.)

nil pointer dereference in PeerConnection.PendingLocalDescription
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40197 (cypherpunks, 2022-10-03)

```
info] {EDGE} connection_edge_process_inbuf(): data from edge while in 'waiting for circuit' state. Leaving it on buffer.
info] {EDGE} connection_edge_process_inbuf(): data from edge while in 'waiting for circuit' state. Leaving it on buffer.
info] {NET} parse_socks_client(): SOCKS 5 client: continuing without authentication
info] {NET} connection_read_proxy_handshake(): Proxy Client: OR connection (handshaking (proxy)) with 192.0.2.3:80 ID=1zOHpg+FxqQfi/6jDLtCpHHqBTH8gjYmCKXkus1D5Ko RSA_ID=2B280B23E1107BB62ABFC40DDCC8824814F80A72 successful
info] {BTRACK} bto_update_best(): ORCONN BEST_ANY state 2->3 gid=4
notice] {CONTROL} Bootstrapped 10% (conn_done): Connected to a relay
info] {BTRACK} bto_update_best(): ORCONN BEST_AP state 2->3 gid=4
info] {PT} managed_proxy_stderr_callback(): Managed proxy at 'PluggableTransports\snowflake-client.exe' reported via standard error: panic: runtime error: invalid memory address or nil pointer dereference
info] {PT} managed_proxy_stderr_callback(): Managed proxy at 'PluggableTransports\snowflake-client.exe' reported via standard error: [signal 0xc0000005 code=0x0 addr=0x0 pc=0x5ed17d]
info] {PT} managed_proxy_stderr_callback(): Managed proxy at 'PluggableTransports\snowflake-client.exe' reported via standard error:
info] {PT} managed_proxy_stderr_callback(): Managed proxy at 'PluggableTransports\snowflake-client.exe' reported via standard error: goroutine 53 [running]:
info] {PT} managed_proxy_stderr_callback(): Managed proxy at 'PluggableTransports\snowflake-client.exe' reported via standard error: github.com/pion/webrtc.(*PeerConnection).PendingLocalDescription(0x0)
info] {PT} managed_proxy_stderr_callback(): Managed proxy at 'PluggableTransports\snowflake-client.exe' reported via standard error: /var/tmp/dist/gopath/src/github.com/pion/webrtc/peerconnection.go:2026 +0x1d
info] {PT} managed_proxy_stderr_callback(): Managed proxy at 'PluggableTransports\snowflake-client.exe' reported via standard error: github.com/pion/webrtc.(*PeerConnection).LocalDescription(0xc00034c000)
info] {PT} managed_proxy_stderr_callback(): Managed proxy at 'PluggableTransports\snowflake-client.exe' reported via standard error: /var/tmp/dist/gopath/src/github.com/pion/webrtc/peerconnection.go:1007 +0x1e
info] {PT} managed_proxy_stderr_callback(): Managed proxy at 'PluggableTransports\snowflake-client.exe' reported via standard error: git.torproject.org/pluggable-transports/snowflake.git/v2/client/lib.(*WebRTCPeer).connect(0xc00034c000, 0x0, 0xc000345d48)
info] {PT} managed_proxy_stderr_callback(): Managed proxy at 'PluggableTransports\snowflake-client.exe' reported via standard error: /var/tmp/dist/gopath/src/git.torproject.org/pluggable-transports/snowflake.git/v2/client/lib/webrtc.go:150 +0xd8
info] {PT} managed_proxy_stderr_callback(): Managed proxy at 'PluggableTransports\snowflake-client.exe' reported via standard error: git.torproject.org/pluggable-transports/snowflake.git/v2/client/lib.NewWebRTCPeerWithEvents(0x35ee40, 0xc0000d6000, {0x223f8f30008, 0xc00022e220})
info] {PT} managed_proxy_stderr_callback(): Managed proxy at 'PluggableTransports\snowflake-client.exe' reported via standard error: /var/tmp/dist/gopath/src/git.torproject.org/pluggable-transports/snowflake.git/v2/client/lib/webrtc.go:73 +0x38b
info] {PT} managed_proxy_stderr_callback(): Managed proxy at 'PluggableTransports\snowflake-client.exe' reported via standard error: git.torproject.org/pluggable-transports/snowflake.git/v2/client/lib.WebRTCDialer.Catch(...)
info] {PT} managed_proxy_stderr_callback(): Managed proxy at 'PluggableTransports\snowflake-client.exe' reported via standard error: /var/tmp/dist/gopath/src/git.torproject.org/pluggable-transports/snowflake.git/v2/client/lib/rendezvous.go:172
info] {PT} managed_proxy_stderr_callback(): Managed proxy at 'PluggableTransports\snowflake-client.exe' reported via standard error: git.torproject.org/pluggable-transports/snowflake.git/v2/client/lib.(*Peers).Collect(0xc000234080)
info] {PT} managed_proxy_stderr_callback(): Managed proxy at 'PluggableTransports\snowflake-client.exe' reported via standard error: /var/tmp/dist/gopath/src/git.torproject.org/pluggable-transports/snowflake.git/v2/client/lib/peers.go:69 +0x223
info] {PT} managed_proxy_stderr_callback(): Managed proxy at 'PluggableTransports\snowflake-client.exe' reported via standard error: git.torproject.org/pluggable-transports/snowflake.git/v2/client/lib.connectLoop({0x846bd0, 0xc000234080})
info] {PT} managed_proxy_stderr_callback(): Managed proxy at 'PluggableTransports\snowflake-client.exe' reported via standard error: /var/tmp/dist/gopath/src/git.torproject.org/pluggable-transports/snowflake.git/v2/client/lib/snowflake.go:345 +0x56
info] {PT} managed_proxy_stderr_callback(): Managed proxy at 'PluggableTransports\snowflake-client.exe' reported via standard error: created by git.torproject.org/pluggable-transports/snowflake.git/v2/client/lib.(*Transport).Dial
info] {PT} managed_proxy_stderr_callback(): Managed proxy at 'PluggableTransports\snowflake-client.exe' reported via standard error: /var/tmp/dist/gopath/src/git.torproject.org/pluggable-transports/snowflake.git/v2/client/lib/snowflake.go:170 +0x206
info] {NET} TLS error: <syscall error while handshaking> (errno=10054: Connection reset by peer [WSAECONNRESET ]; state=SSLv3/TLS write client hello)
info] {OR} connection_tls_continue_handshake(): tls error [connection reset]. breaking connection.
info] {CIRC} circuit_n_chan_done(): Channel failed; closing circ.
info] {GENERAL} circuit_mark_for_close_(): Circuit 0 (id: 1) marked for close at circuitbuild.c:687 (orig reason: 8, new reason: 0)
info] {HANDSHAKE} connection_or_note_state_when_broken(): Connection died in state 'handshaking (TLS) with SSL state SSLv3/TLS write client hello in HANDSHAKE'
info] {BTRACK} bto_status_rcvr(): ORCONN DELETE gid=4 status=2 reason=4
warn] {CONTROL} Problem bootstrapping. Stuck at 10% (conn_done): Connected to a relay. (CONNECTRESET; CONNECTRESET; count 1; recommendation warn; host 2B280B23E1107BB62ABFC40DDCC8824814F80A72 at 192.0.2.3:80)
warn] {HANDSHAKE} 1 connections have failed:
warn] {HANDSHAKE} 1 connections died in state handshaking (TLS) with SSL state SSLv3/TLS write client hello in HANDSHAKE
info] {OR} circuit_build_failed(): Our circuit 0 (id: 1) died before the first hop with no connection
info] {GUARD} entry_guards_note_guard_failure(): Recorded failure for primary guard $2B280B23E1107BB62ABFC40DDCC8824814F80A72 ($2B280B23E1107BB62ABFC40DDCC8824814F80A72)
info] {CIRC} circuit_free_(): Circuit 0 (id: 1) has been freed.
warn] {PT} Pluggable Transport process terminated with status code 2
```
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake-webext/-/issues/57 Visually modify snowflake extension badge to indicate NAT type? 2023-01-12T12:19:55Z Roger Dingledine
As one concrete idea for the broader goal in https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake-webext/-/issues/4 of gamification -- which I will define as steering users toward behaviors that are more valuable -- it occurred to me that NAT type (restricted vs unrestricted) is a super important feature for Snowflake volunteers these days.
What if we made the badge one color when snowflake decides it is restricted, and a different one if it decides it's unrestricted?
Then the next step would be some mechanism, when you click on it, to steer you toward what to do to become the better color -- which overlaps with https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40128.
(Doesn't have to be color based, since colorblindness etc. is a thing. Just one state that looks more successful and one that looks not quite as successful.)
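A minimal sketch of what this could look like in the extension's background script. The `chrome.browserAction` badge calls are the standard WebExtensions API the snowflake-webext already has access to, but `badgeForNatType`, the `natType` values, and the specific colors/symbols are hypothetical choices for illustration, not anything decided in this issue:

```javascript
// Map a NAT type (as detected by the proxy code) to a badge style.
// Each color is paired with distinct badge text so the indicator
// does not rely on color alone, per the colorblindness point above.
function badgeForNatType(natType) {
  switch (natType) {
    case "unrestricted":
      return {
        text: "\u2605", // filled star: the "better" state
        color: "#2e7d32",
        title: "Unrestricted NAT: you can help the most clients",
      };
    case "restricted":
      return {
        text: "\u2606", // hollow star: working, but less useful
        color: "#f9a825",
        title: "Restricted NAT: click to learn how to help more clients",
      };
    default:
      return {
        text: "?",
        color: "#9e9e9e",
        title: "NAT type not yet known",
      };
  }
}

// Hypothetical call site: run whenever the NAT check finishes.
function updateBadge(natType) {
  const badge = badgeForNatType(natType);
  chrome.browserAction.setBadgeText({ text: badge.text });
  chrome.browserAction.setBadgeBackgroundColor({ color: badge.color });
  chrome.browserAction.setTitle({ title: badge.title });
}
```

Keeping the mapping in a pure function like `badgeForNatType` also makes the "steer you toward the better color" follow-up easy: the popup opened on click can reuse the same `title`/state to decide which help text to show.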
Cc'ing @tpo/ux since gamification is totally a UX topic.
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40178 Handle unknown client NAT type better to reduce load on restricted proxy pool 2022-10-12T18:01:31Z Cecylia Bocovich
We're seeing a large number of client polls with unknown NAT types:
![image](/uploads/af5a7125fb7f8168b2bf6d2637a2ebaf/image.png)
To be safe, we treat unknown client NATs the same as restricted client NATs, so they pull from the smaller pool of proxies that are known to work with symmetric NATs. It's possible we can relieve at least some of the pressure on this proxy pool by having unknown clients first try proxies from the unrestricted pool and then fall back to the restricted pool if there is a failure to connect.
https://gitlab.torproject.org/tpo/anti-censorship/gettor-project/OnionSproutsBot/-/issues/36 investigate dashboard showing the most popular versions over the past 24 hours 2022-09-29T10:42:50Z n0toose
because why not
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake-webext/-/issues/55 Update description in Snowflake extension pages on Firefox and Chrome 2023-06-27T18:34:40Z raya
There was a discussion in the Tor IRC channel that the description in the Snowflake extension Chrome webstore and Firefox add-ons page does not clearly distinguish between censored/uncensored users:
- https://addons.mozilla.org/en-GB/firefox/addon/torproject-snowflake/
- https://chrome.google.com/webstore/detail/snowflake/mafpmfcccpbjnhfhjnllmmalhifmlcie
Opening the issue to say that I could work on updating the description in the next hour if the priority is high!
cc: @arma @gus @shelikhoo @meskio
Cecylia Bocovich
https://gitlab.torproject.org/tpo/anti-censorship/gettor-project/OnionSproutsBot/-/issues/32 add ways to process parameters based on argparse and/or environment variables instead of just .json 2022-09-22T11:58:45Z n0toose