Anti-censorship issues
https://gitlab.torproject.org/groups/tpo/anti-censorship/-/issues

---
**provide android builds** — https://gitlab.torproject.org/tpo/anti-censorship/gettor-project/OnionSproutsBot/-/issues/15 (n0toose; 2023-04-22)

We want to provide Android builds, but the endpoint we use (https://gitlab.torproject.org/tpo/anti-censorship/gettor-project/onionsproutsbot/-/blob/rewrite/example.yaml#L6) only provides downloads for desktop versions.
## Solutions
### httpdirfs
We could use httpdirfs (see https://gitlab.torproject.org/tpo/anti-censorship/gettor-project/onionsproutsbot/-/issues/11) with the binaries provided by https://dist.torproject.org, but it is hard to tell which files are to be used: not only is there a "version mismatch" between the latest stable desktop and mobile editions, but there are also many different versions, some of which are suitable for daily usage and some of which aren't. (As of this writing, **11.0.8** happens to illustrate this: the directory https://dist.torproject.org/torbrowser/11.0.8/ only contains Android builds, whereas none of the newer stable versions, the latest being 11.0.10, provide Android builds.)
Having the bot decide which version is good based on parameters that may no longer exist in the future (e.g. someone could remove all Android builds ending in `*androidTest.apk` on a "why not, what could possibly go wrong with this" whim) is most likely a bad idea, and the maintenance burden would be higher than desired. This bot is meant to be robust: it should just run in the background and do its job without causing problems.
### F-Droid
We could use an F-Droid endpoint that provides builds (right now, the Guardian Project does so; once the work to support obtaining files from their frontend is done, this can easily be switched to Tor's own repository later on). This is by far the most realistic approach if other teams are not able to work on this, and F-Droid provides APIs that could help: https://f-droid.org/en/docs/All_our_APIs/ My recent efforts to make different parts of the program modular would also help with getting something like this done.
However, I am concerned that the end result would turn out to be more fragile than it should be (for example, when telling apart which versions should be included), and a big refactoring session would still turn out to be required.
### Website scraping
This website providing a list of downloads for the browser is manually updated (I think I recall @gaba telling me this): https://www.torproject.org/download/#android
We could hypothetically just scrape the website, extract the downloads, and be done with it. However, again, this bot should sit back, quietly doing its job, and not break over the smallest changes, such as a website redesign. In conclusion, *no.*
### Tor Endpoint
To obtain a list of downloads for the desktop version, we currently use the following API endpoint: https://aus1.torproject.org/torbrowser/update_3/release/downloads.json
(See: https://gitlab.torproject.org/tpo/anti-censorship/gettor-project/onionsproutsbot/-/blob/rewrite/example.yaml#L6)
It would be pretty great if the project provided an endpoint for Android builds as well, especially because a single "point of contact" would then tell my bot, as well as the other projects that use the aforementioned endpoint, what to do, instead of each of them scrambling to interpret a volatile set of data. However, with an F-Droid repository supposedly around the corner, such a feature may realistically be way too much effort for me to "outsource" to other people. Something like that could also be useful elsewhere, e.g. for quickly providing download lists on the website. But that's neither my field nor my fight.

(Sponsor 139: Rapid Response Iran)

---
**Provide download links for android on gettor** — https://gitlab.torproject.org/tpo/anti-censorship/rdsys/-/issues/42 (meskio; 2023-02-03)

It would be nice to add Android as a platform as well. There are a few challenges: there is one single `.apk` for all languages, and a bunch of different architectures: android-aarch64, android-armv7, android-x86, android-x86_64.

(Sponsor 139: Rapid Response Iran)

---
**Provide stand-alone snowflake proxy for 32-bit** — https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/27850 (traumschule; 2020-06-27)

I tried [[doc/Snowflake#Option2standalone]] and ran into
```
~/go/src/git.torproject.org/pluggable-transports/snowflake/proxy-go$ torsocks go get
# github.com/keroserene/go-webrtc
/usr/bin/ld: cannot find -lwebrtc-linux-386-magic
collect2: error: ld returned 1 exit status
```
https://github.com/keroserene/go-webrtc/issues/38

---
**Proxy - better polling / reset** — https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/25597 (Arlo Breault; 2020-06-27)

Migrated from https://github.com/keroserene/snowflake/issues/15

---
**Proxy hourly stats did not actually happen during that hour** — https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40302 (Roger Dingledine; 2023-10-30)

My snowflake proxy today said in its logs:
```
2023/10/27 09:53:58 In the last 1h0m0s, there were 53 connections. Traffic Relayed ↓ 999064 KB, ↑ 167798 KB.
2023/10/27 10:53:58 In the last 1h0m0s, there were 67 connections. Traffic Relayed ↓ 561833 KB, ↑ 97282 KB.
2023/10/27 11:53:58 In the last 1h0m0s, there were 63 connections. Traffic Relayed ↓ 124717270 KB, ↑ 6237157 KB.
2023/10/27 12:53:58 In the last 1h0m0s, there were 48 connections. Traffic Relayed ↓ 745608 KB, ↑ 97699 KB.
```
That "125 gigabytes" entry translates to... almost 35 megabytes per second of traffic, on average during the hour? Probably by only a few or even one user? This did not happen during that hour.
Looking at proxy/lib/pt_event_logger.go:
```
case event.EventOnProxyConnectionOver:
e := e.(event.EventOnProxyConnectionOver)
p.inboundSum += e.InboundTraffic
p.outboundSum += e.OutboundTraffic
p.connectionCount += 1
```
I.e., the stats for a connection's entire lifetime are counted in the hour in which it closed.
See the forum for somebody else reporting this same issue (forum post noticed courtesy MarkC on irc): https://forum.torproject.org/t/impossible-metric-for-snowflake-proxy/9941/1
We should either (a) make the hourly "In the last 1h0m0s," be accurate, in the sense that they actually tell me what happened in the last 1h0m0s, or (b) change the log message so it's clearer it is telling me how many connections finished during that hour along with total transfer on those recently-closed connections.
I'd prefer solution (a), since it is what I thought I was getting out of these log entries, and I was using it to e.g. try to judge in what timezones my snowflake is popular.

---
**proxy lib - be able to configure the proxy type** — https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40104 (meskio; 2022-03-21)

The snowflake proxy library is being used by more clients than our standalone proxy. Library users should be able to set the proxy type that will be reported to the broker.
Currently the proxy type is hardcoded: https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/blob/main/proxy/lib/snowflake.go#L202
One use case right now is to have a different type in orbot, so we know how many snowflakes are provided by orbot users compared to other users.
We should take into account that the broker currently has a [hardcoded list of proxy types](https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/blob/main/broker/metrics.go#L26) and treats the rest as 'unknown'. This was motivated by seeing a lot of requests with strange proxy types (https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40089). I guess we can extend the proxy type list with the major types we know of, or we could do some simple validation of which proxy types are meaningful.

---
**Proxy log scrubbing misses URL-encoded IPv6 addresses** — https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40229 (David Fifield; 2022-11-28)

The log scrubbing patterns (tpo/anti-censorship/pluggable-transports/snowflake#21304, tpo/anti-censorship/pluggable-transports/snowflake#40115)
miss IPv6 addresses in URLs, where `:` is encoded as `%3A` or `%3a`.
URLs like these may be logged in the case of HTTP errors.
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake-webext/-/merge_requests/55#note_2851695
> `error dialing relay: wss://snowflake.torproject.net/?client_ip=2001%3Adb8%3A4000%3A%3A1234 = dial tcp: lookup snowflake.torproject.net: no such host`

---
**proxy-go instances not available** — https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/29426 (Cecylia Bocovich; 2020-06-27)

The broker is reporting no available snowflakes despite the fact that the proxies are running. Not sure if this is due to the deadlock problem or if these 504 errors indicate a problem with the proxy-broker communication.

---
**proxy-go is still deadlocking occasionally** — https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/25688 (David Fifield; 2022-07-09)

The three fallback proxy-go instances are still hanging, after variable delays of a few days. This is even after removing all memory restrictions I discussed in https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/21312#note_2591535.
The more heavily used instances seem to deadlock sooner. Those for the currently used broker would be more likely to stop than those for the standalone broker. But the ones for the standalone broker would stop too.
In the meantime, I've put the fallback proxies back on periodic restarts. Before the intervals were 1h,2h,10h; now I increased them to 17h,23h,29h (prime numbers, so the average time before the next restart is < 17h).
I'll update this ticket with a graph showing uptimes when I have time.

---
**proxy-go needs to relax between polls** — https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/25344 (David Fifield; 2020-06-27)

The JavaScript proxy has `DEFAULT_BROKER_POLL_INTERVAL`, but there's nothing like that in proxy-go. The standalone broker is getting 2 to 5 /proxy requests per second from the 3 round-the-clock proxy-go instances.

---
**proxy-go sometimes gets into a 100+% CPU state** — https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/33211 (David Fifield; 2021-07-09)

proxy-go sometimes works itself into a state where it is still running and working, but using more than 100% CPU. I have had it happen locally a couple of times while testing turbotunnel stuff, and it's currently happening with proxy-go-restartless:
```
$ top
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
13844 snowfla+ 20 0 551292 320692 8844 R 161.1 15.6 129356:18 proxy-go
```
Or looking at single threads:
```
$ top -H
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
24628 snowfla+ 20 0 551292 320692 8844 R 39.7 15.6 15219:01 proxy-go
13844 snowfla+ 20 0 551292 320692 8844 R 35.4 15.6 15431:52 proxy-go
1637 snowfla+ 20 0 551292 320692 8844 R 34.8 15.6 16057:40 proxy-go
13848 snowfla+ 20 0 551292 320692 8844 S 27.5 15.6 13669:02 proxy-go
13846 snowfla+ 20 0 551292 320692 8844 S 22.5 15.6 17021:57 proxy-go
```
I caught it once and attached to the process with GDB, but didn't know what to make of it. `thread apply all bt` seemed to show all the threads being somewhere in the Go runtime; the thread that wasn't was not one of the threads using a lot of CPU. (Matching up the `PID` field from `top -H` with the `LWP` identifiers in gdb.)
I had the idea to make proxy-go emit profiling output, and then examine the call chain that was using the most CPU with [profiling tools](https://blog.golang.org/profiling-go-programs). A patch to do that is attached as proxy-go-profile.patch. But I haven't been able to reproduce the high CPU usage yet.

---
**proxy-go starts using 100% CPU when network is disconnected** — https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/23356 (David Fifield; 2020-06-27)

A power outage disconnected the network of a laptop on which I was running proxy-go. The laptop battery kept the computer running until the power came back on, but the network was still down. When I checked on it, the log was filled with 2.4 GB of
```
2017/08/29 10:02:33 error polling broker: Post https://snowflake-reg.appspot.com/proxy: dial tcp: lookup snowflake-reg.appspot.com on <dns server>:53: dial udp <dns server>:53: connect: network is unreachable
2017/08/29 10:02:33 error polling broker: Post https://snowflake-reg.appspot.com/proxy: dial tcp: lookup snowflake-reg.appspot.com on <dns server>:53: dial udp <dns server>:53: connect: network is unreachable
2017/08/29 10:02:33 error polling broker: Post https://snowflake-reg.appspot.com/proxy: dial tcp: lookup snowflake-reg.appspot.com on <dns server>:53: dial udp <dns server>:53: connect: network is unreachable
```
There were 11,879,088 of these messages over the course of about an hour (according to the log timestamps), so about 3,300 messages per second. I'm guessing the code was in a tight failure loop.

---
**proxy-go support for IGD** — https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40106 (Jill; 2024-02-20)

I tried using proxy-go in two setups: one on a laptop behind NAT, the other on a server with its own restrictive firewall.
In both cases the NAT type was detected as "restricted". My understanding is that at most one end of the connection can be restricted,
so a restricted proxy can't talk to a restricted client. Getting an unrestricted NAT is better as it's compatible with more clients.
To that end, proxy-go could use [IGD](https://en.wikipedia.org/wiki/Internet_Gateway_Device_Protocol) to ask the NAT to create a dynamic port forwarding, so it is effectively unrestricted. This would help with the laptop situation (assuming the router doing NAT supports IGD, which mine does), and using something like [miniupnp](https://miniupnp.tuxfamily.org/), it would also be possible to dynamically open ports on a local, restrictive firewall.

---
**PT spec changes for better interoperability with other projects** — https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/trac/-/issues/10629 (Ximin Luo; 2021-11-08)

I spoke with the i2p guys today, and here are some of their suggestions for the PT spec. These would make it easier for them (and other projects in the future) to use Tor's PTs.
Major improvements:
- better spec documentation
- less Tor jargon, split Tor-specific information into separate sections (e.g. Tor env vars)
- some guidelines for non-Tor programs to use PTs
- better handling of per-endpoint config params such as pubkeys, instead of current hack via SOCKS auth params
Smaller enhancements, "good to have":
- possibility of letting a single process act as both a client (outgoing) and a server (incoming).
- flashproxy must allow client-specific remote endpoints (already as legacy/trac#10196)
- don't trust the entire localhost machine to make outgoing connections, e.g. if one user wants to run their own instance. Two options here:
- SSL connection with user/pass to the SOCKS transport client
- use unix domain sockets. This also frees up ports, which is extra useful in PT composition. Doesn't work on Windows, though.

---
**PT Spec v1 should document the ExtORPort** — https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/trac/-/issues/17523 (Yawning Angel; 2021-11-15)

Follow up from legacy/trac#16754.
The newly generic PT spec is still lacking the non-Tor-specific parts of proposals 196 and 217 pertaining to the Extended OR Port. The information conveyed via that protocol is useful to non-Tor people, so the spec should one day incorporate it.

---
**PT spec: should 255 bytes be sent in the RFC 1929 UNAME field?** — https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/trac/-/issues/30442 (Mark Smith; 2021-07-29)

Section 3.5 of the PT spec says:
If the encoded argument list is less than 255 bytes in
length, the "PLEN" field must be set to "1" and the "PASSWD"
field must contain a single NUL character.
When Kathy Brade and I implemented legacy/trac#29627, we viewed the above as a spec bug and allowed up to 255 bytes to be sent in the RFC 1929 UNAME field. Was that the wrong thing to do? Or should the PT spec be changed to read "If the encoded argument list is less than or equal to 255 bytes in length..."?

---
**PTs could self-shutdown when they detect their stdout is closed** — https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/trac/-/issues/10047 (Ximin Luo; 2020-06-27)

In [ticket:9330] we were exploring solutions to signal a PT to do clean shutdown on Windows. In [ticket:10006] dcf suggested a workaround using JobObjects, which has the nice property that the children shut down even when their parent crashes or is killed (SIGKILL or TerminateProcess).
This raises a valid point: why don't we try to achieve this on all platforms? Since all PTs must already communicate back to Tor (or any parent process, such as a PT chainer) via stdout, one way of detecting parent death is to check that stdout is still open.
Example: [http://compgroups.net/comp.unix.programmer/how-to-kill-all-child-when-parent-exits/36841]
We'll need to research whether we must write to the stream to detect that it's closed, or if we can get away with doing something like poll or select.

---
**Publish a snapshot of what PTs are needed for successful Tor use in each country** — https://gitlab.torproject.org/tpo/anti-censorship/team/-/issues/87 (Roger Dingledine; 2022-07-20)

Several countries have deployed censorship that includes trying to block Tor in various ways. And places change their censorship over time. What does the big picture look like today?
We have a scattering of resources on this topic currently, e.g.:
* OONI has "vanilla Tor" measurements in some countries.
* We have anecdotal stories from talking to users in various places.
* There's the censorship wiki: https://trac.torproject.org/projects/tor/wiki/doc/OONI/censorshipwiki (legacy/trac#6149)
* metrics-timeline has something similar: https://trac.torproject.org/projects/tor/wiki/doc/MetricsTimeline
* And the Berkeley folks wrote up their own Tor censorship timeline: https://www.icsi.berkeley.edu/~sadia/tor_timeline.pdf
But what is the situation, this month, in every country? Which ones block the Tor directory authorities, which ones block the public relays, which ones block the default (i.e. included in tor browser) bridges, which ones enumerate bridges from bridges.torproject.org and block them by IP address, which ones use DPI to detect and cut various pluggable transport connections, which ones throttle protocols they don't want, etc?
When Venezuela's CANTV ISP did their IP address based blocking, they also blocked the default obfs4 bridges, which led to confusion and then unfortunate headlines like the one from Access: "Venezuela blocks Tor". (Tor worked fine if you got a fresh bridge, even a vanilla bridge.)
In Taipei I talked to some central asia experts who told me about how Tor only works in a degraded way in Belarus in the default configuration "because a few years ago they blocked all the relay IP addresses, but they haven't updated their block since then".
We need up-to-date information about Tor blocking to provide advice to our users when they ask for support, and also we want it for preemptively including good advice in Tor Launcher's UI. Knowing historical trends will help us prioritize the development of new pluggable transports vs new distribution methods of existing transports.
So, how do we get this information?
One option is that in the glorious future, the standard OONI decks will have all of these tools built-in. But that future is a long way off, and maybe it should never even arrive, since some Tor transports are huge and have no business being bundled into a little mobile testing app.
I think we instead want some combination of the following two plans:
* We have on-the-ground contacts in many countries, and it's often not just individuals but whole NGOs full of Tor enthusiasts. We should pull together our knowledge of who we know in each place, and ask them what they think the current situation is in their country, and talk to them regularly. We can prioritize the various countries that we think were, are, or might be trying to block Tor. Having these on-the-ground experts is going to be necessary no matter what else we add to the plan, and it's why I picked 'community outreach' as the ticket component.
* We should build automated tools to assist people in assessing Tor censorship on their network -- to increase the accuracy of reports that we get, and to make the reports come with actual data that we can compare over time. This idea is legacy/trac#23839.
This ticket is for pulling together one big-picture report. But once we have one, we will want to find ways of keeping ourselves up to date over time.

(Sponsor 96: Rapid Expansion of Access to the Uncensored Internet through Tor in China, Hong Kong, & Tibet)

---
**publish images for arm** — https://gitlab.torproject.org/tpo/anti-censorship/docker-snowflake-proxy/-/issues/3 (meskio; 2022-05-16)

So raspberrypi users can run a proxy. We could copy the crossbuild setup from [docker-obfs4-bridge](https://gitlab.torproject.org/tpo/anti-censorship/docker-obfs4-bridge) or just build it natively and push it to docker hub.
Related to #2.

---
**Publish Lox crates on crates.io** — https://gitlab.torproject.org/tpo/anti-censorship/lox/-/issues/37 (onyinyang; 2023-11-23)

Prior to Lox being deployed, we should publish each of the Lox crates on [`crates.io`](https://crates.io/). The crate name and the `crates.io` name should match to avoid confusion/inconvenience.
There are instructions for publishing to `crates.io` [here](https://doc.rust-lang.org/cargo/reference/publishing.html).
Once the crates are created, we should update our pipeline to automatically push updates to the documentation as appropriate.