Snowflake issues
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues

https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40161
Snowflake Broker IP Change Rate Data HMAC Key Used the default value (2023-07-29, shelikhoo)

This is an issue discovered in the configuration from https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40151: the `ip-count-mask` used the default value, making the counting results unsuitable for publishing.
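For context, the effect of the HMAC key can be sketched in a few lines of Go (an illustrative sketch, not the broker's actual code; `maskIP` and both keys are invented here):

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// maskIP applies a keyed one-way function to an IP address before it
// is counted. With a secret, randomly generated key, the masked values
// cannot be linked back to addresses; with a known default key, anyone
// can precompute the mapping, which is what makes counts collected
// under the default value unsuitable for publishing.
func maskIP(key []byte, ip string) string {
	mac := hmac.New(sha256.New, key)
	mac.Write([]byte(ip))
	return hex.EncodeToString(mac.Sum(nil))
}

func main() {
	defaultKey := []byte("default-key")  // guessable: defeats the masking
	randomKey := []byte("k3P9-secret")   // per-deployment random secret
	// Different keys yield unlinkable masks for the same address.
	fmt.Println(maskIP(defaultKey, "203.0.113.7") != maskIP(randomKey, "203.0.113.7"))
}
```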
I will deploy it again tomorrow with a randomly generated HMAC key. The collected unsalted IP count data will be retained and used for internal research only. It is not useless either: it covers over a month of operation and can be used to validate this feature.

https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40158
Change name "Snowflake" -> "North Star" (2022-07-18, WofWca <wofwca@protonmail.com>)

Of course there's probably no strong reason to do this, just a funny idea I had.
Symbolism: North Star - guidance (to the free internet), North - ICE, cold (like it is now with Snowflake).
"Polaris" would be another layer, but too complicated, I think.
Maybe there's going to be another similar project that can take this name.

https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40157
Add missing 2.3.0 tag to gitweb.torproject.org (2022-07-13, tla)

The tag v2.3.0 is still missing in this repo:
https://gitweb.torproject.org/pluggable-transports/snowflake.git/refs/
Which of these now is the source-of-truth?
I'd say it's not a good idea to keep two repositories around, but if we do, they should be synchronized automatically.

https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40154
Snowflake container crashes after a few days (2023-11-21, jonesv)

I am running Snowflake on my VPS as a podman container, with the following:
```
podman run --net host -m=700M -d snowflake-proxy
```
I have been running this for a few months, and it tends to crash after one to ten days. If I look at the memory usage, I see that it has a tendency to grow. At some point I was monitoring more closely, and it would pretty clearly crash after reaching the limit (at first I had no limit and it would crash my VPS; now I guess it's just stopped by podman when it uses more than 700M).
My guess is that Snowflake does not keep much state, so the memory usage should never grow too much, right? This feels like a memory leak, but I am not sure what I can do to get more data.

https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40153
probetest misbehaving (2022-07-17, MarkC)

probetest has been throwing errors for a day or so:
_error polling probe: http2: timeout awaiting response headers_
just ran _./proxy -nat-retest-interval 1m -verbose_ and it's continuing ad infinitum
still seeing traffic throughput, though
@meskio @shelikhoo thought you might want to know
update: one successful probe in a 15-minute period

https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40152
Snowflake doesn't work in Russia (2022-06-20, cypherpunks)

Snowflake in Tor Browser 11.5a12 can't connect.

https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40151
Snowflake Broker Deployment 22-06-21 (2022-07-22, shelikhoo)

Code to be deployed: TBD, with [Distributed](https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/merge_requests/95) and [IP Change Rate](https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/merge_requests/95) merged.
## Deployment Script
```
sv stop snowflake-broker
cp /usr/local/bin/broker ./snowflake-broker-22-06-17-backup-$(date +%N)
cp /etc/service/snowflake-broker/run ./snowflake-broker-run-22-06-17-backup-$(date +%N)
install --owner root ./snowflake-broker-22-06-17-candidcate /usr/local/bin/broker
install --owner root ./snowflake-broker-run-22-06-17-candidcate /etc/service/snowflake-broker/run
install --owner root ./snowflake-broker-bridgelist-22-06-17-candidcate /home/snowflake-broker/bridge_lists.json
sv start snowflake-broker
```
## New Run Script
```
#!/bin/sh -e
setcap 'cap_net_bind_service=+ep' /usr/local/bin/broker
export GOMAXPROCS=1
exec chpst -u snowflake-broker -o 32768 /usr/local/bin/broker --metrics-log /home/snowflake-broker/metrics.log --acme-hostnames snowflake-broker.bamsoftware.com,snowflake-broker.freehaven.net,snowflake-broker.torproject.net --acme-email dcf@torproject.org --acme-cert-cache /home/snowflake-broker/acme-cert-cache --bridge-list-path /home/snowflake-broker/bridge_lists.json --default-relay-pattern ^snowflake.torproject.net$ --allowed-relay-pattern snowflake.torproject.net$ -ip-count-log /home/snowflake-broker/metrics-ip.jsonl -ip-count-interval 1h 2>&1
```
## New Bridge List
```
{"displayName":"default", "webSocketAddress":"wss://snowflake.torproject.net/", "fingerprint":"2B280B23E1107BB62ABFC40DDCC8824814F80A72"}
```
## Build
```
git clone https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake.git
git checkout 35e9ab8c0b3168b5eaa4f6538b8e9208eb38c508
GOARCH=amd64 GOOS=linux CGO_ENABLED=0 go build -ldflags="-s -w" -o snowflake-broker
#sha256sum snowflake-broker=ca85c33aeb8bdc04e31a772f24b610c5bb4ab68973a9ade6e64915bf7c1ee8d2
```
## Deployment Pack
```
$sha256sum *
73ccdf7f3cc5da1e0808bec0c1593500c9b09c7f89c4b96d403cb5096286b1e1 deployment.sh
ca85c33aeb8bdc04e31a772f24b610c5bb4ab68973a9ade6e64915bf7c1ee8d2 snowflake-broker-22-06-17-candidcate
e9de53a5566216dfd511b229385edcf3f710684039cb76a27b737e8ed47b0a3f snowflake-broker-bridgelist-22-06-17-candidcate
cc46657f8c186e2788da3d8bb58ec6a00af1e73493a01964171503569f74680a snowflake-broker-run-22-06-17-candidcate
```
## Note to self
Check log:
```
tail -n 50 /var/log/snowflake-broker/current
```

https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40148
provide a library debian package (2022-07-07, meskio <meskio@torproject.org>)

Other packages are interested in using the snowflake client library in Debian. Let's create a *-dev* package with it.

https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40146
Avoid double NAT check on standalone proxy startup (2022-05-30, Cecylia Bocovich)

When Go standalone proxies first start up, they perform two NAT probe checks in quick succession. The first is done by calling `checkNATType` directly from `Start` at [line 562](https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/blob/ae5a71e6e58e664311e3a12f9adb48ed439df4a5/proxy/lib/snowflake.go#L562), and the second as part of the periodic `NatRetestTask` started at [line 577](https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/blob/ae5a71e6e58e664311e3a12f9adb48ed439df4a5/proxy/lib/snowflake.go#L577).
It's not much, but cutting out this double test will reduce some of the load on the probe service.

https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40145
NAT type refresh appears to be skipped on 24 hour cycle (2023-04-13, MarkC)

As of snowflake-proxy v2.2.0 I'm getting 'NAT type: unknown' in the log. Previously on v2.1.0 it was 'unrestricted/unrestricted'. When I run the NAT behaviour tool it reports endpoint-independent for both mapping and filtering. I have a static IP for the host computer and all ephemeral UDP ports forwarded to it. Traffic flow for the proxy is way down as a result. I was averaging close to 1 GB/hr before.
Here’s the output from the NAT behaviour tool:
```
Users-Mac-mini:~ user$ $GOPATH/bin/stun-nat-behaviour --server stun.voip.blackberry.com:3478
2022/05/26 17:56:13 Connecting to STUN server: stun.voip.blackberry.com:3478
2022/05/26 17:56:15 Local address: 0.0.0.0:59082
2022/05/26 17:56:15 Received xormapped address: xxx.xxx.xx.xxx:59082
2022/05/26 17:56:15 Received xormapped address: xxx.xxx.xx.xxx:59082
2022/05/26 17:56:15 NAT mapping behavior: endpoint-independent
2022/05/26 17:56:15 Local address: 0.0.0.0:55624
2022/05/26 17:56:15 Received xormapped address: xxx.xxx.xx.xxx:55624
2022/05/26 17:56:15 NAT filtering behavior: endpoint-independent
```
And the output from proxy -verbose:
```
Users-Mac-mini:~ user$ proxy -verbose
2022/05/27 00:41:07 In the last 1h0m0s, there are 0 connections. Traffic Relayed ↑ 0 B, ↓ 0 B.
2022/05/27 00:41:07 starting
2022/05/27 00:41:07 WebRTC: Created offer
2022/05/27 00:41:07 WebRTC: Set local description
2022/05/27 00:41:12 Offer: {"type":"offer","sdp":"v=0\r\no=- 8344757767766408414 1653612067 IN IP4 [scrubbed]\r\ns=-\r\nt=0 0\r\na=fingerprint:sha-256 07:B3:E5:8A:F4:91:22:25:4C:E4:8F:C0:EF:F3:05:1C:8E:72:8A:60:4E:79:18:C5:7A:52:7A:BD:79:E2:6F:C1\r\na=group:BUNDLE 0\r\nm=application 9 UDP/DTLS/SCTP webrtc-datachannel\r\nc=IN IP4 [scrubbed]\r\na=setup:actpass\r\na=mid:0\r\na=sendrecv\r\na=sctp-port:5000\r\na=ice-ufrag:zLGIWbmOZdWGnHVI\r\na=ice-pwd:mJruWarpiqemHmanJRrtzdcrXziaGsxp\r\na=candidate:1952023002 1 udp 2130706431 [scrubbed] 53922 typ host\r\na=candidate:1952023002 2 udp 2130706431 [scrubbed] 53922 typ host\r\na=candidate:170000163 1 udp 1694498815 [scrubbed] 63454 typ srflx raddr [scrubbed] rport 63454\r\na=candidate:170000163 2 udp 1694498815 [scrubbed] 63454 typ srflx raddr [scrubbed] rport 63454\r\na=end-of-candidates\r\n"}
2022/05/27 00:41:42 error polling probe: http2: timeout awaiting response headers
2022/05/27 00:41:42 NAT type: unknown
2022/05/27 00:41:42 WebRTC: Created offer
2022/05/27 00:41:42 WebRTC: Set local description
2022/05/27 00:41:47 Offer: {"type":"offer","sdp":"v=0\r\no=- 5257956900376912333 1653612102 IN IP4 [scrubbed]\r\ns=-\r\nt=0 0\r\na=fingerprint:sha-256 0A:5B:00:59:1F:FD:F5:4E:40:DF:D4:80:CB:BE:59:35:9E:DF:CB:D5:AF:92:4F:61:86:17:75:FE:4E:72:D6:43\r\na=group:BUNDLE 0\r\nm=application 9 UDP/DTLS/SCTP webrtc-datachannel\r\nc=IN IP4 [scrubbed]\r\na=setup:actpass\r\na=mid:0\r\na=sendrecv\r\na=sctp-port:5000\r\na=ice-ufrag:VHAlBzBCTjVuNPMK\r\na=ice-pwd:UyWRcktAolsfjKqCURXlLbeaqVuPUsyy\r\na=candidate:1952023002 1 udp 2130706431 [scrubbed] 58920 typ host\r\na=candidate:1952023002 2 udp 2130706431 [scrubbed] 58920 typ host\r\na=candidate:170000163 1 udp 1694498815 [scrubbed] 62713 typ srflx raddr [scrubbed] rport 62713\r\na=candidate:170000163 2 udp 1694498815 [scrubbed] 62713 typ srflx raddr [scrubbed] rport 62713\r\na=end-of-candidates\r\n"}
2022/05/27 00:42:17 error polling probe: http2: timeout awaiting response headers
2022/05/27 00:55:30 sdp offer successfully received.
2022/05/27 00:55:30 Generating answer...
2022/05/27 00:55:55 Timed out waiting for client to open data channel.
2022/05/27 01:41:07 In the last 1h0m0s, there are 0 connections. Traffic Relayed ↑ 0 B, ↓ 0 B.
2022/05/27 01:48:09 sdp offer successfully received.
2022/05/27 01:48:09 Generating answer...
2022/05/27 01:48:34 Timed out waiting for client to open data channel.
2022/05/27 01:57:56 sdp offer successfully received.
2022/05/27 01:57:56 Generating answer...
2022/05/27 01:58:21 Timed out waiting for client to open data channel.
2022/05/27 02:13:32 sdp offer successfully received.
2022/05/27 02:13:32 Generating answer...
2022/05/27 02:13:57 Timed out waiting for client to open data channel.
2022/05/27 02:13:58 sdp offer successfully received.
2022/05/27 02:13:58 Generating answer...
2022/05/27 02:14:23 Timed out waiting for client to open data channel.
2022/05/27 02:41:07 In the last 1h0m0s, there are 0 connections. Traffic Relayed ↑ 0 B, ↓ 0 B.
2022/05/27 03:15:00 sdp offer successfully received.
2022/05/27 03:15:00 Generating answer...
2022/05/27 03:15:26 Timed out waiting for client to open data channel.
```
Normally I'd have seen at least some traffic by this point. Perhaps I should be more patient?

https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40144
Add CI build target for go 1.18 (2022-05-27, Cecylia Bocovich)

Looks like Tor Browser will be moving to go1.18 at some point (https://gitlab.torproject.org/tpo/applications/tor-browser-build/-/issues/40464). We should make sure that snowflake works with that version of Go.

https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40142
Race condition in broker library causes broker to crash (2022-07-26, itchyonion)

(This is the error message from the s28 version of the broker; the Tor version will be slightly different.)
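The trace below shows `heap.Pop` indexing into an empty `SnowflakeHeap` from concurrent HTTP handlers. A generic shape for the fix (an illustrative sketch, not the RACECAR-GU broker code; all names here are invented) is to serialize heap access with a mutex and refuse to pop when empty:

```go
package main

import (
	"container/heap"
	"fmt"
	"sync"
)

// intHeap is a stand-in for the broker's SnowflakeHeap.
type intHeap []int

func (h intHeap) Len() int            { return len(h) }
func (h intHeap) Less(i, j int) bool  { return h[i] < h[j] }
func (h intHeap) Swap(i, j int)       { h[i], h[j] = h[j], h[i] }
func (h *intHeap) Push(x interface{}) { *h = append(*h, x.(int)) }
func (h *intHeap) Pop() interface{} {
	old := *h
	n := len(old)
	x := old[n-1]
	*h = old[:n-1]
	return x
}

// safeHeap serializes all heap access; pop on an empty heap returns
// ok=false instead of letting heap.Pop index an empty slice and panic.
type safeHeap struct {
	mu sync.Mutex
	h  intHeap
}

func (s *safeHeap) push(v int) {
	s.mu.Lock()
	defer s.mu.Unlock()
	heap.Push(&s.h, v)
}

func (s *safeHeap) pop() (int, bool) {
	s.mu.Lock()
	defer s.mu.Unlock()
	if s.h.Len() == 0 {
		return 0, false
	}
	return heap.Pop(&s.h).(int), true
}

func main() {
	var s safeHeap
	var wg sync.WaitGroup
	// Concurrent push/pop pairs, as in concurrent clientOffers handlers.
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func(v int) {
			defer wg.Done()
			s.push(v)
			s.pop()
		}(i)
	}
	wg.Wait()
	_, ok := s.pop()
	fmt.Println(ok) // false: heap drained without panicking
}
```

The original panic trace follows.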
```
http: panic serving 127.0.0.1:39380: runtime error: index out of range [0] with length 0
goroutine 105878 [running]:
net/http.(*conn).serve.func1(0xc008088aa0)
/usr/local/go/src/net/http/server.go:1800 +0x13b
panic(0x7f12dec52440, 0xc002aa3bc0)
/usr/local/go/src/runtime/panic.go:975 +0x3e7
github.com/RACECAR-GU/snowflake/broker.SnowflakeHeap.Swap(...)
/usr/local/go/pkg/mod/github.com/!r!a!c!e!c!a!r-!g!u/snowflake@v0.0.0-20211214215908-95acebd91684/broker/snowflake-heap.go:32
container/heap.Pop(0x7f12decd1f60, 0xc00030e040, 0x12, 0xc0030c5d94)
/usr/local/go/src/container/heap/heap.go:62 +0x66
github.com/RACECAR-GU/snowflake/broker.clientOffers(0xc000322570, 0x7f12decce160, 0xc009662c40, 0xc005554300)
/usr/local/go/pkg/mod/github.com/!r!a!c!e!c!a!r-!g!u/snowflake@v0.0.0-20211214215908-95acebd91684/broker/broker.go:297 +0x5c9
github.com/RACECAR-GU/snowflake/broker.SnowflakeHandler.ServeHTTP(0xc000322570, 0x7f12deca9b80, 0x7f12decce160, 0xc009662c40, 0xc005554300)
/usr/local/go/pkg/mod/github.com/!r!a!c!e!c!a!r-!g!u/snowflake@v0.0.0-20211214215908-95acebd91684/broker/broker.go:97 +0x213
net/http.(*ServeMux).ServeHTTP(0x7f12df1e8d20, 0x7f12decce160, 0xc009662c40, 0xc005554300)
/usr/local/go/src/net/http/server.go:2416 +0x1a7
net/http.serverHandler.ServeHTTP(0xc000332000, 0x7f12decce160, 0xc009662c40, 0xc005554300)
/usr/local/go/src/net/http/server.go:2836 +0xa5
net/http.(*conn).serve(0xc008088aa0, 0x7f12deccfe60, 0xc006437ec0)
/usr/local/go/src/net/http/server.go:1924 +0x86e
created by net/http.(*Server).Serve
/usr/local/go/src/net/http/server.go:2962 +0x35e
```

https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40141
are snowflake events safe to make public their content (2022-05-25, meskio <meskio@torproject.org>)

OONI wants to use the client event API and publish the strings of the events in the JSON. Do they contain any personal data like IP addresses?

https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40140
Snowflake blocked by ClientHello [RU] (2022-10-04, cypherpunks)

<details>
<summary>FAIL: https://paste.debian.net/plainh/e32d200a</summary>
```
Datagram Transport Layer Security
DTLSv1.2 Record Layer: Handshake Protocol: Client Hello
Content Type: Handshake (22)
Version: DTLS 1.2...<details>
<summary>FAIL: https://paste.debian.net/plainh/e32d200a</summary>
```
Datagram Transport Layer Security
DTLSv1.2 Record Layer: Handshake Protocol: Client Hello
Content Type: Handshake (22)
Version: DTLS 1.2 (0xfefd)
Epoch: 0
Sequence Number: 0
Length: 124
Handshake Protocol: Client Hello
Handshake Type: Client Hello (1)
Length: 112
Message Sequence: 0
Fragment Offset: 0
Fragment Length: 112
Version: DTLS 1.2 (0xfefd)
Random: <snip>...
GMT Unix Time: <snip> UTC
Random Bytes: <snip>...
Session ID Length: 0
Cookie Length: 0
Cipher Suites Length: 12
Cipher Suites (6 suites)
Cipher Suite: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 (0xc02b)
Cipher Suite: TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (0xc02f)
Cipher Suite: TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA (0xc00a)
Cipher Suite: TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA (0xc014)
Cipher Suite: TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 (0xc02c)
Cipher Suite: TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (0xc030)
Compression Methods Length: 1
Compression Methods (1 method)
Compression Method: null (0)
Extensions Length: 58
Extension: signature_algorithms (len=16)
Type: signature_algorithms (13)
Length: 16
Signature Hash Algorithms Length: 14
Signature Hash Algorithms (7 algorithms)
Signature Algorithm: ecdsa_secp256r1_sha256 (0x0403)
Signature Hash Algorithm Hash: SHA256 (4)
Signature Hash Algorithm Signature: ECDSA (3)
Signature Algorithm: ecdsa_secp384r1_sha384 (0x0503)
Signature Hash Algorithm Hash: SHA384 (5)
Signature Hash Algorithm Signature: ECDSA (3)
Signature Algorithm: ecdsa_secp521r1_sha512 (0x0603)
Signature Hash Algorithm Hash: SHA512 (6)
Signature Hash Algorithm Signature: ECDSA (3)
Signature Algorithm: rsa_pkcs1_sha256 (0x0401)
Signature Hash Algorithm Hash: SHA256 (4)
Signature Hash Algorithm Signature: RSA (1)
Signature Algorithm: rsa_pkcs1_sha384 (0x0501)
Signature Hash Algorithm Hash: SHA384 (5)
Signature Hash Algorithm Signature: RSA (1)
Signature Algorithm: rsa_pkcs1_sha512 (0x0601)
Signature Hash Algorithm Hash: SHA512 (6)
Signature Hash Algorithm Signature: RSA (1)
Signature Algorithm: ed25519 (0x0807)
Signature Hash Algorithm Hash: Unknown (8)
Signature Hash Algorithm Signature: Unknown (7)
Extension: renegotiation_info (len=1)
Type: renegotiation_info (65281)
Length: 1
Renegotiation Info extension
Renegotiation info extension length: 0
Extension: supported_groups (len=8)
Type: supported_groups (10)
Length: 8
Supported Groups List Length: 6
Supported Groups (3 groups)
Supported Group: x25519 (0x001d)
Supported Group: secp256r1 (0x0017)
Supported Group: secp384r1 (0x0018)
Extension: ec_point_formats (len=2)
Type: ec_point_formats (11)
Length: 2
EC point formats Length: 1
Elliptic curves point formats (1)
EC point format: uncompressed (0)
Extension: use_srtp (len=7)
Type: use_srtp (14)
Length: 7
SRTP Protection Profiles Length: 4
SRTP Protection Profile: SRTP_AEAD_AES_128_GCM (0x0007)
SRTP Protection Profile: SRTP_AES128_CM_HMAC_SHA1_80 (0x0001)
MKI Length: 0
Extension: extended_master_secret (len=0)
Type: extended_master_secret (23)
Length: 0
```
</details>
<details>
<summary>PASS: https://paste.debian.net/plainh/fe7c64fc</summary>
```
Datagram Transport Layer Security
DTLS Record Layer: Handshake Protocol: Client Hello
Content Type: Handshake (22)
Version: DTLS 1.0 (0xfeff)
Epoch: 0
Sequence Number: 0
Length: 176
Handshake Protocol: Client Hello
Handshake Type: Client Hello (1)
Length: 164
Message Sequence: 0
Fragment Offset: 0
Fragment Length: 164
Version: DTLS 1.2 (0xfefd)
Random: <snip>...
GMT Unix Time: <snip> UTC
Random Bytes: <snip>...
Session ID Length: 0
Cookie Length: 0
Cipher Suites Length: 16
Cipher Suites (8 suites)
Cipher Suite: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 (0xc02b)
Cipher Suite: TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (0xc02f)
Cipher Suite: TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256 (0xcca9)
Cipher Suite: TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 (0xcca8)
Cipher Suite: TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA (0xc00a)
Cipher Suite: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA (0xc009)
Cipher Suite: TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA (0xc013)
Cipher Suite: TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA (0xc014)
Compression Methods Length: 1
Compression Methods (1 method)
Compression Method: null (0)
Extensions Length: 106
Extension: extended_master_secret (len=0)
Type: extended_master_secret (23)
Length: 0
Extension: renegotiation_info (len=1)
Type: renegotiation_info (65281)
Length: 1
Renegotiation Info extension
Renegotiation info extension length: 0
Extension: supported_groups (len=8)
Type: supported_groups (10)
Length: 8
Supported Groups List Length: 6
Supported Groups (3 groups)
Supported Group: x25519 (0x001d)
Supported Group: secp256r1 (0x0017)
Supported Group: secp384r1 (0x0018)
Extension: ec_point_formats (len=2)
Type: ec_point_formats (11)
Length: 2
EC point formats Length: 1
Elliptic curves point formats (1)
EC point format: uncompressed (0)
Extension: application_layer_protocol_negotiation (len=18)
Type: application_layer_protocol_negotiation (16)
Length: 18
ALPN Extension Length: 16
ALPN Protocol
ALPN string length: 6
ALPN Next Protocol: webrtc
ALPN string length: 8
ALPN Next Protocol: c-webrtc
Extension: signature_algorithms (len=32)
Type: signature_algorithms (13)
Length: 32
Signature Hash Algorithms Length: 30
Signature Hash Algorithms (15 algorithms)
Signature Algorithm: ecdsa_secp256r1_sha256 (0x0403)
Signature Hash Algorithm Hash: SHA256 (4)
Signature Hash Algorithm Signature: ECDSA (3)
Signature Algorithm: ecdsa_secp384r1_sha384 (0x0503)
Signature Hash Algorithm Hash: SHA384 (5)
Signature Hash Algorithm Signature: ECDSA (3)
Signature Algorithm: ecdsa_secp521r1_sha512 (0x0603)
Signature Hash Algorithm Hash: SHA512 (6)
Signature Hash Algorithm Signature: ECDSA (3)
Signature Algorithm: ecdsa_sha1 (0x0203)
Signature Hash Algorithm Hash: SHA1 (2)
Signature Hash Algorithm Signature: ECDSA (3)
Signature Algorithm: rsa_pss_rsae_sha256 (0x0804)
Signature Hash Algorithm Hash: Unknown (8)
Signature Hash Algorithm Signature: Unknown (4)
Signature Algorithm: rsa_pss_rsae_sha384 (0x0805)
Signature Hash Algorithm Hash: Unknown (8)
Signature Hash Algorithm Signature: Unknown (5)
Signature Algorithm: rsa_pss_rsae_sha512 (0x0806)
Signature Hash Algorithm Hash: Unknown (8)
Signature Hash Algorithm Signature: Unknown (6)
Signature Algorithm: rsa_pkcs1_sha256 (0x0401)
Signature Hash Algorithm Hash: SHA256 (4)
Signature Hash Algorithm Signature: RSA (1)
Signature Algorithm: rsa_pkcs1_sha384 (0x0501)
Signature Hash Algorithm Hash: SHA384 (5)
Signature Hash Algorithm Signature: RSA (1)
Signature Algorithm: rsa_pkcs1_sha512 (0x0601)
Signature Hash Algorithm Hash: SHA512 (6)
Signature Hash Algorithm Signature: RSA (1)
Signature Algorithm: rsa_pkcs1_sha1 (0x0201)
Signature Hash Algorithm Hash: SHA1 (2)
Signature Hash Algorithm Signature: RSA (1)
Signature Algorithm: SHA256 DSA (0x0402)
Signature Hash Algorithm Hash: SHA256 (4)
Signature Hash Algorithm Signature: DSA (2)
Signature Algorithm: SHA384 DSA (0x0502)
Signature Hash Algorithm Hash: SHA384 (5)
Signature Hash Algorithm Signature: DSA (2)
Signature Algorithm: SHA512 DSA (0x0602)
Signature Hash Algorithm Hash: SHA512 (6)
Signature Hash Algorithm Signature: DSA (2)
Signature Algorithm: SHA1 DSA (0x0202)
Signature Hash Algorithm Hash: SHA1 (2)
Signature Hash Algorithm Signature: DSA (2)
Extension: Unknown type 28 (len=2)
Type: Unknown (28)
Length: 2
Data: 4000
Extension: use_srtp (len=11)
Type: use_srtp (14)
Length: 11
SRTP Protection Profiles Length: 8
SRTP Protection Profile: SRTP_AEAD_AES_128_GCM (0x0007)
SRTP Protection Profile: SRTP_AEAD_AES_256_GCM (0x0008)
SRTP Protection Profile: SRTP_AES128_CM_HMAC_SHA1_80 (0x0001)
SRTP Protection Profile: SRTP_AES128_CM_HMAC_SHA1_32 (0x0002)
MKI Length: 0
```
</details>

https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40139
Network activity around :00 every or every second hour (2022-05-17, Linus Nordberg <linus@torproject.org>)

Here's a graph over 14h of `netfilter.conntrack_sockets` according to local netdata, tracking `/proc/sys/net/netfilter/nf_conntrack_max`. It seems like there are spikes building up to every even hour (:00), except sometimes it's only every two hours.
This is more of an observation than a bug report.

![conntrack-sockets](/uploads/062310020bd72e47b950903b4e98bd5a/conntrack-sockets.png)

https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40138
snowflake-01: Change uplink from 1G to 10G (2022-05-06, Linus Nordberg <linus@torproject.org>)
snowflake-01.tpn has a 10G network interface but is currently connected to a 1G uplink port. This is planned to be changed on 2022-05-06, with the service window starting at 12:00 UTC.
Best case this will cause a few seconds of network outage.
Another case is that we will have to bring down the system and install another PCI board. That might result in up to one hour of downtime for the whole system. I'll update this ticket when I know more.

https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40136
Offloading KCP related processing from server to proxy (2022-11-15, shelikhoo)
Currently, to the best of my knowledge, the proxy forwards all the data it receives to the server, where packet loss and connection instability are compensated for.
@arma asked if it would be possible to offload the packet loss compensation (KCP) to the proxy, thus reducing the traffic between proxy and server in order to improve connection speed. I am unsure whether this would be possible, so I opened this ticket to start a public discussion that includes @dcf.
The original chat log is as follows:
```
[7:17:43 pm] <+nickm> my connection to this irc host seems to be having some packet loss.
[7:17:46 pm] <+nickm> so i might have a hard time seeing whatever.
[7:36:23 pm] <+nickm> (is hetzner on fire, or is this the threatened upgrade to a new debian version?)
[7:41:20 pm] -*- ahf dont know
[7:47:00 pm] <+meejah> the chances of _two_ fires in one year are low, right?? ;)
[7:53:22 pm] <shelikhoo> there is a tool that can reduce the impact of packet loss: https://github.com/xtaci/kcptun
[7:53:53 pm] <shelikhoo> the KCP part of this tunnel software is already used in snowflake
[7:54:30 pm] <shelikhoo> it will send packets aggressively, so packet loss are overpowered
[7:55:33 pm] <shelikhoo> I use this for my traffic between home network and network egress to compensate for network quality issue with local ISP
[7:58:17 pm] <+armadev> shelikhoo: hey, speaking of kcp and snowflake, and now also speaking of tcp and bbr
[7:58:27 pm] <+armadev> if there is packet loss between the snowflake user and the snowflake volunteer,
[7:58:35 pm] <+armadev> like say one of them is inside china and one of them outside,
[7:58:56 pm] <+armadev> how does snowflake handle this? in the obfs4 case we found that being more aggressive at tcp helped a lot
[7:59:13 pm] <+armadev> ( https://gitlab.torproject.org/tpo/anti-censorship/team/-/issues/65 )
[7:59:14 pm] [zwiebelbot] tor:tpo/anti-censorship/team#65: S96 dynamic IP obfs4 bridge performance insufficiency - https://bugs.torproject.org/tpo/anti-censorship/team/65 - [Open]
[7:59:35 pm] <+armadev> is there some similar change we should consider for snowflake? or is the kcp part supposed to already handle that?
[8:00:07 pm] <shelikhoo> I think snowflake have built-in KCP, but test result from vantage point show some of times the bootstrap didn't finish
[8:00:22 pm] <shelikhoo> in our vantage point in China
[8:01:24 pm] <shelikhoo> let's say in the most recent test there is 1 of 10 times that it stuck at 50%
[8:01:43 pm] <+armadev> right. i am wondering if kcp, between client and bridge, is too big a loop
[8:02:22 pm] <shelikhoo> I didn't get the idea behind "is too big a loop"
[8:02:24 pm] <+armadev> (client -> volunteer -> bridge (and back))
[8:02:31 pm] <shelikhoo> oh yes
[8:02:49 pm] <shelikhoo> and KCP create a lot of traffic
[8:02:53 pm] <shelikhoo> which means
[8:03:13 pm] <shelikhoo> (volunteer -> bridge) will be slower
[8:03:16 pm] <shelikhoo> and
[8:03:32 pm] <shelikhoo> bridge will need to process more traffic
[8:04:09 pm] <+armadev> how is webrtc (dtls) at handling packet loss?
[8:04:27 pm] <+armadev> like, how much are we relying on kcp here because the other layers are failing us
[8:04:52 pm] <shelikhoo> however we are unable to move this KCP processing to volunteer side, since that would require rework of turbo tunnel....
[8:05:08 pm] <shelikhoo> webrtc handle packet loss with SCTP
[8:05:12 pm] <shelikhoo> not DTLS
[8:05:24 pm] <shelikhoo> https://github.com/pion/sctp
[8:05:33 pm] <shelikhoo> but it is toooooooo conservative
[8:06:17 pm] <shelikhoo> so it is very slow when there is constant packet loss
[8:06:26 pm] <+armadev> ok so that is a good candidate as The Problem
[8:07:23 pm] <shelikhoo> I think the task in improving snowflake speed is assigned to cecylia ....
[8:07:44 pm] <shelikhoo> But I also have some experience in getting around this issue
[8:07:47 pm] <shelikhoo> as well
[8:08:13 pm] <+armadev> yep. i am not worried that we will steal her task and accidentally finish it :)
[8:09:17 pm] <shelikhoo> it is actually a quite difficult task, the way I was trying to solve it in my own research is with forward error correction
[8:10:14 pm] <shelikhoo> like Reed-Solomon
[8:10:28 pm] <shelikhoo> or Fountain code
[8:11:36 pm] <shelikhoo> so instead of retransmitting data when things are lost, like tcp
[8:11:46 pm] <shelikhoo> or sending a few copies of the data, like kcp
[8:12:02 pm] <shelikhoo> send the original data and a few reconstruction shards
[8:12:47 pm] <+armadev> yeah. this is an entire research field. i imagine the theory is pretty easy, but if the reality is that packet loss isn't uniformly-at-random, the theory starts to fall apart
[8:13:15 pm] <shelikhoo> so if in a given block the number of lost packets is lower than the number of reconstruction shards, then it will not need retransmission
[8:13:48 pm] <shelikhoo> and in my own project, there is packet dispatch pattern control
[8:14:20 pm] <shelikhoo> so reconstruction shards are not all sent at the same time as the data itself
[8:15:15 pm] <shelikhoo> instead, different packets are dynamically scheduled to be sent interlaced
[8:15:50 pm] <shelikhoo> so burst loss and constant loss cases are all dealt with in a best-effort way
[8:18:53 pm] <+armadev> you should learn about... what's it called.. 'network coding'
[8:19:49 pm] <shelikhoo> yes! added to todo list
[8:20:57 pm] <+armadev> all of these things are fun in theory but the people who work on them rarely actually interact with the real world. that makes it tough. :)
[8:22:16 pm] <shelikhoo> Isn't this the kind of thing that is in mobile phones' basebands, also known as 4G/5G?
[8:22:55 pm] <shelikhoo> so they kind of need to face reality
[8:25:50 pm] <+armadev> i don't know. good question. i would also wonder if the type/pattern of packet loss they see is different from what snowflake sees.
[8:26:04 pm] <+armadev> they probably get transient radio interference etc, which is different from congestion
[8:28:31 pm] <shelikhoo> Yes, that could be true. This is a good question that I can't answer now. But when cecylia actually begins the work on this part, I would be happy to join the discussion about transfer performance (and sad if not invited....).
[8:30:07 pm] <+armadev> please grab the backlog here in case you want to use it then
[8:30:24 pm] <+armadev> and bringing it back to snowflake: wait what, we use sctp, not dtls? is that because we use the data channel and not the media channel?
[8:30:50 pm] <shelikhoo> dtls is encryption
[8:31:00 pm] <shelikhoo> sctp is packet -> stream
[8:31:12 pm] <+armadev> oh. it's dtls on the outside, sctp inside, and yet something else inside that probably?
[8:31:31 pm] <shelikhoo> then turbo tunnel inside
[8:31:38 pm] <+armadev> so the answer to "how does dtls handle packet loss" is "some of the packets don't arrive, that's how it handles it"
[8:31:42 pm] <shelikhoo> turbo tunnel includes a layer of kcp
[8:32:47 pm] <shelikhoo> DTLS will propagate packet loss to the user
[8:32:59 pm] <+armadev> does webrtc always use sctp?
[8:33:27 pm] <shelikhoo> yes, but sctp support reliable and unreliable traffic
[8:33:41 pm] <shelikhoo> so it can either propagate packet loss
[8:33:52 pm] <shelikhoo> or deal with it itself
[8:34:30 pm] <shelikhoo> we are asking it to propagate packet loss and deal with it at turbo tunnel's kcp
[8:35:06 pm] <+armadev> oh interesting. so we could try to get it to fix its packet loss at the client -> volunteer layer too.
[8:35:27 pm] <+armadev> and then we'd have two layers fighting each other, but maybe it's still a win. fun.
[8:35:40 pm] <shelikhoo> that would need some rework of turbo tunnel, i guess
[8:36:20 pm] <+armadev> not necessarily. we could let turbotunnel keep doing what it is doing, for example to handle when you change snowflakes
[8:38:22 pm] <shelikhoo> I am unsure about that.... We can discuss this in a ticket.... I can create a ticket to discuss this with dcf around....
[8:42:12 pm] <+armadev> yep. i don't have any answers. just yet more possible ways to combine cpomponents.
[8:42:38 pm] <shelikhoo> yes...
[8:42:43 pm] <+armadev> (experiencing my own packet loss here, which makes typos, sorry)
[8:43:12 pm] <shelikhoo> (no problem~)
```

https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40135
Cannot build snowflake/proxy (2022-07-09, cypherpunks)

Hello! I want to install the snowflake proxy on my raspberry pi, but I can't seem to build it. I updated my system via apt-get update/upgrade successfully, and the installation of git and golang was also successful. But after cloning the git repository, I went to /home/pi/snowflake/proxy, and when I enter go build the system gives me this:
```
pi@raspberrypi:~/snowflake/proxy $ go build
main.go:5:2: cannot find package "git.torproject.org/pluggable-transports/snowflake.git/v2/common/event" in any of:
/usr/lib/go-1.7/src/git.torproject.org/pluggable-transports/snowflake.git/v2/common/event (from $GOROOT)
($GOPATH not set)
main.go:12:2: cannot find package "git.torproject.org/pluggable-transports/snowflake.git/v2/common/safelog" in any of:
/usr/lib/go-1.7/src/git.torproject.org/pluggable-transports/snowflake.git/v2/common/safelog (from $GOROOT)
($GOPATH not set)
main.go:13:2: cannot find package "git.torproject.org/pluggable-transports/snowflake.git/v2/proxy/lib" in any of:
/usr/lib/go-1.7/src/git.torproject.org/pluggable-transports/snowflake.git/v2/proxy/lib (from $GOROOT)
($GOPATH not set)
```
What is the problem here, and how can I fix it? When I look into /usr/lib/go-1.7/src/ there is no git.torproject.org subdirectory.
I hope you can help me.

Assignee: itchyonion

https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40134
Log messages from client NAT check failures are confusing (2022-05-31, David Fifield <dcf@torproject.org>)

When [`CheckIfRestrictedNAT`](https://gitweb.torproject.org/pluggable-transports/snowflake.git/tree/common/nat/nat.go?h=v2.1.0#n34) fails with an error, it logs a message like `Error: no response from server`. But in context, the messages confusingly appear to refer to the broker rendezvous, not the STUN server connection:
```
Target URL: snowflake-broker.torproject.net.global.prod.fastly.net
Front URL: cdn.sstatic.net
Error: no response from server
Error: no response from server
Error: no response from server
```
In this situation, communication with the broker has succeeded and a proxy has been assigned, but the client is having trouble checking its own NAT type. These log messages should say "STUN" or "NAT" somewhere in them, and ideally also the address of the server that failed (possibly subject to safe-log scrubbing).
Refactoring suggestion: instead of having a log call at every return of `isRestrictedMapping`, you can use [`fmt.Errorf("...: %w")`](https://pkg.go.dev/errors) to wrap the underlying error with additional context, and just return the error. That way, the logging can be consolidated in [`updateNATType`](https://gitweb.torproject.org/pluggable-transports/snowflake.git/tree/client/lib/snowflake.go?h=v2.1.0#n239), which is also where the STUN server address can be added and displayed.

Assignee: itchyonion

https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40133
Idea: Replace long polling for Proxy to Broker (2022-07-26, cheako)

See the title: The Proxy(s) already use WebSockets to communicate with the Server; are there reasons not to use WS for communicating with the Broker?