- 19 Jan, 2023 1 commit
-
-
David Fifield authored
https://gitlab.torproject.org/tpo/applications/tor-browser-build/-/commits/fc89e8b10c3ff30db2079b2fb327d05b2b5f3c80/projects/common/bridges_list.snowflake.txt
* Use port 80 in placeholder IP addresses tpo/applications/tor-browser-build!516
* Enable uTLS tpo/applications/tor-browser-build!540
* Shorten bridge line (remove stun.voip.blackberry.com) tpo/applications/tor-browser-build!558
* Add snowflake-02 bridge tpo/applications/tor-browser-build!571
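For orientation, an entry in bridges_list.snowflake.txt is a single bridge line of roughly the following shape. Every concrete value below (address, fingerprint, broker URL, front domain, STUN server) is a placeholder for illustration, not the real configuration:

    snowflake 192.0.2.3:80 0123456789ABCDEF0123456789ABCDEF01234567 fingerprint=0123456789ABCDEF0123456789ABCDEF01234567 url=https://broker.example/ front=front.example ice=stun:stun.example.com:3478 utls-imitate=hellorandomizedalpn

The IP:port at the front of the line (here 192.0.2.3:80) is only a placeholder and is never dialed directly; the uTLS change corresponds to the utls-imitate parameter.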
-
- 18 Jan, 2023 3 commits
- 17 Jan, 2023 1 commit
-
-
shelikhoo authored
Backported from https://gitlab.torproject.org/shelikhoo/snowflake/-/tree/dev-skiphelloverify-backup
-
- 16 Jan, 2023 1 commit
-
-
Cecylia Bocovich authored
-
- 13 Jan, 2023 1 commit
-
-
Cecylia Bocovich authored
-
- 03 Jan, 2023 1 commit
-
-
Cecylia Bocovich authored
-
- 31 Dec, 2022 2 commits
-
-
Cecylia Bocovich authored
Removed stun.stunprotocol.org after a discussion with the operator, and stun.altar.com.pl after noticing it has gone offline.
https://lists.torproject.org/pipermail/anti-censorship-team/2022-December/000272.html
https://lists.torproject.org/pipermail/anti-censorship-team/2022-December/000276.html
-
Cecylia Bocovich authored
This is the same default that the web-based proxies use. Proxies do not need RFC 5780 compatible STUN servers.
-
- 15 Dec, 2022 2 commits
-
-
David Fifield authored
Replaces the hardcoded numKCPInstances.
-
David Fifield authored
To distribute CPU load. #40200
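These two commits concern running several parallel KCP/smux engines instead of a single hard-coded one. As a rough, purely illustrative sketch (none of these names come from the Snowflake code), the instance count can be derived from the CPU count and sessions spread across instances by hashing:

    // Illustrative sketch only, not the Snowflake implementation.
    package sketch

    import "runtime"

    // numInstances replaces a hard-coded instance count with one derived
    // from the number of available CPUs, so the KCP/smux work can use
    // every core.
    func numInstances() int {
        n := runtime.NumCPU()
        if n < 1 {
            n = 1
        }
        return n
    }

    // instanceFor spreads sessions across instances by hashing a session key.
    func instanceFor(key uint64, n int) int {
        return int(key % uint64(n))
    }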
-
- 13 Dec, 2022 1 commit
-
-
itchyonion authored
-
- 12 Dec, 2022 1 commit
-
-
Flo418 authored
-
- 08 Dec, 2022 4 commits
-
-
David Fifield authored
This design is easier to misuse, because it allows the caller to modify the contents of the slice after queueing it, but it avoids an extra allocation + memmove per incoming packet.

Before:
$ go test -bench='Benchmark(QueueIncoming|WriteTo)' -benchtime=2s -benchmem
BenchmarkQueueIncoming-4    7001494    342.4 ns/op   1024 B/op   2 allocs/op
BenchmarkWriteTo-4          3777459    627 ns/op     1024 B/op   2 allocs/op

After:
$ go test -bench=BenchmarkWriteTo -benchtime 2s -benchmem
BenchmarkQueueIncoming-4   13361600    170.1 ns/op    512 B/op   1 allocs/op
BenchmarkWriteTo-4          6702324    373 ns/op      512 B/op   1 allocs/op

Despite the benchmark results, the change in QueueIncoming turns out not to have an effect in practice. It appears that the compiler had already been optimizing out the allocation and copy in QueueIncoming. #40187

The WriteTo change, on the other hand, in practice reduces the frequency of garbage collection. #40199
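The trade-off reads roughly like the following sketch (the type and method names are invented, not the actual turbotunnel code): queueing the caller's slice directly saves an allocation and a copy per packet, at the cost of requiring that the caller not touch the slice afterwards.

    // Sketch with invented names; not the actual QueueIncoming code.
    package sketch

    type packetQueue struct {
        ch chan []byte
    }

    // queueCopy is the safer variant: it copies the packet before queueing,
    // costing an extra allocation and memmove per incoming packet.
    func (q *packetQueue) queueCopy(p []byte) {
        buf := make([]byte, len(p))
        copy(buf, p)
        q.ch <- buf
    }

    // queueNoCopy queues the caller's slice directly. It is easier to
    // misuse (the caller must not modify p after queueing it), but avoids
    // the allocation and copy.
    func (q *packetQueue) queueNoCopy(p []byte) {
        q.ch <- p
    }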
-
David Fifield authored
This is to reduce heap usage. #40179

Past discussion of queueSize:
https://lists.torproject.org/pipermail/anti-censorship-team/2021-July/000188.html
!48 (comment 2744619)
-
David Fifield authored
By forwarding the method to the inner smux.Stream. This is to prevent io.Copy in the top-level proxy function from allocating a buffer per client. The smux.Stream WriteTo method returns io.EOF on success, contrary to the contract of io.Copy that says it should return nil. Ignore io.EOF in the proxy loop to avoid a log message.
/anti-censorship/pluggable-transports/snowflake/-/issues/40177
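A hedged sketch of the two pieces described above, with invented wrapper and function names: forwarding WriteTo to the wrapped smux.Stream lets io.Copy use the stream's own WriteTo instead of allocating a copy buffer per client, and the proxy loop treats io.EOF from io.Copy as a clean shutdown.

    // Sketch only; clientConn and proxy are illustrative names.
    package sketch

    import (
        "io"
        "log"

        "github.com/xtaci/smux"
    )

    type clientConn struct {
        stream *smux.Stream
        // other per-client bookkeeping omitted
    }

    func (c *clientConn) Read(p []byte) (int, error)  { return c.stream.Read(p) }
    func (c *clientConn) Write(p []byte) (int, error) { return c.stream.Write(p) }

    // WriteTo forwards to the inner smux.Stream. Because clientConn now
    // implements io.WriterTo, io.Copy calls it directly and does not
    // allocate its own buffer for this client.
    func (c *clientConn) WriteTo(w io.Writer) (int64, error) {
        return c.stream.WriteTo(w)
    }

    // proxy copies client data to dst. smux.Stream's WriteTo returns io.EOF
    // on a clean end of stream, so io.EOF is not logged as an error.
    func proxy(dst io.Writer, src *clientConn) {
        if _, err := io.Copy(dst, src); err != nil && err != io.EOF {
            log.Printf("copy error: %v", err)
        }
    }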
-
David Fifield authored
Rather than use defer. It is only a tiny amount faster, but this function is frequently called.

Before:
$ go test -bench=BenchmarkSendQueue -benchtime=2s
BenchmarkSendQueue-4   15901834   151 ns/op

After:
$ go test -bench=BenchmarkSendQueue -benchtime=2s
BenchmarkSendQueue-4   15859948   147 ns/op

#40177
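The pattern, as a small sketch with illustrative names (not the actual send-queue code):

    // Sketch: explicit Unlock versus defer in a hot, short method.
    package sketch

    import "sync"

    type sendQueue struct {
        mu    sync.Mutex
        items [][]byte
    }

    // pushDefer uses defer, which is robust against early returns and
    // panics, but adds a small fixed cost on every call.
    func (q *sendQueue) pushDefer(p []byte) {
        q.mu.Lock()
        defer q.mu.Unlock()
        q.items = append(q.items, p)
    }

    // push unlocks explicitly. For a tiny method that is called very
    // frequently, this shaves a few nanoseconds per call (151 ns/op to
    // 147 ns/op in the benchmark above).
    func (q *sendQueue) push(p []byte) {
        q.mu.Lock()
        q.items = append(q.items, p)
        q.mu.Unlock()
    }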
-
- 03 Dec, 2022 1 commit
-
-
David Fifield authored
Recent increases in usage have exhausted the capacity of the map. #40173
-
- 02 Dec, 2022 2 commits
-
-
- 01 Dec, 2022 1 commit
-
-
Cecylia Bocovich authored
-
- 29 Nov, 2022 3 commits
-
-
shelikhoo authored
-
Cecylia Bocovich authored
-
- 28 Nov, 2022 5 commits
-
-
Cecylia Bocovich authored
-
Cecylia Bocovich authored
-
Cecylia Bocovich authored
-
Cecylia Bocovich authored
-
-
- 23 Nov, 2022 2 commits
- 21 Nov, 2022 1 commit
-
-
- 17 Nov, 2022 1 commit
-
-
Cecylia Bocovich authored
We wrap go fmt in a call to test -z because go fmt does not exit with a non-zero status when files need reformatting; testing that its output is empty is what triggers the CI test failure. However, we lose useful debugging output from the go fmt call, because test -z swallows it. This adds very verbose formatting output to the CI test.
-
- 16 Nov, 2022 6 commits
-
-
David Fifield authored
-
David Fifield authored
-
David Fifield authored
I had thought to set a buffer size of 2048, half the websocket package default of 4096. But it turns out when you don't set a buffer size, the websocket package reuses the HTTP server's read/write buffers, which empirically already have a size of 2048.

$ go test -bench=BenchmarkUpgradeBufferSize -benchmem -benchtime=5s
BenchmarkUpgradeBufferSize/0-4       25669   234566 ns/op   32604 B/op   113 allocs/op
BenchmarkUpgradeBufferSize/128-4     24739   238283 ns/op   24325 B/op   117 allocs/op
BenchmarkUpgradeBufferSize/1024-4    25352   238885 ns/op   28087 B/op   116 allocs/op
BenchmarkUpgradeBufferSize/2048-4    22660   234890 ns/op   32444 B/op   116 allocs/op
BenchmarkUpgradeBufferSize/4096-4    25668   232591 ns/op   41672 B/op   116 allocs/op
BenchmarkUpgradeBufferSize/8192-4    24908   240755 ns/op   59103 B/op   116 allocs/op
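In gorilla/websocket terms, this amounts to leaving ReadBufferSize and WriteBufferSize at their zero values in the Upgrader, so the buffers already allocated by the HTTP server are reused. The handler wiring below is an illustrative sketch, not the server's actual handler:

    // Sketch: zero buffer sizes make the Upgrader reuse the HTTP server's
    // read/write buffers instead of allocating new ones.
    package sketch

    import (
        "net/http"

        "github.com/gorilla/websocket"
    )

    // ReadBufferSize and WriteBufferSize are deliberately left at 0.
    var upgrader = websocket.Upgrader{}

    func handler(w http.ResponseWriter, r *http.Request) {
        conn, err := upgrader.Upgrade(w, r, nil)
        if err != nil {
            return
        }
        defer conn.Close()
        // ... handle the WebSocket connection
    }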
-
David Fifield authored
Otherwise the buffers are re-allocated on every iteration, which is a surprise to me. I thought the compiler would do this transformation itself. Now there is just one allocation per client←server read (one messageReader) and two allocations per server←client read (one messageReader and one messageWriter).

$ go test -bench=BenchmarkReadWrite -benchmem -benchtime=5s
BenchmarkReadWrite/c←s_150-4     481054   12849 ns/op    11.67 MB/s     8 B/op   1 allocs/op
BenchmarkReadWrite/s←c_150-4     421809   14095 ns/op    10.64 MB/s    56 B/op   2 allocs/op
BenchmarkReadWrite/c←s_3000-4    208564   28003 ns/op   107.13 MB/s    16 B/op   2 allocs/op
BenchmarkReadWrite/s←c_3000-4    186320   30576 ns/op    98.12 MB/s   112 B/op   4 allocs/op
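The general pattern is simply to hoist the buffer allocation out of the copy loop; a generic sketch (not the actual copy loop):

    // Sketch: allocate the read buffer once, outside the loop, so it is not
    // re-allocated on every iteration.
    package sketch

    import "io"

    func copyLoop(dst io.Writer, src io.Reader) error {
        buf := make([]byte, 2048) // hoisted: one allocation for the whole loop
        for {
            n, err := src.Read(buf)
            if n > 0 {
                if _, werr := dst.Write(buf[:n]); werr != nil {
                    return werr
                }
            }
            if err == io.EOF {
                return nil
            }
            if err != nil {
                return err
            }
        }
    }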
-
David Fifield authored
This avoids io.Copy allocating a 32 KB buffer on every call.
https://cs.opensource.google/go/go/+/refs/tags/go1.19.1:src/io/io.go;l=416

$ go test -bench=BenchmarkReadWrite -benchmem -benchtime=5s
BenchmarkReadWrite/c←s_150-4     385740   15114 ns/op    9.92 MB/s   4104 B/op   3 allocs/op
BenchmarkReadWrite/s←c_150-4     347070   16824 ns/op    8.92 MB/s   4152 B/op   4 allocs/op
BenchmarkReadWrite/c←s_3000-4    190257   31581 ns/op   94.99 MB/s   8208 B/op   6 allocs/op
BenchmarkReadWrite/s←c_3000-4    163233   34821 ns/op   86.16 MB/s   8304 B/op   8 allocs/op
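One standard way to get this effect, shown as a hedged sketch rather than the commit's actual code, is io.CopyBuffer with a buffer the caller allocates once and reuses:

    // Sketch: supply the copy buffer yourself rather than letting io.Copy
    // allocate a fresh 32 KB temporary on every call.
    package sketch

    import "io"

    func relay(dst io.Writer, src io.Reader) (int64, error) {
        buf := make([]byte, 2048) // allocated once per connection, reused by CopyBuffer
        return io.CopyBuffer(dst, src, buf)
    }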
-
David Fifield authored
In the client←server direction, this hits a fast path that avoids allocating a messageWriter.
https://github.com/gorilla/websocket/blob/v1.5.0/conn.go#L760

Cuts the number of allocations in half in the client←server direction:

$ go test -bench=BenchmarkReadWrite -benchmem -benchtime=5s
BenchmarkReadWrite/c←s_150-4     597511   13358 ns/op    11.23 MB/s   33709 B/op   2 allocs/op
BenchmarkReadWrite/s←c_150-4     474176   13756 ns/op    10.90 MB/s   34968 B/op   4 allocs/op
BenchmarkReadWrite/c←s_3000-4    156488   36290 ns/op    82.67 MB/s   68673 B/op   5 allocs/op
BenchmarkReadWrite/s←c_3000-4    190897   34719 ns/op    86.41 MB/s   69730 B/op   8 allocs/op
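The difference between the two gorilla/websocket write styles, as a sketch with illustrative function names:

    // Sketch contrasting NextWriter with Conn.WriteMessage.
    package sketch

    import "github.com/gorilla/websocket"

    // writeViaNextWriter allocates a messageWriter for every message.
    func writeViaNextWriter(conn *websocket.Conn, p []byte) error {
        w, err := conn.NextWriter(websocket.BinaryMessage)
        if err != nil {
            return err
        }
        if _, err := w.Write(p); err != nil {
            w.Close()
            return err
        }
        return w.Close()
    }

    // writeViaWriteMessage uses Conn.WriteMessage, whose fast path writes
    // the frame directly without allocating a messageWriter.
    func writeViaWriteMessage(conn *websocket.Conn, p []byte) error {
        return conn.WriteMessage(websocket.BinaryMessage, p)
    }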
-