Support multiple simultaneous SOCKS connections
The Snowflake client accepts multiple simultaneous SOCKS connections from tor, but it only tries to collect one proxy at a time, and each proxy can service only one SOCKS connection (this is true in the turbotunnel branch as well). One of the SOCKS connections gets the only available proxy, while the others starve.
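To make the starvation concrete, here is a toy Go program (not Snowflake code): a single collector hands out one proxy and never fetches another while that one is in use, so a second SOCKS handler waits forever.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	proxies := make(chan string) // the collector hands over one proxy at a time

	// Global proxy-collecting thread with an effective max of 1.
	go func() {
		proxies <- "proxy-1" // goes to whichever SOCKS handler asks first
		// The collector does not fetch another proxy until this one is
		// returned, which never happens while the first SOCKS connection
		// stays open.
	}()

	for _, socks := range []string{"socks-A", "socks-B"} {
		go func(name string) {
			p := <-proxies // the second handler blocks here forever
			fmt.Println(name, "got", p)
		}(socks)
	}
	time.Sleep(time.Second)
}
```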
I can think of a few ways to approach this.
1. Dynamically adjust the `max` parameter according to how many SOCKS connections there currently are. If there's one SOCKS connection, we need only one proxy. If a second SOCKS connection arrives, raise the limit so the proxy-collecting thread can pick up another proxy, and lower the limit again when the number of SOCKS connections drops back down. (A sketch of this appears after the list.)
2. Start up a separate proxy-collecting thread for each SOCKS connection, as suggested at comment:12:ticket:21314. Each SOCKS connection makes its own broker requests and collects its own proxies, not interacting with those of any other SOCKS connection. A downside is that the number of Snowflake proxies you are contacting leaks the number of SOCKS connections you have ongoing. (Which can also be seen as a benefit: if there are zero SOCKS connections, you don't even bother to contact the broker.)
3. Make it possible for multiple SOCKS connections to share the same proxy. Continue using a global proxy-collecting thread, and use a single shared `RedialPacketConn` instead of a separate one for each SOCKS connection. As things work now, this would require tagging every packet with the ClientID, instead of sending the ClientID once and letting it apply implicitly to all packets that follow.
4. Make it possible for multiple SOCKS connections to share the same proxy, and use a single KCP/QUIC connection for all SOCKS connections. Separate SOCKS connections go into separate streams within the KCP/QUIC connection. In other words, rather than doing both `sess = kcp.NewConn2`/`quic.Dial` and `sess.OpenStream` in the SOCKS handler, we do `sess = kcp.NewConn2`/`quic.Dial` in `main` and then `sess.OpenStream` in the SOCKS handler (see the sketch after this list). This way we could continue tagging the ClientID just once, because the program would only ever work with one ClientID at a time. However, this would make it harder to do the "stop using the network when not being used" of legacy/trac#21314 (moved), because that single KCP/QUIC connection would try to keep itself alive all the time and would contact the broker every time it needed a new proxy. Perhaps we could make it so that when there are zero streams, we close the KCP/QUIC connection and lazily create a new one if and when we get another SOCKS connection.
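Here is a minimal sketch of option 1, assuming a hypothetical `proxyPool` type; the names and structure are illustrative, not the actual Snowflake client API. The collector's limit follows the count of active SOCKS connections instead of being a fixed `max` value.

```go
package main

import (
	"log"
	"sync"
	"time"
)

// proxyPool tracks how many SOCKS connections are active and uses that count
// as the target number of proxies for the single global collector.
type proxyPool struct {
	mu       sync.Mutex
	cond     *sync.Cond
	numSOCKS int // active SOCKS connections == target number of proxies
	numProxy int // proxies collected so far
}

func newProxyPool() *proxyPool {
	p := &proxyPool{}
	p.cond = sync.NewCond(&p.mu)
	return p
}

// socksOpened raises the limit; socksClosed lowers it again.
func (p *proxyPool) socksOpened() { p.adjust(+1) }
func (p *proxyPool) socksClosed() { p.adjust(-1) }

func (p *proxyPool) adjust(delta int) {
	p.mu.Lock()
	p.numSOCKS += delta
	p.mu.Unlock()
	p.cond.Broadcast() // wake the collector so it re-checks the limit
}

// collectLoop is the single global proxy-collecting thread. It sleeps while
// we already hold as many proxies as there are SOCKS connections, and would
// contact the broker (omitted here) each time the limit rises above that.
func (p *proxyPool) collectLoop() {
	for {
		p.mu.Lock()
		for p.numProxy >= p.numSOCKS {
			p.cond.Wait()
		}
		p.numProxy++
		p.mu.Unlock()
		log.Println("collecting another proxy from the broker")
		// ... broker offer/answer exchange and WebRTC setup would go here ...
	}
}

func main() {
	p := newProxyPool()
	go p.collectLoop()
	p.socksOpened() // first SOCKS connection: collect one proxy
	p.socksOpened() // second SOCKS connection: collect another
	time.Sleep(100 * time.Millisecond)
}
```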
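And a sketch of option 4's restructuring, using kcp-go with smux for streams (the QUIC case would be analogous). Here `pconn` stands in for the shared proxy-backed `PacketConn` and the SOCKS listener is elided; this is illustrative, not the actual client code.

```go
package sketch

import (
	"io"
	"log"
	"net"

	"github.com/xtaci/kcp-go/v5"
	"github.com/xtaci/smux"
)

// newSession dials the KCP connection and the stream multiplexer once, over
// a single shared PacketConn. pconn and addr stand in for the client's
// shared packet connection and its synthetic remote address.
func newSession(pconn net.PacketConn, addr net.Addr) (*smux.Session, error) {
	conn, err := kcp.NewConn2(addr, nil, 0, 0, pconn)
	if err != nil {
		return nil, err
	}
	return smux.Client(conn, smux.DefaultConfig())
}

// handleSOCKS runs once per SOCKS connection: it opens its own stream inside
// the shared session and copies data in both directions.
func handleSOCKS(sess *smux.Session, socks net.Conn) {
	defer socks.Close()
	stream, err := sess.OpenStream()
	if err != nil {
		log.Print(err)
		return
	}
	defer stream.Close()
	go io.Copy(stream, socks)
	io.Copy(socks, stream)
}
```

In `main` we would call `newSession` once, then run `go handleSOCKS(sess, socksConn)` for each accepted SOCKS connection, so only the stream, not the session, is per-SOCKS.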
|   | status quo | 1 | 2 | 3 | 4 |
|---|---|---|---|---|---|
| proxy-collecting threads | one global | one global | one per SOCKS | one global | one global |
| proxy limit per thread | 1 | # of SOCKS | 1 | 1 | 1 |
| proxies shared between SOCKSes? | dedicated | dedicated | dedicated | shared | shared |
| `PacketConn`s | one per SOCKS | one per SOCKS | one per SOCKS | one global | one global |
| KCP/QUIC connections | one per SOCKS | one per SOCKS | one per SOCKS | one per SOCKS | one global |
| KCP/QUIC streams | one per SOCKS | one per SOCKS | one per SOCKS | one per SOCKS | one per SOCKS |
| ClientID on every packet? | no | no | no | yes | no |