# Tor issues

https://gitlab.torproject.org/tpo/core/tor/-/issues

## consider reducing artifacts disk space usage

https://gitlab.torproject.org/tpo/core/tor/-/issues/40562 · anarcat · 2022-03-18

### Summary
We're having issues with disk space usage on the GitLab server, and artifacts are currently the main offender. This project, and its (mostly) forks take up a significant chunk of the space on the server.
For example, here are the top five core/tor forks and their disk usage:
* dgoulet/tor: 9.3GB
* core/tor: 5.9GB
* mikeperry/tor: 4.6GB
* nickm/tor: 4GB
* ahf/tor: 3.3GB
This adds up to about 27GB if I count this right, which is about 10% of the disk capacity before we raised it in an emergency last weekend.
(Note: it's actually fairly difficult to come up with good numbers for this. The GitLab admin interface doesn't allow us to filter by fork, and the name-based search is limited. There are possibly a few more forks that take up space, sometimes a few gigabytes, so that total is a conservative estimate.)
### Relevant logs and/or screenshots
![first page of projects named "tor" sorted by disk usage](/uploads/e7c6c49152283237d56c8b13f1e0a88a/image.png)
### Possible fixes
I am not certain. Other projects we have talked with have lowered their retention policy to one week (jnewsome/sponsor-61-sims#13) or one hour (tpo/tpa/team#40616); the latter helped tremendously. But you already have a lower retention period (one week), so I'm not sure how to fix this.
I'm opening this issue merely so that the team is aware of the problem, and because we're hoping you have ideas on how to fix it.
I wonder, for example, if forks need those artifacts at all... I am not sure *if* we could disable artifact retention on forks, but if that's an option, we could figure out a way to purge those. Otherwise, maybe we could consider a one-day retention period? Latest artifacts are kept [for the most recent successful job](https://gitlab.torproject.org/help/ci/pipelines/job_artifacts#keep-artifacts-from-most-recent-successful-jobs) for each ref, so that should already cover quite a bit.
I also noticed that you have a lot of jobs running, possibly in parallel. Would there be a way to reuse artifacts across those jobs to reduce disk usage? For example, [in this pipeline](https://gitlab.torproject.org/dgoulet/tor/-/pipelines/26631), all jobs but debian-distcheck, debian-docs and debian-tracing generate a 10-15MB binary for every push. Those seem small at first, but they add up quickly. Granted, those binaries are all different, so we could instead take the example of debian-distcheck and debian-tracing, which both seem to generate an identical .tar.gz artifact. It may seem like I'm grasping at straws here, and maybe I am... but I'm surprised that source code and simple binary artifacts add up so quickly.
I guess the TL;DR question here is: how long do you really need artifacts for? Could we reduce retention to one day, knowing that the latest artifacts are kept regardless? And is there some duplication we could reduce?
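If a shorter expiry turns out to be acceptable, the per-job `artifacts:expire_in` key in `.gitlab-ci.yml` is the standard GitLab knob for this. A minimal sketch; the job name, script, and artifact path below are made up, not this repository's actual CI config:

```yaml
debian-build:                 # hypothetical job name
  script:
    - ./autogen.sh && ./configure && make
  artifacts:
    paths:
      - tor-build.tar.gz      # hypothetical artifact path
    expire_in: 1 day          # overrides the project-wide retention for this job
```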
Thanks!

Assignee: Alexander Færøy

## Improve overload-general wrt ntor onionskin drops

https://gitlab.torproject.org/tpo/core/tor/-/issues/40560 · Mike Perry · 2023-02-07

We need to do a couple of small tweaks to how overload-general reacts to dropped ntor handshakes. Basically:
* overload-general should not be listed unless X% of ntors drop over Y seconds (X is a consensus param, as a fraction of 100; Y is also a consensus param)
* Add checks to `mark_my_descriptor_dirty_if_too_old()` to make us republish our descriptor if `overload-general` disappears, appears, or changes timestamp.
For the first point, the question of X% over how long is relevant, but this matters less if we update our descriptor immediately whenever the overload state or timestamp changes.
Remember that as soon as a relay is so overloaded that it is dropping ntors, traffic is already being biased away from that relay, because those circuits fail. So a tolerance percentage, which can be related to the percentage of backoff applied by sbws, seems to make the most sense, but I'd be open to other ideas. Favoring implementation simplicity seems important here, too.
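To make the first bullet concrete, here is a minimal sketch of what an `X% over Y seconds` check could look like; the parameter names, counters, and helper are all hypothetical, not tor's actual implementation:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical consensus parameters: X as a fraction of 100, Y in seconds. */
static unsigned overload_onionskin_ntor_pct = 5;   /* X */
static unsigned overload_onionskin_period = 600;   /* Y */

/* Counters accumulated over the current Y-second period. */
static uint64_t ntor_requested, ntor_dropped;

/* Called once at the end of each Y-second period. */
static bool
ntor_drop_overload_detected(void)
{
  bool overloaded = ntor_requested > 0 &&
    ntor_dropped * 100 >= (uint64_t)overload_onionskin_ntor_pct * ntor_requested;
  ntor_requested = ntor_dropped = 0;  /* fresh counters for the next period */
  return overloaded;
}
```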
This is not super urgent, since we are not reacting to overload-general in terms of relay weights, and won't be until after https://gitlab.torproject.org/tpo/network-health/sbws/-/issues/40125, but we should fix this before we hit 0.4.7.x-stable.
Related references:
* https://lists.torproject.org/pipermail/tor-relays/2022-January/020184.html
* https://lists.torproject.org/pipermail/tor-relays/2022-January/020224.html
* https://lists.torproject.org/pipermail/tor-relays/2022-February/020314.html
* https://gitlab.torproject.org/tpo/network-health/analysis/-/issues/27
* https://gitlab.torproject.org/tpo/network-health/team/-/issues/66#note_2773945
Cc: @gk, @dgoulet

Milestone: Tor: 0.4.7.x-stable · Assignee: David Goulet

## Reject relays running Tor 0.3.5.x

https://gitlab.torproject.org/tpo/core/tor/-/issues/40559 · Georg Koppen · 2023-12-03

Similar to #31549, #32672, #34357, and #40480, we should reject EOL versions at the directory authority level. This time version 0.3.5 is concerned.
We started the whole process this week and are about to contact all relay operators with valid contact info. Additionally, we sent an announcement to [tor-relays@](https://lists.torproject.org/pipermail/tor-relays/2022-February/020289.html).

Milestone: Tor: 0.4.7.x-stable · Assignee: David Goulet

## Fix memory leak from Coverity #1495786

https://gitlab.torproject.org/tpo/core/tor/-/issues/40532 · Alexander Færøy · 2022-01-25

Minor potential memory leak in Coverity #1495786, coming from bf10206e9e23ac0ded2cc9727666696ea25d5636, where the `ei` variable may leak when returning in the new `if (BUG(...))` block.

Milestone: Tor: 0.4.7.x-stable · Assignee: David Goulet

## Remove DNS timeout from overload general

https://gitlab.torproject.org/tpo/core/tor/-/issues/40527 · David Goulet · 2023-02-07

Related to our investigation in:
https://gitlab.torproject.org/tpo/network-health/team/-/issues/139
This is also tied to https://gitlab.torproject.org/tpo/core/tor/-/issues/40312, which will lower the Tor DNS timeout from 5 seconds to 1 second.
This needs to be backported to 0.4.6 because it is creating a lot of confusion for relay operators and inaccurate warnings.

Milestone: Tor: 0.4.7.x-stable · Assignee: David Goulet

## doc: Update the ReleasingTor.md file with new CI pipeline procedure

https://gitlab.torproject.org/tpo/core/tor/-/issues/40508 · David Goulet · 2021-11-08

The procedure has changed quite a bit with the CI release pipeline, so I would like to update the file; there are other details that changed as well.

Assignee: David Goulet

## prop324: Implement section 9 for onion services

https://gitlab.torproject.org/tpo/core/tor/-/issues/40506 · David Goulet · 2022-02-26

That section is about adding the `flow-control` line to the onion service descriptor.

Milestone: Tor: 0.4.7.x-stable · Assignee: David Goulet

## OverloadStatistics option is missing in tor's man page

https://gitlab.torproject.org/tpo/core/tor/-/issues/40504 · nusenu · 2021-11-08

The torrc option `OverloadStatistics` (!334) is missing in tor's man page.
Maybe this change is also relevant to the semantics of the option
https://gitlab.torproject.org/tpo/core/tor/-/issues/40364
because the current description in the code comment still mentions extra-info documents.
https://gitlab.torproject.org/tpo/core/tor/-/merge_requests/334/diffs#8a1c19844ee4b3dfe20b1d743783349dc267f35e_677_677

Milestone: Tor: 0.4.7.x-stable · Assignee: David Goulet

## tor_assertion_failed_(): Bug: ../src/feature/dirclient/dirclient.c:1257: directory_initiate_request: Assertion or_addr_port->port || dir_addr_port->port failed

https://gitlab.torproject.org/tpo/core/tor/-/issues/40494 · samip537 · 2021-11-03

### Summary
Tor crashes with an IPv6-only DirPort.
### Steps to reproduce:
1. Deploy an LXC Debian 11 container
2. Install Tor on it
3. Comment out all directory-hardening directives in the systemd unit files related to Tor.
4. Set torrc with the following contents:
```
Log notice syslog
RunAsDaemon 1
DataDirectory /var/lib/tor
ORPort 443
ORPort [2001:DB8::1]:443
RelayBandwidthRate 5 MB
RelayBandwidthBurst 10 MB
AccountingMax 5 TB
AccountingStart month 3 15:00
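# Annotation (not in the reporter's original torrc): the IPv6-only
# DirPort on the next line is what triggers the assertion failure.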
DirPort [2001:DB8::1]:80
ExitPolicy reject *:*
```
5. Watch it crash 3-10 minutes after the bandwidth self-test.
### What is the current bug behavior?
The whole Tor client crashes when DirPort is set to an IPv6 address in brackets followed by a port.
### What is the expected behavior?
Tor should not crash.
### Environment
- Which version of Tor are you using? 0.4.5.10
- Which operating system are you using? Debian 11, in an LXC container
- Which installation method did you use? Distribution package
### Relevant logs and/or screenshots
```
Oct 22 06:42:35 torrelay systemd[1]: Started Anonymizing overlay network for TCP.
Oct 22 06:42:35 torrelay Tor[6069]: Signaled readiness to systemd
Oct 22 06:42:36 torrelay Tor[6069]: Bootstrapped 5% (conn): Connecting to a relay
Oct 22 06:42:36 torrelay Tor[6069]: Opening Socks listener on /run/tor/socks
Oct 22 06:42:36 torrelay Tor[6069]: Opened Socks listener connection (ready) on /run/tor/socks
Oct 22 06:42:36 torrelay Tor[6069]: Opening Control listener on /run/tor/control
Oct 22 06:42:36 torrelay Tor[6069]: Opened Control listener connection (ready) on /run/tor/control
Oct 22 06:42:36 torrelay Tor[6069]: Unable to find IPv4 address for ORPort 443. You might want to specify IPv6Only to it or set an explicit address or set Address.
Oct 22 06:42:36 torrelay Tor[6069]: Bootstrapped 10% (conn_done): Connected to a relay
Oct 22 06:42:36 torrelay Tor[6069]: Bootstrapped 14% (handshake): Handshaking with a relay
Oct 22 06:42:36 torrelay Tor[6069]: External address seen and suggested by a directory authority: <snip>
Oct 22 06:42:36 torrelay Tor[6069]: Bootstrapped 15% (handshake_done): Handshake with a relay done
Oct 22 06:42:36 torrelay Tor[6069]: Bootstrapped 75% (enough_dirinfo): Loaded enough directory info to build circuits
Oct 22 06:42:36 torrelay Tor[6069]: Bootstrapped 90% (ap_handshake_done): Handshake finished with a relay to build circuits
Oct 22 06:42:36 torrelay Tor[6069]: Bootstrapped 95% (circuit_create): Establishing a Tor circuit
Oct 22 06:42:37 torrelay Tor[6069]: Bootstrapped 100% (done): Done
Oct 22 06:43:36 torrelay Tor[6069]: Not advertising Directory Service support (Reason: AccountingMax enabled)
Oct 22 06:44:36 torrelay Tor[6069]: Now checking whether IPv4 ORPort <snip>:443 is reachable... (this may take up to 20 minutes -- look for log messages indicating success)
Oct 22 06:44:36 torrelay Tor[6069]: Now checking whether IPv6 ORPort [2001:DB8::1]:443 is reachable... (this may take up to 20 minutes -- look for log messages indicating success)
Oct 22 06:44:36 torrelay Tor[6069]: Self-testing indicates your ORPort [2001:DB8::1]:443 is reachable from the outside. Excellent.
Oct 22 06:45:37 torrelay Tor[6069]: Self-testing indicates your ORPort <snip>:443 is reachable from the outside. Excellent.
Oct 22 06:45:39 torrelay Tor[6069]: Performing bandwidth self-test...done.
Oct 22 06:48:37 torrelay Tor[6069]: tor_assertion_failed_(): Bug: ../src/feature/dirclient/dirclient.c:1257: directory_initiate_request: Assertion or_addr_port->port || dir_addr_port->port failed; aborting. (on Tor 0.4.5.10 )
Oct 22 06:48:37 torrelay Tor[6069]: Bug: Tor 0.4.5.10: Assertion or_addr_port->port || dir_addr_port->port failed in directory_initiate_request at ../src/feature/dirclient/dirclient.c:1257: . Stack trace: (on Tor 0.4.5.10 )
Oct 22 06:48:37 torrelay Tor[6069]: Bug: /usr/bin/tor(log_backtrace_impl+0x6c) [0x557a907880] (on Tor 0.4.5.10 )
Oct 22 06:48:37 torrelay Tor[6069]: Bug: /usr/bin/tor(tor_assertion_failed_+0x124) [0x557a914364] (on Tor 0.4.5.10 )
Oct 22 06:48:37 torrelay Tor[6069]: Bug: /usr/bin/tor(directory_initiate_request+0x714) [0x557a9d5424] (on Tor 0.4.5.10 )
Oct 22 06:48:37 torrelay Tor[6069]: Bug: /usr/bin/tor(router_do_reachability_checks+0x184) [0x557a8d27b8] (on Tor 0.4.5.10 )
Oct 22 06:48:37 torrelay Tor[6069]: Bug: /usr/bin/tor(connection_dir_reached_eof+0x13cc) [0x557a9d783c] (on Tor 0.4.5.10 )
Oct 22 06:48:37 torrelay Tor[6069]: Bug: /usr/bin/tor(+0x1993e4) [0x557a9a93e4] (on Tor 0.4.5.10 )
Oct 22 06:48:37 torrelay Tor[6069]: Bug: /usr/bin/tor(+0x68570) [0x557a878570] (on Tor 0.4.5.10 )
Oct 22 06:48:37 torrelay Tor[6069]: Bug: /lib/aarch64-linux-gnu/libevent-2.1.so.7(+0x234e4) [0x7f9e42e4e4] (on Tor 0.4.5.10 )
Oct 22 06:48:37 torrelay Tor[6069]: Bug: /lib/aarch64-linux-gnu/libevent-2.1.so.7(event_base_loop+0x50c) [0x7f9e42ef84] (on Tor 0.4.5.10 )
Oct 22 06:48:37 torrelay Tor[6069]: Bug: /usr/bin/tor(do_main_loop+0xec) [0x557a8799e0] (on Tor 0.4.5.10 )
Oct 22 06:48:37 torrelay Tor[6069]: Bug: /usr/bin/tor(tor_run_main+0x1c0) [0x557a8751b4] (on Tor 0.4.5.10 )
Oct 22 06:48:37 torrelay Tor[6069]: Bug: /usr/bin/tor(tor_main+0x54) [0x557a871734] (on Tor 0.4.5.10 )
Oct 22 06:48:37 torrelay Tor[6069]: Bug: /usr/bin/tor(main+0x20) [0x557a871220] (on Tor 0.4.5.10 )
Oct 22 06:48:37 torrelay Tor[6069]: Bug: /lib/aarch64-linux-gnu/libc.so.6(__libc_start_main+0xe8) [0x7f9dd5c218] (on Tor 0.4.5.10 )
Oct 22 06:48:37 torrelay Tor[6069]: Bug: /usr/bin/tor(+0x612a8) [0x557a8712a8] (on Tor 0.4.5.10 )
Oct 22 06:48:37 torrelay systemd[1]: tor@default.service: Main process exited, code=killed, status=6/ABRT
Oct 22 06:48:37 torrelay systemd[1]: tor@default.service: Failed with result 'signal'.
Oct 22 06:48:37 torrelay systemd[1]: tor@default.service: Consumed 12.377s CPU time.
Oct 22 06:48:37 torrelay systemd[1]: tor@default.service: Scheduled restart job, restart counter is at 5.
Oct 22 06:48:37 torrelay systemd[1]: Stopped Anonymizing overlay network for TCP.
Oct 22 06:48:37 torrelay systemd[1]: tor@default.service: Consumed 12.377s CPU time.
```
### Possible fixes
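No fix was proposed in the report. One possible direction, sketched under the assumption that the assertion in `directory_initiate_request()` is reachable whenever neither address pair carries an IPv4 port, would be to degrade gracefully instead of aborting; this is illustrative, not the actual patch:

```c
/* Illustrative sketch only: turn the hard assert in
 * directory_initiate_request() into a recoverable check, so an IPv6-only
 * DirPort cannot abort the whole process. BUG() is tor's nonfatal-assert
 * macro; the variable names come from the backtrace above. */
if (BUG(!or_addr_port->port && !dir_addr_port->port)) {
  /* No usable ORPort/DirPort for this request: drop the request
   * instead of calling tor_assert() and crashing the relay. */
  return;
}
```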
Milestone: Tor: 0.4.5.x-post-stable · Assignee: David Goulet

## relay: Overload state for DNS timeout error should be done after X% failure

https://gitlab.torproject.org/tpo/core/tor/-/issues/40491 · David Goulet · 2023-02-07

We currently get into an overload state if an Exit hits a DNS timeout error even _once_, which is far too sensitive. Let's get into an overload state only if at least 1% of all our DNS requests end up timing out.
To do this properly, we should accumulate DNS requests for some time (or some number of requests) before assessing whether we've reached the X% threshold.
After discussing with @mikeperry on IRC, we think consensus parameters controlling `X% over Y minutes` would be the way to do this. We can set the defaults to X=1 and Y=10, for 1% DNS errors over 10 minutes.
Every 10 minutes, the total number of DNS requests needs to be reset so previous periods don't affect subsequent ones.
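A minimal sketch of that `X% over Y minutes` accounting, with made-up names and the defaults above hard-coded (tor would read them from consensus parameters):

```c
#include <stdbool.h>
#include <stdint.h>
#include <time.h>

#define DNS_TIMEOUT_OVERLOAD_PCT 1       /* X: percent of timeouts tolerated */
#define DNS_TIMEOUT_PERIOD_SECS (10*60)  /* Y: 10-minute assessment window */

static uint64_t dns_requests, dns_timeouts;
static time_t dns_period_start;

/* Record one DNS request; returns true when the period that just ended
 * crossed the timeout threshold. */
static bool
note_dns_request(bool timed_out, time_t now)
{
  bool overloaded = false;
  if (now - dns_period_start >= DNS_TIMEOUT_PERIOD_SECS) {
    overloaded = dns_requests > 0 &&
      dns_timeouts * 100 >= DNS_TIMEOUT_OVERLOAD_PCT * dns_requests;
    /* Reset so the previous period can't affect the next one. */
    dns_requests = dns_timeouts = 0;
    dns_period_start = now;
  }
  dns_requests++;
  if (timed_out)
    dns_timeouts++;
  return overloaded;
}
```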
We need to backport this to 0.4.6, or else we'll have a big problem on the network where almost all Exits start reporting an overload state once they migrate to 0.4.6+.

Milestone: Tor: 0.4.7.x-stable · Assignee: David Goulet

## metrics: any DNS error fails to report correctly

https://gitlab.torproject.org/tpo/core/tor/-/issues/40490 · David Goulet · 2023-02-07
Thanks to https://lists.torproject.org/pipermail/tor-relays/2021-October/019917.html
I can confirm this report as well. Basically, `libevent` doesn't report the `type` (A, AAAA, PTR) when a DNS error occurs. In libevent, the function `reply_run_callback()` calls back like so:
```
cb->user_callback(cb->err, 0, 0, cb->ttl, NULL, user_pointer);
```
... where `type` is the second parameter, which is always 0. I have no idea why it is done this way, but that is what we need to work with.
This means that `rep_hist_note_dns_error(type, result)` always uses 0 as the type, and that type doesn't exist when `get_dns_stats_by_type(0)` is called, leading to DNS errors _never_ being recorded.
I think we should lobby libevent to fix this, because I really don't see why it doesn't report the request type. But we'll also have to fix our code to record all DNS errors without the type, so the reporting will be a blanket "DNS error" instead of being per-type on the metrics port.
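For reference, this is roughly what the workaround looks like from the callback side; a sketch using libevent's evdns callback signature, where `note_blanket_dns_error()` is a hypothetical stand-in for the untyped recording path:

```c
#include <event2/dns.h>

/* Hypothetical stand-in for recording an error without a request type. */
extern void note_blanket_dns_error(int result);

/* evdns callback: on errors, libevent passes type == 0, so per-type
 * bookkeeping (A/AAAA/PTR) is impossible here. */
static void
dns_result_cb(int result, char type, int count, int ttl,
              void *addresses, void *arg)
{
  (void)count; (void)ttl; (void)addresses; (void)arg;
  if (result != DNS_ERR_NONE) {
    /* Record a blanket "DNS error" instead of indexing by `type`. */
    note_blanket_dns_error(result);
    return;
  }
  /* ... handle the successful reply, where `type` is meaningful ... */
  (void)type;
}
```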
Fortunately, this only affects the reporting on the `MetricsPort` and not the overload state.

Milestone: Tor: 0.4.7.x-stable · Assignee: David Goulet

## Fuzzing support for inner components of v3 onion descriptors

https://gitlab.torproject.org/tpo/core/tor/-/issues/40488 · Nick Mathewson · 2021-10-16

We have some of these pending for #40392, but we never merged them.

Assignee: Nick Mathewson

## On Android, LOCALSTATEDIR should be set to static path

https://gitlab.torproject.org/tpo/core/tor/-/issues/40487 · eighthave · 2023-09-18

On Android, there is no such thing as installing into absolute system paths, so we have to hack around that assumption in `./configure`. That means that `LOCALSTATEDIR` ends up getting set to build paths, breaking reproducibility. Since having a compiled-in `LOCALSTATEDIR` is worthless on Android, it should not be used for Android builds. It could just be hard-coded to some debug path like `/data/local/tmp`.
This is similar to !460
@n8fr8 FYI

Assignee: Alexander Færøy

## hs: Memory leak in case of config failure

https://gitlab.torproject.org/tpo/core/tor/-/issues/40484 · David Goulet · 2021-10-14

Very minor thing, which is why I think it is OK not to backport.
If the `config_service()` function fails, we fail to free the previously partitioned config line.
I found this by removing v2 support from tor: attempting to load a version 2 service led to this problem. It can also be triggered if `HiddenServiceVersion` is anything other than 2 or 3 at the moment.
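A self-contained sketch of the bug pattern (illustrative types and names, not tor's actual `config_service()` code): the error path must free the lines that were split off for it:

```c
#include <stdlib.h>

/* Illustrative stand-in for tor's config_line_t list. */
typedef struct line_t { char *value; struct line_t *next; } line_t;

static void
free_lines(line_t *l)
{
  while (l) {
    line_t *next = l->next;
    free(l->value);
    free(l);
    l = next;
  }
}

static int
configure_service(line_t *service_lines, int version)
{
  if (version != 2 && version != 3) {
    /* The fix: without this free, rejecting the config leaks the
     * partitioned lines. */
    free_lines(service_lines);
    return -1;
  }
  /* ... apply the configuration ... */
  free_lines(service_lines);
  return 0;
}
```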
Milestone: Tor: 0.4.7.x-freeze · Assignee: David Goulet

## Coverity warning in XON/XOFF handling

https://gitlab.torproject.org/tpo/core/tor/-/issues/40478 · Nick Mathewson · 2021-10-05

```
397 /* Adjust the token bucket of this edge connection with the drain rate in
398 * the XON. Rate is in bytes from kilobit (kpbs). */
>>> CID 1492322: Integer handling issues (OVERFLOW_BEFORE_WIDEN)
>>> Potentially overflowing expression "xon_cell_get_kbps_ewma(xon) * 1000U" with type "unsigned int" (32 bits, unsigned) is evaluated using 32-bit arithmetic, and then used in a context that expects an expression of type "uint64_t" (64 bits, unsigned).
399 uint64_t rate = xon_cell_get_kbps_ewma(xon) * 1000;
400 if (rate == 0 || INT32_MAX < rate) {
401 /* No rate. */
402 rate = INT32_MAX;
403 }
404 token_bucket_rw_adjust(&conn->bucket, (uint32_t) rate, (uint32_t) rate);
```
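The usual fix for an OVERFLOW_BEFORE_WIDEN report is to widen the operand before the multiplication so the arithmetic happens in 64 bits; a sketch, with `kbps_ewma` standing in for `xon_cell_get_kbps_ewma(xon)`:

```c
#include <stdint.h>

/* kbps_ewma stands in for xon_cell_get_kbps_ewma(xon). */
static uint32_t
xon_drain_rate_bytes(uint32_t kbps_ewma)
{
  /* Cast first: the multiply is now done in 64-bit arithmetic. */
  uint64_t rate = (uint64_t)kbps_ewma * 1000;
  if (rate == 0 || rate > INT32_MAX) {
    /* No rate, or too large: clamp, mirroring the quoted code. */
    rate = INT32_MAX;
  }
  return (uint32_t)rate;
}
```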
cc @dgoulet @mikeperry

Milestone: Tor: 0.4.7.x-freeze · Assignee: David Goulet

## hs-v2: Remove support for HSv2 on stable versions

https://gitlab.torproject.org/tpo/core/tor/-/issues/40476 · David Goulet · 2021-10-19

Now that HSv2 has been removed from the code base in the 0.4.6.x series, we need to proceed to the next phase, which is to eliminate it from the network.
We'll do this by removing the entry points into the HSv2 code on the relay side:
- HSDir will stop accepting or serving v2 descriptors
- Introduction points will stop allowing introductions for v2.
- For Rendezvous points, we'll stop allowing it by refusing the TAP connection from the service side.
On the client side:
- Disallow v2 service creation and client connections.
With these guards, we should be good with the removal of v2 from the network. We need this patch in 035 and 045 (the last two stable releases we maintain with v2 support).

Milestone: Tor: 0.3.5.x-final · Assignee: David Goulet

## hs: Ratelimit the v2 log warning

https://gitlab.torproject.org/tpo/core/tor/-/issues/40474 · David Goulet · 2021-10-06

Essentially, this boils down to https://gitlab.torproject.org/tpo/core/tor/-/merge_requests/434
It should be enough to log this once and not for each SOCKS connection requesting a v2 address.
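A sketch of what the rate limiting could look like, assuming tor's existing `ratelim_t`/`log_fn_ratelim()` helpers; the interval and message text here are illustrative:

```c
/* Assumes tor's internal helpers (ratelim_t, RATELIM_INIT, log_fn_ratelim,
 * LOG_WARN, LD_APP); warn_v2_requested() is a hypothetical wrapper. */
static void
warn_v2_requested(void)
{
  static ratelim_t v2_warning_limit = RATELIM_INIT(3600); /* at most hourly */
  log_fn_ratelim(&v2_warning_limit, LOG_WARN, LD_APP,
                 "Connection to a v2 onion address requested, but v2 onion "
                 "services are no longer supported.");
}
```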
Milestone: Tor: 0.4.7.x-freeze · Assignee: David Goulet

## onionservice; HiddenServiceEnableIntroDoS* settings

https://gitlab.torproject.org/tpo/core/tor/-/issues/40456 · cypherpunks · 2021-10-14

This is a documentation problem.
1. I have read all docs and lists.tpo emails but still can't understand what "HiddenServiceEnableIntroDoS" actually does for service operators.
2. The default you're suggesting, 25,200, is really wrong, because my service (email, forum, webchat, and IRC for 6K+ anons) works fine with:
HiddenServiceEnableIntroDoSDefense 1
HiddenServiceEnableIntroDoSRatePerSec 15
HiddenServiceEnableIntroDoSBurstPerSec 60
3. Please update your blog post to explain clearly what the HiddenServiceEnableIntroDoS* options actually do. Do they limit users per second?
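For what it's worth, my understanding (an assumption worth verifying against the man page, not an authoritative answer) is that these options tell the service's introduction points to rate-limit introduction requests with a token bucket, so they roughly limit client connection *attempts* per second; a self-contained sketch using the values above:

```c
#include <stdbool.h>
#include <time.h>

/* Values from the torrc above: refill rate and bucket size. */
static const double RATE_PER_SEC = 15.0, BURST = 60.0;

static double tokens = 60.0;   /* start with a full bucket */
static time_t last_refill;

/* One INTRODUCE2 cell (one client connection attempt) costs one token. */
static bool
allow_introduction(time_t now)
{
  tokens += (double)(now - last_refill) * RATE_PER_SEC;
  if (tokens > BURST)
    tokens = BURST;
  last_refill = now;
  if (tokens < 1.0)
    return false;  /* over the rate limit: drop the introduction request */
  tokens -= 1.0;
  return true;
}
```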
Milestone: Tor: 0.4.7.x-freeze · Assignee: David Goulet

## No spaces on "Your server has not managed to confirm..." when showing both IPv4 and IPv6 addresses

https://gitlab.torproject.org/tpo/core/tor/-/issues/40453 · Neel Chauhan · 2021-08-27

When I attempted to move a relay to a new ISP and rebooted my OPNsense box, I got this message:
Aug 26 13:09:10.000 [warn] Your server has not managed to confirm reachability for its ORPort(s) at IPv4:143and[IPv6]:143. Relays do not publish descriptors until their ORPort and DirPort are reachable. Please check your firewalls, ports, address, /etc/hosts file, etc.
One thing I noticed is that there are no spaces in `IPv4:143and[IPv6]:143`. We should fix this.
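The fix is presumably just missing separators in the format string that assembles this warning; an illustrative sketch with hypothetical variable names, not the actual code from tor's self-test path:

```c
/* Illustrative: add the missing spaces around "and" when both address
 * families are present. */
log_warn(LD_CONFIG,
         "Your server has not managed to confirm reachability for its "
         "ORPort(s) at %s and %s. Relays do not publish descriptors until "
         "their ORPort and DirPort are reachable. Please check your "
         "firewalls, ports, address, /etc/hosts file, etc.",
         ipv4_orport_str, ipv6_orport_str); /* hypothetical names */
```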
Assignee: Neel Chauhan

## Implement Prop#324 Flow Control

https://gitlab.torproject.org/tpo/core/tor/-/issues/40450 · Mike Perry · 2021-10-04

There are enough pieces of Flow Control that need to be done to get it merge-ready that a checklist and ticket seem wise at this point.
Here's stuff I will do:
* [x] Improve monotime checks to export a global monotime status
* [x] Remove mobile xon/xoff limits
* [x] Advertise the average and max edge conn drain rates in XON (after first XOFF, or if above a "low watermark" queue length); see the EWMA sketch after these lists
* [x] Low watermark params to send advisory XON before XOFF
* [x] Send periodic XONs if the drain rate changes significantly
* [x] Perform checks for semantically valid XON/XOFF and call circuit_read_valid_data()
* [x] Turn CELL_QUEUE_HIGHWATER_SIZE (and others?) into consensus params
* [x] Implement ways to restrict when advisory XONs can be sent, to reduce side channels from exits
* [x] Alter half-open edge connection checks to work with XON/XOFF wrt valid data
* [x] Fix edge case where XON/XOFF can arrive after stream close (depends on half-open fix)
* [x] Additional consensus parameter for CircEWMA's EWMA_TICK_LEN and edge ratelimit low/high change
* [x] Preliminary tuning of new consensus params over onion svcs
* [x] Misc XXX's
* [x] Update Prop#324 spec with above
* [x] Debug log removal and other log message cleanups/improvements
* [x] Squash branch for review
Here's stuff @dgoulet can do:
* [x] Preliminary code review
* [x] Rate limit packaging data on edge connection's circuits (or reading on edge source sockets?) to match the advertised rate using token buckets
* [x] Improve oomkiller/circuit closing wrt total edge connection outbuf lengths on circs
* [x] Determine if we can use any kernel info from KIST to improve buffer length calls (there's some XXX's to note where in the flow control code this may help)
* [x] Spotcheck existing oomkiller, KIST, and CircEWMA code to see if we should parameterize or tighten anything else for congestion control generally
* [x] Determine if we should parameterize any other buffer lengths inside channel handling, cell handling, and the circuitmux dragon
* [x] Help me figure out why flow_control_decide_xon() is almost always only called when the outbuf is 0, where the data is going to, and what we should do about it (KIST, ask the socket if it wouldblock?)
* [x] Test out the flow control branch on some rate limited onions for weirdness. Help decide some tuning parameter values.
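As referenced in the drain-rate item above, here is a minimal sketch of the kind of N-EWMA estimate an edge connection could advertise in an XON cell; the constant and names are illustrative, not prop#324's exact parameters:

```c
#include <stdint.h>

#define DRAIN_RATE_EWMA_N 4  /* illustrative smoothing parameter */

/* N-EWMA with alpha = 2/(N+1), kept in integer arithmetic: each new
 * drain-rate sample (in kbps) is folded into the running average. */
static uint64_t
drain_rate_ewma_update(uint64_t prev_ewma, uint64_t sample_kbps)
{
  return (2 * sample_kbps + (DRAIN_RATE_EWMA_N - 1) * prev_ewma) /
         (DRAIN_RATE_EWMA_N + 1);
}
```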
Cc: @dgoulet

Milestone: Tor: 0.4.7.x-freeze · Assignee: Mike Perry · 2021-09-15