# Tor issues
https://gitlab.torproject.org/tpo/core/tor/-/issues

## prometheus issues label name "port" is not unique: invalid sample when scraping
https://gitlab.torproject.org/tpo/core/tor/-/issues/40581 (Alex Xu, updated 2022-03-25)

Example output:
```
tor_hs_app_write_bytes_total{onion="duwq3shurnywtxq5z76dbcy7gbqyjgel4vzauxupuc4v773tiyxif5qd",port="80",port="443"} 1000
```
While I am new to Prometheus, my understanding is that each sample of a metric can have at most one value per label name; therefore, `port="80",port="443"` is invalid. I think this makes sense in this context: either tor is able to count the bytes per port, in which case there should be two lines, one with `port="80"` and one with `port="443"`, or tor can only count the bytes per service, in which case there should be one line with no port label.
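If the former holds, I would expect the scrape output to carry one sample per port instead, along these lines (the per-port split of the counter value is made up for illustration):

```
tor_hs_app_write_bytes_total{onion="duwq3shurnywtxq5z76dbcy7gbqyjgel4vzauxupuc4v773tiyxif5qd",port="80"} 600
tor_hs_app_write_bytes_total{onion="duwq3shurnywtxq5z76dbcy7gbqyjgel4vzauxupuc4v773tiyxif5qd",port="443"} 400
```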
Assuming the former is true, I think https://gitlab.torproject.org/tpo/core/tor/-/blob/455471835da35d8ee64e6a2c0a70acb89a003bf4/src/feature/hs/hs_metrics.c#L46-59 should look like:
```c
if (base_metrics[i].port_as_label && service->config.ports) {
  SMARTLIST_FOREACH_BEGIN(service->config.ports,
                          const hs_port_config_t *, p) {
    metrics_store_entry_t *entry =
      metrics_store_add(store, base_metrics[i].type, base_metrics[i].name,
                        base_metrics[i].help);
    /* Add labels to the entry. */
    metrics_store_entry_add_label(entry,
        metrics_format_label("onion", service->onion_address));
    metrics_store_entry_add_label(entry,
        metrics_format_label("port", port_to_str(p->virtual_port)));
  } SMARTLIST_FOREACH_END(p);
} else {
  metrics_store_entry_t *entry =
    metrics_store_add(store, base_metrics[i].type, base_metrics[i].name,
                      base_metrics[i].help);
  /* Add labels to the entry. */
  metrics_store_entry_add_label(entry,
      metrics_format_label("onion", service->onion_address));
}
```
Possibly with some refactoring, and maybe an adjustment of the condition, although I'm not sure whether an onion service without ports is even allowed. I have not submitted this as an MR because I have not tested it at all.
Strangely, however, the case of multiple ports per service seems to already be handled at https://gitlab.torproject.org/tpo/core/tor/-/blob/455471835da35d8ee64e6a2c0a70acb89a003bf4/src/feature/hs/hs_metrics.c#L80-91; it is just not initialized properly. Possibly this change was intended during development and not completed?

Milestone: Tor: 0.4.7.x-stable. Assignee: David Goulet.

## hs_client.c:774: setup_rendezvous_circ_congestion_control: Non-fatal assertion !(desc == NULL)
https://gitlab.torproject.org/tpo/core/tor/-/issues/40576 (Peter Gerber, updated 2022-03-25)

Unfortunately, I'm currently unable to reproduce this, but I'm seeing the following assertion in the logs:

```
Mar 04 16:10:41.000 [warn] tor_bug_occurred_(): Bug: ../src/feature/hs/hs_client.c:774: setup_rendezvous_circ_congestion_control: Non-fatal assertion !(desc == NULL) failed. (on Tor 0.4.7.4-alpha )
Mar 04 16:10:41.000 [warn] Bug: Tor 0.4.7.4-alpha: Non-fatal assertion !(desc == NULL) failed in setup_rendezvous_circ_congestion_control at ../src/feature/hs/hs_client.c:774. Stack trace: (on Tor 0.4.7.4-alpha )
Mar 04 16:10:41.000 [warn] Bug: /usr/bin/tor(log_backtrace_impl+0x57) [0x5f4b8e8a5ce7] (on Tor 0.4.7.4-alpha )
Mar 04 16:10:41.000 [warn] Bug: /usr/bin/tor(tor_bug_occurred_+0x16b) [0x5f4b8e8b0f0b] (on Tor 0.4.7.4-alpha )
Mar 04 16:10:41.000 [warn] Bug: /usr/bin/tor(hs_client_circuit_has_opened+0x467) [0x5f4b8e9ac887] (on Tor 0.4.7.4-alpha )
Mar 04 16:10:41.000 [warn] Bug: /usr/bin/tor(circuit_has_opened+0x150) [0x5f4b8e92ec40] (on Tor 0.4.7.4-alpha )
Mar 04 16:10:41.000 [warn] Bug: /usr/bin/tor(+0x16f7f4) [0x5f4b8e9307f4] (on Tor 0.4.7.4-alpha )
Mar 04 16:10:41.000 [warn] Bug: /usr/bin/tor(connection_ap_handshake_attach_circuit+0x19a) [0x5f4b8e930e5a] (on Tor 0.4.7.4-alpha )
Mar 04 16:10:41.000 [warn] Bug: /usr/bin/tor(connection_ap_attach_pending+0x108) [0x5f4b8e9527c8] (on Tor 0.4.7.4-alpha )
Mar 04 16:10:41.000 [warn] Bug: /usr/bin/tor(circuit_build_needed_circs+0x38) [0x5f4b8e92fcb8] (on Tor 0.4.7.4-alpha )
Mar 04 16:10:41.000 [warn] Bug: /usr/bin/tor(+0x69718) [0x5f4b8e82a718] (on Tor 0.4.7.4-alpha )
Mar 04 16:10:41.000 [warn] Bug: /usr/bin/tor(+0x847f7) [0x5f4b8e8457f7] (on Tor 0.4.7.4-alpha )
Mar 04 16:10:41.000 [warn] Bug: /lib/x86_64-linux-gnu/libevent-2.1.so.7(+0x239ef) [0x78b7991e09ef] (on Tor 0.4.7.4-alpha )
Mar 04 16:10:41.000 [warn] Bug: /lib/x86_64-linux-gnu/libevent-2.1.so.7(event_base_loop+0x52f) [0x78b7991e128f] (on Tor 0.4.7.4-alpha )
Mar 04 16:10:41.000 [warn] Bug: /usr/bin/tor(do_main_loop+0x101) [0x5f4b8e82d5b1] (on Tor 0.4.7.4-alpha )
Mar 04 16:10:41.000 [warn] Bug: /usr/bin/tor(tor_run_main+0x1e5) [0x5f4b8e828e85] (on Tor 0.4.7.4-alpha )
Mar 04 16:10:41.000 [warn] Bug: /usr/bin/tor(tor_main+0x49) [0x5f4b8e8252d9] (on Tor 0.4.7.4-alpha )
Mar 04 16:10:41.000 [warn] Bug: /usr/bin/tor(main+0x19) [0x5f4b8e824eb9] (on Tor 0.4.7.4-alpha )
Mar 04 16:10:41.000 [warn] Bug: /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xea) [0x78b798a8dd0a] (on Tor 0.4.7.4-alpha )
Mar 04 16:10:41.000 [warn] Bug: /usr/bin/tor(_start+0x2a) [0x5f4b8e824f0a] (on Tor 0.4.7.4-alpha )
```
### Environment
Tor: 0.4.7.4-alpha
OS: Whonix based on Debian 11 "bullseye"

Milestone: Tor: 0.4.7.x-stable. Assignee: David Goulet.

## overload: DNS timeout overload general still being triggered
https://gitlab.torproject.org/tpo/core/tor/-/issues/40564 (David Goulet, updated 2023-02-07)

This only applies to 0.4.7.x.
Even though we worked on https://gitlab.torproject.org/tpo/core/tor/-/issues/40527, it appears that a second call from the DNS subsystem triggering the general overload timeout was forgotten/missed.
The problem lies in the libevent DNS callback.

Milestone: Tor: 0.4.7.x-stable. Assignee: David Goulet.

## Improve overload-general wrt ntor onionskin drops
https://gitlab.torproject.org/tpo/core/tor/-/issues/40560 (Mike Perry, updated 2023-02-07)

We need to do a couple of small tweaks to how overload-general reacts to dropped ntor handshakes. Basically:
* overload-general should not be listed unless X% of ntors drop over Y seconds (X is a consensus parameter, expressed as a fraction of 100; Y is also a consensus parameter)
* Add checks to `mark_my_descriptor_dirty_if_too_old()` to make us republish our descriptor if `overload-general` disappears, appears, or changes timestamp.
For the first point, the question of X% over how long is relevant, but this matters less if we update our descriptor immediately whenever the overload state or timestamp changes.
Remember that as soon as a relay is so overloaded that it is dropping ntors, traffic is already being biased away from that relay, because those circuits fail. So percentages of tolerance, which can be related to the percentage of backoff applied by sbws, seem to make the most sense, but I'd be open to other ideas. Favoring implementation simplicity seems important here, too.
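The first bullet could be sketched as follows. This is a hypothetical illustration, not the tor implementation: the function name and counters are made up, and X is taken as a consensus parameter expressed as a fraction of 100, compared in integer arithmetic.

```c
#include <assert.h>

/* Hypothetical sketch: treat overload-general as set only when the
 * fraction of dropped ntor handshakes over the assessment window
 * reaches x_percent (the consensus parameter X, a fraction of 100). */
static int
ntor_drop_overload(unsigned ntor_seen, unsigned ntor_dropped,
                   unsigned x_percent)
{
  if (ntor_seen == 0)
    return 0;
  /* Integer form of dropped/seen >= X/100, avoiding floating point. */
  return ntor_dropped * 100 >= ntor_seen * x_percent;
}
```

With X=1, a relay that drops 5 of 1000 ntors in the window stays unlisted, while one that drops 20 of 1000 gets flagged.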
This is not super urgent, since we are not reacting to overload-general in terms of relay weights, and won't until after https://gitlab.torproject.org/tpo/network-health/sbws/-/issues/40125, but we should fix this before we hit 0.4.7.x-stable.
Related references:
* https://lists.torproject.org/pipermail/tor-relays/2022-January/020184.html
* https://lists.torproject.org/pipermail/tor-relays/2022-January/020224.html
* https://lists.torproject.org/pipermail/tor-relays/2022-February/020314.html
* https://gitlab.torproject.org/tpo/network-health/analysis/-/issues/27
* https://gitlab.torproject.org/tpo/network-health/team/-/issues/66#note_2773945
Cc: @gk, @dgoulet

Milestone: Tor: 0.4.7.x-stable. Assignee: David Goulet.

## Reject relays running Tor 0.3.5.x
https://gitlab.torproject.org/tpo/core/tor/-/issues/40559 (Georg Koppen, updated 2023-12-03)

Similar to #31549, #32672, #34357, and #40480, we should reject EOL versions at the directory authority level. This time version 0.3.5 is concerned.
We started the whole process this week and are about to contact all relay operators with valid contact info. Additionally, we sent an announcement to [tor-relays@](https://lists.torproject.org/pipermail/tor-relays/2022-February/020289.html).

Milestone: Tor: 0.4.7.x-stable. Assignee: David Goulet.

## Fix memory leak from Coverity #1495786
https://gitlab.torproject.org/tpo/core/tor/-/issues/40532 (Alexander Færøy, updated 2022-01-25)

Minor potential memory leak in Coverity #1495786, coming from bf10206e9e23ac0ded2cc9727666696ea25d5636, where the `ei` variable may leak when returning in the new `if (BUG(...))` block.

Milestone: Tor: 0.4.7.x-stable. Assignee: David Goulet.

## Remove DNS timeout from overload general
https://gitlab.torproject.org/tpo/core/tor/-/issues/40527 (David Goulet, updated 2023-02-07)

Related to our investigation in:
https://gitlab.torproject.org/tpo/network-health/team/-/issues/139
This is also tied to https://gitlab.torproject.org/tpo/core/tor/-/issues/40312, which will lower the Tor DNS timeout from 5 seconds to 1 second.
This needs to be backported to 0.4.6 because it is creating a lot of confusion for relay operators and producing inaccurate warnings.

Milestone: Tor: 0.4.7.x-stable. Assignee: David Goulet.

## prop324: Implement section 9 for onion services
https://gitlab.torproject.org/tpo/core/tor/-/issues/40506 (David Goulet, updated 2022-02-26)

That section is about adding the `flow-control` line to the onion service descriptor.

Milestone: Tor: 0.4.7.x-stable. Assignee: David Goulet.

## OverloadStatistics option is missing in tor's man page
https://gitlab.torproject.org/tpo/core/tor/-/issues/40504 (nusenu, updated 2021-11-08)

The torrc option `OverloadStatistics` (!334) is missing in tor's man page.
maybe this change is also relevant to the semantics of the option
https://gitlab.torproject.org/tpo/core/tor/-/issues/40364
because the current description in the code comment still mentions extra-info documents.
https://gitlab.torproject.org/tpo/core/tor/-/merge_requests/334/diffs#8a1c19844ee4b3dfe20b1d743783349dc267f35e_677_677

Milestone: Tor: 0.4.7.x-stable. Assignee: David Goulet.

## relay: Overload state for DNS timeout error should be done after X% failure
https://gitlab.torproject.org/tpo/core/tor/-/issues/40491 (David Goulet, updated 2023-02-07)

We currently get into an overload state if an Exit hits a DNS timeout error even _once_, which is way too low a bar. Let's get into an overload state only if 1% of all our DNS requests end up timing out.
To do this properly, we should accumulate DNS requests for some time (or some number of requests) before assessing whether we've reached the X% threshold, using a defined timeframe for each assessment.
After discussing with @mikeperry on IRC, we think consensus parameters controlling `X% over Y minutes` would be the way to do this, with defaults of X=1 and Y=10 for 1% DNS errors over 10 minutes.
Every 10 minutes, the total number of DNS requests needs to be reset so previous periods don't affect subsequent ones.
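The proposed accounting could be sketched roughly like this. Everything here is illustrative (struct, function names, and parameters are assumptions, not tor code): count requests and timeouts per period, flag overload once timeouts reach X% of requests, and reset when the period rolls over.

```c
#include <assert.h>
#include <time.h>

/* Hypothetical per-period DNS timeout accounting. */
typedef struct {
  unsigned requests;      /* DNS requests seen this period */
  unsigned timeouts;      /* of which timed out */
  time_t period_start;    /* start of the current period */
} dns_overload_state_t;

/* Record one DNS request; return 1 if the timeout rate has reached
 * x_percent of requests within the current y_secs-long period. */
static int
dns_note_request(dns_overload_state_t *st, int timed_out, time_t now,
                 unsigned x_percent, unsigned y_secs)
{
  if (now - st->period_start >= (time_t)y_secs) {
    /* Period rolled over: previous counts must not affect this one. */
    st->requests = 0;
    st->timeouts = 0;
    st->period_start = now;
  }
  st->requests++;
  if (timed_out)
    st->timeouts++;
  /* Integer form of timeouts/requests >= X/100. */
  return st->timeouts * 100 >= st->requests * x_percent;
}
```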
We need to backport this to 0.4.6, else we'll have a big problem on the network where almost all Exits will start reporting the overload state once they migrate to 0.4.6+.

Milestone: Tor: 0.4.7.x-stable. Assignee: David Goulet.

## metrics: any DNS error fails to report correctly
https://gitlab.torproject.org/tpo/core/tor/-/issues/40490 (David Goulet, updated 2023-02-07)

Thanks to https://lists.torproject.org/pipermail/tor-relays/2021-October/019917.html
I can confirm this report as well. Basically, `libevent` doesn't report the `type` (A, AAAA, PTR) when a DNS error occurs. In libevent, the function `reply_run_callback()` calls back like so:
```
cb->user_callback(cb->err, 0, 0, cb->ttl, NULL, user_pointer);
```
... where `type` is the second parameter, which is always 0. I have no idea why it is done this way, but that is what we need to work with.
This means that `rep_hist_note_dns_error(type, result)` always uses 0 as the type, and that type doesn't exist when `get_dns_stats_by_type(0)` is called, leading to DNS errors _never_ being recorded.
I think we should lobby libevent to fix that because I really don't see why it doesn't report the request type. But we'll have to fix that in our code to record all DNS errors without the type and so the reporting will be a blanket "DNS error" instead of being per-type on the metrics port.
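Until libevent reports the type, the blanket fallback could look something like this sketch. The bucket names and functions are hypothetical, not the actual `rep_hist` code; the numeric DNS types (A=1, PTR=12, AAAA=28) follow the IANA DNS RR type registry.

```c
#include <assert.h>

/* Hypothetical stat buckets; DNS_STAT_ANY catches errors where
 * libevent passed type 0 and the real type is unknown. */
enum dns_stat_bucket {
  DNS_STAT_A, DNS_STAT_PTR, DNS_STAT_AAAA, DNS_STAT_ANY,
  DNS_STAT_N_BUCKETS
};

static unsigned dns_error_counts[DNS_STAT_N_BUCKETS];

static enum dns_stat_bucket
dns_bucket_for_type(int type)
{
  switch (type) {
    case 1:  return DNS_STAT_A;
    case 12: return DNS_STAT_PTR;
    case 28: return DNS_STAT_AAAA;
    default: return DNS_STAT_ANY; /* type 0 from error callbacks lands here */
  }
}

/* Record a DNS error without dropping it when the type is unknown. */
static void
dns_note_error(int type)
{
  dns_error_counts[dns_bucket_for_type(type)]++;
}
```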
Fortunately, this only affects the reporting on the `MetricsPort` and not the overload state.

Milestone: Tor: 0.4.7.x-stable. Assignee: David Goulet.

## hs: Return the 0xF6 extended socks error for a v2
https://gitlab.torproject.org/tpo/core/tor/-/issues/40421 (David Goulet, updated 2021-07-07)

In #40373, we made it so that a v2 address is properly parsed and a warning is then logged indicating that it is not supported anymore.
The problem is that we failed to send back a `0xF6` extended socks error code, which would make the Tor Browser UX better. At the moment, we just close the stream and thus TB gets "Unable to connect".

Milestone: Tor: 0.4.6.x-post-stable. Assignee: David Goulet.

## Android: FAIL src/test/test_address_set.c:80: assert(ret OP_EQ 1): 0 vs 1
https://gitlab.torproject.org/tpo/core/tor/-/issues/40419 (eighthave, updated 2022-07-07)

### Summary
I rebased our Android patches on tor-0.4.6.5 and ran the CI. It passed on _android-24/armeabi-v7a_ and failed on _android-22/x86_64_. Both pass on tor-0.4.5.9. This is the failure:
```console
address_set/contains: [forking]
FAIL src/test/test_address_set.c:80: assert(ret OP_EQ 1): 0 vs 1
[contains FAILED]
```
https://gitlab.com/eighthave/tor/-/jobs/1367099673
My guess is that this is either an intermittent bug or it is due to the CPU arch.
### Steps to reproduce:
1. fork https://gitlab.com/eighthave/tor
2. trigger a CI pipeline on the tag _tor-android-0.4.6.5_
### Environment
Running on Debian/stretch via the Docker image `registry.gitlab.com/fdroid/ci-images-client`. The emulator is running without KVM support, so it is very slow.
### Possible fixes
If that test is not relevant on Android, I can disable it in our fork.
FYI @n8fr8 @sysrqb

Milestone: Tor: 0.4.6.x-post-stable. Assignee: Alexander Færøy.

## tor_assertion_failed_(): Bug: ../src/feature/dirclient/dirclient.c:1257: directory_initiate_request: Assertion or_addr_port->port || dir_addr_port->port failed
https://gitlab.torproject.org/tpo/core/tor/-/issues/40494 (samip537, updated 2021-11-03)

### Summary
Tor crashes with IPv6 enabled DirPort
### Steps to reproduce:
1. Deploy an LXC Debian 11 container
2. Install Tor on it
3. Comment out all directory-hardening settings in the systemd files related to Tor.
4. Set torrc with the following contents:
```
Log notice syslog
RunAsDaemon 1
DataDirectory /var/lib/tor
ORPort 443
ORPort [2001:DB8::1]:443
RelayBandwidthRate 5 MB
RelayBandwidthBurst 10 MB
AccountingMax 5 TB
AccountingStart month 3 15:00
DirPort [2001:DB8::1]:80
ExitPolicy reject *:*
```
5. Have it crash after 3-10 minutes from bandwidth self-test.
### What is the current bug behavior?
The whole Tor client crashes when DirPort is set to an IPv6 address in brackets followed by a port.
### What is the expected behavior?
It would not crash.
### Environment
- Which version of Tor are you using? 0.4.5.10
- Which operating system are you using? Debian 11, in an LXC container
- Which installation method did you use? Distribution package
### Relevant logs and/or screenshots
```
Oct 22 06:42:35 torrelay systemd[1]: Started Anonymizing overlay network for TCP.
Oct 22 06:42:35 torrelay Tor[6069]: Signaled readiness to systemd
Oct 22 06:42:36 torrelay Tor[6069]: Bootstrapped 5% (conn): Connecting to a relay
Oct 22 06:42:36 torrelay Tor[6069]: Opening Socks listener on /run/tor/socks
Oct 22 06:42:36 torrelay Tor[6069]: Opened Socks listener connection (ready) on /run/tor/socks
Oct 22 06:42:36 torrelay Tor[6069]: Opening Control listener on /run/tor/control
Oct 22 06:42:36 torrelay Tor[6069]: Opened Control listener connection (ready) on /run/tor/control
Oct 22 06:42:36 torrelay Tor[6069]: Unable to find IPv4 address for ORPort 443. You might want to specify IPv6Only to it or set an explicit address or set Address.
Oct 22 06:42:36 torrelay Tor[6069]: Bootstrapped 10% (conn_done): Connected to a relay
Oct 22 06:42:36 torrelay Tor[6069]: Bootstrapped 14% (handshake): Handshaking with a relay
Oct 22 06:42:36 torrelay Tor[6069]: External address seen and suggested by a directory authority: <snip>
Oct 22 06:42:36 torrelay Tor[6069]: Bootstrapped 15% (handshake_done): Handshake with a relay done
Oct 22 06:42:36 torrelay Tor[6069]: Bootstrapped 75% (enough_dirinfo): Loaded enough directory info to build circuits
Oct 22 06:42:36 torrelay Tor[6069]: Bootstrapped 90% (ap_handshake_done): Handshake finished with a relay to build circuits
Oct 22 06:42:36 torrelay Tor[6069]: Bootstrapped 95% (circuit_create): Establishing a Tor circuit
Oct 22 06:42:37 torrelay Tor[6069]: Bootstrapped 100% (done): Done
Oct 22 06:43:36 torrelay Tor[6069]: Not advertising Directory Service support (Reason: AccountingMax enabled)
Oct 22 06:44:36 torrelay Tor[6069]: Now checking whether IPv4 ORPort <snip>:443 is reachable... (this may take up to 20 minutes -- look for log messages indicating success)
Oct 22 06:44:36 torrelay Tor[6069]: Now checking whether IPv6 ORPort [2001:DB8::1]:443 is reachable... (this may take up to 20 minutes -- look for log messages indicating success)
Oct 22 06:44:36 torrelay Tor[6069]: Self-testing indicates your ORPort [2001:DB8::1]:443 is reachable from the outside. Excellent.
Oct 22 06:45:37 torrelay Tor[6069]: Self-testing indicates your ORPort <snip>:443 is reachable from the outside. Excellent.
Oct 22 06:45:39 torrelay Tor[6069]: Performing bandwidth self-test...done.
Oct 22 06:48:37 torrelay Tor[6069]: tor_assertion_failed_(): Bug: ../src/feature/dirclient/dirclient.c:1257: directory_initiate_request: Assertion or_addr_port->port || dir_addr_port->port failed; aborting. (on Tor 0.4.5.10 )
Oct 22 06:48:37 torrelay Tor[6069]: Bug: Tor 0.4.5.10: Assertion or_addr_port->port || dir_addr_port->port failed in directory_initiate_request at ../src/feature/dirclient/dirclient.c:1257: . Stack trace: (on Tor 0.4.5.10 )
Oct 22 06:48:37 torrelay Tor[6069]: Bug: /usr/bin/tor(log_backtrace_impl+0x6c) [0x557a907880] (on Tor 0.4.5.10 )
Oct 22 06:48:37 torrelay Tor[6069]: Bug: /usr/bin/tor(tor_assertion_failed_+0x124) [0x557a914364] (on Tor 0.4.5.10 )
Oct 22 06:48:37 torrelay Tor[6069]: Bug: /usr/bin/tor(directory_initiate_request+0x714) [0x557a9d5424] (on Tor 0.4.5.10 )
Oct 22 06:48:37 torrelay Tor[6069]: Bug: /usr/bin/tor(router_do_reachability_checks+0x184) [0x557a8d27b8] (on Tor 0.4.5.10 )
Oct 22 06:48:37 torrelay Tor[6069]: Bug: /usr/bin/tor(connection_dir_reached_eof+0x13cc) [0x557a9d783c] (on Tor 0.4.5.10 )
Oct 22 06:48:37 torrelay Tor[6069]: Bug: /usr/bin/tor(+0x1993e4) [0x557a9a93e4] (on Tor 0.4.5.10 )
Oct 22 06:48:37 torrelay Tor[6069]: Bug: /usr/bin/tor(+0x68570) [0x557a878570] (on Tor 0.4.5.10 )
Oct 22 06:48:37 torrelay Tor[6069]: Bug: /lib/aarch64-linux-gnu/libevent-2.1.so.7(+0x234e4) [0x7f9e42e4e4] (on Tor 0.4.5.10 )
Oct 22 06:48:37 torrelay Tor[6069]: Bug: /lib/aarch64-linux-gnu/libevent-2.1.so.7(event_base_loop+0x50c) [0x7f9e42ef84] (on Tor 0.4.5.10 )
Oct 22 06:48:37 torrelay Tor[6069]: Bug: /usr/bin/tor(do_main_loop+0xec) [0x557a8799e0] (on Tor 0.4.5.10 )
Oct 22 06:48:37 torrelay Tor[6069]: Bug: /usr/bin/tor(tor_run_main+0x1c0) [0x557a8751b4] (on Tor 0.4.5.10 )
Oct 22 06:48:37 torrelay Tor[6069]: Bug: /usr/bin/tor(tor_main+0x54) [0x557a871734] (on Tor 0.4.5.10 )
Oct 22 06:48:37 torrelay Tor[6069]: Bug: /usr/bin/tor(main+0x20) [0x557a871220] (on Tor 0.4.5.10 )
Oct 22 06:48:37 torrelay Tor[6069]: Bug: /lib/aarch64-linux-gnu/libc.so.6(__libc_start_main+0xe8) [0x7f9dd5c218] (on Tor 0.4.5.10 )
Oct 22 06:48:37 torrelay Tor[6069]: Bug: /usr/bin/tor(+0x612a8) [0x557a8712a8] (on Tor 0.4.5.10 )
Oct 22 06:48:37 torrelay systemd[1]: tor@default.service: Main process exited, code=killed, status=6/ABRT
Oct 22 06:48:37 torrelay systemd[1]: tor@default.service: Failed with result 'signal'.
Oct 22 06:48:37 torrelay systemd[1]: tor@default.service: Consumed 12.377s CPU time.
Oct 22 06:48:37 torrelay systemd[1]: tor@default.service: Scheduled restart job, restart counter is at 5.
Oct 22 06:48:37 torrelay systemd[1]: Stopped Anonymizing overlay network for TCP.
Oct 22 06:48:37 torrelay systemd[1]: tor@default.service: Consumed 12.377s CPU time.
```
### Possible fixes

Milestone: Tor: 0.4.5.x-post-stable. Assignee: David Goulet.

## config: MetricsPort seems to keep wanting to open
https://gitlab.torproject.org/tpo/core/tor/-/issues/40370 (David Goulet, updated 2021-05-17)

Several users have reported this problem, and I just stumbled on it by enabling `MetricsPort` on a relay:
```
Apr 15 14:16:56.005 [notice] Opening Metrics listener on 127.0.0.1:9090
Apr 15 14:16:56.005 [notice] Opened Metrics listener connection (ready) on 127.0.0.1:9090
[...]
Apr 15 14:17:00.370 [notice] Opening Metrics listener on 127.0.0.1:9090
Apr 15 14:17:00.370 [warn] Could not bind to 127.0.0.1:9090: Address already in use. Is Tor already running?
Apr 15 14:18:00.375 [notice] Opening Metrics listener on 127.0.0.1:9090
Apr 15 14:18:00.375 [warn] Could not bind to 127.0.0.1:9090: Address already in use. Is Tor already running?
...
```
The port works and is open, but every minute Tor seems to retry opening it. Possible backport candidate.

Milestone: Tor: 0.4.5.x-post-stable. Assignee: David Goulet.

## Bug Report for Metrics Endpoint
https://gitlab.torproject.org/tpo/core/tor/-/issues/40295 (Gus, updated 2021-04-16)

From RT:
I am testing the metrics port from this issue:
https://gitlab.torproject.org/tpo/core/tor/-/issues/40063
The issue I am getting is `"connection_finished_flushing(): Bug: got unexpected conn type 20. (on Tor 0.4.5.6 )"`
My setup is the following:
```
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 20.04.2 LTS
Release: 20.04
Codename: focal
```
```
$ tor --version
Tor version 0.4.5.6.
```
I added the following values to my torrc:
```
MetricsPort 127.0.0.1:9135
MetricsPortPolicy accept 127.0.0.1
```
Several issues arise:
a) Tor is not able to determine that the Metrics Port has already been opened, and tries to open it again.
b) Something inside tor crashes as soon as someone tries to access the metrics information.
I attached the log messages from syslog. [logfile.txt](/uploads/dfc846864430f16cc038447c64e04e55/logfile.txt)
Please note that I am using tor as a relay as well as for a hidden service.
I checked, there is nothing listening on port 9135. Tor has this port exclusively.
The metrics port is disabled again for my setup. If you need more information, please don't hesitate to ask.

Milestone: Tor: 0.4.5.x-post-stable. Assignee: David Goulet.

## Bridge chooses IPv6 instead of IPv4 (as configured) for server transport 'obfs4'
https://gitlab.torproject.org/tpo/core/tor/-/issues/40107 (toralf, updated 2021-02-24)

Running a bridge on Debian (buster stable) with 0.4.3.6-1~d10.buster+1 and 0.0.7-4+b12 brought up the issue that configuring the server transport 'obfs4' according to the official Tor documentation to listen on IPv4 as
```
ServerTransportListenAddr obfs4 0.0.0.0:443
```
let the bridge choose IPv6:
```
Aug 16 09:44:57.000 [notice] Registered server transport 'obfs4' at '[::]:443'
```
I set ServerTransportListenAddr to the real IP address which helped:
```
# we have to explicitly set this (and NOT to "0.0.0.0:443" or "[::]:443" respectively)
ServerTransportListenAddr obfs4 <ip addr>:<obfs4 port>
```
This bridge behaviour effectively rendered the bridge unusable for obfs4 connections for me.
I tested the above with a connection from a local Tor client, and from within Tails at a client location where only IPv4 was working.

Milestone: Tor: 0.4.5.x-post-stable. Assignee: George Kadianakis.

## Implement Prop#324
https://gitlab.torproject.org/tpo/core/tor/-/issues/40405 (Mike Perry, updated 2023-10-25)

This is the ticket for implementation work on congestion control algorithms:
https://gitlab.torproject.org/tpo/core/torspec/-/blob/master/proposals/324-rtt-congestion-control.txt
Sub-tickets:
* [x] Handshake negotiation spec: https://gitlab.torproject.org/tpo/core/tor/-/issues/40377
* [x] Flow Control implementation: https://gitlab.torproject.org/tpo/core/tor/-/issues/40450
* [x] Full negotiation implementation: https://gitlab.torproject.org/tpo/core/tor/-/issues/40444
* [x] Parameter default update and other fixups from simulation: https://gitlab.torproject.org/tpo/core/tor/-/issues/40524
* [x] Unit testing work: https://gitlab.torproject.org/tpo/core/tor/-/issues/40443
* [x] Simulation work: https://gitlab.torproject.org/tpo/core/tor/-/issues/40404
* [x] Flow control XON/XOFF pseudocode specification: https://gitlab.torproject.org/tpo/core/tor/-/issues/40573
* [ ] Specification updates: https://gitlab.torproject.org/tpo/core/tor/-/issues/40572
* [x] Extra-info fields for flow control: https://gitlab.torproject.org/tpo/core/tor/-/issues/40477

Milestone: Sponsor 61 - Making the Tor network faster & more reliable for users in Internet-repressive places. Assignee: Mike Perry.

## Shadow Experiments for Congestion Control
https://gitlab.torproject.org/tpo/core/tor/-/issues/40404 (Mike Perry, updated 2023-06-08)

For congestion control, we will want to do a series of experiments in Shadow to determine the behavior of the algorithms. See section 6 of the proposal:
https://gitlab.torproject.org/tpo/core/torspec/-/blob/master/proposals/324-rtt-congestion-control.txt#L723
To start, we'll want to run some basic simulations of Shadow v2 with current Tor, and extract baseline metrics on the TTFB and throughput of downloads. We will also want to get statistics on time spent in queues on relays, the length of cell queues, and the quantity of edge connection buffering. For the latter, I am hoping @dgoulet can give suggestions based on the things he studied for KIST, EWMA, and circuitmux.
We have switched Shadow to use guards, but we will also need to run multiple instances of the guard-based onionperf models, taking care to discard those results from graphing, similar to how we did for onionperf.
Once we get some baselines we are satisfied with, we will begin testing actual congestion control algorithms and see how they change things. We'll start simple, with sims of just one algorithm at a time, but eventually we will want to see how they behave in mixed networks in competition with each other. See also: https://github.com/shadow/tornettools/issues/10
I also have questions about how Shadow v2 simulates exit TCP connections, as that interaction may impact results (and we want to know if it is realistic).

Milestone: Sponsor 61 - Making the Tor network faster & more reliable for users in Internet-repressive places. Assignee: Jim Newsome.

## Lower Tor Exit-side DNS Timeout
https://gitlab.torproject.org/tpo/core/tor/-/issues/40312 (Mike Perry, updated 2023-01-23)

Tor exits currently wait 5 seconds before deciding to time out a DNS request. It is overwhelmingly likely that this is the cause of the performance issue we saw in https://gitlab.torproject.org/tpo/metrics/analysis/-/issues/33076#note_2720122
The performance degradation there exactly matches the behavior we would see if relays become overloaded and drop UDP packets containing DNS queries, triggering a timeout and retry at 5-second intervals. This, coupled with the fact that Shadow doesn't do DNS and could not reproduce the issue, makes it overwhelmingly likely that DNS timeouts are the culprit here.
Even if, by some dark curse, there is another 5-second timeout somewhere else in Tor that contributed to that issue, we definitely know that waiting 5 seconds for DNS on the modern internet is a bit too long.
Lowering the Exit-side DNS timeout to 1 second (or lower) will make this issue impact UX much less. The actual solution is to emit an overload signal when this happens, as per https://gitlab.torproject.org/tpo/core/tor/-/issues/40222, and then use sbws to reduce the weights on such relays until the overload signal disappears.
See also https://gitlab.torproject.org/tpo/core/tor/-/issues/40222#note_2727445

Milestone: Sponsor 61 - Making the Tor network faster & more reliable for users in Internet-repressive places. Assignee: David Goulet.