Trac issues
https://gitlab.torproject.org/legacy/trac/-/issues (updated 2020-06-13)

**evdns_get_orig_address: tor_fragile_assert() warning for unknown rtype.**
https://gitlab.torproject.org/legacy/trac/-/issues/21120 (updated 2020-06-13)

The trace (below) appeared while routinely examining my tor log while having network connectivity issues (arch is x86/32 bit, kernel is 3.11):
```
Jan 02 16:22:56.000 [warn] {BUG} tor_bug_occurred_(): Bug: src/or/dnsserv.c:294: evdns_get_orig_address: This line should not have been reached. (Future instances of this warning will be silenced.) (on Tor 0.2.9.4-alpha 8b0755c9bb296ae2)
Jan 02 16:22:56.000 [warn] {BUG} Bug: Line unexpectedly reached at evdns_get_orig_address at src/or/dnsserv.c:294. Stack trace: (on Tor 0.2.9.4-alpha 8b0755c9bb296ae2)
Jan 02 16:22:56.000 [warn] {BUG} Bug: /usr/bin/tor(log_backtrace+0x56) [0xb765aee6] (on Tor 0.2.9.4-alpha 8b0755c9bb296ae2)
Jan 02 16:22:56.000 [warn] {BUG} Bug: /usr/bin/tor(tor_bug_occurred_+0xf5) [0xb7676435] (on Tor 0.2.9.4-alpha 8b0755c9bb296ae2)
Jan 02 16:22:56.000 [warn] {BUG} Bug: /usr/bin/tor(dnsserv_resolved+0x255) [0xb7642435] (on Tor 0.2.9.4-alpha 8b0755c9bb296ae2)
Jan 02 16:22:56.000 [warn] {BUG} Bug: /usr/bin/tor(connection_ap_handshake_socks_resolved+0x8e) [0xb75fe4fe] (on Tor 0.2.9.4-alpha 8b0755c9bb296ae2)
Jan 02 16:22:56.000 [warn] {BUG} Bug: /usr/bin/tor(+0x593fe) [0xb75513fe] (on Tor 0.2.9.4-alpha 8b0755c9bb296ae2)
Jan 02 16:22:56.000 [warn] {BUG} Bug: /usr/bin/tor(+0x5d296) [0xb7555296] (on Tor 0.2.9.4-alpha 8b0755c9bb296ae2)
Jan 02 16:22:56.000 [warn] {BUG} Bug: /usr/bin/tor(circuit_receive_relay_cell+0x316) [0xb7556f66] (on Tor 0.2.9.4-alpha 8b0755c9bb296ae2)
Jan 02 16:22:56.000 [warn] {BUG} Bug: /usr/bin/tor(command_process_cell+0x22f) [0xb75d8f0f] (on Tor 0.2.9.4-alpha 8b0755c9bb296ae2)
Jan 02 16:22:56.000 [warn] {BUG} Bug: /usr/bin/tor(channel_queue_cell+0xd2) [0xb75b0882] (on Tor 0.2.9.4-alpha 8b0755c9bb296ae2)
Jan 02 16:22:56.000 [warn] {BUG} Bug: /usr/bin/tor(channel_tls_handle_cell+0x2bc) [0xb75b695c] (on Tor 0.2.9.4-alpha 8b0755c9bb296ae2)
Jan 02 16:22:56.000 [warn] {BUG} Bug: /usr/bin/tor(+0x11095b) [0xb760895b] (on Tor 0.2.9.4-alpha 8b0755c9bb296ae2)
Jan 02 16:22:56.000 [warn] {BUG} Bug: /usr/bin/tor(+0xf8df4) [0xb75f0df4] (on Tor 0.2.9.4-alpha 8b0755c9bb296ae2)
Jan 02 16:22:56.000 [warn] {BUG} Bug: /usr/bin/tor(connection_handle_read+0x7e3) [0xb75f9983] (on Tor 0.2.9.4-alpha 8b0755c9bb296ae2)
Jan 02 16:22:56.000 [warn] {BUG} Bug: /usr/bin/tor(+0x3458f) [0xb752c58f] (on Tor 0.2.9.4-alpha 8b0755c9bb296ae2)
Jan 02 16:22:56.000 [warn] {BUG} Bug: /usr/lib/libevent-2.1.so.6(+0x207d5) [0xb745a7d5] (on Tor 0.2.9.4-alpha 8b0755c9bb296ae2)
Jan 02 16:22:56.000 [warn] {BUG} Bug: /usr/lib/libevent-2.1.so.6(event_base_loop+0x2b4) [0xb745af84] (on Tor 0.2.9.4-alpha 8b0755c9bb296ae2)
Jan 02 16:22:56.000 [warn] {BUG} Bug: /usr/bin/tor(do_main_loop+0x46c) [0xb752732c] (on Tor 0.2.9.4-alpha 8b0755c9bb296ae2)
Jan 02 16:22:56.000 [warn] {BUG} Bug: /usr/bin/tor(tor_main+0x1315) [0xb7528965] (on Tor 0.2.9.4-alpha 8b0755c9bb296ae2)
Jan 02 16:22:56.000 [warn] {BUG} Bug: /usr/bin/tor(main+0x33) [0xb7523e23] (on Tor 0.2.9.4-alpha 8b0755c9bb296ae2)
Jan 02 16:22:56.000 [warn] {BUG} Bug: /lib/libc.so.6(__libc_start_main+0xe6) [0xb7033cc6] (on Tor 0.2.9.4-alpha 8b0755c9bb296ae2)
Jan 02 16:22:56.000 [warn] {BUG} Bug: /usr/bin/tor(+0x2bd11) [0xb7523d11] (on Tor 0.2.9.4-alpha 8b0755c9bb296ae2)
```
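The "(Future instances of this warning will be silenced.)" note comes from Tor's non-fatal assertion machinery. Below is a minimal warn-once sketch of that behaviour (an illustrative Python model, not Tor's actual dnsserv.c code; the function and location names are only borrowed from the trace above):

```python
# Model of tor_fragile_assert()/BUG() semantics: an "unreachable" branch
# logs a warning the first time it is hit and stays silent afterwards,
# instead of aborting the process.
warnings = []
_warned_locations = set()

def tor_fragile_assert(location):
    if location not in _warned_locations:
        _warned_locations.add(location)
        warnings.append(
            f"Bug: {location}: This line should not have been reached.")

def get_orig_address(rtype):
    # Only the expected DNS record types are handled here; any other
    # rtype (the subject of this ticket) falls through to the assert.
    if rtype in ("A", "AAAA", "PTR"):
        return "rewritten"
    tor_fragile_assert("src/or/dnsserv.c:294")
    return "unchanged"
```

Hitting the unexpected branch repeatedly produces a single warning, which matches the "will be silenced" wording in the log.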
**Trac**:
**Username**: mr-4
**Tor**: 0.3.1.x-final

**Wanted to contact directory mirror XXX ... but it's in our ExcludedNodes list and StrictNodes is set.**
https://gitlab.torproject.org/legacy/trac/-/issues/10722 (updated 2020-06-13)

When I try to use a Tor hidden service, more often than not I get the following message:
```
[warn] {DIR} Wanted to contact directory mirror XXX at xx.xx.xx.xx for hidden-service v2 descriptor fetch, but it's in our ExcludedNodes list and StrictNodes is set. Skipping. This choice might make your Tor not work.
```
There is so much wrong in this, I don't know where to start!
1. Given that I have EntryNodes, ExcludedNodes and StrictNodes all set, why does tor attempt to connect to a directory mirror it knows is in the ExcludedNodes list, and then start moaning?
This is like banging your head against the wall and then complaining that it hurts!
The appropriate course of action would have been for tor to try to connect to a directory mirror which is NOT in my ExcludedNodes list.
2. This whole issue stems from the fact that tor seems to ignore any of my [Alternate]DirAuthority settings as well and does what it wants - please refer to #10461, where I provided a detailed bug report, but nobody from the tor devs seems to care.
I also posted about this problem on the tor-talk ML, but got nothing but a wall of silence there as well.
As it stands, I can't effectively use ANY tor hidden services, as tor tries (and often fails) to get their v2 descriptors and everything goes pear-shaped.
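For reference, a minimal torrc sketch of the combination described above (node and country values are placeholders): with StrictNodes 1, tor treats ExcludeNodes as a hard requirement even for connections it makes for its own housekeeping, such as these descriptor fetches.

```
EntryNodes {de},{se}
ExcludeNodes {us},{gb}
StrictNodes 1
```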
**Trac**:
**Username**: mr-4
**Tor**: 0.2.5.x-final

**local tor client policy remotely modified**
https://gitlab.torproject.org/legacy/trac/-/issues/10518 (updated 2020-06-13)

As part of my torrc I have a MapAddress directive, which redirects all requests to a specific domain via a tor exit point (I still use ".exit"). That works satisfactorily and has served me well for a good while.
Today, when I tried to access that domain, I received an error (domain inaccessible), and when I inspected the tor logs I found a sequence of these messages: "Requested exit point 'XXXX' is excluded or would refuse request. Closing."
This is obviously incorrect, as I have no such policy and have not restricted use of that particular node (I did double-check my torrc file, and since I also use default-torrc I checked that as well).
Using the atlas service I made sure that the node in question was up and running, and that was indeed the case (the tor node has been running continuously for more than 40 days).
Next, I stopped tor and restarted it (keeping the whole of /var/lib/tor/* intact) and tried to access the same domain. I got the same error message.
Finally, I stopped tor again and wiped out the entire /var/lib/tor/* directory to force my tor client to download a fresh consensus and cold-boot everything. After doing that I tried to access the redirected domain again, and this time I was SUCCESSFUL!
All of this leads me to conclude that my tor client policy was remotely modified/altered, which, if true, is a very serious issue; hence I am reporting it here.
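For context, the setup described above looks roughly like the following (the domain and fingerprint are placeholders; on Tor releases of that era the ".exit" notation was disabled by default, so AllowDotExit may also be needed):

```
# Send all traffic for example.com through one specific exit relay.
MapAddress example.com example.com.$ABCDEF0123456789ABCDEF0123456789ABCDEF01.exit
AllowDotExit 1
```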
**Trac**:
**Username**: mr-4
**Tor**: unspecified

**tor seems to ignore "DirServer" option**
https://gitlab.torproject.org/legacy/trac/-/issues/10461 (updated 2020-06-13)

I've got the following 3 directives in my torrc:
```
DirServer 95.223.60.130:443 23155386E3B4B93B0294DB3A6263A8FAFE273255
DirServer 89.245.227.226:9001 6CB447C4CBCC4F5BDB4BA096902C2956CB534999
DirServer 109.228.139.83:9001 9DD97868543CB3CF432B96C082DFAC1FD16F6768
```
but none of the above statements seem to have been honoured by tor as I get this in my logs (debug-level):
```
[notice] {GENERAL} 0 entries in guards
[info] {CIRC} compute_weighted_bandwidths(): Empty routerlist passed in to consensus weight node selection for rule weight as guard
[info] {CIRC} smartlist_choose_node_by_bandwidth(): Empty routerlist passed in to old node selection for rule weight as guard
[info] {DIR} directory_pick_generic_dirserver(): No router found for consensus network-status fetch; falling back to dirserver list.
[info] {DIR} router_pick_dirserver_generic(): No dirservers are reachable. Trying them all again.
[notice] {DIR} While fetching directory info, no running dirservers known. Will try again later. (purpose 14)
[info] {GENERAL} or_state_save(): Saved state to "/var/lib/tor/tor/state"
```
Why?
I don't usually set the DirServer options but, as of yesterday, my tor gets stuck at 5%. I shut tor down after getting the following log:
```
{PROTOCOL} Received a bad CERTS cell from [scrubbed]:9001: The link certificate didn't match the TLS public key
```
I then wiped out the entire /var/lib/tor directory (to force fresh consensus download) and then got this when I tried to start tor (again, debug-level):
=============================
Dec 21 11:17:24.000 [notice] {GENERAL} 0 entries in guards
Dec 21 11:17:24.000 [info] {CIRC} compute_weighted_bandwidths(): Empty routerlist passed in to consensus weight node selection for rule weight as guard
Dec 21 11:17:24.000 [info] {CIRC} smartlist_choose_node_by_bandwidth(): Empty routerlist passed in to old node selection for rule weight as guard
Dec 21 11:17:24.000 [info] {DIR} directory_pick_generic_dirserver(): No router found for consensus network-status fetch; falling back to dirserver list.
Dec 21 11:17:24.000 [debug] {DIR} directory_initiate_command_rend(): anonymized 0, use_begindir 1.
Dec 21 11:17:24.000 [debug] {DIR} directory_initiate_command_rend(): Initiating consensus network-status fetch
Dec 21 11:17:24.000 [info] {APP} connection_ap_make_link(): Making internal direct tunnel to [scrubbed]:443 ...
Dec 21 11:17:24.000 [debug] {NET} connection_add_impl(): new conn type Socks, socket -1, address (Tor_internal), n_conns 3.
Dec 21 11:17:24.000 [debug] {DIR} circuit_get_open_circ_or_launch(): considering 1, $7BE683E65D48141321C5ED92F075C55364AC7123
Dec 21 11:17:24.000 [debug] {CIRC} onion_pick_cpath_exit(): Launching a one-hop circuit for dir tunnel.
Dec 21 11:17:24.000 [info] {CIRC} onion_pick_cpath_exit(): Using requested exit node '$7BE683E65D48141321C5ED92F075C55364AC7123~7BE683E65D48141321C at 193.23.244.244'
Dec 21 11:17:24.000 [debug] {CIRC} onion_extend_cpath(): Path is 0 long; we want 1
Dec 21 11:17:24.000 [debug] {CIRC} onion_extend_cpath(): Chose router $7BE683E65D48141321C5ED92F075C55364AC7123~7BE683E65D48141321C at 193.23.244.244 for hop 1 (exit is 7BE683E65D48141321C5ED92F075C55364AC7123)
Dec 21 11:17:24.000 [debug] {CIRC} onion_extend_cpath(): Path is complete: 1 steps long
Dec 21 11:17:24.000 [debug] {CIRC} circuit_handle_first_hop(): Looking for firsthop '193.23.244.244:443'
Dec 21 11:17:24.000 [info] {CIRC} circuit_handle_first_hop(): Next router is [scrubbed]: Not connected. Connecting.
Dec 21 11:17:24.000 [notice] {CONTROL} Bootstrapped 5%: Connecting to directory server.
Dec 21 11:17:24.000 [debug] {CHANNEL} channel_tls_connect(): In channel_tls_connect() for channel 0xb797c2e8 (global id 0)
Dec 21 11:17:24.000 [debug] {CHANNEL} channel_set_identity_digest(): Setting remote endpoint digest on channel 0xb797c2e8 with global ID 0 to digest 7BE683E65D48141321C5ED92F075C55364AC7123
Dec 21 11:17:24.000 [debug] {NET} connection_connect(): Connecting to [scrubbed]:443.
Dec 21 11:17:25.000 [debug] {NET} connection_connect(): Connection to [scrubbed]:443 in progress (sock 4).
Dec 21 11:17:25.000 [debug] {NET} connection_add_impl(): new conn type OR, socket 4, address 193.23.244.244, n_conns 4.
Dec 21 11:17:25.000 [debug] {CHANNEL} channel_tls_connect(): Got orconn 0xb797c3c0 for channel with global id 0
Dec 21 11:17:25.000 [debug] {CHANNEL} channel_register(): Registering channel 0xb797c2e8 (ID 0) in state opening (1) with digest 7BE683E65D48141321C5ED92F075C55364AC7123
Dec 21 11:17:25.000 [debug] {CHANNEL} channel_add_to_digest_map(): Added channel 0xb797c2e8 (global ID 0) to identity map in state opening (1) with digest 7BE683E65D48141321C5ED92F075C55364AC7123
Dec 21 11:17:25.000 [debug] {CHANNEL} channel_set_cell_handlers(): Setting cell_handler callback for channel 0xb797c2e8 to 0xb7668500
Dec 21 11:17:25.000 [debug] {CHANNEL} channel_set_cell_handlers(): Setting var_cell_handler callback for channel 0xb797c2e8 to 0xb7667340
Dec 21 11:17:25.000 [debug] {CIRC} circuit_handle_first_hop(): connecting in progress (or finished). Good.
Dec 21 11:17:25.000 [info] {APP} connection_ap_make_link(): ... application connection created and linked.
Dec 21 11:17:25.000 [debug] {NET} connection_add_impl(): new conn type Directory, socket -1, address 193.23.244.244, n_conns 5.
Dec 21 11:17:25.000 [info] {DIR} directory_send_command(): Downloading consensus from 193.23.244.244:443 using /tor/status-vote/current/consensus-microdesc/14C131+27B6B5+49015F+585769+805509+D586D1+E8A9C4+ED03BB+EFCBE7.z
Dec 21 11:17:25.000 [info] {GENERAL} or_state_save(): Saved state to "/var/lib/tor/tor/state"
Dec 21 11:17:25.000 [debug] {NET} conn_read_callback(): socket -1 wants to read.
Dec 21 11:17:25.000 [info] {EDGE} connection_edge_process_inbuf(): data from edge while in 'waiting for circuit' state. Leaving it on buffer.
Dec 21 11:17:25.000 [info] {EDGE} connection_edge_process_inbuf(): data from edge while in 'waiting for circuit' state. Leaving it on buffer.
Dec 21 11:17:25.000 [debug] {DIR} connection_dir_finished_flushing(): client finished sending command.
Dec 21 11:17:25.000 [debug] {NET} conn_read_callback(): socket 4 wants to read.
Dec 21 11:17:25.000 [info] {CONTROL} control_event_bootstrap_problem(): Problem bootstrapping. Stuck at 5%: Connecting to directory server. (Connection refused; CONNECTREFUSED; count 1; recommendation ignore)
Dec 21 11:17:25.000 [debug] {CHANNEL} channel_close_for_error(): Closing channel 0xb797c2e8 due to lower-layer error
Dec 21 11:17:25.000 [debug] {CHANNEL} channel_change_state(): Changing state of channel 0xb797c2e8 (global ID 0) from "opening" to "closing"
Dec 21 11:17:25.000 [debug] {CHANNEL} channel_remove_from_digest_map(): Removed channel 0xb797c2e8 (global ID 0) from identity map in state closing (4) with digest 7BE683E65D48141321C5ED92F075C55364AC7123
Dec 21 11:17:25.000 [debug] {CHANNEL} connection_mark_for_close_internal_(): Calling connection_mark_for_close_internal_() on an OR conn at src/or/connection.c:2828
Dec 21 11:17:25.000 [debug] {NET} conn_close_if_marked(): Cleaning up connection (fd -1).
Dec 21 11:17:25.000 [debug] {CIRC} circuit_n_chan_done(): chan to NULL/193.23.244.244:443, status=0
Dec 21 11:17:25.000 [info] {CIRC} circuit_n_chan_done(): Channel failed; closing circ.
Dec 21 11:17:25.000 [info] {OR} circuit_build_failed(): Our circuit died before the first hop with no connection
Dec 21 11:17:25.000 [info] {APP} connection_ap_fail_onehop(): Closing one-hop stream to '$7BE683E65D48141321C5ED92F075C55364AC7123/193.23.244.244' because the OR conn just failed.
Dec 21 11:17:25.000 [debug] {CIRC} circuit_increment_failure_count(): n_circuit_failures now 1.
Dec 21 11:17:25.000 [debug] {CHANNEL} channel_change_state(): Changing state of channel 0xb797c2e8 (global ID 0) from "closing" to "channel error"
Dec 21 11:17:25.000 [info] {HANDSHAKE} connection_or_note_state_when_broken(): Connection died in state 'connect()ing with SSL state (No SSL object)'
Dec 21 11:17:25.000 [debug] {NET} connection_remove(): removing socket -1 (type OR), n_conns now 5
Dec 21 11:17:25.000 [debug] {NET} conn_close_if_marked(): Cleaning up connection (fd -1).
Dec 21 11:17:25.000 [debug] {NET} connection_remove(): removing socket -1 (type Socks), n_conns now 4
Dec 21 11:17:25.000 [info] {GENERAL} connection_free_(): Freeing linked Socks connection [waiting for circuit] with 152 bytes on inbuf, 0 on outbuf.
Dec 21 11:17:25.000 [debug] {NET} conn_read_callback(): socket -1 wants to read.
Dec 21 11:17:25.000 [info] {HTTP} connection_dir_client_reached_eof(): 'fetch' response not all here, but we're at eof. Closing.
Dec 21 11:17:25.000 [debug] {NET} conn_close_if_marked(): Cleaning up connection (fd -1).
=============================
Repeat ad nauseam! The above log seems to stem from the fact that dannenberg ($7BE683E65D48141321C5ED92F075C55364AC7123/193.23.244.244) no longer accepts connections on ports 443 or 80 (is it down?), grinding my tor bootup to a screeching halt - something I tried to offset by explicitly setting 3 "DirServer" options, but to no avail.
Secondly, why does tor insist on downloading its descriptors/data from that directory without trying another one - is dannenberg the only one? I seriously doubt it!
**Trac**:
**Username**: mr-4
**Tor**: unspecified
**Owner**: Nick Mathewson

**Failed to find node for hop 0 of our path. Discarding this circuit.**
https://gitlab.torproject.org/legacy/trac/-/issues/9972 (updated 2020-06-13)

When I introduce EntryNodes restrictions in my torrc file (also having StrictNodes 1) and then start tor, I get the following rather bizarre sequence going:
```
[notice] {DIR} We now have enough directory information to build circuits.
[notice] {CONTROL} Bootstrapped 80%: Connecting to the Tor network.
[warn] {CIRC} Failed to find node for hop 0 of our path. Discarding this circuit.
[...ad nauseam...]
```
If, at this point, I shut down tor and then start it again, without changing anything at all, the bootstrap completes (100% done) and I have no further problems.
My EntryNodes statement isn't very restrictive (something like {DE},{SE},{AT},{EU}), but even if it were, I don't think it should prevent tor from bootstrapping properly.
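For reference, the configuration in question amounts to something like this (country codes as quoted above, StrictNodes per the report):

```
EntryNodes {DE},{SE},{AT},{EU}
StrictNodes 1
```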
**Trac**:
**Username**: mr-4
**Tor**: unspecified
**Owner**: Nick Mathewson

**pathbias_scale_close_rates bug**
https://gitlab.torproject.org/legacy/trac/-/issues/8859 (updated 2020-06-13)

The full message, which has so far appeared only once in the logs (after about 3 days of operation), is this:
```
[notice] {BUG} pathbias_scale_close_rates(): Bug: Scaling has mangled pathbias counts to 151.750000/150.875000 (5/1 open) for guard XXX ($XXX)
```
The above message was followed closely by another group of messages, described in bug #8858 (they started appearing the second after I got the above message).
**Trac**:
**Username**: mr-4
**Tor**: 0.2.4.x-final

**pathbias_count_build_success bug**
https://gitlab.torproject.org/legacy/trac/-/issues/8858 (updated 2020-06-13)

After continuous tor (client) operation for about 3 days, I started getting these messages for one particular bridge:
```
[notice] {BUG} pathbias_count_build_success(): Bug: Unexpectedly high successes counts (152.750000/151.875000) for guard XXX ($XXX)
[notice] {BUG} pathbias_count_build_success(): Bug: Unexpectedly high successes counts (153.750000/152.875000) for guard XXX ($XXX)
[notice] {BUG} pathbias_count_build_success(): Bug: Unexpectedly high successes counts (154.750000/153.875000) for guard XXX ($XXX)
[notice] {BUG} pathbias_count_build_success(): Bug: Unexpectedly high successes counts (155.750000/154.875000) for guard XXX ($XXX)
[notice] {BUG} pathbias_count_build_success(): Bug: Unexpectedly high successes counts (156.750000/155.875000) for guard XXX ($XXX)
[notice] {BUG} pathbias_count_build_success(): Bug: Unexpectedly high successes counts (157.750000/156.875000) for guard XXX ($XXX)
[notice] {BUG} pathbias_count_build_success(): Bug: Unexpectedly high successes counts (158.750000/157.875000) for guard XXX ($XXX)
[notice] {BUG} pathbias_count_build_success(): Bug: Unexpectedly high successes counts (159.750000/158.875000) for guard XXX ($XXX)
[notice] {BUG} pathbias_count_build_success(): Bug: Unexpectedly high successes counts (160.750000/159.875000) for guard XXX ($XXX)
[notice] {BUG} pathbias_count_build_success(): Bug: Unexpectedly high successes counts (161.750000/160.875000) for guard XXX ($XXX)
[notice] {BUG} pathbias_count_build_success(): Bug: Unexpectedly high successes counts (162.750000/161.875000) for guard XXX ($XXX)
[notice] {BUG} pathbias_count_build_success(): Bug: Unexpectedly high successes counts (163.750000/162.875000) for guard XXX ($XXX)
[notice] {BUG} pathbias_count_build_success(): Bug: Unexpectedly high successes counts (164.750000/163.875000) for guard XXX ($XXX)
[notice] {BUG} pathbias_count_build_success(): Bug: Unexpectedly high successes counts (165.750000/164.875000) for guard XXX ($XXX)
[notice] {BUG} pathbias_count_build_success(): Bug: Unexpectedly high successes counts (166.750000/165.875000) for guard XXX ($XXX)
[notice] {BUG} pathbias_count_build_success(): Bug: Unexpectedly high successes counts (167.750000/166.875000) for guard XXX ($XXX)
[notice] {BUG} pathbias_count_build_success(): Bug: Unexpectedly high successes counts (168.750000/167.875000) for guard XXX ($XXX)
[notice] {BUG} pathbias_count_build_success(): Bug: Unexpectedly high successes counts (169.750000/168.875000) for guard XXX ($XXX)
[notice] {BUG} pathbias_count_build_success(): Bug: Unexpectedly high successes counts (170.750000/169.875000) for guard XXX ($XXX)
[notice] {BUG} pathbias_count_build_success(): Bug: Unexpectedly high successes counts (171.750000/170.875000) for guard XXX ($XXX)
[notice] {BUG} pathbias_count_build_success(): Bug: Unexpectedly high successes counts (172.750000/171.875000) for guard XXX ($XXX)
[notice] {BUG} pathbias_count_build_success(): Bug: Unexpectedly high successes counts (173.750000/172.875000) for guard XXX ($XXX)
[notice] {BUG} pathbias_count_build_success(): Bug: Unexpectedly high successes counts (174.750000/173.875000) for guard XXX ($XXX)
[notice] {BUG} pathbias_count_build_success(): Bug: Unexpectedly high successes counts (175.750000/174.875000) for guard XXX ($XXX)
[notice] {BUG} pathbias_count_build_success(): Bug: Unexpectedly high successes counts (176.750000/175.875000) for guard XXX ($XXX)
[notice] {BUG} pathbias_count_build_success(): Bug: Unexpectedly high successes counts (177.750000/176.875000) for guard XXX ($XXX)
[notice] {BUG} pathbias_count_build_success(): Bug: Unexpectedly high successes counts (178.750000/177.875000) for guard XXX ($XXX)
[notice] {BUG} pathbias_count_build_success(): Bug: Unexpectedly high successes counts (179.750000/178.875000) for guard XXX ($XXX)
[notice] {BUG} pathbias_count_build_success(): Bug: Unexpectedly high successes counts (180.750000/179.875000) for guard XXX ($XXX)
[notice] {BUG} pathbias_count_build_success(): Bug: Unexpectedly high successes counts (181.750000/180.875000) for guard XXX ($XXX)
[notice] {BUG} pathbias_count_build_success(): Bug: Unexpectedly high successes counts (182.750000/181.875000) for guard XXX ($XXX)
[notice] {BUG} pathbias_count_build_success(): Bug: Unexpectedly high successes counts (183.750000/182.875000) for guard XXX ($XXX)
[notice] {BUG} pathbias_count_build_success(): Bug: Unexpectedly high successes counts (184.750000/183.875000) for guard XXX ($XXX)
[notice] {BUG} pathbias_count_build_success(): Bug: Unexpectedly high successes counts (185.750000/184.875000) for guard XXX ($XXX)
[notice] {BUG} pathbias_count_build_success(): Bug: Unexpectedly high successes counts (186.750000/185.875000) for guard XXX ($XXX)
[notice] {BUG} pathbias_count_build_success(): Bug: Unexpectedly high successes counts (187.750000/186.875000) for guard XXX ($XXX)
[notice] {BUG} pathbias_count_build_success(): Bug: Unexpectedly high successes counts (189.750000/188.875000) for guard XXX ($XXX)
[notice] {BUG} pathbias_count_build_success(): Bug: Unexpectedly high successes counts (190.750000/189.875000) for guard XXX ($XXX)
[notice] {BUG} pathbias_count_build_success(): Bug: Unexpectedly high successes counts (191.750000/190.875000) for guard XXX ($XXX)
[notice] {BUG} pathbias_count_build_success(): Bug: Unexpectedly high successes counts (192.750000/191.875000) for guard XXX ($XXX)
[notice] {BUG} pathbias_count_build_success(): Bug: Unexpectedly high successes counts (193.750000/192.875000) for guard XXX ($XXX)
[notice] {BUG} pathbias_count_build_success(): Bug: Unexpectedly high successes counts (194.750000/193.875000) for guard XXX ($XXX)
[notice] {BUG} pathbias_count_build_success(): Bug: Unexpectedly high successes counts (195.750000/194.875000) for guard XXX ($XXX)
```
The time interval between the first and last message is about 10 hours. Should I be worried about that bridge (it is my bridge, even though the message says "guard")?
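One plausible way such counts can drift past each other (a sketch of the general failure mode only, not necessarily what Tor's pathbias code actually does): if both counters are periodically scaled down while a circuit that has already been counted as an attempt is still in flight, its success later lands at full weight, so the success count can creep above the attempt count.

```python
def scale_counts(counts, factor=0.5):
    # Periodic scaling keeps the counters bounded; both are meant to be
    # multiplied by the same factor, preserving their ratio.
    counts["attempts"] *= factor
    counts["successes"] *= factor

counts = {"attempts": 0.0, "successes": 0.0}
for _ in range(5):
    counts["attempts"] += 1.0   # circuit launched: attempt counted now
    scale_counts(counts)        # scaling fires while the circuit is open
    counts["successes"] += 1.0  # circuit closes cleanly, at full weight

# successes now exceeds attempts: the inconsistency the notices report
```

Each such round shifts the balance a little further, which would also explain the counts growing steadily over the 10-hour window.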
**Trac**:
**Username**: mr-4
**Tor**: 0.2.4.x-final

**circuit_package_relay_cell(): Bug: outgoing relay cell has n_chan==NULL. Dropping.**
https://gitlab.torproject.org/legacy/trac/-/issues/8185 (updated 2020-06-13)

I don't know whether the above message is related to bugs #8093 or #6252 (if so, apologies!), but I decided to report this anyway. The full message is:

```
[warn] {BUG} circuit_package_relay_cell(): Bug: outgoing relay cell has n_chan==NULL. Dropping.
```
There are no particular circumstances under which I've noticed this message appear (in fact, I noticed it only when checking the tor logs a couple of hours later); tor had been running for more than a day at that point.
**Trac**:
**Username**: mr-4
**Tor**: 0.2.9.x-final
**Owner**: Nick Mathewson

**Implement IncludeFile (or similar) directive in torrc**
https://gitlab.torproject.org/legacy/trac/-/issues/8108 (updated 2020-06-13)

Use case: I have multiple torrcX configurations (X=1,2,...); instead of duplicating quite a few torrc options in all those files, it would be better if I could use an IncludeFile (or similar) directive to ask tor to parse the extra file and include the options listed there at the point where the directive appears. For example:
torrc1 (present day):

```
SocksPolicy accept 127.0.0.1:*
SocksPolicy accept 127.0.0.28:*
[...]
SocksPort 9050
SocksListenAddress 127.0.0.1:9050
SocksPolicy accept 10.0.1.0/24:*
SocksPolicy accept 10.0.2.0/24:*
SocksPolicy accept 10.0.3.0/24:*
SocksPolicy accept 10.0.4.0/24:*
SocksPolicy reject *:*
ReachableAddresses *:9001, *:9090-9091
ExcludeNodes <large_list>
[...]
```
torrc2 (present day):

```
[...]
SocksPort 9050
SocksListenAddress 127.0.0.1:9050
SocksPolicy accept 10.0.1.0/24:*
SocksPolicy accept 10.0.2.0/24:*
SocksPolicy accept 10.0.3.0/24:*
SocksPolicy accept 10.0.4.0/24:*
SocksPolicy reject *:*
ReachableAddresses *:9001, *:9090-9091
ExcludeNodes <large_list>
[...]
```
Note that the majority of the Socks* options, as well as ReachableAddresses and ExcludeNodes, are duplicated in both files. If I had an IncludeFile option, I could instead do something like:
torrc.inc:

```
SocksPort 9050
SocksListenAddress 127.0.0.1:9050
SocksPolicy accept 10.0.1.0/24:*
SocksPolicy accept 10.0.2.0/24:*
SocksPolicy accept 10.0.3.0/24:*
SocksPolicy accept 10.0.4.0/24:*
SocksPolicy reject *:*
ReachableAddresses *:9001, *:9090-9091
ExcludeNodes <large_list>
```
torrc1:

```
SocksPolicy accept 127.0.0.1:*
SocksPolicy accept 127.0.0.28:*
[...]
IncludeFile torrc.inc
[...]
```
torrc2:

```
[...]
IncludeFile torrc.inc
[...]
```
In this case, if I need to change one of the "common" options, all I have to do is change torrc.inc, rather than trawl through all my torrcX files and change each and every option in them.
That is particularly painful if I have to maintain a list of my ExcludeNodes for example.
Another situation in which an IncludeFile directive would be very handy is when some or all of the options listed in torrc.inc are auto-generated (building the ExcludeNodes statement by parsing an updated geoip database, for example). That way I wouldn't need to go through all my torrcX files and update those options by hand; I would just run the appropriate (shell) script to rebuild torrc.inc, and that would be that.
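That generation step can be sketched in a few lines (a hypothetical helper; the filename, the fixed options, and the country list are illustrative, and a real script might parse a geoip database rather than take a hard-coded list):

```python
def build_torrc_inc(exclude_countries, path="torrc.inc"):
    # Rebuild the shared include file. Only the ExcludeNodes line is
    # generated here; the remaining common options are fixed.
    exclude = "ExcludeNodes " + ",".join(
        "{%s}" % c for c in exclude_countries)
    lines = [
        "SocksPort 9050",
        "ReachableAddresses *:9001, *:9090-9091",
        exclude,
    ]
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")
    return exclude

print(build_torrc_inc(["us", "gb"]))  # ExcludeNodes {us},{gb}
```

Run from cron (or after each geoip update), this keeps every torrcX that includes torrc.inc current without touching them individually.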
**Trac**:
**Username**: mr-4
**Tor**: unspecified

**add "{??}" (or other suitable alternative) for specifying nodes**
https://gitlab.torproject.org/legacy/trac/-/issues/8083 (updated 2020-06-13)

Currently, if the origin of the IP address of a particular node is unknown, either due to a messed-up GeoIP database or due to tor having to rely on an outdated copy, there is no way to specify these nodes and include them in the various *Nodes torrc options.
By adding something like the above option (or other similar syntax) this can be achieved.
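What the request amounts to is something like the following (later Tor versions do document a "{??}" country code for relays whose country cannot be determined; within this ticket, treat the syntax as illustrative):

```
# Exclude relays whose IP cannot be mapped to a country by the GeoIP db.
ExcludeNodes {??}
```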
**Trac**:
**Username**: mr-4
**Tor**: unspecified

**Reject the reference-implementation of curve25519 from donna in a more comprehensible way.**
https://gitlab.torproject.org/legacy/trac/-/issues/8014 (updated 2020-06-13)

During ./configure I get the following result:
```
[...]
checking whether we can use curve25519-donna-c64... no
checking whether we can use curve25519 from nacl... no
```
I do have nacl(-devel) installed on my machine. Closer inspection of config.log tells me that conftest.c has "#include <crypto_scalarmult_curve25519.h>". That file is in /usr/include/nacl, so I had to add "-I/usr/include/nacl" to get past this particular error.
That didn't work though, as I now get "conftest.c:58:4: error: #error Hey, this is the reference implementation!". The conftest.c file itself has this:
```
#include <crypto_scalarmult_curve25519.h>
#ifdef crypto_scalarmult_curve25519_ref_BYTES
#error Hey, this is the reference implementation!
#endif
```
"crypto_scalarmult_curve25519_ref_BYTES" is indeed defined in /usr/include/nacl/crypto_scalarmult_curve25519.h, so how am I supposed to satisfy this test then?
**Trac**:
**Username**: mr-4
**Tor**: 0.2.4.x-final

**meaningless error message displayed by tor at start up**
https://gitlab.torproject.org/legacy/trac/-/issues/7890 (updated 2020-06-13)

During tor start-up, I get the following gem: "No specified non-excluded exit routers seem to be running: can't choose an exit."
The first one who could decipher that for me will get a pat on the back.
I figured out the cause, eventually - I have restricted tor to use exit node (via ExitNodes), but that node was unavailable for some reason.
**Trac**:
**Username**: mr-4
**Tor**: 0.3.2.x-final
Nick Mathewson

https://gitlab.torproject.org/legacy/trac/-/issues/7799
Tor seems to build+timeout circuits for hours when offline/idle
2020-06-13T14:26:01Z Trac

Since I've upgraded to .7-alpha, my log file is being filled with the above message, which is generated a couple of times a *second*!

I have, so far, accumulated over 14,000 lines of it, which seem to have been generated while tor was left idle (unused) overnight. I had to raise my log level (to "warn") in order to get rid of it temporarily, but that isn't good enough, since I then cannot see my other (very useful) messages at the "notice" level without having my log fill up rapidly.
Here's the logline from the original bug title:
```
No circuits are opened. Relaxed timeout for a circuit with channel state open to 60000ms. However, it appears the circuit has timed out anyway. 0 guards are live.
```
**Trac**:
**Username**: mr-4
**Tor**: 0.2.4.x-final
Mike Perry

https://gitlab.torproject.org/legacy/trac/-/issues/7797
tor DNS resolver unable to handle/return SRV type DNS records
2020-06-13T16:09:00Z Trac

Tor's internal DNS resolver is incapable of looking up SRV (service type) DNS records.
SRV-type DNS records have the following format: _service._protocol.name (like "_sip._udp.ekiga.net" for example). The "output" I am getting from tor is as follows (127.0.0.1 refers to tor's internal DNS server):
```
# dig @127.0.0.1 _sip._udp.ekiga.net SRV

; <<>> DiG 9.7.3-P1-RedHat-9.7.3-2.P1.fc18 <<>> @127.0.0.1 _sip._udp.ekiga.net SRV
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOTIMP, id: 29790
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;_sip._udp.ekiga.net.    IN    SRV

;; Query time: 2 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Wed Dec 26 00:45:05 2012
;; MSG SIZE  rcvd: 37
```
Note that the "status" above is NOTIMP (Not Implemented). The correct output, using a "proper" DNS server (marked as xxx.xxx.xxx.xxx below), is as follows:
```
# dig @xxx.xxx.xxx.xxx _sip._udp.ekiga.net SRV

; <<>> DiG 9.7.3-P1-RedHat-9.7.3-2.P1.fc18 <<>> @xxx.xxx.xxx.xxx _sip._udp.ekiga.net SRV
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 65507
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; QUESTION SECTION:
;_sip._udp.ekiga.net.    IN    SRV

;; ANSWER SECTION:
_sip._udp.ekiga.net.    86400    IN    SRV    0 0 5060 ekiga.net.

;; ADDITIONAL SECTION:
ekiga.net.    85881    IN    A    86.64.162.35

;; Query time: 24 msec
;; SERVER: xxx.xxx.xxx.xxx#53(xxx.xxx.xxx.xxx)
;; WHEN: Wed Dec 26 00:48:01 2012
;; MSG SIZE  rcvd: 82
```
As is evident, the "status" returned is NOERROR (No Error).
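For reference, the SRV answer line in the dig output above can be decoded programmatically. A minimal sketch, assuming dig's default answer-section layout (`parse_dig_srv_line` and `SrvRecord` are illustrative helpers, not part of Tor or dig):

```python
# Parse one SRV answer line as printed by dig, e.g.
# "_sip._udp.ekiga.net. 86400 IN SRV 0 0 5060 ekiga.net."
from typing import NamedTuple

class SrvRecord(NamedTuple):
    priority: int
    weight: int
    port: int
    target: str

def parse_dig_srv_line(line: str) -> SrvRecord:
    # dig prints: name, TTL, class, type, then the SRV rdata fields
    name, ttl, klass, rtype, prio, weight, port, target = line.split()
    if rtype != "SRV":
        raise ValueError("not an SRV record: " + rtype)
    return SrvRecord(int(prio), int(weight), int(port), target.rstrip("."))

print(parse_dig_srv_line("_sip._udp.ekiga.net. 86400 IN SRV 0 0 5060 ekiga.net."))
```

The priority/weight/port/target layout of the rdata follows RFC 2782, which defines SRV records.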
**Trac**:
**Username**: mr-4
**Tor**: unspecified

https://gitlab.torproject.org/legacy/trac/-/issues/7646
fix/enhance getinfo ns/id/* commands
2020-06-13T14:25:23Z Trac

1. Fix getinfo ns/id/*
See the last few comments on #7059 with regards to the ns/id/* getinfo command.
Currently, it only returns information about tor nodes which have microdescriptors and ignores those that use "normal" descriptors, in other words, it behaves exactly like the md/id/node command.
I can understand the rationale behind having desc/id/node show information only about tor nodes with "normal" descriptors, and I can understand the reasons for md/id/node doing the same for tor nodes with microdescriptors. However, ns/id/node should, in my view, be implemented so that it returns information about all tor nodes, regardless of whether or not they use microdescriptors, particularly because, by the definition of this command in control-specs.txt, it is supposed to return V2 directory info on all nodes without making such distinctions.
2. Enhance ns/* - just an idea (not a bug!): currently, there is no way I could get information on a tor node by specifying its ip address.
I would love to be able to get that information by executing something like "getinfo ns/ip/<ip_address>" (note the "ip" in the middle!). This should return information on all tor nodes with the specified ip address (and/or netmask), regardless of whether they use microdescriptors or not.
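The matching logic the proposed "getinfo ns/ip/<ip_address>" command would need can be sketched in a few lines. This is a hypothetical illustration, not Tor code: `match_nodes_by_ip` and the `nodes` list are assumptions, standing in for the consensus data Tor would actually consult.

```python
# Sketch of the proposed ns/ip/<ip_address> lookup: given (fingerprint, ip)
# pairs from the consensus, return every node whose address matches a bare
# IP or an IP/netmask specification.
import ipaddress

def match_nodes_by_ip(nodes, spec):
    """Return fingerprints of nodes whose address falls within `spec`,
    where `spec` is '1.2.3.4' or a network such as '1.2.3.0/24'."""
    net = ipaddress.ip_network(spec, strict=False)  # bare IP becomes a /32
    return [fp for fp, ip in nodes if ipaddress.ip_address(ip) in net]

nodes = [("$AAAA", "86.64.162.35"), ("$BBBB", "10.0.0.1")]
print(match_nodes_by_ip(nodes, "86.64.162.0/24"))  # → ['$AAAA']
```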
**Trac**:
**Username**: mr-4
**Tor**: unspecified

https://gitlab.torproject.org/legacy/trac/-/issues/7091
Assertion 0 == g->n_members failed error (from libevent)
2020-06-13T14:23:33Z Trac

The full error message I get is "Error from libevent: bufferevent_ratelim.c:724: Assertion 0 == g->n_members failed in bufferevent_rate_limit_group_free".
This happened after the following sequence of events:
1. Tried to connect to google.com using tor to check my mail and got "Tried for 120 seconds to get a connection to [scrubbed]:993. Giving up. (waiting for circuit)" message appearing twice.
2. After a while, I shut down tor.
3. The above message appeared.
Please let me know if you need any more info.
**Trac**:
**Username**: mr-4
**Tor**: unspecified

https://gitlab.torproject.org/legacy/trac/-/issues/7059
TorControl: Unrecognized (node) key errors
2020-06-13T14:25:23Z Trac

OK, I am not 100% sure whether this is a new bug, though I thought to report it anyway, just in case.
Some background: alongside tor, I sometimes use an event handler, which reports all tor events I am interested in (mainly to create and close streams and also to display the full path of those streams, together with their IP addresses and country codes).
All this is java based (I will try to attach the java code - 3 files in total - at the end of this report).
Up until I upgraded to 0.2.4.3, everything was OK, but since the upgrade I started getting a lot of errors like these:
```
Error from Tor process: dash.four.tor.torctl.TorControlError: Error reply: Unrecognized key "desc/id/$97B867190D2A0387F3259EA65664A90884A8BDC4" [null]
Error from Tor process: dash.four.tor.torctl.TorControlError: Error reply: Unrecognized key "desc/id/$A1130635A0CDA6F60C276FBF6994EFBD4ECADAB1" [null]
Error from Tor process: dash.four.tor.torctl.TorControlError: Error reply: Unrecognized key "desc/id/$8894CAA82AA1DC8B72B679ACB54FFED561737B68" [null]
Error from Tor process: dash.four.tor.torctl.TorControlError: Error reply: Unrecognized key "desc/id/$F4EAE475A7117DD53EF631F031F000966972A2F5" [null]
Error from Tor process: dash.four.tor.torctl.TorControlError: Error reply: Unrecognized key "desc/id/$6330CCF8FEED2EF9B12FCF6688E2577C65522BA4" [null]
Exception in thread "Thread-0" dash.four.tor.torctl.TorControlError: Error reply: Unrecognized key "desc/id/$B749E078AD23AFA33D5BD2FF1AD15EE6409E3D04"
```
These are caused by the DebuggingEventHandler, and con.getInfo("desc/id/" + node) in particular.
Am I missing something, or has the API changed in 0.2.4.3? The same code used to work, with very small exceptions, in 0.2.3.x.
Let me know if you need any more info from me.
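One thing that may be worth checking on the client side is the "$" prefix in the keys above: it belongs to circuit-path notation rather than to the hex fingerprint itself, so normalising the identifier before building the GETINFO key is a plausible workaround to try. A sketch under that assumption (`normalize_fingerprint` is an illustrative helper, not part of the TorCtl API, and this is not a confirmed fix):

```python
# Hypothetical normalisation step before issuing
# "GETINFO desc/id/<fingerprint>" over the control port: strip the
# circuit-path "$" prefix and validate that a 40-char hex fingerprint
# remains.
def normalize_fingerprint(node: str) -> str:
    fp = node.lstrip("$").upper()
    if len(fp) != 40 or any(c not in "0123456789ABCDEF" for c in fp):
        raise ValueError("not a 40-char hex fingerprint: " + node)
    return fp

print(normalize_fingerprint("$97B867190D2A0387F3259EA65664A90884A8BDC4"))
```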
**Trac**:
**Username**: mr-4
**Tor**: unspecified
Aaron Gibson

https://gitlab.torproject.org/legacy/trac/-/issues/6690
compare_tor_addr_to_addr_policy assertion error
2020-06-13T14:22:03Z Trac

Occasionally, usually after tor fetches its directory microdescriptors or when I "wake" tor up after a long period of inactivity, I get the following error:
```
compare_tor_addr_to_addr_policy(): Bug: policies.c:716: compare_tor_addr_to_addr_policy: Assertion port != 0 failed; aborting.
```
After that, tor exits. This didn't happen before (and I have been running this version of tor for a long time), but I've now hit this error twice in a week, so I thought to report it.
**Trac**:
**Username**: mr-4
**Tor**: 0.2.2.x-final

https://gitlab.torproject.org/legacy/trac/-/issues/5840
tor disregards MapAddress .exit mapping
2020-06-13T14:19:33Z Trac

In my torrc file I have the following statements:
```
AllowDotExit 1
[...]
MapAddress imap.google.com imap.google.com.<exit_node_hash>.exit
```
This is to prevent the stupid google blocks when my account appears to be accessed "from different location".
This, however, seems to be completely disregarded by tor as I have at least 4 different IP addresses - all from different countries - in my google imap logs (visible when I log in using the web interface and click on "details" on my "Last account activity" status at the bottom).
Before you ask - these were my sessions and not somebody else trying to hack my account as I know the times where I accessed it.
This is *extremely* annoying: when my MapAddress mapping is disregarded by tor, google blocks my account because it treats access "from a different location" as suspicious, so every time that happens I have to manually log in via the web interface, reset the account, and change my password.
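The behaviour being requested amounts to a simple rewrite step applied before circuit selection: look the destination up in the mapping table and, if present, substitute the mapped address. A minimal sketch of that step (`apply_map_address` is an illustrative helper, not Tor's actual implementation, and the `<exit_node_hash>` placeholder is kept from the torrc above):

```python
# Sketch of the address-rewrite step that MapAddress is supposed to
# perform: destinations present in the table are replaced before tor
# chooses a circuit; everything else passes through unchanged.
def apply_map_address(addr: str, mappings: dict) -> str:
    return mappings.get(addr, addr)

mappings = {"imap.google.com": "imap.google.com.<exit_node_hash>.exit"}
print(apply_map_address("imap.google.com", mappings))
print(apply_map_address("example.com", mappings))  # unmapped: unchanged
```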
**Trac**:
**Username**: mr-4
**Tor**: 0.2.3.x-final

https://gitlab.torproject.org/legacy/trac/-/issues/4696
add OutboundBindInterface option to torrc
2020-06-13T14:15:56Z Trac

First, I am well aware that there is an OutboundBindAddress option in tor.
I am also aware that tor "automatically" chooses an IP address/interface to bind to (if OutboundBindAddress is not specified) based on the current routing table.
There are quite a few instances where the OutboundBindAddress option is not suitable, particularly where the IP address changes frequently (vpn as well as most dhcp-dependent interfaces).
Whether I use OutboundBindAddress or just leave tor to make a decision which address to bind to is not suitable (at least) in the following two cases:
1. When I temporarily lose the IP address which tor has been using, because the dhcp client renews its lease (and receives a new IP address), and that doesn't happen - for whatever reason - instantly.
This results in one of two possible - wrong - outcomes: a) in the absence of the OutboundBindAddress option, Tor decides that my IP address "has changed" and tries to bind to the default interface, which may not be the one I have used previously; or b) when OutboundBindAddress is specified, tor just sits there trying to use the "old" address specified, resulting in a stall.
2. When I temporarily lose my current IP address because the vpn connection becomes (temporarily) unstable and it takes a bit of time for my machine to renew its IP address (this may take from a minute up to 20+ minutes, depending on the status of the vpn server at the other end), the outcome is exactly the same as listed above - tor either tries to use the default interface (wrong!) or tries to bind to the IP address I specified with OutboundBindAddress (wrong again).
With the introduction of this new (OutboundBindInterface) option, tor can follow the IP address on the specified *interface* (regardless of what that might be) and the above - erroneous - outcome could be avoided.
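The core of what such an option would have to do - re-resolve the interface's *current* address at bind time instead of caching one address - can be sketched. This is a Linux-only illustration using the SIOCGIFADDR ioctl; `get_interface_ip` is a hypothetical helper, not part of Tor:

```python
# Linux-only sketch: look up the current IPv4 address of a named network
# interface via the SIOCGIFADDR ioctl. An OutboundBindInterface option
# would perform a lookup like this each time it binds an outbound socket,
# so the address tracks the interface rather than being fixed in torrc.
import fcntl
import socket
import struct

SIOCGIFADDR = 0x8915  # Linux ioctl request: get interface address

def get_interface_ip(ifname: str) -> str:
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        # Pack the interface name into a struct ifreq-sized buffer
        # (IFNAMSIZ limits the name to 15 bytes plus NUL).
        packed = fcntl.ioctl(
            s.fileno(),
            SIOCGIFADDR,
            struct.pack("256s", ifname.encode()[:15]),
        )
    # The IPv4 address sits at offset 20 of the returned ifreq.
    return socket.inet_ntoa(packed[20:24])

print(get_interface_ip("lo"))  # loopback; typically 127.0.0.1
```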
**Trac**:
**Username**: mr-4
**Tor**: unspecified