Tor issues
https://gitlab.torproject.org/tpo/core/tor/-/issues
2023-05-02T09:22:32Z

https://gitlab.torproject.org/tpo/core/tor/-/issues/40714
add option for declaring the result of querying the socks proxy port via control port (Andreas, 2023-05-02T09:22:32Z)

### Summary
I'm using tor in a container within a docker-compose application and thus have the Socks port bound to all interfaces:
```
SocksPort 0.0.0.0:9050
SocksPort [::]:9050
```
I'm also using a control port (config not shown here) to have hidden services managed by applications, more specifically bitcoind.
The application queries the socks proxy port via the control port and arbitrarily gets the last entry from the above list as the result. This is already weird: why the last entry and not the first? To make matters worse, both entries are entirely wrong from the perspective of the application container, because they resolve to local addresses. If the application then tries to connect to the socks proxy, it will fail.
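For reference, this is roughly the control-port exchange involved: with the wildcard config above, tor reports the addresses it is bound to (the reply shown here is illustrative, not captured from the setup in this report):

```
GETINFO net/listeners/socks
250-net/listeners/socks="0.0.0.0:9050" "[::]:9050"
250 OK
```

From inside another container, neither of those addresses is usable as a connect target, which is the core of the problem.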
To work around this problem, I replaced my declarations with this one
```
SocksPort tor:9050
```
"tor" being the hostname of my tor container. This works, but now the tor proxy is only listening on its external interface, and only on one of the protocol families IPv4 or IPv6 (again, not clear which). This is undesired, mainly because I want the socks proxy listening on both IPv4 and IPv6, and also on all interfaces. Note that I can't use numerical IP addresses here, because Docker changes them around for each fresh container.
**Feature request:**
So my feature request would be a new (optional) configuration entry to declare the result of the control-port query for the socks proxy port explicitly, independent of the binding declaration. And maybe a warning message if wildcard SocksPorts are used and the new configuration option is not present. Something like
```
SocksPort 0.0.0.0:9050
SocksPort [::]:9050
SocksPortQueryResult tor:9050
```
should do. Feel free to come up with a better option name.

https://gitlab.torproject.org/tpo/core/tor/-/issues/40711
Tor with new ipv6 only works after a restart (Gus, 2022-12-16T18:26:07Z)

A relay operator reported that every day their relay changes the IPv4 and IPv6 addresses. But while Tor with IPv4 works automatically, for the IPv6 connection they need to restart the service.
```
Auto-discovered IPv6 address [1111:::ffff]:9001 has not been found reachable. However, IPv4 address is reachable. Publishing server descriptor without IPv6 address. [2 similar message(s) suppressed in last 2400 seconds])
```
#### Tor version
0.4.7.10.
#### torrc
```
SocksPort 0
Log notice file /var/log/tor/notices.log
DataDirectory /var/lib/tor
ControlPort xxxxx
HashedControlPassword xxxxx
ORPort 9001
Nickname justme
RelayBandwidthRate 250 KB # Throttle traffic to 100KB/s (800Kbps)
RelayBandwidthBurst 750 KB # But allow bursts up to 200KB/s (1600Kbps)
ContactInfo me@there.net
DirPort 9005
ExitRelay 0
ExitPolicy reject *:* # no exits allowed
```
#### Forum link
https://forum.torproject.net/t/ipv6-with-dynamic-prefix-behind-nat/5296

https://gitlab.torproject.org/tpo/core/tor/-/issues/40702
Single Onion Service Rends become 7 hop after retry (Mike Perry, 2023-02-09T16:22:00Z)

In `retry_service_rendezvous_point()`, if a rend connect fails for a non-anonymous rend, we promote it to a 7-hop slow rend for some reason.
This will impact non-anonymous onions who want performance, especially during the DoS.
David notes that this decision to fall back to full anonymous mode in the event of timeout or failure was explicitly written just in case a non-anonymous onion service was also behind a restrictive firewall, and that firewall was the thing that happened to cause a timeout. There is also a comment that explains this, believe it or not. Back then, decision making in C-Tor was a bit more...special.
I bet if we get funders who actually care about single onion performance, they would prefer that their single onions not randomly double in latency on a timeout or failure, just to support the case where some single onion out there might be behind a firewall that they don't know about. Such a funder might suggest that we provide some other option for people behind firewalls to use, instead of this madness.
But I look forward to more research.

https://gitlab.torproject.org/tpo/core/tor/-/issues/40690
Scoped Bandwidth Accounting (Jeremy Saklad <jeremy@saklad5.com>, 2022-11-14T17:41:18Z)

### Summary
Last discussed in https://gitlab.torproject.org/tpo/core/tor/-/issues/8276, I feel it would be quite useful to allow AccountingMax to both ignore and avoid impacting onion services. Running a relay and hidden onion service on the same node is obviously a bad idea, but that's not who this would be for.
The way I see it, Tor's functionality can be cleanly separated into two categories: anonymous and non-anonymous. It is ill-advised to mix the two. Clients and standard/hidden onion services are anonymous, while relays, directories, and single onion services are not.
To take load off exit relays, I run single onion services for pretty much every server I operate. If I have space in my traffic quota, I also configure those instances as relays. If I only have *some* space, like with a VPS, I use AccountingMax to shut off the relay and avoid overage charges. Unfortunately, this also shuts off the single onion services, which are the main reason I'm even running an instance there.
I don't want to choose between donating spare bandwidth and optimizing my servers for Tor clients, and I certainly don't want the added complexity and overhead of running multiple Tor instances. **Previous discussions have indicated that this would be difficult to implement due to how Tor is coded, but I think this should be considered for Arti's shiny new codebase.**
### What is the expected behavior?
There are two ways this could be configured:
1. Add more AccountingRule options, such as "relay-out", that only apply to relay traffic
2. Add a new option, "AccountingScope", with options such as "client", "onion-service", and most importantly "relay", which further constrain "AccountingRule"
I think \#2 would be cleanest.
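In torrc terms, option 2 might look something like this (`AccountingScope` is the hypothetical new option proposed in this ticket; the other options already exist):

```
AccountingStart day 00:00
AccountingMax 40 GBytes
AccountingRule sum
AccountingScope relay
```

With this, only relaying would hibernate when the limit is hit, while the single onion services on the same instance keep running.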
Once the limit is hit, Tor will hibernate only the functionality that is tracked under "AccountingScope". This obviously means that the instance would go over that budget, but anyone setting this should be fully aware of that. In theory you could add an additional option that controls whether everything is hibernated even with scoped accounting, but I can't conceive of a use case for that.

https://gitlab.torproject.org/tpo/core/tor/-/issues/40689
dirauth: Add new Faravahar 2.0 back to code (David Goulet <dgoulet@torproject.org>, 2023-04-12T15:17:56Z)

In the next release we'll be removing the current Faravahar (https://gitlab.torproject.org/tpo/core/tor/-/issues/40688) due to its network location at Team Cymru. Sina, its operator, is in the process of moving it out and securing a new location.
At this point, we'll add it back into the code with fresh new keys and IP. This ticket tracks this effort.

https://gitlab.torproject.org/tpo/core/tor/-/issues/40682
Directory authorities might store wrong descriptor in relay list (Georg Koppen, 2023-05-11T17:27:28Z)

This morning we had a relay operator wondering why their healthy [guard relay](https://metrics.torproject.org/rs.html#details/2889D778367BFBE2D594E95524E3FB908B49AA02) with a consensus weight of 30000 suddenly dropped to a consensus weight of 20. In particular, as the sibling relay on the *same* machine was still behaving fine.
It turned out the relay, `nognu`, got just two measurements and barely made it into the consensus, so the consensus weight = 20 fallback kicked in. Now, why exactly did it not make it into, e.g., `moria1`'s vote then? For some reason `moria1` did not get the latest two descriptors directly but from different directory authorities:
```
published 2022-10-11 17:06:38
@downloaded-at 2022-10-11 17:50:44
@source "154.35.175.225"
published 2022-10-11 18:36:51
@downloaded-at 2022-10-11 18:50:03
@source "199.58.81.140"
```
That's not too bad. However, `moria1` thought none of those was the latest descriptor, rather it was the one it fetched from `dizum`:
```
published 2022-10-10 23:06:35
@downloaded-at 2022-10-11 18:50:05
@source "45.66.33.45"
```
It seems it arrived two seconds after `moria1` got the latest descriptor, suddenly making this old descriptor the latest. And given that it had already expired more than `ROUTER_MAX_AGE_TO_PUBLISH` ago, *that* descriptor got discarded, and now `moria1` (and, it seems, a bunch of other directory authorities) think that relay does not exist, i.e. they don't include it in their vote.
@arma did a bunch of debugging here. So, I'll take the liberty to just paste the IRC debug logs into this ticket, so I don't lose important details:
```
09:52 <+arma1> and then issue two is: there seems to be a bug where tor dir auths
store the wrong new descriptor in the relay list
09:53 <+arma1> they should take the new descriptor and compare published-by and
take the newest
09:53 <+arma1> though now that i think about it, i think there is some logic to
try to consense upon the most popular one
09:53 <+arma1> so if a relay publishes a new one every minute and there are
thousands of descriptors for it, the dir auths don't all vote
about a different one
09:57 <+arma1> yes, here is that logic: see dirserv_add_descriptor() in
feature/dirauth/process_descs.c,
09:57 <+arma1> /* Check whether this descriptor is semantically identical to
the last one
09:57 <+arma1> * from this server. (We do this here and not in
router_add_to_routerlist
09:57 <+arma1> * because we want to be able to accept the newest router
descriptor that
09:57 <+arma1> * another authority has, so we all converge on the same one.) */
09:59 <+arma1> we hit a race where in one voting period, we got two votes about
nognu each naming a different descriptor
10:02 <+arma1> Oct 11 14:50:03.773 [notice] longclaw posted a vote to me from
199.58.81.140.
10:02 <+arma1> Oct 11 14:50:07.099 [notice] dizum posted a vote to me from
45.66.33.45.
10:04 <+arma1> i wonder if the downloaded-at's are utc or local time
10:04 <+arma1> looks like they are utc
10:10 <+arma1> how did i receive the nognu descriptor from dizum at 18:50:03 if i
received dizum's vote at 14:50:07.099? that is weird.
10:10 <+arma1> erm, i mean 18:50:05. how did i receive it from dizum at 18:50:05
if i got the vote 2 seconds after that.
10:14 <+arma1> ok, i think i know where the bug happened,
10:14 <+arma1> in router_add_to_routerlist(),
10:14 <+arma1> if (old_router) {
10:14 <+arma1> if (!in_consensus && (router->cache_info.published_on <=
10:14 <+arma1> old_router->cache_info.published_on)) {
10:14 <+arma1> } else {
10:14 <+arma1> /* Same key, and either new, or listed in the consensus. */
10:14 <+arma1> log_debug(LD_DIR, "Replacing entry for router %s",
10:14 <+arma1> router_describe(router));
10:15 <+arma1> i bet that earlier nognu descriptor was the one listed in the
consensus at 18:00 utc on that day
10:15 <+arma1> so when i heard it from dizum, i fetched a copy, and put it as my
primary descriptor for nognu because that's what other people were
voting about at the time
10:20 <+arma1> https://gitlab.torproject.org/tpo/core/tor/-/issues/543 is the
original bug
10:20 -zwiebelbot:#tor-relays- tor:tpo/core/tor#543: 0.2.0.9-alpha servers don't
update enough dir info - https://bugs.torproject.org/tpo/core/tor/543
10:21 <+arma1> as added in commit acaa9a7f696
10:24 <+arma1> we do have code to rescue old descriptors if we see them listed in
the consensus,
10:24 <+arma1> log_info(LD_DIR, "%d router descriptors listed in consensus
are "
10:24 <+arma1> "currently in old_routers; making them current.",
10:24 <+arma1> smartlist_len(no_longer_old));
10:24 <+arma1> but we seem to have wrapped that code in if
(!authdir_mode_v3(options)
10:24 <+arma1> i.e. everybody does the rescuing except v3 dir auths
```
Tagging @nickm as I heard he might be our best bet here. :) (Assignee: Nick Mathewson)

https://gitlab.torproject.org/tpo/core/tor/-/issues/40679
systemd service file use incorrect network target (tortux, 2023-05-25T14:50:19Z)

### Summary
The service file uses the target network.target instead of network-online.target.
### Steps to reproduce:
1. Enable the tor service
2. restart the server
### What is the current bug behavior?
Tor doesn't start on the first try. It gets restarted up to 5 times until it runs.
### What is the expected behavior?
The service should start only after network-online.target is reached, because only then are all IP addresses of the system set.
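If the packaged unit can't be edited directly, the same change can be made as a local drop-in (a sketch; the drop-in path follows the usual systemd convention and is not from this ticket):

```
# /etc/systemd/system/tor.service.d/network-online.conf
[Unit]
Wants=network-online.target
After=network-online.target
```

Per systemd's documentation, `After=` alone only orders the units; the matching `Wants=` is what actually pulls network-online.target into the boot transaction.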
### Environment
- Which version of Tor are you using? Run `tor --version` to get the version if you are unsure.
tor-0.4.7.10-1.el9.x86_64
- Which operating system are you using? For example: Debian GNU/Linux 10.1, Windows 10, Ubuntu Xenial, FreeBSD 12.2, etc.
Rocky Linux release 9.0 (Blue Onyx)
- Which installation method did you use? Distribution package (apt, pkg, homebrew), from source tarball, from Git, etc.
The rpm package from the tor repos.
### Relevant logs and/or screenshots
### Possible fixes
Simply change the `After=` target from network.target to network-online.target in the service file.

https://gitlab.torproject.org/tpo/core/tor/-/issues/40677
Errors parsing descriptors (Tom Ritter <tom@ritter.vg>, 2022-10-31T20:42:12Z)

```
Sep 27 12:18:43.000 [notice] Bootstrapped 55% (loading_descriptors): Loading relay descriptors
Sep 27 12:18:43.000 [warn] Bad element "$E470DD7B0" while parsing a node family.
Sep 27 12:18:43.000 [warn] Bogus ed25519 key in microdesc
Sep 27 12:18:43.000 [warn] parse error: Malformed object: missing object end line
Sep 27 12:18:43.000 [warn] Unparseable microdescriptor found in download or generated string
Sep 27 12:18:43.000 [warn] Bad element "$B101B81F3CB7C284ADDF19CD" while parsing a node family.
Sep 27 12:18:43.000 [warn] Bad element "$AC249C56C11FDDFA9" while parsing a node family.
Sep 27 12:18:43.000 [warn] parse error: Malformed object: missing object end line
Sep 27 12:18:43.000 [warn] Unparseable microdescriptor found in download or generated string
Sep 27 12:18:43.000 [warn] Bogus ed25519 key in microdesc
Sep 27 12:18:43.000 [warn] parse error: Malformed object: missing object end line
Sep 27 12:18:43.000 [warn] Unparseable microdescriptor found in download or generated string
```
I'm running 0.4.7.8.

https://gitlab.torproject.org/tpo/core/tor/-/issues/40675
NETINFO log line truncates canonical_addr (pseudonymisaTor, 2022-10-28T15:53:33Z)

On a Tor 0.4.7.10 (recommended) relay, the `Got good NETINFO cell` log lines appear to randomly truncate the canonical_addr ports.
Examples:
```
[INFO] channel_tls_process_netinfo_cell(): Got good NETINFO cell on OR connection (open) with
│ 45.58.152.45:50268 ID=59KGIISLyF2YgL052AUBH29QyhJe9almx9mcIXYIooc RSA_ID=0541CE9E2DDA8C0B1D988FB640D65916B0683C35
│ canonical_addr=45.58.152.45:443; OR connection is now open, using protocol version 5. Its ID digest is
│ 0541CE9E2DDA8C0B1D988FB640D65916B0683C35. Our address is apparently
[INFO] channel_tls_process_netinfo_cell(): Got good NETINFO cell on OR connection (open) with
│ 130.162.208.178:50226 ID=cqxU2kBKfWPKwB1L04+ZuQwUybzUjSMWgG5Y21mGX5o RSA_ID=344F966AF902986227DA6C2F523D53BBD604892A
│ canonical_addr=130.162.208.178:; OR connection is now open, using protocol version 5. Its ID digest is
│ 344F966AF902986227DA6C2F523D53BBD604892A. Our address is apparently
[INFO] channel_tls_process_netinfo_cell(): Got good NETINFO cell on OR connection (open) with
│ 144.76.219.195:50302 ID=RRU6DDTnWc9azmpx7v3IhdLdPJNB4ciJ+j2Kv8lcv0Y RSA_ID=CE9003208A047960246052C604A213C3BF096F61
│ canonical_addr=144.76.219.195:9; OR connection is now open, using protocol version 5. Its ID digest is
│ CE9003208A047960246052C604A213C3BF096F61. Our address is apparently
```
(I just removed the relay's IP myself after `Our address is apparently`.)
This doesn't seem intended?
Notice that the shorter the IP address is, the more port digits fit in:
```
123.123.123.123:;
12.12.12.12:9001;
1.12.12.12:12345;
```

https://gitlab.torproject.org/tpo/core/tor/-/issues/40672
Display nicnames on bridge cards (cypherpunks, 2022-10-24T20:48:45Z)

For example, to simplify correlation of log events and cards.

https://gitlab.torproject.org/tpo/core/tor/-/issues/40671
tor-0.4.7.10 tag missing 0.4.7.9 and 0.4.7.10 changelogs; anything to fix in release checklist? (Roger Dingledine, 2022-10-24T20:48:41Z)

The tor-0.4.7.10 git tag has a ChangeLog and ReleaseNotes that end at 0.4.7.8. That is, the tagged version doesn't have the last two changelog stanzas in it.
Whereas the tor-0.4.7.9 git tag *does* have the 0.4.7.9 changelog stanzas in i...The tor-0.4.7.10 git tag has a ChangeLog and ReleaseNotes that end at 0.4.7.8. That is, the tagged version doesn't have the last two changelog stanzas in it.
Whereas the tor-0.4.7.9 git tag *does* have the 0.4.7.9 changelog stanzas in it. This is confusing to me.
The 0.4.7.10 tarball has the right changelog stanzas in it, which I guess means the tarball isn't built from the tag, which is also a problem.
I guess it is too late for these particular releases, but it gives us the opportunity to check: is there anything in the release checklist that we can improve to reduce the chances of a repeat issue?
Thanks!

https://gitlab.torproject.org/tpo/core/tor/-/issues/40670
vanguards is not picking new Guard when the Guard is very unstable. (cypherpunks, 2022-10-24T20:48:35Z)

vanguards has been yelling "The connection to guard [REDACTED] was closed with a live circuit" for about a week, yet it is not changing this guard to a better one.
Isn't this dangerous? An adversary could notice that this tor service is disconnecting a lot, which is not true, because the internet connection is always online.
```
NOTICE: The connection to guard [Guard Node ONE] was closed with a live circuit.
NOTICE: The connection to guard [Guard Node ONE] was closed with a live circuit.
WARNING: Possible Tor bug, or possible attack if very frequent: Got 1 dropped cell on circ X (in state HS_SERVICE_REND HSSR_JOINED; old state HS_SERVICE_REND HSSR_CONNECTING)
NOTICE: We force-closed circuit X
NOTICE: The connection to guard [Guard Node ONE] was closed with a live circuit.
NOTICE: The connection to guard [Guard Node ONE] was closed with a live circuit.
NOTICE: The connection to guard [Guard Node ONE] was closed with a live circuit.
NOTICE: The connection to guard [Guard Node ONE] was closed with a live circuit.
NOTICE: The connection to guard [Guard Node ONE] was closed with a live circuit.
NOTICE: The connection to guard [Guard Node ONE] was closed with a live circuit.
NOTICE: The connection to guard [Guard Node ONE] was closed with a live circuit.
WARNING: Possible Tor bug, or possible attack if very frequent: Got 1 dropped cell on circ X (in state HS_SERVICE_REND HSSR_JOINED; old state HS_SERVICE_REND HSSR_CONNECTING)
NOTICE: We force-closed circuit X
WARNING: Possible Tor bug, or possible attack if very frequent: Got 2 dropped cell on circ X (in state HS_SERVICE_REND HSSR_JOINED; old state HS_SERVICE_REND HSSR_CONNECTING)
NOTICE: The connection to guard [Guard Node ONE] was closed with a live circuit.
NOTICE: The connection to guard [Guard Node ONE] was closed with a live circuit.
NOTICE: The connection to guard [Guard Node ONE] was closed with a live circuit.
NOTICE: The connection to guard [Guard Node ONE] was closed with a live circuit.
NOTICE: The connection to guard [Guard Node ONE] was closed with a live circuit.
NOTICE: The connection to guard [Guard Node ONE] was closed with a live circuit.
NOTICE: The connection to guard [Guard Node ONE] was closed with a live circuit.
NOTICE: The connection to guard [Guard Node ONE] was closed with a live circuit.
NOTICE: The connection to guard [Guard Node ONE] was closed with a live circuit.
NOTICE: The connection to guard [Guard Node ONE] was closed with a live circuit.
NOTICE: The connection to guard [Guard Node ONE] was closed with a live circuit.
NOTICE: The connection to guard [Guard Node ONE] was closed with a live circuit.
NOTICE: The connection to guard [Guard Node ONE] was closed with a live circuit.
NOTICE: The connection to guard [Guard Node ONE] was closed with a live circuit.
NOTICE: The connection to guard [The Second Guard] was closed with a live circuit.
NOTICE: The connection to guard [Guard Node ONE] was closed with a live circuit.
NOTICE: The connection to guard [Guard Node ONE] was closed with a live circuit.
NOTICE: The connection to guard [Guard Node ONE] was closed with a live circuit.
NOTICE: The connection to guard [Guard Node ONE] was closed with a live circuit.
NOTICE: The connection to guard [Guard Node ONE] was closed with a live circuit.
NOTICE: The connection to guard [Guard Node ONE] was closed with a live circuit.
NOTICE: The connection to guard [Guard Node ONE] was closed with a live circuit.
NOTICE: The connection to guard [Guard Node ONE] was closed with a live circuit.
NOTICE: The connection to guard [Guard Node ONE] was closed with a live circuit.
NOTICE: Circ X exceeded CIRC_MAX_MEGABYTES: 2151034 > X.
NOTICE: We force-closed circuit X
NOTICE: The connection to guard [Guard Node ONE] was closed with a live circuit.
```

https://gitlab.torproject.org/tpo/core/tor/-/issues/40669
Win10x64 - tor.exe not starting anymore at all when using GeoIPFile or GeoIPv6File (tpgrr, 2022-08-30T20:18:51Z)

In an (AFAIK) unchanged Tor installation¹ (torbrowser-install-win64-11.0.6_en-US.exe), tor.exe has ceased to start since 2022/04/28.
Last successful start: 2022/04/20 (not tried in between), no log file is written whatsoever (despite "Log notice file ..." in torrc) when failing.
Tracing the issue down I reduced torrc to only these 4 lines:
> Log notice file R:\Temp\Tor.log
> DataDirectory C:\Tools\Tor\Data\
> GeoIPFile C:\Tools\Tor\Data\geoip
> GeoIPv6File C:\Tools\Tor\Data\geoip6
I found that _disabling_ GeoIP functionality allows tor.exe to start again.
> #GeoIPFile C:\Tools\Tor\Data\geoip
> #GeoIPv6File C:\Tools\Tor\Data\geoip6
As soon as I enable either one (or both) of the GeoIP DBs tor just crashes without a trace.
Same with Tor extracted from latest version²
System:
Win10x64 Version 10.0.19043.1288, no AV except MS Defender 4.18.2203.5-0
¹ Windows unchanged as well - all updates are manual, no other admins
² extracted from torbrowser-install-win64-11.0.10_de.exe\Browser\TorBrowser\ and [...]\Browser\Data\ (using 7-zip)
P.S. Sorry for the odd line spacing, I may be too dumb for this editor.

https://gitlab.torproject.org/tpo/core/tor/-/issues/40668
bash-completion for tor (nyxnor, 2022-10-24T20:48:34Z)

Made a bash-completion script for tor.
Miscellaneous addition.
It does not need to be in `core/tor`, I would just like some review from upstream (TPO) before trying to merge to the official repo on https://github.com/scop/bash-completion/blob/master/completions/
It has all options, but it doesn't generate all arguments because that is too much work.
You can check the up to date version here https://github.com/nyxnor/tor-bash-completion/blob/main/tor
But if you just want to take a quick look at it:
```bash
# tor(1) completion -*- shell-script -*-
_comp_xfunc_tor_torrcopt()
{
_tor_torrc_opt="$(tor --list-torrc-options | sed "s|^|--|")"
COMPREPLY=($(compgen -W "${_tor_torrc_opt}" -- "$cur"))
}
_tor()
{
local cur prev words cword
_init_completion -s || return
case $prev in
## tor cli
--allow-missing-torrc | --ignore-missing-torrc | --verify-config | \
--list-torrc-options | --list-deprecated-options | \
--list-modules | --help | --version | --quiet | --hush )
return
;;
--hash-password )
## requires argument but not possible to complete
return
;;
--defaults-torrc | --torrc-file )
_filedir
return
;;
--dump-config )
COMPREPLY=($(compgen -W "short full" -- "$cur"))
return
;;
--list-fingerprint )
COMPREPLY=($(compgen -W "rsa ed25519" -- "$cur"))
return
;;
--keygen )
COMPREPLY=($(compgen -W "--newpass" -- "$cur"))
return
;;
--key-expiration )
COMPREPLY=($(compgen -W "sign" -- "$cur"))
return
;;
--format )
## depends on --key-expiration
COMPREPLY=($(compgen -W "iso8601 timestamp" -- "$cur"))
return
;;
--passphrase-fd )
COMPREPLY=($(compgen -W "$(ls /proc/$$/fd)" -- "$cur"))
return
;;
## torrc options
--AccelDir | --CacheDirectory | --DataDirectory | \
--ClientOnionAuthDir | --KeyDirectory | --HiddenServiceDir )
_filedir -d
return
;;
--ControlPortWriteToFile | --ControlSocket | --CookieAuthFile | \
--ExtORPortCookieAuthFile | --PidFile | --GeoIPFile | \
--GeoIPv6File | --ServerDNSResolvConfFile | \
--DirPortFrontPage | --GuardfractionFile | --V3BandwidthsFile )
_filedir
return
;;
--AvoidDiskWrites | --ConstrainedSockets | \
--ControlPortFileGroupReadable | --ControlSocketGroupWritable | \
        --CookieAuthentication | --CookieAuthFileGroupReadable | \
--CountPrivateBandwidth | --DataDirectoryGroupReadable | \
        --DisableAllSwap | --DisableDebuggerAttachment | --DisableNetwork | \
--ExtORPortCookieAuthFileGroupReadable | --FetchDirInfoEarly | \
--FetchDirInfoExtraEarly | --FetchHidServDescriptors | \
--FetchServerDescriptors | --FetchUselessDescriptors | \
--HardwareAccel | --LogMessageDomains | --NoExec | \
--ProtocolWarnings | --RunAsDaemon | --SandBox | \
        --TruncateLogFile | --UnixSocksGroupWritable | \
--UseDefaultFallbackDirs | --AllowNonRFC953Hostnames | \
--AutomapHostsOnResolve | --CircuitPadding | \
--ReducedCircuitPadding | --ClientDNSRejectInternalAddresses | \
--ClientOnly | --ClientRejectInternalAddresses | \
--ClientUseIPv4 | --ClientUseIPv6 | --ReducedConnectionPadding | \
--DownloadExtraInfo | --EnforceDistinctSubnets | \
--FascistFirewall | --SafeSocks | --TestSocks | \
--UpdateBridgesFromAuthority | --UseBridges | --UseEntryGuards | \
--LearnCircuitBuildTimeout | --DormantCanceledByStartup | \
--DormantOnFirstStartup | --DormantTimeoutDisabledByIdleStreams | \
--DormantTimeoutEnabled | --StrictNodes | --AddressDisableIPv6 | \
--AssumeReachable | --BridgeRelay | --DisableOOSCheck | \
--ExitPolicyRejectLocalInterfaces | --ExitPolicyRejectPrivate | \
--ExtendAllowPrivateAddresses | --IPv6Exit | --MainloopStats | \
--OfflineMasterKey | --ReducedExitPolicy | \
--ServerDNSAllowBrokenConfig | \
--ServerDNSAllowNonRFC953Hostnames | --ServerDNSDetectHijacking | \
--ServerDNSRandomizeCase | --ServerDNSSearchDomains | \
--BridgeRecordUsageByCountry | --CellStatistics | \
--ConnDirectionStatistics | --DirReqStatistics | \
--EntryStatistics | --ExitPortStatistics | --ExtraInfoStatistics | \
--HiddenServiceStatistics | --OverloadStatistics | \
--PaddingStatistics | --DirCache | \
--HiddenServiceEnableIntroDoSDefense | \
--AuthoritativeDirectory | --BridgeAuthoritativeDir | \
--V3AuthoritativeDirectory | --AuthDirHasIPv6Connectivity | \
--AuthDirListBadExits | --AuthDirListMiddleOnly | \
--AuthDirPinKeys | --AuthDirRejectRequestsUnderLoad | \
--AuthDirSharedRandomness | --AuthDirTestEd25519LinkKeys | \
--AuthDirTestReachability | --DirAllowPrivateAddresses | \
--V3AuthUseLegacyKey | --VersioningAuthoritativeDirectory | \
--HiddenServiceAllowUnknownPorts | \
--HiddenServiceDirGroupReadable | \
--HiddenServiceOnionBalanceInstance | \
--HiddenServiceMaxStreamsCloseCircuit | \
--HiddenServiceSingleHopMode | \
--HiddenServiceNonAnonymousMode | \
--PublishHidServDescriptors | \
--TestingTorNetwork | \
--TestingDirAuthVoteExitIsStrict | \
--TestingDirAuthVoteGuardIsStrict | \
--TestingDirAuthVoteHSDirIsStrict | \
--TestingEnableCellStatsEvent | \
--TestingEnableConnBwEvent )
COMPREPLY=($(compgen -W "0 1" -- "$cur"))
return
;;
--CacheDirectoryGroupReadable | --ExtendByEd25519ID | \
--KeepBindCapabilities | --ClientPreferIPv6DirPort | \
--ClientPreferIPv6ORPort | --ConnectionPadding | \
--UseGuardFraction | --VanguardsLiteEnabled | \
--UseMicroDescriptors | --GeoIPExcludeUnknown | \
--AssumeReachableIPv6 | --ExitRelay | \
--KeyDirectoryGroupReadable | --RefuseUnknownExits | \
--DoSCircuitCreationEnabled | --DoSConnectionEnabled | \
--DoSRefuseSingleHopClientRendezvous )
COMPREPLY=($(compgen -W "0 1 auto" -- "$cur"))
return
;;
--SafeLogging )
COMPREPLY=($(compgen -W "0 1 relay" -- "$cur"))
return
;;
--TransProxyType )
COMPREPLY=($(compgen -W "default TPROXY ipfw pf-divert" -- "$cur"))
return
;;
--AccountingRule )
COMPREPLY=($(compgen -W "sum max in out" -- "$cur"))
return
;;
--PublishServerDescriptor )
COMPREPLY=($(compgen -W "0 1 v3 bridge" -- "$cur"))
return
;;
--HiddenServiceExportCircuitID )
COMPREPLY=($(compgen -W "haproxy" -- "$cur"))
return
;;
--HiddenServiceVersion )
COMPREPLY=($(compgen -W "3" -- "$cur"))
return
;;
esac
if [[ $cur == -* ]]; then
_comp_xfunc_tor_torrcopt
COMPREPLY=($(compgen -W "--help --torrc-file --allow-missing-torrc
--defaults-torrc --ignore-missing-torrc --hash-password
--list-fingerprint --verify-config --dump-config
--list-torrc-options --list-deprecated-options --list-modules
--version --quiet --hush --keygen --newpass --passphrase-fd
--key-expiration --format ${_tor_torrc_opt}" -- "$cur"))
return
fi
} &&
complete -F _tor tor
# ex: filetype=sh
```

https://gitlab.torproject.org/tpo/core/tor/-/issues/40667
pathbias_count_use_success() logs unhashed fingerprint twice
2023-10-12T16:44:33Z toralf

In the notice log of a Tor client connected to the public bridge *toralf4elster* I got:
```
Aug 27 14:41:54.000 [notice] pathbias_count_use_success(): Bug: Unexpectedly high use successes counts (23.000000/1.000000) for guard $<snip> ($<snip>) (on Tor 0.4.8.0-alpha-dev 982c50401c5e9bde)
...
Aug 27 21:48:38.000 [notice] pathbias_count_use_success(): Bug: Unexpectedly high use successes counts (27.000000/5.000000) for guard $<snip> ($<snip>) (on Tor 0.4.8.0-alpha-dev 982c50401c5e9bde)
```
where *\<snip\>* was always the (unhashed) fingerprint itself. This is, IMO, a waste of space; or was the hashed fingerprint meant?

https://gitlab.torproject.org/tpo/core/tor/-/issues/40666
Bug: Acting on config options left us in a broken state. Dying. (on Tor 0.4.8.0-alpha-dev 982c50401c5e9bde)
2022-10-24T20:48:30Z toralf

This happened when I configured torrc in this manner:
```
MetricsPortPolicy accept 127.0.0.1,accept ::1
```
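An observation added by the editor, not part of the original report: Tor policy options such as `ExitPolicy` write IPv6 literals in square brackets, so a bracketed variant may be worth trying while the crash itself is investigated. This is an untested sketch, not a confirmed workaround:

```
MetricsPortPolicy accept 127.0.0.1,accept [::1]
```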
https://gitlab.torproject.org/tpo/core/tor/-/issues/40665
Disable DH1024
2022-10-24T20:48:29Z Sebastian Hahn

An automated scanner detected that Tor is still offering Diffie-Hellman with a group size of 1024 bits, and it is creating a security policy question. Can we get this disabled? DH1024 should be pretty much dead around the web.

https://gitlab.torproject.org/tpo/core/tor/-/issues/40662
Incorrect observed bandwidth
2022-10-24T20:48:27Z Vort
Recently I noticed that Metrics shows 17.95 MiB/s observed bandwidth for my relay.
Such a value is incorrect, since I physically have only a 100 Mbit/s connection.
I don't know whether this problem is important, since load balancing is based on bwauth measurements, but I decided to report it just in case.
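To make the mismatch concrete (editor's arithmetic, not part of the report), converting the link speed into the same units shows the reported figure exceeds the physical limit:

```python
# Sanity-check the claim: a 100 Mbit/s link cannot sustain 17.95 MiB/s.
# (Mbit = 10^6 bits; MiB = 2^20 bytes.)
link_bits_per_s = 100 * 1_000_000
link_mib_per_s = link_bits_per_s / 8 / (1024 * 1024)
print(f"{link_mib_per_s:.2f} MiB/s")  # prints "11.92 MiB/s", well below 17.95
```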
I can think of two reasons for such behaviour:
1. Overload because of DDoS.
2. Time jumps because of an active NTP daemon.
Reason #2 seems unlikely, because as far as I know ntpd is used in almost all Linux distributions.
Also, the OS should provide monotonic clocks, which are immune to time jumps.
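As an aside from the editor, the distinction between the two clock types can be demonstrated in a few lines of Python; nothing here is specific to Tor:

```python
import time

# Wall-clock time (time.time) can jump when a daemon such as ntpd steps the
# system clock; a monotonic clock (time.monotonic) can never go backwards,
# so durations measured with it are immune to such jumps.
start_wall = time.time()
start_mono = time.monotonic()
time.sleep(0.2)
elapsed_wall = time.time() - start_wall       # could be distorted by a clock step
elapsed_mono = time.monotonic() - start_mono  # step-free and never negative
```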
_Version: Tor 0.4.7.10 on Windows 7_

https://gitlab.torproject.org/tpo/core/tor/-/issues/40653
Is it useful to turn on "keep alive" for a Tor SOCKS5 TCP connection?
2022-10-24T20:48:23Z molnard
The Wasabi Wallet application connects to the [Tor SOCKS5](https://gitweb.torproject.org/torspec.git/tree/socks-extensions.txt) endpoint ([code](https://github.com/zkSNACKs/WalletWasabi/blob/13fdc1566b626ac2dc56c33f0eb707bcab04e55b/WalletWasabi/Tor/Socks5/TorTcpConnectionFactory.cs#L77-L93)) and specifies various "keep alive" options for those socket connections. Note that these TCP connections connect the application and Tor, both of which run on the same machine and under the same user.
**Question**
Is it necessary to specify these keep-alive options? Is it recommended? Is it useless?
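For reference, enabling the option itself is a one-liner. The sketch below is the editor's minimal illustration (port 9050 and the loopback address are assumptions taken from the report), not Wasabi's actual code; on a loopback connection both endpoints live in the same kernel, so keep-alive probes add little, whereas across a real network they detect dead peers.

```python
import socket

# Create a client socket and request TCP keep-alive before connecting.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
keepalive_enabled = sock.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE)
# sock.connect(("127.0.0.1", 9050))  # would contact the assumed SOCKS port
sock.close()
```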
**Version**
We use the latest released Tor, 0.4.7.8.
_The question was asked here as well:_
https://tor.stackexchange.com/questions/23258/is-it-useful-turn-on-keep-alive-for-tor-socks5-tcp-connection

https://gitlab.torproject.org/tpo/core/tor/-/issues/40650
The vanguards package is outdated compared to the one on GitHub
2022-10-24T20:51:10Z cypherpunks

I have installed it with `apt install vanguards`, and many configuration options are missing compared to the GitHub version. There is another identifiable problem: the package's configuration file has `num_layer2_guards = 3`, but GitHub's config example file says `num_layer2_guards = 4`. This allows an adversary to identify the version. Please publish the new version.