Tor issues — https://gitlab.torproject.org/tpo/core/tor/-/issues

Issue #40289: Bug 0.4.5.5-rc ORPort relay (using both IPv4 & IPv6)
https://gitlab.torproject.org/tpo/core/tor/-/issues/40289
Reported by cypherpunks, 2021-02-22

Hi,
I'm using Debian Buster to run my (non-exit) relay. I use the 'deb https://deb.torproject.org/torproject.org buster main' repository in my apt sources list.
After I upgraded to Tor 0.4.5.5-rc from 0.4.4.6, my configuration does not work anymore. My relay has a public IPv6 address and a private IPv4 address (configured in /etc/network/interfaces), because the relay is behind NAT (there are no configuration errors in the general firewall settings, and this setup has worked before).
After the upgrade to 0.4.5.5-rc, Tor no longer binds to the IPv4 address. Just to be clear: there had been no problem whatsoever with my configuration, and after I downgraded back to 0.4.4.6, everything worked as before. I noticed that somebody else also has problems with 0.4.5.5-rc: https://www.reddit.com/r/TOR/comments/lgnt7g/orport_is_not_reachable_after_updating_tor_from/
I think that something has changed in the way 0.4.5.5-rc handles the configuration file (“torrc”).
Log from syslog:
```
Feb 10 20:53:39 Tor[1004]: Opening Socks listener on 127.0.0.1:9050
Feb 10 20:53:39 Tor[1004]: Opened Socks listener connection (ready) on 127.0.0.1:9050
Feb 10 20:53:39 Tor[1004]: Opening Control listener on 127.0.0.1:9051
Feb 10 20:53:39 Tor[1004]: Opened Control listener connection (ready) on 127.0.0.1:9051
Feb 10 20:53:39 Tor[1004]: Opening OR listener on [{my public working IPv6 address}]:443
Feb 10 20:53:39 Tor[1004]: Opened OR listener connection (ready) on [{my public working IPv6 address}]:443
Feb 10 20:53:39 Tor[1004]: Opening Directory listener on 0.0.0.0:80
Feb 10 20:53:39 Tor[1004]: Opened Directory listener connection (ready) on 0.0.0.0:80
Feb 10 20:53:39 Tor[1004]: Opening Directory listener on [{my public working IPv6 address}]:80
Feb 10 20:53:39 Tor[1004]: Opened Directory listener connection (ready) on [{my public working IPv6 address}]:80
Feb 10 20:54:34 Tor[1004]: Now checking whether IPv4 ORPort {my public working IPv4 address}:443 is reachable... (this may take up to 20 minutes -- look for log messages indicating success)
Feb 10 20:54:34 Tor[1004]: Now checking whether IPv6 ORPort [{my public working IPv6 address}]:443 is reachable... (this may take up to 20 minutes -- look for log messages indicating success)
Feb 10 20:54:34 Tor[1004]: Now checking whether IPv4 DirPort {my public working IPv4 address}:80 is reachable... (this may take up to 20 minutes -- look for log messages indicating success)
Feb 10 20:54:34 Tor[1004]: Self-testing indicates your DirPort is reachable from the outside. Excellent.
Feb 10 20:54:52 Tor[1004]: Self-testing indicates your ORPort [{my public working IPv6 address}]:443 is reachable from the outside. Excellent.
Feb 10 21:14:32 Tor[1004]: Your server has not managed to confirm reachability for its ORPort(s) at {my public working IPv4 address}:443. Relays do not publish descriptors until their ORPort and DirPort are reachable. Please check your firewalls, ports, address, /etc/hosts file, etc.
```
The `tor --verify-config` command says:
"[warn] Configured public relay to listen only on an IPv6 address. Tor needs to listen on an IPv4 address too.
[warn] Failed to parse/validate config: Misconfigured server ports"
My /etc/tor/torrc file (Debian also ships a default “tor-service-defaults-torrc” file, which I have not edited and which I don't think is relevant to this issue; it just sets the tor daemon's user permissions in the Debian way):
```
ControlPort 9051
ControlSocket 0
CookieAuthentication 0
HashedControlPassword {my hashed password}
ORPort {my internal/local IPv4 address behind NAT}:443 NoAdvertise
ORPort {my public working IPv4 address}:443 NoListen
ORPort [{my public working IPv6 address}]:443
OutboundBindAddress {my internal/local IPv4 address behind NAT}
OutboundBindAddress [{my public working IPv6 address}]
DirPort {my public working IPv4 address}:80 NoListen
DirPort {my internal/local IPv4 address behind NAT}:80 NoAdvertise
DirPort [{my public working IPv6 address}]:80 NoAdvertise
DirPortFrontPage /etc/tor/tor-exit-notice.html
ExitPolicy reject *:*
ExitPolicy reject6 *:*
ExitRelay 0
BandwidthRate {private} MBits
BandwidthBurst {private} MBits
MaxAdvertisedBandwidth {private} MBits
RelayBandwidthRate {private} MBits
RelayBandwidthBurst {private} MBits
LogMessageDomains 0
BridgeRelay 0
ContactInfo {private}
IPv6Exit 0
Nickname {private}
EntryStatistics 1
DoSCircuitCreationEnabled auto
DoSConnectionEnabled auto
AuthoritativeDirectory 0
V3AuthoritativeDirectory 0
VersioningAuthoritativeDirectory 0
BridgeAuthoritativeDir 0
ConnDirectionStatistics 1
CellStatistics 1
HardwareAccel 1
MaxUnparseableDescSizeToLog 100 MB
RendPostPeriod 15 minutes
```
I posted my whole /etc/tor/torrc file. I know that there may be some unnecessary settings, but anyway, that configuration worked prior to 0.4.5.5-rc.
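For reference, the NoAdvertise/NoListen pairing in the torrc above is the pattern the tor man page describes for relays behind NAT: bind the internal address without advertising it, and advertise the external address without binding it. A minimal sketch of just that pairing, with placeholder addresses:

```
ORPort 10.0.0.2:443 NoAdvertise    # bind the internal (NAT) address, don't publish it
ORPort 203.0.113.7:443 NoListen    # publish the external address, don't bind it
```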
P.S. This Anon Ticket service is very good for one-time bug reporters like me.

Milestone: Tor: 0.4.5.x-stable; assigned to David Goulet.

Issue #40288: 0.4.5 with unreachable autodetected ipv6 orport logs about testing every few minutes forever
https://gitlab.torproject.org/tpo/core/tor/-/issues/40288
Reported by Roger Dingledine, 2021-02-17

Now that we've finished #40279 (yay!), my relay now logs over and over about finding its IPv4 ORPort reachable, guessing its IPv6 ORPort, checking reachability, etc.:
```
Feb 10 03:21:22.906 [notice] Now checking whether IPv4 ORPort 128.31.0.39:9005 is reachable... (this may take up to 20 minutes -- look for log messages indicating success)
Feb 10 03:21:24.199 [notice] Self-testing indicates your ORPort 128.31.0.39:9005 is reachable from the outside. Excellent. Publishing server descriptor.
Feb 10 03:21:25.199 [notice] Guessed our IP address as [2603:400a:ffff:bb8:42a8:f0ff:fe75:6090] (source: METHOD=NONE).
Feb 10 03:22:22.908 [notice] Now checking whether IPv4 ORPort 128.31.0.39:9005 is reachable... (this may take up to 20 minutes -- look for log messages indicating success)
Feb 10 03:22:24.254 [notice] Self-testing indicates your ORPort 128.31.0.39:9005 is reachable from the outside. Excellent. Publishing server descriptor.
Feb 10 03:22:25.255 [notice] Guessed our IP address as [2603:400a:ffff:bb8:42a8:f0ff:fe75:6090] (source: METHOD=NONE).
Feb 10 03:23:22.909 [notice] Now checking whether IPv4 ORPort 128.31.0.39:9005 is reachable... (this may take up to 20 minutes -- look for log messages indicating success)
Feb 10 03:23:25.257 [notice] Guessed our IP address as [2603:400a:ffff:bb8:42a8:f0ff:fe75:6090] (source: METHOD=NONE).
Feb 10 03:23:25.944 [notice] Self-testing indicates your ORPort 128.31.0.39:9005 is reachable from the outside. Excellent. Publishing server descriptor.
Feb 10 03:23:26.943 [notice] Guessed our IP address as [2603:400a:ffff:bb8:42a8:f0ff:fe75:6090] (source: METHOD=NONE).
```
I guess the first question is: is this just a bunch of extra log messages (in which case, we should just change it to only say it if there's something new to say), or are we actually launching these reachability tests, forever?
In the past two days or so:
```
$ grep "Now checking whether" notice-level-log | wc -l
1270
$ grep "Guessed our IP address" notice-level-log | wc -l
2075
```
This one seems to show up once a minute, forever.

Milestone: Tor: 0.4.5.x-stable; assigned to David Goulet.

Issue #40283: relay: Don't fail if AF_INET6 family is not supported
https://gitlab.torproject.org/tpo/core/tor/-/issues/40283
Reported by David Goulet, 2021-02-09

Someone on tor-relays@ reported that when going from 0.4.4 to 0.4.5, their config failed to work:
```
ORPort 587
DIRPORT 995
```
with:
```
[notice] Opening OR listener on 0.0.0.0:587
[notice] Opened OR listener connection (ready) on 0.0.0.0:587
[notice] Opening OR listener on [::]:587
[warn] Socket creation failed: Address family not supported by protocol
[notice] Opening Directory listener on 0.0.0.0:995
[notice] Closing partially-constructed OR listener connection (ready) on 0.0.0.0:587
[notice] Closing partially-constructed Directory listener connection (ready) on 0.0.0.0:995
```
We should probably not fail in that implicit IPv6 case if the address family is not supported.

Milestone: Tor: 0.4.5.x-stable.

Issue #40281: Don't even warn when an ancient consensus says we're missing a protocol.
https://gitlab.torproject.org/tpo/core/tor/-/issues/40281
Reported by Nick Mathewson, 2021-02-12

We need this change or else #40221 will give a scary log message when we're starting with an ancient cached consensus and a newer version of Tor.

Milestone: Tor: 0.4.5.x-stable; assigned to Nick Mathewson.

Issue #40279: 0.4.5 relay with autodetected unreachable ipv6 port says it's publishing but then doesn't
https://gitlab.torproject.org/tpo/core/tor/-/issues/40279
Reported by Roger Dingledine, 2021-02-12

I restarted an old relay on Tor 0.4.5, and to my surprise, it autodetects some sort of IPv6 address, even though I have an "Address" line listing an IPv4 address explicitly:
```
Feb 06 03:17:57.448 [notice] Now checking whether IPv4 ORPort 128.31.0.39:9005 is reachable... (this may take up to 20 minutes -- look for log messages indicating success)
Feb 06 03:17:57.457 [notice] Now checking whether IPv6 ORPort [2603:400a:ffff:bb8:42a8:f0ff:fe75:6090]:9005 is reachable... (this may take up to 20 minutes -- look for log messages indicating success)
Feb 06 03:17:58.264 [notice] Self-testing indicates your ORPort 128.31.0.39:9005 is reachable from the outside. Excellent.
Feb 06 03:19:00.478 [notice] Performing bandwidth self-test...done.
```
and then it doesn't find the IPv6 address reachable (not surprising), and it sits there. I was preparing to file a "woah, are we going to lose a lot of relays?" ticket, when 19 minutes later, I get this line:
```
Feb 06 03:37:55.776 [notice] Auto-discovered IPv6 address [2603:400a:ffff:bb8:42a8:f0ff:fe75:6090]:9005 has not been found reachable. However, IPv4 address is reachable. Publishing server descriptor without IPv6 address.
```
Well great!
But then it doesn't actually publish any descriptor.

Milestone: Tor: 0.4.5.x-stable; assigned to David Goulet.

Issue #40274: test suite fails on hppa (0.4.5.5-rc)
https://gitlab.torproject.org/tpo/core/tor/-/issues/40274
Reported by weasel (Peter Palfrader), 2021-02-08

[Debian#981648: tor FTBFS on hppa][Debian#981648] was reported, and it came with a [patch/pull-request on github][p2128].
The affected code is many years old (it was added in 2013).
The issue appears to be that the linux/hppa kernel changed the O_NONBLOCK constant, so you can't just do `(flags & FOO) == FOO` to check whether FOO is set; you should do `(flags & FOO) != 0` instead. The patch concerns itself with O_NONBLOCK, but it probably also applies to FD_CLOEXEC.
[Debian#981648]: https://bugs.debian.org/981648
[p2128]: https://github.com/torproject/tor/pull/2128

Milestone: Tor: 0.4.5.x-stable; assigned to Nick Mathewson.

Issue #40270: relay: Send back connection refused end reason on network reentry
https://gitlab.torproject.org/tpo/core/tor/-/issues/40270
Reported by David Goulet, 2021-02-03

The currently used `TORPROTOCOL` end reason has the very bad side effect of killing the circuit on the client side, and thus all the streams attached to it.
That would be very bad, since anyone could simply serve an HTTP resource in a page that contacts a relay on its ORPort, and watch that circuit collapse.
Instead, send back `CONNECTION_REFUSED`, so the circuit won't be used anymore but will not collapse.

Milestone: Tor: 0.4.5.x-stable; assigned to David Goulet.

Issue #40269: relay: Avoid false positive on network reentry detection
https://gitlab.torproject.org/tpo/core/tor/-/issues/40269
Reported by David Goulet, 2021-02-03

Bloom filters are by design probabilistic, and thus there is a small chance of a false positive when looking them up.
When detecting network reentry, a false positive implies that the `addr + port` destination is considered a reentry and the connections are refused.
Even though it is rare, that pair could be a very busy destination like `wikipedia.org + 443`, and then every user at that Exit would be unable to reach that destination.
Move to using a hashtable here (as a set) of `addr+port`, so the lookup is still `O(1)` but with certainty. The memory footprint will be larger, but still below a megabyte with all the relays in that set.

Milestone: Tor: 0.4.5.x-stable; assigned to David Goulet.

Issue #40268: Add consensus param to enable network re-entry at Exit relays
https://gitlab.torproject.org/tpo/core/tor/-/issues/40268
Reported by David Goulet, 2021-02-03

Following the tor#2667 work, we want a consensus parameter that controls this, so that in case we made a mistake or things start to go bad, we can disable it through the consensus.

Milestone: Tor: 0.4.5.x-stable; assigned to David Goulet.

Issue #40263: CID 1472584: Error handling issues in check_descriptor_ipaddress_changed()
https://gitlab.torproject.org/tpo/core/tor/-/issues/40263
Reported by Nick Mathewson, 2021-02-12

It looks like Coverity wants us to check the return value of `relay_find_addr_to_publish`.
```
*** CID 1472584: Error handling issues (CHECKED_RETURN)
/src/feature/relay/router.c: 2700 in check_descriptor_ipaddress_changed()
2694 previous = &my_ri->ipv6_addr;
2695 }
2696
2697 /* Attempt to discovery the publishable address for the family which will
2698 * actively attempt to discover the address if we are configured with a
2699 * port for the family. */
>>> CID 1472584: Error handling issues (CHECKED_RETURN)
>>> Calling "relay_find_addr_to_publish" without checking return value (as is done elsewhere 4 out of 5 times).
2700 relay_find_addr_to_publish(get_options(), family, RELAY_FIND_ADDR_NO_FLAG,
2701                            &current);
2702
```
Assigning to @dgoulet, but please let me know if you want me to do this.

Milestone: Tor: 0.4.5.x-stable; assigned to David Goulet.

Issue #40257: Crash on MetricsPort when prematurely terminating socket
https://gitlab.torproject.org/tpo/core/tor/-/issues/40257
Reported by Neel Chauhan, 2021-02-08

If I set up a `MetricsPort` and telnet into it, and then prematurely terminate the socket without doing anything, we get a crash:
```
Jan 23 14:03:51.000 [notice] Bootstrapped 100% (done): Done
Jan 23 14:03:56.000 [warn] conn_read_callback: Bug: Unhandled error on read for Metrics connection (fd 10); removing (on Tor 0.4.6.0-alpha-dev 878c124e0dda4cde)
Jan 23 14:03:56.000 [warn] tor_bug_occurred_: Bug: src/core/mainloop/mainloop.c:899: conn_read_callback: This line should not have been reached. (Future instances of this warning will be silenced.) (on Tor 0.4.6.0-alpha-dev 878c124e0dda4cde)
Jan 23 14:03:56.000 [warn] Bug: Tor 0.4.6.0-alpha-dev (git-878c124e0dda4cde): Line unexpectedly reached at conn_read_callback at src/core/mainloop/mainloop.c:899. Stack trace: (on Tor 0.4.6.0-alpha-dev 878c124e0dda4cde)
Jan 23 14:03:56.000 [warn] Bug: 0x130985c <log_backtrace_impl+0x5c> at /usr/home/neel/code/tor/tor/src/app/tor (on Tor 0.4.6.0-alpha-dev 878c124e0dda4cde)
Jan 23 14:03:56.000 [warn] Bug: 0x1317d91 <tor_bug_occurred_+0x1d1> at /usr/home/neel/code/tor/tor/src/app/tor (on Tor 0.4.6.0-alpha-dev 878c124e0dda4cde)
Jan 23 14:03:56.000 [warn] Bug: 0x116a843 <conn_read_callback+0x1021103> at /usr/home/neel/code/tor/tor/src/app/tor (on Tor 0.4.6.0-alpha-dev 878c124e0dda4cde)
Jan 23 14:03:56.000 [warn] Bug: 0x80140519d <event_base_assert_ok_nolock_+0xbfd> at /usr/local/lib/libevent-2.1.so.7 (on Tor 0.4.6.0-alpha-dev 878c124e0dda4cde)
Jan 23 14:03:56.000 [warn] Bug: 0x80140112c <event_base_loop+0x58c> at /usr/local/lib/libevent-2.1.so.7 (on Tor 0.4.6.0-alpha-dev 878c124e0dda4cde)
Jan 23 14:03:56.000 [warn] Bug: 0x116cbba <do_main_loop+0x12a> at /usr/home/neel/code/tor/tor/src/app/tor (on Tor 0.4.6.0-alpha-dev 878c124e0dda4cde)
Jan 23 14:03:56.000 [warn] Bug: 0x1155f1c <tor_run_main+0x12c> at /usr/home/neel/code/tor/tor/src/app/tor (on Tor 0.4.6.0-alpha-dev 878c124e0dda4cde)
Jan 23 14:03:56.000 [warn] Bug: 0x1154871 <tor_main+0x61> at /usr/home/neel/code/tor/tor/src/app/tor (on Tor 0.4.6.0-alpha-dev 878c124e0dda4cde)
^CJan 23 14:04:02.000 [notice] Interrupt: exiting cleanly.
neel@concorde:~/code/tor/tor %
```

Milestone: Tor: 0.4.5.x-stable; assigned to David Goulet.

Issue #40254: Can't migrate ORPort options from 0.4.4.6 to 0.4.5.3-rc
https://gitlab.torproject.org/tpo/core/tor/-/issues/40254
Reported by Vort, 2021-01-27

Hello.
I was using an unusual configuration for my relay:
it was listening on a public IPv4 address and a private IPv6 address:
```
ORPort 443
ORPort [ipv6_address]:ipv6_port NoAdvertise
```
The IPv6 address belongs to the Yggdrasil network, but that actually does not matter.
What is important is that my IPv6 interface has no access to the Internet.
This configuration was working fine with Tor 0.4.4.6.
But after upgrading to 0.4.5.3-rc, Tor began to open not only IPv4:443 but also IPv6:443, which is, of course, then marked as unreachable.
I thought it would be possible to disable IPv6 listening on port 443 by modifying the config this way:
```
ORPort 443 IPv4Only
ORPort [ipv6_address]:ipv6_port NoAdvertise
```
But then my log file began to be flooded with messages like these:
```
Jan 22 15:55:04.000 [notice] Self-testing indicates your ORPort ipv4_address:443 is reachable from the outside. Excellent. Publishing server descriptor.
Jan 22 15:55:05.000 [notice] Guessed our IP address as [ipv6_address] (source: METHOD=INTERFACE).
Jan 22 15:55:05.000 [notice] Self-testing indicates your ORPort ipv4_address:443 is reachable from the outside. Excellent. Publishing server descriptor.
Jan 22 15:55:06.000 [notice] Guessed our IP address as [ipv6_address] (source: METHOD=INTERFACE).
Jan 22 15:55:06.000 [notice] Self-testing indicates your ORPort ipv4_address:443 is reachable from the outside. Excellent. Publishing server descriptor.
Jan 22 15:55:07.000 [notice] Guessed our IP address as [ipv6_address] (source: METHOD=INTERFACE).
...
```
Adding `Address ipv4_address` to `torrc` did not help either.
Please let me know how to properly configure the new version of Tor,
or fix the bug if this should work but is not working as intended.

Milestone: Tor: 0.4.5.x-stable; assigned to David Goulet.

Issue #40246: wrong IPv6 is used
https://gitlab.torproject.org/tpo/core/tor/-/issues/40246
Reported by peskyrabbit, 2021-01-19

Okay, Roger wanted a longer version of my issue, here we go:
These are my relays:
https://metrics.torproject.org/rs.html#search/flag:exit%20as:AS200052
let's pick this relay:
https://metrics.torproject.org/rs.html#details/D286ED33FCB28385BAD9DA3D6B12A604C0834313
this is a part of the torrc:
-cut-cut-cut-cut-
```
exit@worlddomination3:~/relayon0301$ cat torrc
ControlPort 0 # port controllers can connect to
CookieAuthentication 1 # method for controller authentication
RunAsDaemon 1 # runs as a background process
ORPort 30200 # port used for relaying traffic
DirPort 40200 # port used for mirroring directory information
Nickname relayon0301 # name for this relay
ContactInfo abuse@relayon.org # contact information in case there's an issue
RelayBandwidthRate 100 MB # limit for the bandwidth we'll use to relay
RelayBandwidthBurst 100 MB # maximum rate when relaying bursts of traffic
SocksPort 0 # prevents tor from being used as a client
ExitRelay 1 # This is an Exit
IPv6Exit 1 # IPv6 Exit
OutboundBindAddress 185.220.101.200
Address 185.220.101.200
ORPort [2a0b:f4c2:2::200]:20200
```
-cut-cut-cut-cut-
The port and the IPv4 & IPv6 addresses are hard-coded in the torrc file, which worked until I upgraded to the Tor 0.4.6.0-alpha-dev version.
Half of the relays, like https://metrics.torproject.org/rs.html#details/A14F90953AE9462CF3A862C4CA95F73BF94A6F8B,
show the correct IPv6 address: [2a0b:f4c2:2::213]:10213.
The other half ignored the torrc and, by the looks of it, grabbed the first IPv6 address they could find on the server.
That does not work at all, because my server has more than one IP:
```
exit@worlddomination3:~/relayon0301$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet 185.220.101.193/26 scope global lo
valid_lft forever preferred_lft forever
inet 185.220.101.194/26 scope global secondary lo
valid_lft forever preferred_lft forever
inet 185.220.101.195/26 scope global secondary lo
valid_lft forever preferred_lft forever
inet 185.220.101.196/26 scope global secondary lo
valid_lft forever preferred_lft forever
inet 185.220.101.197/26 scope global secondary lo
valid_lft forever preferred_lft forever
inet 185.220.101.198/26 scope global secondary lo
valid_lft forever preferred_lft forever
inet 185.220.101.199/26 scope global secondary lo
valid_lft forever preferred_lft forever
inet 185.220.101.200/26 scope global secondary lo
valid_lft forever preferred_lft forever
inet 185.220.101.201/26 scope global secondary lo
valid_lft forever preferred_lft forever
inet 185.220.101.202/26 scope global secondary lo
valid_lft forever preferred_lft forever
inet 185.220.101.203/26 scope global secondary lo
valid_lft forever preferred_lft forever
inet 185.220.101.204/26 scope global secondary lo
valid_lft forever preferred_lft forever
inet 185.220.101.205/26 scope global secondary lo
valid_lft forever preferred_lft forever
inet 185.220.101.206/26 scope global secondary lo
valid_lft forever preferred_lft forever
inet 185.220.101.207/26 scope global secondary lo
valid_lft forever preferred_lft forever
inet 185.220.101.208/26 scope global secondary lo
valid_lft forever preferred_lft forever
inet 185.220.101.209/26 scope global secondary lo
valid_lft forever preferred_lft forever
inet 185.220.101.210/26 scope global secondary lo
valid_lft forever preferred_lft forever
inet 185.220.101.211/26 scope global secondary lo
valid_lft forever preferred_lft forever
inet 185.220.101.212/26 scope global secondary lo
valid_lft forever preferred_lft forever
inet 185.220.101.213/26 scope global secondary lo
valid_lft forever preferred_lft forever
inet 185.220.101.214/26 scope global secondary lo
valid_lft forever preferred_lft forever
inet 185.220.101.215/26 scope global secondary lo
valid_lft forever preferred_lft forever
inet 185.220.101.216/26 scope global secondary lo
valid_lft forever preferred_lft forever
inet 185.220.101.217/26 scope global secondary lo
valid_lft forever preferred_lft forever
inet 185.220.101.218/26 scope global secondary lo
valid_lft forever preferred_lft forever
inet 185.220.101.219/26 scope global secondary lo
valid_lft forever preferred_lft forever
inet 185.220.101.220/26 scope global secondary lo
valid_lft forever preferred_lft forever
inet 185.220.101.221/26 scope global secondary lo
valid_lft forever preferred_lft forever
inet 185.220.101.222/26 scope global secondary lo
valid_lft forever preferred_lft forever
inet 185.220.101.223/26 scope global secondary lo
valid_lft forever preferred_lft forever
inet 185.220.101.224/26 scope global secondary lo
valid_lft forever preferred_lft forever
inet 185.220.101.225/26 scope global secondary lo
valid_lft forever preferred_lft forever
inet 185.220.101.226/26 scope global secondary lo
valid_lft forever preferred_lft forever
inet 185.220.101.227/26 scope global secondary lo
valid_lft forever preferred_lft forever
inet 185.220.101.228/26 scope global secondary lo
valid_lft forever preferred_lft forever
inet 185.220.101.229/26 scope global secondary lo
valid_lft forever preferred_lft forever
inet 185.220.101.230/26 scope global secondary lo
valid_lft forever preferred_lft forever
inet 185.220.101.231/26 scope global secondary lo
valid_lft forever preferred_lft forever
inet 185.220.101.232/26 scope global secondary lo
valid_lft forever preferred_lft forever
inet 185.220.101.233/26 scope global secondary lo
valid_lft forever preferred_lft forever
inet 185.220.101.234/26 scope global secondary lo
valid_lft forever preferred_lft forever
inet 185.220.101.235/26 scope global secondary lo
valid_lft forever preferred_lft forever
inet 185.220.101.236/26 scope global secondary lo
valid_lft forever preferred_lft forever
inet 185.220.101.237/26 scope global secondary lo
valid_lft forever preferred_lft forever
inet 185.220.101.238/26 scope global secondary lo
valid_lft forever preferred_lft forever
inet 185.220.101.239/26 scope global secondary lo
valid_lft forever preferred_lft forever
inet 185.220.101.240/26 scope global secondary lo
valid_lft forever preferred_lft forever
inet 185.220.101.241/26 scope global secondary lo
valid_lft forever preferred_lft forever
inet 185.220.101.242/26 scope global secondary lo
valid_lft forever preferred_lft forever
inet 185.220.101.243/26 scope global secondary lo
valid_lft forever preferred_lft forever
inet 185.220.101.244/26 scope global secondary lo
valid_lft forever preferred_lft forever
inet 185.220.101.245/26 scope global secondary lo
valid_lft forever preferred_lft forever
inet 185.220.101.246/26 scope global secondary lo
valid_lft forever preferred_lft forever
inet 185.220.101.247/26 scope global secondary lo
valid_lft forever preferred_lft forever
inet 185.220.101.248/26 scope global secondary lo
valid_lft forever preferred_lft forever
inet 185.220.101.249/26 scope global secondary lo
valid_lft forever preferred_lft forever
inet 185.220.101.250/26 scope global secondary lo
valid_lft forever preferred_lft forever
inet 185.220.101.251/26 scope global secondary lo
valid_lft forever preferred_lft forever
inet 185.220.101.252/26 scope global secondary lo
valid_lft forever preferred_lft forever
inet 185.220.101.253/26 scope global secondary lo
valid_lft forever preferred_lft forever
inet 185.220.101.254/26 scope global secondary lo
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:9094:9066:4388:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:1069:1282:1781:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:7644:9828:2596:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:4528:8088:8249:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:7151:9132:8936:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:4147:5001:7338:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:7544:7645:6156:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:9537:1333:5527:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:9394:5054:9439:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:1692:7763:4818:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:364:7942:9175:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:4508:6076:6213:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:7331:501:5315:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:8526:1882:7796:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:2031:9685:6149:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:9609:884:3904:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:5807:676:3729:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:2671:2000:6456:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:4991:1105:4390:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:4522:1955:5203:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:4620:3976:2652:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:6459:7111:1352:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:7646:652:7884:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:5947:8516:3005:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:8075:6939:1925:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:2454:1501:8190:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:1904:735:6298:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:4817:677:1588:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:2638:6669:2892:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:2652:1593:4484:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:3910:432:2424:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:9958:1508:4695:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:8106:5820:9061:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:9886:4210:2436:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:8932:4953:243:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:7042:49:2264:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:147:645:4149:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:4950:9130:6993:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:9281:2295:7608:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:295:470:3289:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:8982:4309:8937:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:6491:5818:1784:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:3056:2914:7038:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:6982:2759:1477:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:8064:9420:9810:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:6348:2638:1950:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:4697:2638:8928:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:2550:6237:6274:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:2893:99:1610:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:4781:8278:4000:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:6437:5675:8039:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:8829:7686:5924:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:6276:9716:7229:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:6655:6196:4235:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:6992:9958:6102:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:8497:8426:2935:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:9675:6322:2097:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:7029:5505:4758:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:2259:3969:5014:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:3411:1538:6733:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:1183:3894:2472:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:1690:7193:6369:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:4230:5675:8985:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:3731:4242:7857:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:7050:9312:4548:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:4827:3165:9188:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:1516:2389:6522:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:8238:6259:371:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:839:7187:4640:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:4698:3498:2573:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:7764:9850:6059:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:4315:4233:8160:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:4402:4870:9630:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:1065:9654:905:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:1370:1725:7274:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:5803:3403:6361:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:6875:6753:3377:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:6351:9916:2613:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:7324:610:2579:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:1524:47:1268:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:3726:967:4959:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:8859:3382:6167:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:6423:2663:6838:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:2787:6020:46:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:7017:9065:3388:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:1047:2581:7164:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:1165:2145:7425:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:554:2002:7894:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:8427:7714:2751:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:7119:2343:678:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:6217:5701:5195:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:7422:1247:8078:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:8958:4711:502:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:4421:3330:5568:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:2319:1580:6371:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012:0:4297:6159:2487:1/128 scope global
valid_lft forever preferred_lft forever
inet6 2a0b:f4c2:2::254/48 scope global
valid_lft forever preferred_lft forever
inet6 2a0b:f4c2:2::253/48 scope global
valid_lft forever preferred_lft forever
inet6 2a0b:f4c2:2::252/48 scope global
valid_lft forever preferred_lft forever
inet6 2a0b:f4c2:2::251/48 scope global
valid_lft forever preferred_lft forever
inet6 2a0b:f4c2:2::250/48 scope global
valid_lft forever preferred_lft forever
inet6 2a0b:f4c2:2::249/48 scope global
valid_lft forever preferred_lft forever
inet6 2a0b:f4c2:2::248/48 scope global
valid_lft forever preferred_lft forever
inet6 2a0b:f4c2:2::247/48 scope global
valid_lft forever preferred_lft forever
inet6 2a0b:f4c2:2::246/48 scope global
valid_lft forever preferred_lft forever
inet6 2a0b:f4c2:2::245/48 scope global
valid_lft forever preferred_lft forever
inet6 2a0b:f4c2:2::244/48 scope global
valid_lft forever preferred_lft forever
inet6 2a0b:f4c2:2::243/48 scope global
valid_lft forever preferred_lft forever
inet6 2a0b:f4c2:2::242/48 scope global
valid_lft forever preferred_lft forever
inet6 2a0b:f4c2:2::241/48 scope global
valid_lft forever preferred_lft forever
inet6 2a0b:f4c2:2::240/48 scope global
valid_lft forever preferred_lft forever
inet6 2a0b:f4c2:2::239/48 scope global
valid_lft forever preferred_lft forever
inet6 2a0b:f4c2:2::238/48 scope global
valid_lft forever preferred_lft forever
inet6 2a0b:f4c2:2::237/48 scope global
valid_lft forever preferred_lft forever
inet6 2a0b:f4c2:2::236/48 scope global
valid_lft forever preferred_lft forever
inet6 2a0b:f4c2:2::235/48 scope global
valid_lft forever preferred_lft forever
inet6 2a0b:f4c2:2::234/48 scope global
valid_lft forever preferred_lft forever
inet6 2a0b:f4c2:2::233/48 scope global
valid_lft forever preferred_lft forever
inet6 2a0b:f4c2:2::232/48 scope global
valid_lft forever preferred_lft forever
inet6 2a0b:f4c2:2::231/48 scope global
valid_lft forever preferred_lft forever
inet6 2a0b:f4c2:2::230/48 scope global
valid_lft forever preferred_lft forever
inet6 2a0b:f4c2:2::229/48 scope global
valid_lft forever preferred_lft forever
inet6 2a0b:f4c2:2::228/48 scope global
valid_lft forever preferred_lft forever
inet6 2a0b:f4c2:2::227/48 scope global
valid_lft forever preferred_lft forever
inet6 2a0b:f4c2:2::226/48 scope global
valid_lft forever preferred_lft forever
inet6 2a0b:f4c2:2::225/48 scope global
valid_lft forever preferred_lft forever
inet6 2a0b:f4c2:2::224/48 scope global
valid_lft forever preferred_lft forever
inet6 2a0b:f4c2:2::223/48 scope global
valid_lft forever preferred_lft forever
inet6 2a0b:f4c2:2::222/48 scope global
valid_lft forever preferred_lft forever
inet6 2a0b:f4c2:2::221/48 scope global
valid_lft forever preferred_lft forever
inet6 2a0b:f4c2:2::220/48 scope global
valid_lft forever preferred_lft forever
inet6 2a0b:f4c2:2::219/48 scope global
valid_lft forever preferred_lft forever
inet6 2a0b:f4c2:2::218/48 scope global
valid_lft forever preferred_lft forever
inet6 2a0b:f4c2:2::217/48 scope global
valid_lft forever preferred_lft forever
inet6 2a0b:f4c2:2::216/48 scope global
valid_lft forever preferred_lft forever
inet6 2a0b:f4c2:2::215/48 scope global
valid_lft forever preferred_lft forever
inet6 2a0b:f4c2:2::214/48 scope global
valid_lft forever preferred_lft forever
inet6 2a0b:f4c2:2::213/48 scope global
valid_lft forever preferred_lft forever
inet6 2a0b:f4c2:2::212/48 scope global
valid_lft forever preferred_lft forever
inet6 2a0b:f4c2:2::211/48 scope global
valid_lft forever preferred_lft forever
inet6 2a0b:f4c2:2::210/48 scope global
valid_lft forever preferred_lft forever
inet6 2a0b:f4c2:2::209/48 scope global
valid_lft forever preferred_lft forever
inet6 2a0b:f4c2:2::208/48 scope global
valid_lft forever preferred_lft forever
inet6 2a0b:f4c2:2::207/48 scope global
valid_lft forever preferred_lft forever
inet6 2a0b:f4c2:2::206/48 scope global
valid_lft forever preferred_lft forever
inet6 2a0b:f4c2:2::205/48 scope global
valid_lft forever preferred_lft forever
inet6 2a0b:f4c2:2::204/48 scope global
valid_lft forever preferred_lft forever
inet6 2a0b:f4c2:2::203/48 scope global
valid_lft forever preferred_lft forever
inet6 2a0b:f4c2:2::202/48 scope global
valid_lft forever preferred_lft forever
inet6 2a0b:f4c2:2::201/48 scope global
valid_lft forever preferred_lft forever
inet6 2a0b:f4c2:2::200/48 scope global
valid_lft forever preferred_lft forever
inet6 2a0b:f4c2:2::199/48 scope global
valid_lft forever preferred_lft forever
inet6 2a0b:f4c2:2::198/48 scope global
valid_lft forever preferred_lft forever
inet6 2a0b:f4c2:2::197/48 scope global
valid_lft forever preferred_lft forever
inet6 2a0b:f4c2:2::196/48 scope global
valid_lft forever preferred_lft forever
inet6 2a0b:f4c2:2::195/48 scope global
valid_lft forever preferred_lft forever
inet6 2a0b:f4c2:2::194/48 scope global
valid_lft forever preferred_lft forever
inet6 2a0b:f4c2:2::193/48 scope global
valid_lft forever preferred_lft forever
inet6 2a0b:f4c2:2::1/48 scope global
valid_lft forever preferred_lft forever
inet6 2a0a:4587:2012::1/48 scope global
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether f8:0f:41:f8:17:e2 brd ff:ff:ff:ff:ff:ff
inet 217.197.91.155/26 brd 217.197.91.191 scope global eno1
valid_lft forever preferred_lft forever
inet6 2001:67c:1401:2140::1/64 scope global
valid_lft forever preferred_lft forever
inet6 fe80::fa0f:41ff:fef8:17e2/64 scope link
valid_lft forever preferred_lft forever
3: eno2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether f8:0f:41:f8:17:e3 brd ff:ff:ff:ff:ff:ff
4: enp4s0f0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
link/ether 8c:dc:d4:a8:21:c0 brd ff:ff:ff:ff:ff:ff
5: enp4s0f1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
link/ether 8c:dc:d4:a8:21:c0 brd ff:ff:ff:ff:ff:ff
6: enp3s0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
link/ether 8c:dc:d4:a8:21:c0 brd ff:ff:ff:ff:ff:ff
7: enp3s0d1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
link/ether 8c:dc:d4:a8:21:c0 brd ff:ff:ff:ff:ff:ff
8: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 8c:dc:d4:a8:21:c0 brd ff:ff:ff:ff:ff:ff
inet6 fe80::8edc:d4ff:fea8:21c0/64 scope link
valid_lft forever preferred_lft forever
9: bond0.2000@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 8c:dc:d4:a8:21:c0 brd ff:ff:ff:ff:ff:ff
inet 185.1.74.47/26 brd 185.1.74.63 scope global bond0.2000
valid_lft forever preferred_lft forever
inet6 2001:7f8:a5::20:8294:3/64 scope global
valid_lft forever preferred_lft forever
inet6 fe80::8edc:d4ff:fea8:21c0/64 scope link
valid_lft forever preferred_lft forever
```
-cut-cut-cut-cut
So right now I have a lot of relays running on the first IPv6 address that Tor could find, which is not even an IP that Tor should be using...

Milestone: Tor: 0.4.5.x-stable. Assignee: David Goulet (dgoulet@torproject.org).

https://gitlab.torproject.org/tpo/core/tor/-/issues/40245
Assist dir auths with vote visibility (Roger Dingledine, updated 2022-10-11)

I've been accumulating a pile of local patches on moria1, specifically around logging more details during voting, and it seems smart to get them into the hands of the other dir auths too.

Milestone: Tor: 0.4.5.x-stable. Assignee: Roger Dingledine.

https://gitlab.torproject.org/tpo/core/tor/-/issues/40243
dirauth: IPv6 sybil detection should not use /64 (David Goulet, updated 2021-01-22)

Offending commit is d07f17f67685d75fec8a851b3ae3d157c1e31aa3.
Basically, a `/64` is a network given to end users; it is the minimum routable prefix on the Internet, iirc.

If dirauth sybil protection uses that, then all relays on the same network won't be able to join the Tor network. At this moment, `moria1` is rejecting 435 relays based on that behavior, because at least 3 relays are in the same network and thus all get considered sybils.
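As a rough illustration (not Tor's actual code), here is a sketch of why the prefix length matters when bucketing relay addresses for sybil detection; the addresses are taken from the `ip addr` output above:

```python
# Hypothetical sketch: group relay IPv6 addresses by prefix and count
# how many land in the same bucket. Under a /64 key, distinct hosts in
# one end-user allocation collapse into a single "sybil" bucket; under
# /128 every address stands alone.
import ipaddress
from collections import Counter

relays = [
    "2a0a:4587:2012:0:7151:9132:8936:1",
    "2a0a:4587:2012:0:4147:5001:7338:1",
    "2a0a:4587:2012:0:7544:7645:6156:1",
]

def bucket(addr: str, prefixlen: int) -> str:
    # strict=False masks off the host bits to get the network address
    net = ipaddress.ip_network(f"{addr}/{prefixlen}", strict=False)
    return str(net.network_address)

by64 = Counter(bucket(a, 64) for a in relays)
by128 = Counter(bucket(a, 128) for a in relays)

print(max(by64.values()))   # 3: all three look like one operator under /64
print(max(by128.values()))  # 1: each address is its own bucket under /128
```

With a threshold of "more than 2 relays per bucket", the /64 key would reject all three relays even though they may belong to unrelated operators on one provider network.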
The correct thing to do here, I believe, is to use `/128`, i.e. match the address only, not the network. Path selection is using `/32` here, so it is OK to allow multiple relays from the same network, as we do in IPv4, just not in the same path.

Milestone: Tor: 0.4.5.x-stable. Assignee: David Goulet (dgoulet@torproject.org).

https://gitlab.torproject.org/tpo/core/tor/-/issues/40241
Fix issue when using FALLTHROUGH with ALL_BUGS_ARE_FATAL (Nick Mathewson, updated 2021-02-05)

The combination of these two macros is causing our CI to fail.

Milestone: Tor: 0.4.5.x-stable. Assignee: Nick Mathewson.

https://gitlab.torproject.org/tpo/core/tor/-/issues/40238
0.4.5 dir auths complain often of missing proto and missing identity-ed25519 elements (Roger Dingledine, updated 2021-01-21)

On moria1 I moved to the latest git, and now I have these complaints in the logs:
```
Jan 10 14:30:32.730 [warn] Parse error: missing proto element.
Jan 10 14:30:32.730 [warn] Error tokenizing router descriptor.
Jan 10 14:30:32.730 [warn] Parse error: missing identity-ed25519 element.
Jan 10 14:30:32.730 [warn] Error tokenizing extra-info document.
Jan 10 14:31:17.252 [warn] Parse error: missing proto element.
Jan 10 14:31:17.253 [warn] Error tokenizing router descriptor.
Jan 10 14:32:14.620 [warn] Parse error: missing proto element.
Jan 10 14:32:14.620 [warn] Error tokenizing router descriptor.
Jan 10 14:34:37.349 [warn] Parse error: missing proto element.
Jan 10 14:34:37.349 [warn] Error tokenizing router descriptor.
Jan 10 14:34:37.349 [warn] Parse error: missing identity-ed25519 element.
Jan 10 14:34:37.349 [warn] Error tokenizing extra-info document.
Jan 10 14:43:05.706 [warn] Parse error: missing proto element.
Jan 10 14:43:05.706 [warn] Error tokenizing router descriptor.
Jan 10 14:43:05.706 [warn] Parse error: missing identity-ed25519 element.
Jan 10 14:43:05.706 [warn] Error tokenizing extra-info document.
Jan 10 14:44:03.037 [warn] Parse error: missing proto element.
Jan 10 14:44:03.037 [warn] Error tokenizing router descriptor.
Jan 10 14:44:03.038 [warn] Parse error: missing identity-ed25519 element.
Jan 10 14:44:03.038 [warn] Error tokenizing extra-info document.
```
I assume that modern Tor is expecting or assuming something that is not the case in practice.

Milestone: Tor: 0.4.5.x-stable. Assignee: Nick Mathewson.

https://gitlab.torproject.org/tpo/core/tor/-/issues/40237
v3 onion services require a "live" consensus to publish or fetch (Roger Dingledine, updated 2022-07-07)

If a Tor v3 onion client or a v3 onion service does not have a "live" consensus (i.e. one where "valid-until" hasn't elapsed), it will not even attempt to fetch or publish v3 onion descriptors:
```
Jan 10 11:32:24.364 [warn] No live consensus so we can't get the responsible hidden service directories.
```
This bug got exposed today because the network went several rounds without a consensus due to the ongoing issues of #33018 and #33072, and thus Tor clients and Tor onion services ended up with a consensus that still worked (it was made within the past 24 hours), but was no longer considered "live".
So normal Tor circuits (using exit relays) still worked, and v2 onion services still worked, but v3 onion services stopped working for that time period -- services wouldn't publish descriptors, and clients wouldn't fetch them.
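The two freshness notions in play can be sketched as follows; the function and field names here are illustrative, not Tor's actual structures:

```python
# Hypothetical sketch of the distinction described above: a consensus can
# be "usable" (made within the past 24 hours) while no longer "live"
# (its valid-until time has elapsed).
from datetime import datetime, timedelta, timezone

def is_live(valid_until: datetime, now: datetime) -> bool:
    """'Live': the consensus's valid-until time has not yet elapsed."""
    return now <= valid_until

def is_reasonably_recent(valid_after: datetime, now: datetime) -> bool:
    """'Usable': the consensus was made within the past 24 hours."""
    return now - valid_after <= timedelta(hours=24)

now = datetime(2020, 1, 10, 12, 0, tzinfo=timezone.utc)
valid_after = now - timedelta(hours=6)          # consensus made 6 hours ago
valid_until = valid_after + timedelta(hours=3)  # ...but expired 3 hours ago

print(is_live(valid_until, now))              # False: v3 onion code refuses it
print(is_reasonably_recent(valid_after, now)) # True: exit circuits still work
```

During the outage described here, clients held a consensus for which the second check passed but the first failed, and only the v3 onion code insisted on the stricter check.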
The fix, imo, is that we need to make v3 onion services work under the same time assumptions as the other parts of Tor -- if you have a usable consensus, then use it.

Milestone: Tor: 0.4.5.x-stable. Assignee: David Goulet (dgoulet@torproject.org).

https://gitlab.torproject.org/tpo/core/tor/-/issues/40236
configure summary misleadingly indicates library support based on enable, not have (Alex Xu, updated 2022-07-07)

When --enable-zstd, --enable-lzma, etc. are set to auto, they are listed in the configure summary as "no" regardless of whether they were detected and used.

Milestone: Tor: 0.4.5.x-stable.

https://gitlab.torproject.org/tpo/core/tor/-/issues/40234
Non-fatal assertion !(smartlist_len(outdated_dirserver_list) > TOO_MANY_OUTDATED_DIRSERVERS) (Nick Mathewson, updated 2022-07-07)

A user sent me this email saying that they'd gotten this bug on their relay:
```
tor_bug_occurred_(): Bug: ../src/feature/nodelist/microdesc.c:133:
microdesc_note_outdated_dirserver: Non-fatal assertion
!(smartlist_len(outdated_dirserver_list) > TOO_MANY_OUTDATED_DIRSERVERS)
failed. (on Tor 0.4.4.6 )
Bug: Tor 0.4.4.6: Non-fatal assertion
!(smartlist_len(outdated_dirserver_list) > TOO_MANY_OUTDATED_DIRSERVERS)
failed in microdesc_note_outdated_dirserver at
../src/feature/nodelist/microdesc.c:133. Stack trace: (on Tor 0.4.4.6 )
Bug: /usr/bin/tor(log_backtrace_impl+0x56) [0x55bdb35379d6] (on Tor
0.4.4.6 )
Bug: /usr/bin/tor(tor_bug_occurred_+0x16c) [0x55bdb3532bdc] (on Tor
0.4.4.6 )
Bug: /usr/bin/tor(microdesc_note_outdated_dirserver+0x147)
[0x55bdb34563c7] (on Tor 0.4.4.6 )
Bug: /usr/bin/tor(+0x1085fb) [0x55bdb341d5fb] (on Tor 0.4.4.6 )
Bug: /usr/bin/tor(+0x10b588) [0x55bdb3420588] (on Tor 0.4.4.6 )
Bug: /usr/bin/tor(connection_dir_reached_eof+0x29) [0x55bdb3422e09]
(on Tor 0.4.4.6 )
Bug: /usr/bin/tor(connection_handle_read+0x8bc) [0x55bdb33862dc]
(on Tor 0.4.4.6 )
Bug: /usr/bin/tor(+0x76729) [0x55bdb338b729] (on Tor 0.4.4.6 )
Bug: /lib/x86_64-linux-gnu/libevent-2.1.so.6(+0x229ba)
[0x7f0bd5d2e9ba] (on Tor 0.4.4.6 )
Bug: /lib/x86_64-linux-gnu/libevent-2.1.so.6(event_base_loop+0x5a7)
[0x7f0bd5d2f537] (on Tor 0.4.4.6 )
Bug: /usr/bin/tor(do_main_loop+0xff) [0x55bdb338c9af] (on Tor 0.4.4.6 )
Bug: /usr/bin/tor(tor_run_main+0x885) [0x55bdb33793a5] (on Tor
0.4.4.6 )
Bug: /usr/bin/tor(tor_main+0x3a) [0x55bdb33772ca] (on Tor 0.4.4.6 )
Bug: /usr/bin/tor(main+0x19) [0x55bdb3376e89] (on Tor 0.4.4.6 )
Bug: /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xeb)
[0x7f0bd561009b] (on Tor 0.4.4.6 )
Bug: /usr/bin/tor(_start+0x2a) [0x55bdb3376eda] (on Tor 0.4.4.6 )
```

Milestone: Tor: 0.4.5.x-stable. Assignee: Nick Mathewson.