Trac issues
https://gitlab.torproject.org/legacy/trac/-/issues

#18689: Fallback Directory Selection should exclude down relays earlier (teor, 2020-06-13)
https://gitlab.torproject.org/legacy/trac/-/issues/18689

The updateFallbackDirs.py script uses OnionOO to find a list of candidate directory mirrors, then checks the consensus download speed from each mirror.
Previously, the script allowed relays that had a good uptime history, but just happened to be down right now.
But this doesn't work any more, because those relays can't provide a consensus, so we exclude them in the final consensus download check.
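Excluding these candidates earlier could be as simple as filtering the OnionOO results on their `running` flag before any measurement is attempted. A minimal Python sketch (the `exclude_down_relays` helper and the candidate shape are illustrative assumptions, not the actual updateFallbackDirs.py code):

```python
def exclude_down_relays(candidates):
    """Drop relays that OnionOO reports as not running right now.

    Each candidate is assumed to look like an OnionOO details document,
    e.g. {"fingerprint": ..., "running": bool}. A relay that is down
    cannot serve a consensus, so measuring it later is wasted effort.
    """
    return [r for r in candidates if r.get("running", False)]

candidates = [
    {"fingerprint": "AAAA", "running": True},
    {"fingerprint": "BBBB", "running": False},  # down: would fail the download check anyway
    {"fingerprint": "CCCC"},                    # no flag: be conservative, treat as down
]
running_only = exclude_down_relays(candidates)
```

The later consensus download check still runs, but only over relays that could plausibly pass it.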
We could be smarter, and avoid the effort of that check by eliminating relays that aren't running right now from the list of fallback candidates.
Milestone: Tor: 0.2.8.x-final

#18649: Clarify comments around closing excess consensus connections (teor, 2020-06-13)
https://gitlab.torproject.org/legacy/trac/-/issues/18649

arma made a comment in #18625 about the functions that close excess consensus connections being unclear.
I've added comments to make it clearer what "excess" means, and why we won't ever close our only consensus connection attempt.
Milestone: Tor: 0.2.8.x-final

#18623: DirPort reachability test fails preventing the relay to work properly (David Goulet <dgoulet@torproject.org>, 2020-06-13)
https://gitlab.torproject.org/legacy/trac/-/issues/18623

(On git master: ea9472d)
A few seconds after the bootstrap completes, this warning appears:
```
[warn] We just marked ourself as down. Are your external addresses reachable?
```
followed by this in the info log:
```
[info] TLS error while handshaking with "XXX.XXX.XXX.XXX": http request (in SSL routines:SSL23_GET_CLIENT_HELLO:unknown state)
[info] connection_tls_continue_handshake(): tls error [misc error]. breaking connection.
...
[info] connection_dir_client_reached_eof(): 'fetch' response not all here, but we're at eof. Closing.
```
and then the relay fails to join the network, making it unusable. I tracked down the commit that introduced this regression with git bisect:
commit `2d33d192fc4dd0da2a2e038dd87b277f8e9b90de`
This is a blocker because anyone using an IPv4 DirPort address (I haven't tested on IPv6) will suffer from this. Only having an ORPort open is fine.
Milestone: Tor: 0.2.8.x-final

#18616: Make begindir advertise checks consistent with DirPort checks (toralf, 2020-06-13)
https://gitlab.torproject.org/legacy/trac/-/issues/18616

This ticket makes sure the checks that Tor does when advertising begindir support are similar to the checks it does when advertising the DirPort.
In particular:
* bridges should advertise begindir support
* authorities should always advertise begindir
* we should never advertise begindir if the network is disabled
* we should never advertise begindir if we don't have an ORPort (redundant, as we don't post descriptors without an ORPort)
* relays should handle AccountingMax like they do for DirPort
Milestone: Tor: 0.2.8.x-final (assigned: teor)

#18570: Fix memory handling in incoming cell queues (Andrea Shepard, 2020-06-13)
https://gitlab.torproject.org/legacy/trac/-/issues/18570

Per e-mail by David Goulet:
```
Hi little-t tor team!
(Please add anyone in CC if I forgot anyone, my brain is kind of fried to I'm
sure I'm forgetting someone...)
(Also, this is closed CC list because could be security related if my analysis
is wrong thus not good idea for tor-dev@ right now)
(An idea, a little-t@torproject.org that could email every dev doing work on
little-t tor code actively would be nice instead of having an epic CC chain :)
I stumble upon this yesterday while working with Rob on the tor code. At first,
it seems like a _bad_ security issue but turns out we are OK however there
seems to be some collateral dammage.
We start our code spelunking in connection_or.c (please correct me if I'm wrong
as I go because this ain't a trivial subsystem so I might interpret things
wrong).
Function connection_or_process_cells_from_inbuf() basically takes the cell from
the inbuf of the given OR connection and either handle them or queue them. Two
types of cells here either a var_cell_t or cell_t. Please follow the cell_t in
that function which is in the else statement of this if:
if (connection_fetch_var_cell_from_buf(conn, &var_cell)) {
...
} else {
HERE
}
The issue we noticed is that "cell_t cell" is on the stack and a reference to
it is passed to channel_tls_handle_cell(). This function can end up in
channel_queue_cell(). Remember, at that point the cell pointer passed to that
function is on the stack.
/* Do we need to queue it, or can we just call the handler right away? */
if (!(chan->cell_handler)) need_to_queue = 1;
if (! TOR_SIMPLEQ_EMPTY(&chan->incoming_queue))
need_to_queue = 1;
In other words:
* if we do NOT have a cell_handler function for this channel, queue it.
* if the incoming_queue of the channel is NOT empty, queue it.
It all makes sense so far. The issue _could_ have arised if we queued the cell
because we would ended up in cell_queue_entry_new_fixed() which stores the cell
pointer which is BAD!!! because stack pointer thus potentially leaking stack
bytes to the network and breaking lots of things in tor functionnalities :).
However, after testing, I realized that we _never_ end up in that code path so
why? Turns out that we never actually queue cell in the incoming_queue of the
channel. Because 1) cell_handler of the channel seems always set and 2) we only
insert a cell in the queue if the queue is not empty so how can we insert a
cell in there if we only add it if it's not empty? (bootstrap issue :)
Since I can't find any information on the reason for this code or "high level"
view of it either in the commit message or mailing list (maybe my search is
bad), here are some questions that I'm sure someone in CC can answer me :).
1) What's the intended behavior here of "incoming_queue"? I'm pretty sure (not
100%) that we are _NOT_ using it so what are we losing in theory?
2) If that feature makes sense to keep, fixing it here would require a bit more
of testing and check, at the very least _NOT_ using the stack pointer when
queueing :)
3) Should we rip it off from the code if we think we don't need it?
4) In any of those above cases, having a document/proposal/<whatever> to
explain how cell handling works (10k feet view is enough) would be very needed
here fearing too little of us know about it.
(FYI, same goes for var_cell_t I think, channel_queue_var_cell())
Thanks!
David
```
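The hazard described in the mail — queueing a pointer to storage the caller is about to reuse — can be mimicked in a few lines of Python by queueing a reference to a reused buffer versus queueing a copy. This is a hypothetical sketch, not tor's API; `queue_cell` and the buffer names are made up:

```python
from collections import deque

def queue_cell(queue, cell, copy=True):
    # Copying on enqueue is the fix: the queue owns its own bytes and is
    # immune to the caller reusing (or, in C, popping the stack frame of)
    # the original buffer.
    queue.append(bytes(cell) if copy else cell)

# Simulate connection_or_process_cells_from_inbuf() reusing one
# stack-allocated cell_t for every cell it parses:
scratch = bytearray(b"cell-1")
broken, fixed = deque(), deque()
queue_cell(broken, scratch, copy=False)  # stores a reference, like the stack pointer
queue_cell(fixed, scratch, copy=True)    # stores an independent copy
scratch[:] = b"cell-2"                   # parsing the next cell overwrites the scratch buffer

assert bytes(broken[0]) == b"cell-2"     # queued entry silently corrupted
assert bytes(fixed[0]) == b"cell-1"      # the copy preserved the original cell
```

In C the corresponding fix is for the queueing path to allocate and `memcpy` the cell, and to free it after the handler runs.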
There's no security hazard here because, as channel_t is currently used by the upper layer, the incoming cell queue never fills; but this is definitely a bug, and the correct solution is for the channel layer itself to copy cells when queueing them, and to be responsible for freeing them after the cell handler returns in that case.
Milestone: Tor: 0.2.7.x-final

#18529: Fix duplicate check for "only allow internal addresses if we are on a network with nonstandard authorities" (Nick Mathewson, 2020-06-13)
https://gitlab.torproject.org/legacy/trac/-/issues/18529

We have this code in config.c:
```
if (tor_addr_is_internal(&myaddr, 0)) {
/* make sure we're ok with publishing an internal IP */
if (!options->DirAuthorities && !options->AlternateDirAuthority) {
/* if they are using the default authorities, disallow internal IPs
* always. */
log_fn(warn_severity, LD_CONFIG,
"Address '%s' resolves to private IP address '%s'. "
"Tor servers that use the default DirAuthorities must have "
"public IP addresses.", hostname, addr_string);
tor_free(addr_string);
return -1;
}
...
```
And we now have this code in router.c (since #17153):
```
/* Like IPv4, if the relay is configured using the default
* authorities, disallow internal IPs. Otherwise, allow them. */
const int default_auth = (!options->DirAuthorities &&
!options->AlternateDirAuthority);
if (! tor_addr_is_internal(&p->addr, 0) || ! default_auth) {
ipv6_orport = p;
break;
...
```
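One way to merge them is a single helper that both config.c and router.c call, so the "default authorities" rule lives in one place. Sketched here in Python; the helper names `using_default_authorities` and `may_publish_internal_address` are hypothetical, not tor's actual functions:

```python
def using_default_authorities(options):
    """True iff neither DirAuthorities nor AlternateDirAuthority is set,
    i.e. the relay is on the public network with the standard dirauths."""
    return (not options.get("DirAuthorities")
            and not options.get("AlternateDirAuthority"))

def may_publish_internal_address(options):
    # Internal IPs are only allowed with nonstandard authorities —
    # the shared rule behind both the IPv4 and the IPv6 check above.
    return not using_default_authorities(options)

assert not may_publish_internal_address({})
assert may_publish_internal_address({"AlternateDirAuthority": ["127.0.0.1:7000"]})
```

Both call sites would then reduce to `tor_addr_is_internal(...) && !may_publish_internal_address(options)`.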
These two checks are similar, and I'd prefer that they be merged when possible.
Milestone: Tor: 0.2.9.x-final

#18517: meek is broken in Tor Browser 6.0a3 (Georg Koppen, 2022-03-22)
https://gitlab.torproject.org/legacy/trac/-/issues/18517

meek does not work any longer in Tor Browser 6.0a3. It seems this is caused by an underlying bug in tor. After some testing and bisecting, commit 23b088907fd23da417f5caf2b7b5f664f317ef4a is the first that introduces the new behavior. Trying to start meek with it results in
```
Mar 10 13:50:53.000 [notice] Ignoring directory request, since no bridge nodes are available yet.
Mar 10 13:50:54.000 [notice] Delaying directory fetches: No running bridges
```
and nothing thereafter: the startup is stalled.
Milestone: Tor: 0.2.8.x-final (assigned: teor)

#18510: "AccountingMax 20 TB" doubles the actual network load compared to "BandwidthRate 8 MB" (toralf, 2020-06-13)
https://gitlab.torproject.org/legacy/trac/-/issues/18510
With "BandwidthRate 8 MB" I observe a load of about 3-4 MByte/sec, whereas "AccountingMax 20 TB" yields a load of 6-8 MByte/sec and an advertised bandwidth of about 16.41 MB/sec (F1BE15429B3CE696D6807F4D4A58B1BFEC45C822).
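The observed numbers are consistent with AccountingMax being spread evenly over the accounting period without halving for direction. A back-of-the-envelope check — this is my own arithmetic under the assumption of a 30-day accounting period and SI terabytes, not tor's actual soft-limit code:

```python
ACCOUNTING_MAX = 20 * 10**12   # 20 TB per accounting period, in bytes
PERIOD = 30 * 24 * 3600        # assumed one-month accounting period, in seconds

rate = ACCOUNTING_MAX / PERIOD  # average byte rate that exactly spends the budget
print(round(rate / 10**6, 1), "MByte/sec")  # ≈ 7.7 MByte/sec

# ≈ 7.7 MByte/sec matches the observed 6-8 MByte/sec, i.e. roughly double the
# 3-4 MByte/sec seen with "BandwidthRate 8 MB"; counting read and write against
# the same budget would halve it to ≈ 3.9 MByte/sec per direction.
```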
Assuming that "BandwidthRate" is meant just for one direction, I wonder about the factor of 2.

#18481: Allow the fallback directory schedules to be changed outside a test network (teor, 2020-06-13)
https://gitlab.torproject.org/legacy/trac/-/issues/18481

In #4483, I made the additional schedules TestingTorNetwork. But, if it turns out they need tuning, we want to be able to change them in the default torrc in a Tor Browser release.
So they need to be turned into non-testing torrc options, and the testing values moved to the common chutney template.
This also involves carefully sanity-checking any user-supplied values for these options.
Milestone: Tor: unspecified

#18460: Relays and bridges are not counting directory requests coming in via IPv6 (Karsten Loesing, 2020-06-13)
https://gitlab.torproject.org/legacy/trac/-/issues/18460

While testing my #8786 branch I found that relays and bridges are currently not counting directory requests coming in via IPv6 at all. The reason is the `if` in the following code snippet from `directory_handle_command_get()`:
```
struct in_addr in;
tor_addr_t addr;
if (tor_inet_aton((TO_CONN(conn))->address, &in)) {
tor_addr_from_ipv4h(&addr, ntohl(in.s_addr));
geoip_note_client_seen(GEOIP_CLIENT_NETWORKSTATUS,
&addr, NULL,
time(NULL));
```
`tor_inet_aton` expects an IPv4 address in dotted-quad notation and returns 0 if it's given an IPv6 address.
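The failure mode is easy to reproduce with standard `inet_aton` semantics, here via Python's `socket` module (illustration only; tor's `tor_inet_aton` is its own C implementation with the same contract):

```python
import socket

def parses_as_ipv4(address):
    # inet_aton accepts IPv4 dotted-quad notation only; an IPv6 string
    # fails, which is exactly why the geoip_note_client_seen() call in
    # the snippet above is skipped for IPv6 directory clients.
    try:
        socket.inet_aton(address)
        return True
    except OSError:
        return False

assert parses_as_ipv4("128.31.0.39")
assert not parses_as_ipv4("2001:858:2:2:aabb:0:563b:1526")
```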
When digging deeper into Git history, I found that I had changed that code to `&TO_CONN(conn)->addr` 4 years ago and then again to the code above in 4741aa4 because "Roger notes that address and addr are two different things."
I _think_ this was a mistake and that we can fix this by just reverting 4741aa4. I'll post a branch in a minute that I tested using Chutney's "bridges+ipv6" network (together with teor's #17153 fix).
Please correct me if we should really use `address` here instead of `addr`. In that case we'll probably want to check whether `address` contains an IPv6 address string and handle that separately.
Milestone: Tor: 0.2.8.x-final

#18458: relax directory checking for unix sockets (weasel (Peter Palfrader), 2020-06-13)
https://gitlab.torproject.org/legacy/trac/-/issues/18458

I would like to create two unix sockets, one world-writeable, the other not, in the same directory, e.g., /var/lib/tor.
Currently, tor won't let me do that.
It'd be great if I could tell it to allow this action.
Milestone: Tor: 0.2.8.x-final

#18457: continues to start on unix socket open errors (weasel (Peter Palfrader), 2020-06-13)
https://gitlab.torproject.org/legacy/trac/-/issues/18457

On 0.2.8.x, Tor will no longer fail to start when it cannot open a unix SocksPort and user switching is enabled.
```
weasel@defiant:~$ sudo -H -i /usr/sbin/tor DataDirectory /home/weasel/.tor User weasel SocksPort unix:/home/weasel/test/socks
Mar 01 18:29:11.507 [notice] Tor v0.2.8.1-alpha (git-75e920591fe94bf6) running on Linux with Libevent 2.0.21-stable, OpenSSL 1.0.1k and Zlib 1.2.8.
Mar 01 18:29:11.508 [notice] Tor can't help you if you use it wrong! Learn how to be safe at https://www.torproject.org/download/download#warning
Mar 01 18:29:11.508 [notice] This version is not a stable Tor release. Expect more bugs than usual.
Mar 01 18:29:11.508 [notice] Read configuration file "/etc/tor/torrc".
Mar 01 18:29:11.000 [notice] Parsing GEOIP IPv4 file /usr/share/tor/geoip.
Mar 01 18:29:11.000 [notice] Parsing GEOIP IPv6 file /usr/share/tor/geoip6.
Mar 01 18:29:11.000 [notice] Bootstrapped 0%: Starting
Mar 01 18:29:11.000 [notice] Bootstrapped 80%: Connecting to the Tor network
Mar 01 18:29:12.000 [warn] Permissions on directory /home/weasel/test are too permissive.
Mar 01 18:29:12.000 [warn] Before Tor can create a SOCKS socket in "/home/weasel/test/socks", the directory "/home/weasel/test" needs to exist, and to be accessible only by the user account that is running Tor. (On some Unix systems, anybody who can list a socket can connect to it, so Tor is being careful.)
Mar 01 18:29:12.000 [notice] Bootstrapped 85%: Finishing handshake with first hop
Mar 01 18:29:13.000 [notice] Bootstrapped 90%: Establishing a Tor circuit
Mar 01 18:29:13.000 [notice] Tor has successfully opened a circuit. Looks like client functionality is working.
Mar 01 18:29:13.000 [notice] Bootstrapped 100%: Done
^C
```
(there is no socket when it's running)
Without user switching:
```
weasel@defiant:~$ /usr/sbin/tor DataDirectory /home/weasel/.tor User weasel SocksPort unix:/home/weasel/test/socks
Mar 01 18:30:38.444 [notice] Tor v0.2.8.1-alpha (git-75e920591fe94bf6) running on Linux with Libevent 2.0.21-stable, OpenSSL 1.0.1k and Zlib 1.2.8.
Mar 01 18:30:38.444 [notice] Tor can't help you if you use it wrong! Learn how to be safe at https://www.torproject.org/download/download#warning
Mar 01 18:30:38.444 [notice] This version is not a stable Tor release. Expect more bugs than usual.
Mar 01 18:30:38.444 [notice] Read configuration file "/etc/tor/torrc".
Mar 01 18:30:38.449 [warn] Permissions on directory /home/weasel/test are too permissive.
Mar 01 18:30:38.449 [warn] Before Tor can create a SOCKS socket in "/home/weasel/test/socks", the directory "/home/weasel/test" needs to exist, and to be accessible only by the user account that is running Tor. (On some Unix systems, anybody who can list a socket can connect to it, so Tor is being careful.)
Mar 01 18:30:38.449 [warn] Failed to parse/validate config: Failed to bind one of the listener ports.
Mar 01 18:30:38.449 [err] Reading config failed--see warnings above.
```
For comparison, 0.2.7.x:
```
drwxr-xr-x 2 weasel weasel 4096 Mar 1 18:17 test/
weasel@defiant:~$ sudo -H -i /usr/sbin/tor DataDirectory /home/weasel/.tor User weasel SocksPort unix:/home/weasel/test/socks
Mar 01 18:27:21.782 [notice] Tor v0.2.7.6 (git-605ae665009853bd) running on Linux with Libevent 2.0.21-stable, OpenSSL 1.0.1k and Zlib 1.2.8.
Mar 01 18:27:21.782 [notice] Tor can't help you if you use it wrong! Learn how to be safe at https://www.torproject.org/download/download#warning
Mar 01 18:27:21.782 [notice] Read configuration file "/etc/tor/torrc".
Mar 01 18:27:21.787 [warn] Permissions on directory /home/weasel/test are too permissive.
Mar 01 18:27:21.787 [warn] Before Tor can create a SOCKS socket in "/home/weasel/test/socks", the directory "/home/weasel/test" needs to exist, and to be accessible only by the user account that is running Tor. (On some Unix systems, anybody who can list a socket can connect to it, so Tor is being careful.)
Mar 01 18:27:21.787 [warn] Failed to parse/validate config: Failed to bind one of the listener ports.
Mar 01 18:27:21.787 [err] Reading config failed--see warnings above.
```
Milestone: Tor: unspecified

#18404: "make test-network" failed for test010bc (toralf, 2020-06-13)
https://gitlab.torproject.org/legacy/trac/-/issues/18404

```
~/devel/tor $ make test-network
make all-am
make[1]: Entering directory '/home/tfoerste/devel/tor'
make[1]: Leaving directory '/home/tfoerste/devel/tor'
./src/test/test-network.sh --hs-multi-client 1
Using Python 2.7.10
Sending SIGINT to nodes
Waiting for nodes to finish.
Removing stale lock file for test000a ...
Removing stale lock file for test001a ...
Removing stale lock file for test002a ...
Removing stale lock file for test003ba ...
Removing stale lock file for test004r ...
Removing stale lock file for test005r ...
Removing stale lock file for test006r ...
Removing stale lock file for test007r ...
Removing stale lock file for test008br ...
Removing stale lock file for test009c ...
Removing stale lock file for test011h ...
bootstrap-network.sh: boostrapping network: bridges+hs
Using Python 2.7.10
NOTE: renaming '/home/tfoerste/devel/chutney/net/nodes' to '/home/tfoerste/devel/chutney/net/nodes.1456675056'
Creating identity key /home/tfoerste/devel/chutney/net/nodes/000a/keys/authority_identity_key for test000a with /home/tfoerste/devel/tor/src/tools/tor-gencert --create-identity-key --passphrase-fd 0 -i /home/tfoerste/devel/chutney/net/nodes/000a/keys/authority_identity_key -s /home/tfoerste/devel/chutney/net/nodes/000a/keys/authority_signing_key -c /home/tfoerste/devel/chutney/net/nodes/000a/keys/authority_certificate -m 12 -a 127.0.0.1:7000
Creating identity key /home/tfoerste/devel/chutney/net/nodes/001a/keys/authority_identity_key for test001a with /home/tfoerste/devel/tor/src/tools/tor-gencert --create-identity-key --passphrase-fd 0 -i /home/tfoerste/devel/chutney/net/nodes/001a/keys/authority_identity_key -s /home/tfoerste/devel/chutney/net/nodes/001a/keys/authority_signing_key -c /home/tfoerste/devel/chutney/net/nodes/001a/keys/authority_certificate -m 12 -a 127.0.0.1:7001
Creating identity key /home/tfoerste/devel/chutney/net/nodes/002a/keys/authority_identity_key for test002a with /home/tfoerste/devel/tor/src/tools/tor-gencert --create-identity-key --passphrase-fd 0 -i /home/tfoerste/devel/chutney/net/nodes/002a/keys/authority_identity_key -s /home/tfoerste/devel/chutney/net/nodes/002a/keys/authority_signing_key -c /home/tfoerste/devel/chutney/net/nodes/002a/keys/authority_certificate -m 12 -a 127.0.0.1:7002
Creating identity key /home/tfoerste/devel/chutney/net/nodes/003ba/keys/authority_identity_key for test003ba with /home/tfoerste/devel/tor/src/tools/tor-gencert --create-identity-key --passphrase-fd 0 -i /home/tfoerste/devel/chutney/net/nodes/003ba/keys/authority_identity_key -s /home/tfoerste/devel/chutney/net/nodes/003ba/keys/authority_signing_key -c /home/tfoerste/devel/chutney/net/nodes/003ba/keys/authority_certificate -m 12 -a 127.0.0.1:7003
Using Python 2.7.10
Starting nodes
Couldn't launch test010bc (/home/tfoerste/devel/tor/src/or/tor --quiet -f /home/tfoerste/devel/chutney/net/nodes/010bc/torrc): 1
Using Python 2.7.10
test000a is running with PID 25161
test001a is running with PID 25164
test002a is running with PID 25167
test003ba is running with PID 25174
test004r is running with PID 25177
test005r is running with PID 25192
test006r is running with PID 25199
test007r is running with PID 25202
test008br is running with PID 25209
test009c is running with PID 25216
test011h is running with PID 25226
11/12 nodes are running
Makefile:7238: recipe for target 'test-network' failed
make: *** [test-network] Error 2
```

#18380: Test failure on git master (cypherpunks, 2020-06-13)
https://gitlab.torproject.org/legacy/trac/-/issues/18380

- Git master (latest commit: e88686cb2cbae39333982505f38f2d7568af4f32)
- Debian unstable i386
```
dir/v3_networkstatus: [forking]
FAIL ../src/test/test_dir.c:2053: assert(1 OP_EQ networkstatus_add_detached_signatures(con2, dsig1, "test", LOG_INFO, &msg)): 1 vs -1
[v3_networkstatus FAILED]
dir/random_weighted: OK
dir/scale_bw: OK
dir/clip_unmeasured_bw_kb: [forking]
FAIL ../src/test/test_dir.c:2053: assert(1 OP_EQ networkstatus_add_detached_signatures(con2, dsig1, "test", LOG_INFO, &msg)): 1 vs -1
[clip_unmeasured_bw_kb FAILED]
dir/clip_unmeasured_bw_kb_alt: [forking]
FAIL ../src/test/test_dir.c:2053: assert(1 OP_EQ networkstatus_add_detached_signatures(con2, dsig1, "test", LOG_INFO, &msg)): 1 vs -1
[clip_unmeasured_bw_kb_alt FAILED]
```
Milestone: Tor: 0.2.8.x-final

#18370: Apparmor prevents last tor build from starting (Trac, 2020-06-13)
https://gitlab.torproject.org/legacy/trac/-/issues/18370

tor_0.2.8.1-alpha-dev-20160222T073925Z package is broken:
Directory /var/lib/tor cannot be read: Permission denied
Failed to parse/validate config: Couldn't access/create private data directory "/var/lib/tor"
Reading config failed--see warnings above.
apparmor="DENIED" operation="open" profile="system_tor" name="/var/lib/tor/" pid=9747 comm="tor" requested_mask="r" denied_mask="r" fsuid=120 ouid=120
The previous build worked without trouble.
Also, the file tor-service-defaults-torrc-instances can be removed from the package entirely.
**Trac**:
**Username**: Ricky_Martin
Milestone: Tor: 0.2.8.x-final

#18351: Relay directory failures and logging are too aggressive (Matthew Finkel, 2020-06-13)
https://gitlab.torproject.org/legacy/trac/-/issues/18351

As mentioned in #18348, when a relay exhausts the available directory authorities, it now produces the warning below. There are three points to consider in the context of a relay:
1) The actual log message is not correct; this is completely unrelated to the firewall/reachability logic
2) I'm not certain, but I believe this is likely a normal situation on the network
3) Logging this at warn seems overly aggressive for something a user can't control
1) and 2) are actually the same point. See the attached debug log extract (including some additional messages I added). Nearly all the dir auths are marked as down due to 304 responses. Should tor be less aggressive about avoiding a directory authority after a 304? I worry that, if this is normal behavior, the network's load is not being distributed among all the directories. It seems like they're being picked off, one by one. Especially given what is in the logs: unless I'm reading it wrong, these are identical requests despite the first one succeeding (possibly another bug?):
```
--
Feb 20 05:26:37.000 [info] directory_send_command: Downloading consensus from 128.31.0.39:9131 using /tor/status-vote/current/consensus-microdesc/0232AF+14C131+23D15D+49015F+805509+D586D1+E8A9C4+ED03BB+EFCBE7.z
Feb 20 05:26:39.000 [debug] connection_dir_client_reached_eof: Received response from directory server '128.31.0.39:9131': 200 "OK" (purpose: 14)
--
Feb 20 05:27:37.000 [info] directory_send_command: Downloading consensus from 131.188.40.189 using /tor/status-vote/current/consensus-microdesc/0232AF+14C131+23D15D+49015F+805509+D586D1+E8A9C4+ED03BB+EFCBE7.z
Feb 20 05:27:38.000 [debug] connection_dir_client_reached_eof: Received response from directory server '131.188.40.189:80': 304 "Not modified" (purpose: 14)
Feb 20 05:29:37.000 [info] directory_send_command: Downloading consensus from 199.254.238.52 using /tor/status-vote/current/consensus-microdesc/0232AF+14C131+23D15D+49015F+805509+D586D1+E8A9C4+ED03BB+EFCBE7.z
Feb 20 05:29:41.000 [debug] connection_dir_client_reached_eof: Received response from directory server '199.254.238.52:80': 304 "Not modified" (purpose: 14)
```
In the attached log, router_set_status() records when a node is marked down. The preceding lines should say why. router_pick_trusteddirserver_impl() records which nodes we disqualified when we want to send a new request.
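If 304s really are routine, one candidate fix is to count them as healthy responses when deciding whether to avoid a directory server. This is hypothetical logic sketched in Python, not tor's actual bookkeeping (which lives around router_set_status() and the directory client code):

```python
def should_mark_down(http_status):
    """A 304 means "your consensus is already current": the server
    answered correctly, so only real failures should make us avoid it."""
    if http_status in (200, 304):
        return False
    return True  # timeouts, 5xx, unparseable responses, ...

assert not should_mark_down(304)
assert not should_mark_down(200)
assert should_mark_down(503)
```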
Scary backtrace:
```
[warn] router_picked_poor_directory_log: Bug: Firewall denied all OR and Dir addresses for all relays when searching for a directory. (on Tor 0.2.8.1-alpha-dev 1f679d4ae11cd976)
[warn] Bug: Node search initiated by. Stack trace: (on Tor 0.2.8.1-alpha-dev 1f679d4ae11cd976)
[warn] Bug: 0x1157e08 <log_backtrace+0x48> at /usr/local/bin/tor (on Tor 0.2.8.1-alpha-dev 1f679d4ae11cd976)
[warn] Bug: 0x10af86b <hid_serv_responsible_for_desc_id+0xdeb> at /usr/local/bin/tor (on Tor 0.2.8.1-alpha-dev 1f679d4ae11cd976)
[warn] Bug: 0x10a6a86 <router_pick_trusteddirserver+0x76> at /usr/local/bin/tor (on Tor 0.2.8.1-alpha-dev 1f679d4ae11cd976)
[warn] Bug: 0x1126183 <directory_get_from_dirserver+0x293> at /usr/local/bin/tor (on Tor 0.2.8.1-alpha-dev 1f679d4ae11cd976)
[warn] Bug: 0x10ad45a <launch_descriptor_downloads+0x4ba> at /usr/local/bin/tor (on Tor 0.2.8.1-alpha-dev 1f679d4ae11cd976)
[warn] Bug: 0x10ad21a <launch_descriptor_downloads+0x27a> at /usr/local/bin/tor (on Tor 0.2.8.1-alpha-dev 1f679d4ae11cd976)
[warn] Bug: 0x10adb6b <update_consensus_router_descriptor_downloads+0x6cb> at /usr/local/bin/tor (on Tor 0.2.8.1-alpha-dev 1f679d4ae11cd976)
[warn] Bug: 0x10ac7f6 <update_all_descriptor_downloads+0x66> at /usr/local/bin/tor (on Tor 0.2.8.1-alpha-dev 1f679d4ae11cd976)
[warn] Bug: 0x10602c8 <directory_info_has_arrived+0x48> at /usr/local/bin/tor (on Tor 0.2.8.1-alpha-dev 1f679d4ae11cd976)
[warn] Bug: 0x1128890 <connection_dir_reached_eof+0x1160> at /usr/local/bin/tor (on Tor 0.2.8.1-alpha-dev 1f679d4ae11cd976)
[warn] Bug: 0x1107f5b <connection_handle_read+0xb3b> at /usr/local/bin/tor (on Tor 0.2.8.1-alpha-dev 1f679d4ae11cd976)
[warn] Bug: 0x105f044 <connection_add_impl+0x214> at /usr/local/bin/tor (on Tor 0.2.8.1-alpha-dev 1f679d4ae11cd976)
[warn] Bug: 0x801aa538e <event_base_loop+0x81e> at /usr/local/lib/libevent-2.0.so.5 (on Tor 0.2.8.1-alpha-dev 1f679d4ae11cd976)
[warn] Bug: 0x1060cc5 <do_main_loop+0x5c5> at /usr/local/bin/tor (on Tor 0.2.8.1-alpha-dev 1f679d4ae11cd976)
[warn] Bug: 0x1062fcf <tor_main+0xdf> at /usr/local/bin/tor (on Tor 0.2.8.1-alpha-dev 1f679d4ae11cd976)
[warn] Bug: 0x105ed49 <main+0x19> at /usr/local/bin/tor (on Tor 0.2.8.1-alpha-dev 1f679d4ae11cd976)
[warn] Bug: 0x105ec41 <_start+0x1a1> at /usr/local/bin/tor (on Tor 0.2.8.1-alpha-dev 1f679d4ae11cd976)
```
Milestone: Tor: 0.2.8.x-final

#18348: Tor conflates IPv4 Dir port with IPv6 OR Port (Matthew Finkel, 2020-06-13)
https://gitlab.torproject.org/legacy/trac/-/issues/18348

Since #17840 tor prefers IPv6 addresses for client connections when they're available. This is a significant improvement, but it is not always correct in the network as it is now. Unfortunately, this affects a relay's dirconns, too. The primary problem arises when a relay attempts a descriptor upload/fetch to a directory authority with an IPv6 OR port.
Currently all configuration options allow configuring IPv6 OR ports, but none specify dir ports. When a client attempts a dir port connection, it implicitly assumes the dir port is listening on the same IP address as the OR port.
Currently most of the dir auths' Dir ports are only listening on their IPv4 address, including the dir auths with IPv6 OR addresses. An easy (but not necessarily correct) solution is for the Dir Auth operators to configure their dirauths so they accept IPv6 connections on the dir port. A better solution is for tor to know whether a dir port is IPv4 or IPv6 and choose the correct corresponding IP address.
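The "better solution" amounts to never pairing an IPv6 address with a dir port unless an IPv6 DirPort is actually known. A hedged Python sketch (the field names and `choose_dir_address` are illustrative assumptions, not tor's routerstatus layout):

```python
def choose_dir_address(status, prefer_ipv6):
    """Pick the address to dial for a DirPort connection.

    Only the IPv4 address is known to carry the DirPort today, so an
    IPv6 preference must not silently produce ipv6_addr:dir_port.
    """
    if prefer_ipv6 and status.get("ipv6_dir_port"):
        return status["ipv6_addr"], status["ipv6_dir_port"]
    return status["ipv4_addr"], status["dir_port"]

# A dir auth with an IPv6 OR address but no IPv6 DirPort (addresses taken
# from the logs below) must still be dialed over IPv4 for directory fetches:
moria1 = {"ipv4_addr": "128.31.0.39", "dir_port": 9131,
          "ipv6_addr": "2001:858:2:2:aabb:0:563b:1526"}
assert choose_dir_address(moria1, prefer_ipv6=True) == ("128.31.0.39", 9131)
```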
Now, as a relay, in fascist_firewall_allows_dir_server() we choose the destination's ipv4 address. However, when we subsequently call directory_choose_address_routerstatus() we don't remember which address we prefer:
```
  } else {
    /* We use an IPv6 address if we have one and we prefer it.
     * Use the preferred address and port if they are reachable, otherwise,
     * use the alternate address and port (if any).
     */
    have_or = fascist_firewall_choose_address_rs(status,
                                                 FIREWALL_OR_CONNECTION, 0,
                                                 use_or_ap);
  }
  have_dir = fascist_firewall_choose_address_rs(status,
                                                FIREWALL_DIR_CONNECTION, 0,
                                                use_dir_ap);
```
Therefore directory_initiate_command_rend() uses the IPv6 address by default.
As an example (with additional debug messages):
```
Feb 19 16:57:33.000 [info] router_upload_dir_desc_to_dirservers: Uploading relay descriptor to directory authorities
Feb 19 16:57:33.000 [info] directory_post_to_dirservers: Uploading an extrainfo too (length 980)
Feb 19 16:57:33.000 [debug] directory_initiate_command_rend: anonymized 0, use_begindir 0.
Feb 19 16:57:33.000 [debug] directory_initiate_command_rend: Initiating server descriptor upload
Feb 19 16:57:33.000 [debug] connection_connect: Connecting to [scrubbed]:9131.
Feb 19 16:57:33.000 [debug] connection_connect_sockaddr: Connection to socket in progress (sock 32).
Feb 19 16:57:33.000 [debug] connection_add_impl: new conn type Directory, socket 32, address 128.31.0.39, n_conns 36.
Feb 19 16:57:33.000 [info] directory_post_to_dirservers: Uploading an extrainfo too (length 980)
Feb 19 16:57:33.000 [debug] directory_initiate_command_rend: anonymized 0, use_begindir 0.
Feb 19 16:57:33.000 [debug] directory_initiate_command_rend: Initiating server descriptor upload
Feb 19 16:57:33.000 [debug] connection_connect: Connecting to [scrubbed]:80.
Feb 19 16:57:33.000 [debug] connection_connect_sockaddr: Connection to socket in progress (sock 33).
Feb 19 16:57:33.000 [debug] connection_add_impl: new conn type Directory, socket 33, address 2001:858:2:2:aabb:0:563b:1526, n_conns 37.
...
Feb 19 16:57:33.000 [debug] conn_read_callback: socket 33 wants to read.
Feb 19 16:57:33.000 [debug] connection_handle_read_impl: Closing conn after error: Connection refused (61)
Feb 19 16:57:33.000 [info] connection_close_immediate: fd 33, type Directory, state connecting, 3298 bytes on outbuf.
Feb 19 16:57:33.000 [debug] conn_close_if_marked: Cleaning up connection (fd -1).
Feb 19 16:57:33.000 [info] connection_dir_request_failed: Setting dir 2001:858:2:2:aabb:0:563b:1526 as down after failed request.
Feb 19 16:57:33.000 [debug] router_set_status: Setting 86.59.21.38 as running: 0
Feb 19 16:57:33.000 [debug] router_set_status: Marking router $847B1F850344D7876491A54892F904934E4EB85D~tor26 at 86.59.21.38 as down.
Feb 19 16:57:33.000 [debug] connection_remove: removing socket -1 (type Directory), n_conns now 47
```
(This issue is only in master, not in any released version.)
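The mismatch in the logs above can be modeled like this (an illustrative Python sketch, not tor's C code; the relay fields and both functions are hypothetical stand-ins for fascist_firewall_allows_dir_server() and the later chooser): the reachability check passes thanks to the IPv4 Dir port, but the chooser runs independently, honours the IPv6 preference, and dials an address with nothing listening.

```python
# Illustrative model of the mismatch: the reachability check and the
# address chooser run independently, and the chooser does not remember
# which address the check validated. Names are hypothetical stand-ins.

PREFER_IPV6 = True  # the client-side preference introduced by #17840

def allows_dir_server(relay):
    """Reachability check: passes if *any* address has a Dir port listening."""
    return relay["ipv4_dirport_open"] or relay["ipv6_dirport_open"]

def choose_address(relay):
    """Chooser: prefers IPv6 when present, ignoring what the check found."""
    if PREFER_IPV6 and relay["ipv6_addr"]:
        return relay["ipv6_addr"]
    return relay["ipv4_addr"]

# tor26-like dir auth: IPv6 OR address, but Dir port only listening on IPv4.
tor26 = {
    "ipv4_addr": "86.59.21.38",
    "ipv6_addr": "2001:858:2:2:aabb:0:563b:1526",
    "ipv4_dirport_open": True,
    "ipv6_dirport_open": False,
}

assert allows_dir_server(tor26)  # check passes, via the IPv4 Dir port...
addr = choose_address(tor26)     # ...but we dial the IPv6 address,
print(addr)                      # where the connection is refused.
```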
To make matters worse (and the reason I found this): eventually, after most of the IPv6-enabled dir auths are marked as down due to the connection being refused, relays get this scary thing:
```
Feb 19 09:26:53.000 [warn] router_picked_poor_directory_log: Bug: Firewall denied all OR and Dir addresses for all relays when searching for a directo
ry. (on Tor 0.2.8.1-alpha-dev 1f679d4ae11cd976)
Feb 19 09:26:53.000 [warn] Bug: Node search initiated by. Stack trace: (on Tor 0.2.8.1-alpha-dev 1f679d4ae11cd976)
Feb 19 09:26:53.000 [warn] Bug: 0x1157ff8 <log_backtrace+0x48> at /usr/local/bin/tor (on Tor 0.2.8.1-alpha-dev 1f679d4ae11cd976)
Feb 19 09:26:53.000 [warn] Bug: 0x10af99c <hid_serv_responsible_for_desc_id+0xebc> at /usr/local/bin/tor (on Tor 0.2.8.1-alpha-dev 1f679d4ae11cd976)
Feb 19 09:26:53.000 [warn] Bug: 0x10a6ae6 <router_pick_trusteddirserver+0x76> at /usr/local/bin/tor (on Tor 0.2.8.1-alpha-dev 1f679d4ae11cd976)
Feb 19 09:26:53.000 [warn] Bug: 0x1126333 <directory_get_from_dirserver+0x293> at /usr/local/bin/tor (on Tor 0.2.8.1-alpha-dev 1f679d4ae11cd976)
Feb 19 09:26:53.000 [warn] Bug: 0x10ad4ba <launch_descriptor_downloads+0x4ba> at /usr/local/bin/tor (on Tor 0.2.8.1-alpha-dev 1f679d4ae11cd976)
Feb 19 09:26:53.000 [warn] Bug: 0x10ad27a <launch_descriptor_downloads+0x27a> at /usr/local/bin/tor (on Tor 0.2.8.1-alpha-dev 1f679d4ae11cd976)
Feb 19 09:26:53.000 [warn] Bug: 0x10adbcb <update_consensus_router_descriptor_downloads+0x6cb> at /usr/local/bin/tor (on Tor 0.2.8.1-alpha-dev 1f679d4ae11cd976)
Feb 19 09:26:53.000 [warn] Bug: 0x10ac856 <update_all_descriptor_downloads+0x66> at /usr/local/bin/tor (on Tor 0.2.8.1-alpha-dev 1f679d4ae11cd976)
Feb 19 09:26:53.000 [warn] Bug: 0x10602c8 <directory_info_has_arrived+0x48> at /usr/local/bin/tor (on Tor 0.2.8.1-alpha-dev 1f679d4ae11cd976)
Feb 19 09:26:53.000 [warn] Bug: 0x1128a40 <connection_dir_reached_eof+0x1160> at /usr/local/bin/tor (on Tor 0.2.8.1-alpha-dev 1f679d4ae11cd976)
Feb 19 09:26:53.000 [warn] Bug: 0x110810b <connection_handle_read+0xb3b> at /usr/local/bin/tor (on Tor 0.2.8.1-alpha-dev 1f679d4ae11cd976)
Feb 19 09:26:53.000 [warn] Bug: 0x105f044 <connection_add_impl+0x214> at /usr/local/bin/tor (on Tor 0.2.8.1-alpha-dev 1f679d4ae11cd976)
Feb 19 09:26:53.000 [warn] Bug: 0x801aa538e <event_base_loop+0x81e> at /usr/local/lib/libevent-2.0.so.5 (on Tor 0.2.8.1-alpha-dev 1f679d4ae11cd976)
Feb 19 09:26:53.000 [warn] Bug: 0x1060cc5 <do_main_loop+0x5c5> at /usr/local/bin/tor (on Tor 0.2.8.1-alpha-dev 1f679d4ae11cd976)
Feb 19 09:26:53.000 [warn] Bug: 0x1062fcf <tor_main+0xdf> at /usr/local/bin/tor (on Tor 0.2.8.1-alpha-dev 1f679d4ae11cd976)
Feb 19 09:26:53.000 [warn] Bug: 0x105ed49 <main+0x19> at /usr/local/bin/tor (on Tor 0.2.8.1-alpha-dev 1f679d4ae11cd976)
Feb 19 09:26:53.000 [warn] Bug: 0x105ec41 <_start+0x1a1> at /usr/local/bin/tor (on Tor 0.2.8.1-alpha-dev 1f679d4ae11cd976)
```
Because we already asked the useful dir auths for descriptors, and those requests are still outstanding, we don't have any viable directories remaining. (Ignore the mention of hid_serv_responsible_for_desc_id+0xbfb; it is actually router_pick_trusteddirserver_impl().)

Tor: 0.2.8.x-final

https://gitlab.torproject.org/legacy/trac/-/issues/18286
tor 0.2.8.1-alpha-dev - dumping core on test, tor binary dumps core as well
2020-06-13T14:56:02Z, Trac

W/NetBSD 6_1_Stable i386, OpenSSL 1.1.0-pre3-dev, Libevent 2.1.5-beta
tor 0.2.8.1-alpha-dev (latest git root sources as of 2016.02.08)
```
# pwd
/usr/local/src/tor
# gmake test
gmake all-am
gmake[1]: Entering directory '/usr/local/src/tor'
gmake[1]: Leaving directory '/usr/local/src/tor'
./src/test/test
Memory fault (core dumped)
Makefile:7219: recipe for target 'test' failed
gmake: *** [test] Error 139
#
```
-------------------------------------------------
I tried to compile with debug symbols, but I think it is not working:
```
# gdb src/test/test test.core
GNU gdb (GDB) 7.3.1
Reading symbols from /usr/local/src/tor/src/test/test...done.
[New process 1]
Cannot access memory at address 0xffffff55
(gdb) bt
#0  0x006852d5 in OBJ_cleanup ()
#1  0xbb8e3880 in ?? ()
#2  0xbbbe66e0 in ?? ()
#3  0xbb8e389a in ?? ()
#4  0xbbbff510 in ?? ()
#5  0xbbbf322b in ?? ()
#6  0x00000003 in ?? ()
#7  0xbfbfebf8 in ?? ()
#8  0x00000000 in ?? ()
(gdb)
```
**Trac**:
**Username**: yancm

Tor: 0.2.8.x-final, assigned to Yawning Angel

https://gitlab.torproject.org/legacy/trac/-/issues/18261
socket listening defer code segfaults when no user is set
2020-06-13T14:54:07Z, weasel (Peter Palfrader)

```
<weasel> + if (port->is_unix_addr && !geteuid() && strcmp(options->User, "root"))
<weasel> + continue;
<weasel> is options->User guaranteed to be set?
```
Nope, it's not, as nickm, arma, and weasel concur:
Tor 0.2.8.1-alpha-dev (git-1f5cdf2b6c72ae89) died: Caught signal 11
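The crash is a NULL dereference: with no User line in the torrc, options->User stays NULL, and strcmp() dereferences it. A minimal Python model of the buggy check and a NULL-safe one (None stands in for the NULL char*; names are illustrative, not tor's actual code):

```python
# Minimal model of the segfault: None stands in for a NULL char*, and this
# strcmp() mimics C by "crashing" on a NULL argument.

def strcmp(a, b):
    if a is None or b is None:
        raise RuntimeError("SIGSEGV: strcmp() dereferenced a NULL pointer")
    return (a > b) - (a < b)

def should_skip_unsafe(is_unix_addr, euid, user):
    # mirrors: if (port->is_unix_addr && !geteuid() && strcmp(options->User, "root"))
    return is_unix_addr and euid == 0 and strcmp(user, "root") != 0

def should_skip_safe(is_unix_addr, euid, user):
    # guard the unset case first: with no User configured, never skip
    return (is_unix_addr and euid == 0 and
            user is not None and strcmp(user, "root") != 0)

# Running as root over a unix socket with no User set (user is None):
try:
    should_skip_unsafe(True, 0, None)   # the signal-11 path
except RuntimeError as e:
    print(e)
print(should_skip_safe(True, 0, None))  # False: no crash, no skip
```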
```
#2 0x00007f2079b2994c in crash_handler (sig=<optimized out>, si=<optimized out>, ctx_=<optimized out>) at ../src/common/backtrace.c:144
#3 <signal handler called>
#4 0x00007f2079ada53e in retry_listener_ports (control_listeners_only=<optimized out>, new_conns=<optimized out>, ports=<optimized out>, old_conns=<optimized out>) at ../src/or/connection.c:2401
```

Tor: 0.2.8.x-final

https://gitlab.torproject.org/legacy/trac/-/issues/18253
(Sandbox) Caught a bad syscall attempt (syscall chown)
2020-06-13T14:55:11Z, weasel (Peter Palfrader)

On 0.2.8.1-alpha:
```
tor --defaults-torrc /usr/share/tor/tor-service-defaults-torrc -f /etc/tor/torrc --RunAsDaemon 0
```
```
tail -F /var/log/tor/log
[..]
Feb 05 23:15:14.000 [notice] Bootstrapped 0%: Starting
Feb 05 23:15:14.000 [notice] Bootstrapped 80%: Connecting to the Tor network
Feb 05 23:15:14.000 [notice] Bootstrapped 85%: Finishing handshake with first hop
Feb 05 23:15:15.000 [warn] sandbox_intern_string(): Bug: No interned sandbox parameter found for /var/run/tor (on Tor 0.2.8.1-alpha )
Feb 05 23:15:15.000 [notice] Opening Control listener on /var/run/tor/control
============================================================ T= 1454710515
(Sandbox) Caught a bad syscall attempt (syscall chown)
tor(+0x14ca96)[0x7f76431b4a96]
/lib/x86_64-linux-gnu/libc.so.6(chown+0x7)[0x7f764155bf07]
/lib/x86_64-linux-gnu/libc.so.6(chown+0x7)[0x7f764155bf07]
tor(retry_all_listeners+0x80e)[0x7f764314d14e]
tor(+0x3cf1d)[0x7f76430a4f1d]
tor(+0x56108)[0x7f76430be108]
tor(+0x3d34c)[0x7f76430a534c]
/usr/lib/x86_64-linux-gnu/libevent-2.0.so.5(+0x12584)[0x7f76426f3584]
/usr/lib/x86_64-linux-gnu/libevent-2.0.so.5(event_base_loop+0x7fc)[0x7f76426f13dc]
tor(do_main_loop+0x274)[0x7f76430a8d34]
tor(tor_main+0x1a55)[0x7f76430ac355]
tor(main+0x19)[0x7f76430a4979]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf5)[0x7f76414a0b45]
tor(+0x3c9c9)[0x7f76430a49c9]
```
This doesn't happen on 0.2.7.x.

Tor: 0.2.8.x-final, assigned to Nick Mathewson
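The failure mode is generic to seccomp-style sandboxing: tor's sandbox is an allowlist of syscalls, and the backtrace shows retry_all_listeners() reaching chown() (presumably to hand the control socket to the configured user), which is not on the list, so the kernel kills the process. An illustrative Python model follows; the allowlist contents are made up, and tor's real sandbox is a libseccomp BPF filter:

```python
# Illustrative model of an allowlist sandbox: a syscall not on the list
# terminates the process, as seccomp does. The allowlist here is made up.

ALLOWED_SYSCALLS = {"read", "write", "open", "close", "bind", "listen", "chmod"}

class BadSyscall(Exception):
    """Stands in for the kernel killing the process (SIGSYS)."""

def syscall(name, *args):
    if name not in ALLOWED_SYSCALLS:
        raise BadSyscall(f"(Sandbox) Caught a bad syscall attempt (syscall {name})")
    return 0

# A syscall on the allowlist succeeds...
syscall("chmod", "/var/run/tor/control", 0o660)
# ...but chown is not on the list, so the process dies:
try:
    syscall("chown", "/var/run/tor/control", "debian-tor")
except BadSyscall as e:
    print(e)
```

The fix is to extend the filter to permit chown on the paths tor actually needs, rather than to drop the sandbox.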