Trac issues
https://gitlab.torproject.org/legacy/trac/-/issues

Issue: https://gitlab.torproject.org/legacy/trac/-/issues/34384
Title: AppVeyor builds are failing
Author: Alexander Færøy (ahf@torproject.org)
Updated: 2020-06-13T15:53:45Z
Milestone: Tor: unspecified

Our AppVeyor builds are failing. It looks like the issue is related to us not updating Pacman before we install our dependencies.

Issue: https://gitlab.torproject.org/legacy/trac/-/issues/34357
Title: Reject relays running 0.4.1
Author: Nick Mathewson
Updated: 2020-06-13T15:53:43Z
Milestone: Tor: unspecified

Now that 0.4.1 has reached end-of-life, it's time for directory authorities to stop accepting relays running it.

See #32672 for the last time we did this.

Looking at the graphs, I don't see a significant change in the drop-off rate for deprecated versions between when we announced that they were deprecated and when we finally removed them. Maybe this time we should just send out an announcement, wait a month, and then reject the relays?

Issue: https://gitlab.torproject.org/legacy/trac/-/issues/34232
Title: Make summarize_protover_flags() handle NULL and empty string the same
Author: teor
Updated: 2020-06-13T15:53:34Z
Milestone: Tor: unspecified

summarize_protover_flags(NULL, NULL) doesn't set protocols_known, but summarize_protover_flags("", "") does.
While this situation probably won't happen in practice, it could be a source of subtle bugs.
And we have a general guideline that functions should treat NULL and "" in similar ways. (Or the difference should be clearly documented.)
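As a sketch of that guideline, a helper that treats NULL and the empty string identically might look like this (the function name is hypothetical, not Tor's actual code):

```c
#include <stddef.h>

/* Hypothetical helper: treat a NULL protover list and an empty
 * protover list ("") the same way, as "no protocols listed". */
static int
protover_list_is_empty(const char *protovers)
{
  return protovers == NULL || *protovers == '\0';
}
```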
So we should ignore "" protovers, the same way we ignore NULL protovers. (Relays with empty protovers won't end up in the consensus, and clients can't use them for anything. So this change should have no real impact.)

Issue: https://gitlab.torproject.org/legacy/trac/-/issues/34167
Title: PublishServerDescriptor via IPv6
Author: Trac
Updated: 2020-06-13T15:53:30Z
Milestone: Tor: unspecified

Make it configurable on my relay which protocol is used to upload descriptors to the authorities, similar to the
```
PublishServerDescriptor
```
option.
Perhaps add an option like:
```
PublishServerDescriptorProtocol auto|4|6|or
# This option specifies how Tor will publish its descriptors when
# acting as a relay. You can choose multiple arguments, separated by
# commas.
#
# If this option is set to 4, Tor will publish its descriptors to the
# directories over IPv4. (This is useful if your server is not
# reachable over IPv6, or if you're using an IPv6 translation tunnel.)
# If it is set to 6, Tor will publish its descriptors over IPv6.
# The default is "auto", which means "if running as a relay or bridge,
# publish descriptors to the appropriate authorities over whatever is
# reachable". Another possibility is "or", meaning "publish as if you
# were an onion service", i.e. "publish over an onion circuit".
```
_Sponsor55 or prop311-prop313 may already cover this, if I missed it; they could be relevant here._
**Trac**:
**Username**: cypherpunks

Issue: https://gitlab.torproject.org/legacy/trac/-/issues/34139
Title: Build Tor without warnings or test failures with OpenSSL 3.0.0
Author: Nick Mathewson
Updated: 2020-06-13T15:53:30Z
Milestone: Tor: unspecified

According to the OpenSSL release strategy [release-strat], they're planning to release OpenSSL 3.0.0 in early Q4 of this year.
Currently, many of the APIs that Tor uses are deprecated in OpenSSL 3.0.0-alpha [openssl-3]. It's still possible to build Tor with it, but you get a lot of deprecated-item warnings. We should fix those warnings before OpenSSL 3 is released.
Further, if we build without fatal warnings, there are some test failures. We should see if they are tor bugs or new openssl bugs, and fix them in the first case or report them in the second.
I don't think we necessarily need to backport this: OpenSSL 1.1 will be supported until 2023-09-11 [release-strat], whereas support for 0.3.5 is scheduled to end on 2020-02-02.
[release-strat] https://www.openssl.org/policies/releasestrat.html
[openssl-3] https://www.openssl.org/blog/blog/2020/04/23/OpenSSL3.0Alpha1/

Issue: https://gitlab.torproject.org/legacy/trac/-/issues/34137
Title: Make sure inform_testing_reachability() reports the correct ports
Author: teor
Updated: 2020-06-13T15:53:29Z
Milestone: Tor: unspecified

In #33222, we added IPv6 ORPort support to inform_testing_reachability(). But that function checks the ORPorts and DirPort itself, rather than logging the reachability checks that were actually launched.
We'd like to pass flags so that it logs the actual reachability tests that are being run. (Rather than re-checking the relay's own routerinfo, which may have changed since the most recent reachability checks were launched.)
But inform_testing_reachability() is called when:
* the first circuit finishes building, or
* tor is reconfigured, and some circuits have already finished building.
So we need to do a bit of a refactor.
The refactor should preserve this behaviour:
* don't log until after the first circuit has finished building (rather than logging as soon as we start building reachability circuits)
And introduce this new behaviour:
* log the ports that were tested recently, rather than the ports that are currently available.
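A sketch of the flag-passing idea, with hypothetical names (not Tor's actual types or functions): record which tests were actually launched, and build the log message from that record instead of re-reading the routerinfo.

```c
#include <stdbool.h>

/* Hypothetical record of which reachability tests were launched. */
typedef struct {
  bool tested_orport_ipv4;
  bool tested_orport_ipv6;
  bool tested_dirport;
} reachability_tests_t;

/* Describe the tests that actually ran, for the
 * "testing reachability" log line. */
static const char *
describe_tests(const reachability_tests_t *t)
{
  if (t->tested_orport_ipv4 && t->tested_orport_ipv6)
    return "ORPort (IPv4 and IPv6)";
  if (t->tested_orport_ipv4)
    return "ORPort (IPv4)";
  if (t->tested_orport_ipv6)
    return "ORPort (IPv6)";
  return t->tested_dirport ? "DirPort" : "none";
}
```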
As an alternative, we could move some of the logging into the functions that actually launch the checks, and elevate some of those logs to notice level. (Note that these checks can be launched from at least 4 different locations in tor's code.)

Issue: https://gitlab.torproject.org/legacy/trac/-/issues/34088
Title: circuit_build_times_update_alpha(): Bug: Could not determine largest build time
Author: s7r
Updated: 2020-06-13T15:53:26Z
Milestone: Tor: unspecified

I don't think this is HSv3-only related, but I can only make it happen while building large numbers of rendezvous circuits with reasonable concurrency (over 50). I am not assigning the tor-hs keyword for this reason, but sticking it to the same master parent ticket.
All lines are the same. The warning is seen some 130 to 160 times during the build of a little over 100,000 rendezvous circuits.
```
Apr 03 04:05:33.000 [warn] circuit_build_times_update_alpha(): Bug: Could not determine largest build time (0). Xm is 20025ms and we've abandoned 0 out of 136 circuits. (on Tor 0.4.4.0-alpha-dev )
Apr 03 04:05:33.000 [warn] circuit_build_times_update_alpha(): Bug: Could not determine largest build time (0). Xm is 20025ms and we've abandoned 0 out of 137 circuits. (on Tor 0.4.4.0-alpha-dev )
Apr 03 04:05:33.000 [warn] circuit_build_times_update_alpha(): Bug: Could not determine largest build time (0). Xm is 20025ms and we've abandoned 0 out of 138 circuits. (on Tor 0.4.4.0-alpha-dev )
Apr 03 04:05:33.000 [warn] circuit_build_times_update_alpha(): Bug: Could not determine largest build time (0). Xm is 20025ms and we've abandoned 0 out of 139 circuits. (on Tor 0.4.4.0-alpha-dev )
Apr 03 04:05:33.000 [warn] circuit_build_times_update_alpha(): Bug: Could not determine largest build time (0). Xm is 20025ms and we've abandoned 0 out of 140 circuits. (on Tor 0.4.4.0-alpha-dev )
```
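For context, a guess at the shape of the check that fails here (a hypothetical simplification, not Tor's actual circuit_build_times code): if every recorded build time falls below the mode Xm, the "largest build time" comes out as 0 and the warning fires.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical simplification: find the largest recorded build time
 * that is at least Xm. Returns 0 when no sample qualifies, which is
 * the condition the bug warning above complains about. */
static uint32_t
largest_build_time_at_least(const uint32_t *times_ms, size_t n,
                            uint32_t xm_ms)
{
  uint32_t largest = 0;
  for (size_t i = 0; i < n; i++) {
    if (times_ms[i] >= xm_ms && times_ms[i] > largest)
      largest = times_ms[i];
  }
  return largest;
}
```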
EDIT: All lines are the same, except that it always abandons 0 out of N circuits, where N is always different, usually increasing by 1 until the bug warning disappears.

Issue: https://gitlab.torproject.org/legacy/trac/-/issues/34083
Title: Client rendezvous circuit is no longer in circuit_wait but in pending_entry_connections
Author: s7r
Updated: 2020-06-13T15:53:22Z
Milestone: Tor: unspecified

When you are creating many rendezvous client circuits with reasonable concurrency, you get tons of messages in the log file marked as bugs, like this:
```
May 01 08:55:43.000 [warn] connection_ap_attach_pending(): Bug: 0x55d877d94940 is no longer in circuit_wait. Its current state is waiting for rendezvous desc. Why is it on pending_entry_connections? (on Tor 0.4.4.0-alpha-dev )
May 01 08:55:43.000 [warn] connection_ap_attach_pending(): Bug: 0x55d8789c16c0 is no longer in circuit_wait. Its current state is waiting for rendezvous desc. Why is it on pending_entry_connections? (on Tor 0.4.4.0-alpha-dev )
May 01 08:55:43.000 [warn] connection_ap_attach_pending(): Bug: 0x55d875eef5a0 is no longer in circuit_wait. Its current state is waiting for rendezvous desc. Why is it on pending_entry_connections? (on Tor 0.4.4.0-alpha-dev )
May 01 08:55:43.000 [warn] connection_ap_attach_pending(): Bug: 0x55d876063640 is no longer in circuit_wait. Its current state is waiting for rendezvous desc. Why is it on pending_entry_connections? (on Tor 0.4.4.0-alpha-dev )
May 01 08:55:43.000 [warn] connection_ap_attach_pending(): Bug: 0x55d877b92960 is no longer in circuit_wait. Its current state is waiting for rendezvous desc. Why is it on pending_entry_connections? (on Tor 0.4.4.0-alpha-dev )
May 01 08:55:43.000 [warn] connection_ap_attach_pending(): Bug: 0x55d8764ae550 is no longer in circuit_wait. Its current state is waiting for rendezvous desc. Why is it on pending_entry_connections? (on Tor 0.4.4.0-alpha-dev )
May 01 08:55:43.000 [warn] connection_ap_attach_pending(): Bug: 0x55d878a83f00 is no longer in circuit_wait. Its current state is waiting for rendezvous desc. Why is it on pending_entry_connections? (on Tor 0.4.4.0-alpha-dev )
May 01 08:55:43.000 [warn] connection_ap_attach_pending(): Bug: 0x55d877854530 is no longer in circuit_wait. Its current state is waiting for rendezvous desc. Why is it on pending_entry_connections? (on Tor 0.4.4.0-alpha-dev )
May 01 08:55:43.000 [warn] connection_ap_attach_pending(): Bug: 0x55d878cac3a0 is no longer in circuit_wait. Its current state is waiting for rendezvous desc. Why is it on pending_entry_connections? (on Tor 0.4.4.0-alpha-dev )
May 01 08:55:43.000 [warn] connection_ap_attach_pending(): Bug: 0x55d875b8d290 is no longer in circuit_wait. Its current state is waiting for rendezvous desc. Why is it on pending_entry_connections? (on Tor 0.4.4.0-alpha-dev )
May 01 08:55:43.000 [warn] connection_ap_attach_pending(): Bug: 0x55d8788a4d70 is no longer in circuit_wait. Its current state is waiting for rendezvous desc. Why is it on pending_entry_connections? (on Tor 0.4.4.0-alpha-dev )
May 01 08:55:43.000 [warn] connection_ap_attach_pending(): Bug: 0x55d878144a30 is no longer in circuit_wait. Its current state is waiting for rendezvous desc. Why is it on pending_entry_connections? (on Tor 0.4.4.0-alpha-dev )
May 01 08:55:43.000 [warn] connection_ap_attach_pending(): Bug: 0x55d877a2dc30 is no longer in circuit_wait. Its current state is waiting for rendezvous desc. Why is it on pending_entry_connections? (on Tor 0.4.4.0-alpha-dev )
```
No sense in sending more lines, since they are all the same, just with different circuit IDs. The number of such messages exceeds 1,000 in a total of at least 100,000 built rendezvous circuits.

Issue: https://gitlab.torproject.org/legacy/trac/-/issues/34082
Title: Master ticket for client side rendezvous circuit related bugs that cause reachability problems in HSv3 land
Author: s7r
Updated: 2020-06-13T15:53:22Z
Milestone: Tor: unspecified

This is the master ticket for some reachability issues I discovered while stress testing my onionbalance v3 setup. They all occurred while handling HSv3 services.
At least two of them always occur together, but I'm handling them as separate tickets for now and keeping this master ticket to glue them together, since they all mention different stack traces.

Issue: https://gitlab.torproject.org/legacy/trac/-/issues/34068
Title: Decide how to handle control port events for IPv6 reachability self-tests
Author: teor
Updated: 2020-06-13T15:53:19Z
Milestone: Tor: unspecified

The control spec has two reachability self-test events. Here is how they are specified:

```
CHECKING_REACHABILITY
  "ORADDRESS=IP:port"
  "DIRADDRESS=IP:port"

    We're going to start testing the reachability of our external OR port
    or directory port.

REACHABILITY_SUCCEEDED
  "ORADDRESS=IP:port"
  "DIRADDRESS=IP:port"

    We successfully verified the reachability of our external OR port or
    directory port (depending on which of ORADDRESS or DIRADDRESS is
    given.)
```
And here is what tor actually sends:
```
CHECKING_REACHABILITY ORADDRESS=IPv4:port
CHECKING_REACHABILITY DIRADDRESS=IPv4:port
REACHABILITY_SUCCEEDED ORADDRESS=IPv4:port
REACHABILITY_SUCCEEDED DIRADDRESS=IPv4:port
```
When we add IPv6 reachability events, we could break some (buggy) control parsers with:
```
CHECKING_REACHABILITY ORADDRESS=[IPv6]:port
REACHABILITY_SUCCEEDED ORADDRESS=[IPv6]:port
```
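For illustration, a control-port consumer that naively splits on the first ':' would break on the bracketed IPv6 form; a tolerant split might look like this (a sketch, not code from Tor or any controller library):

```c
#include <stdio.h>
#include <string.h>

/* Sketch: split "IP:port" or "[IPv6]:port" into address and port.
 * Returns 0 on success, -1 on parse failure. Hypothetical helper,
 * not Tor's actual parser. */
static int
split_addr_port(const char *in, char *addr, size_t addrlen, int *port)
{
  const char *colon = strrchr(in, ':');  /* last ':' separates the port */
  if (!colon || colon == in)
    return -1;
  size_t len = (size_t)(colon - in);
  if (in[0] == '[') {                    /* bracketed IPv6 literal */
    if (len < 3 || in[len - 1] != ']')
      return -1;
    in++;                                /* strip the brackets */
    len -= 2;
  }
  if (len >= addrlen)
    return -1;
  memcpy(addr, in, len);
  addr[len] = '\0';
  return sscanf(colon + 1, "%d", port) == 1 ? 0 : -1;
}
```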
How should we handle this change?

Issue: https://gitlab.torproject.org/legacy/trac/-/issues/34064
Title: Add an AssumeReachable consensus parameter
Author: teor
Updated: 2020-06-13T15:53:17Z
Milestone: Tor: unspecified

Like #33224, we want a network-wide consensus parameter that makes relays and bridges assume they are reachable.
We'll be modifying the IPv4 and IPv6 reachability code, so we need to be able to respond quickly to IPv4 reachability bugs as well.

Issue: https://gitlab.torproject.org/legacy/trac/-/issues/34021
Title: Bundling .a files together in a single libtor.a file
Author: Alexander Færøy (ahf@torproject.org)
Updated: 2020-06-13T15:53:16Z
Milestone: Tor: unspecified

For iOS, there is currently a manual step in the build process where all .a files from `tor.git` are added to the Tor.framework build system.
Would it make sense for us to add a single libtor.a file for people to include as part of the build process for Tor?
I assume this is largely related to also having a libtor.so provided directly by the build system.

Issue: https://gitlab.torproject.org/legacy/trac/-/issues/34010
Title: Use io_uring when available
Author: Trac
Updated: 2020-06-13T15:53:16Z
Milestone: Tor: unspecified

io_uring is a new subsystem for asynchronous network and disk/storage I/O.
https://lwn.net/Articles/810414/
It has a great potential in handling concurrent connections and transfers over multiple sockets or disk files.
Samba has implemented this on the disk I/O side, and in my home NAS setup it almost doubled total throughput on concurrent reads.
In network situations, it is said to be able to scale to 3-5x performance.
liburing is a library for using this subsystem. I think Tor really should look at io_uring, given the massive concurrency of a relay.
In my own experience running a relay on low-end hardware for two years, the hardware was never able to fill the fiber connection. There seems to be quite a lot of internal overhead; perhaps io_uring could really help.
**References**
https://lwn.net/Articles/810414/
https://lwn.net/Articles/776428/
https://git.kernel.dk/cgit/liburing/
https://github.com/axboe/liburing
**Other projects using io_uring**
https://wiki.samba.org/index.php/Samba_4.12_Features_added/changed#.27io_uring.27_vfs_module
https://github.com/ceph/ceph/pull/27392
**Trac**:
**Username**: torry

Issue: https://gitlab.torproject.org/legacy/trac/-/issues/33978
Title: *Proxy and accept *:80 can't coexist
Author: cypherpunks
Updated: 2020-06-13T15:53:14Z
Milestone: Tor: unspecified

If *Proxy is 127.0.0.1:1234 and the accept policy is only *:80, Tor won't connect to the proxy, because the proxy's port is not 80.

Tor should connect to the *Proxy first and apply the restriction later.

Issue: https://gitlab.torproject.org/legacy/trac/-/issues/33905
Title: Adjust "large number of connections to other relays" warnings
Author: teor
Updated: 2020-06-13T15:53:08Z
Milestone: Tor: unspecified

In #33880, we adjusted the limit MIN_RELAY_CONNECTIONS_TO_WARN and the default number of connections allowed per relay.
In #33048, we will allow relays to extend via IPv6 as well as IPv4. So we should make these changes:
* warn if any relay has more than 4 connections (that is, more than 2 sides multiplied by 2 IP addresses, if there is disagreement over canonical connections).
* ignore 2 connections per relay.
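The proposed accounting can be sketched like this (constants follow the ticket's proposal; names are hypothetical, not those in channel_check_for_duplicates()):

```c
/* Proposed per-relay accounting: ignore the first 2 connections, and
 * warn only above 2 sides x 2 IP addresses = 4 connections. */
#define CONNS_IGNORED_PER_RELAY 2
#define CONNS_WARN_THRESHOLD (2 * 2)

static int
relay_conn_count_is_excessive(int n_conns)
{
  if (n_conns <= CONNS_IGNORED_PER_RELAY)
    return 0; /* ignored entirely */
  return n_conns > CONNS_WARN_THRESHOLD;
}
```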
Clients should only extend between two relays that support the Relay=3 protocol version. So we shouldn't need to backport this change.

Issue: https://gitlab.torproject.org/legacy/trac/-/issues/33894
Title: make (retroactive) proposal for DoS subsystem
Author: Roger Dingledine
Updated: 2020-06-13T15:53:04Z
Milestone: Tor: unspecified
Assignee: David Goulet (dgoulet@torproject.org)

In #24902, dgoulet speaks of a ddos-design.txt document.
But there is no actual proposal for the overall DoS subsystem.
If we have the document around, and we just never published it, this is a great chance to notice, clean it up a bit, and call it proposal three-hundred-and-something. (And then maybe turn some of it into one of the spec files if that makes sense, but, one step at a time here. :)
Motivated by this month's tor-dev thread, where all we have to show for the DoS subsystem design is a trac ticket number and a changelog entry.

Issue: https://gitlab.torproject.org/legacy/trac/-/issues/33880
Title: Confusing "Your relay has a very large number of connections to other relays" relay message
Author: Roger Dingledine
Updated: 2020-06-13T15:53:02Z
Milestone: Tor: unspecified
Assignee: Nick Mathewson

A new relay operator reports this complaint in their logs, showing up hourly:
```
Your relay has a very large number of connections to other relays. Is your
outbound address the same as your relay address? Found 13 connections to 8
relays. Found 13 current canonical connections, in 0 of which we were a
non-canonical peer. 5 relays had more than 1 connection, 0 had more than 2, and
0 had more than 4 connections.
```
I checked, and their outbound address was the same as their relay address.
Upon investigation, it looks like the redundant connections are to directory authorities.
My theory is that the change from #17592 (which went into 0.3.1.1-alpha, commit d5a151a0) is responsible: while before that canonical relay-to-relay connections would expire after either side first reached its randomized "15 to 22.5 minute" timeout, now they expire after either side reaches its "45 to 75 minute" timeout. And since directory authorities test reachability every 1280 seconds (around 21.3 minutes), that means it is expected that most relays will have duplicate canonical connections with directory authorities.
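The arithmetic behind that theory, as a rough check (my own back-of-the-envelope, not anything in Tor's code): a connection that lives for the 45 to 75 minute timeout overlaps several 21.3-minute (1280-second) test intervals, so a relay should expect roughly 3 to 4 live test connections per authority.

```c
/* Back-of-the-envelope: how many authority reachability-test
 * connections can overlap, given the connection lifetime and the
 * 1280-second test interval. Ceiling division. */
static int
max_overlapping_conns(int lifetime_s, int interval_s)
{
  return (lifetime_s + interval_s - 1) / interval_s;
}
```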
Possible fixes:
(A) Change the notice-level log to make it clearer that it's not scary, or at least it's not actionable. Maybe that means making it info-level so nobody will see it. Probably not the best option, assuming there *are* cases where we do want relay operators to hear it.
(B) In channel_check_for_duplicates(), change `MIN_RELAY_CONNECTIONS_TO_WARN 5` to a high enough number that even if we have 2 canonical conns per authority, plus a bit more, the log message still doesn't trigger.
(C) In channel_check_for_duplicates(), skip over connections to directory authorities in some way, since we know they will be special.
(D) Make connections to or from directory authorities expire quicker, on the theory that they don't really need the same level of padding protection as other connections.
(E) Your idea here?
I'd be fine with any of B,C,D. Whichever one can be done with an easy, short, and non-invasive patch is my favorite. Maybe that's "B, raise it to 30"? That would make the message trigger when we have connections to more than 30 relays and also we have more than 45 connections open. Or we could pick the more conservative "raise it to 40", on the theory that small numbers are more likely to have edge cases and less indicative of major network problems anyway.
And while we're at it, it might be smart to say in the log message what action we want the relay operator to take, e.g. "Please report:".

Issue: https://gitlab.torproject.org/legacy/trac/-/issues/33861
Title: vanguards: circ_max_megabytes applied to all connection
Author: cypherpunks
Updated: 2020-06-13T15:53:00Z
Milestone: Tor: unspecified

```
# This means that applications that require large data submission (eg
# SecureDrop or onionshare) should set this much higher
# (or set to 0 to disable):
circ_max_megabytes = 8
```
My site is less than 4 MB, so the above config is okay.
I thought vanguards only applies this limit to:
1. My onion service <--- Tor user (incoming)
2. My onion service ---> Tor user (outgoing)
However, vanguards is breaking other connections, such as:
1. apt with Tor[1]
2. wget download over Tor to clearnet site
3. curl POST something over Tor to clearnet site
Problem 1: I don't want to stop vanguards just for apt and other things.
Problem 2: I don't want to increase the circ_max_megabytes value just for this.
So could you please add a switch that applies the limit only to connections related to my onion site and ignores everything else?
say,
```
# If true, vanguards will not apply the circ_max_megabytes limit to
# non-onion connections.
# If false (the default), vanguards will apply the circ_max_megabytes
# limit to all Tor connections.
# If your circ_max_megabytes is already 0, this setting does nothing.
circ_max_mega_ignore_clearnet_destination = true
```

Issue: https://gitlab.torproject.org/legacy/trac/-/issues/33860
Title: Finish test_onionskin_answer()
Author: teor
Updated: 2020-06-13T15:53:00Z
Milestone: Tor: unspecified
Assignee: Nick Mathewson

In #33633, we finished unit tests for test_extend() and helpers.
But we didn't have time to finish test_onionskin_answer().
Let's try to test each of the cases of each `if` statement in onionskin_answer(). It's ok to mock the functions that are called by onionskin_answer().

Issue: https://gitlab.torproject.org/legacy/trac/-/issues/33850
Title: log rotation for /var/log/tor/debug.log did not close handle to old file after compression
Author: Trac
Updated: 2020-06-13T15:52:59Z
Milestone: Tor: unspecified

So / ran full after a week or so. I found the culprits with:
find /proc/*/fd -ls | grep '(deleted)'
Version: tor 0.4.2.7-1~d8.jessie+1
Config:
Standard torrc with one Hidden service, using an HTTPSProxy and notice+debug logging enabled:
Log notice file /var/log/tor/notices.log
Log debug file /var/log/tor/debug.log
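For reference, the usual fix pattern for this class of bug is to close and reopen the log file when rotation happens, so the process does not keep a descriptor to the deleted, compressed file (a generic sketch, not Tor's actual logging code):

```c
#include <stdio.h>

/* Generic rotation-safe pattern: drop the old descriptor (which may
 * point at an unlinked, compressed file) and reopen by path. */
static FILE *
reopen_log_file(FILE *old_fp, const char *path)
{
  if (old_fp)
    fclose(old_fp); /* releases the deleted inode, freeing the space */
  return fopen(path, "a");
}
```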
**Trac**:
**Username**: MaKoTor