Tor issues
https://gitlab.torproject.org/tpo/core/tor/-/issues
Last updated: 2023-06-08T17:51:54Z

Issue #40169: Circuit Build Timeout code needs cleanup
Author: Mike Perry | Updated: 2023-06-08T17:51:54Z
https://gitlab.torproject.org/tpo/core/tor/-/issues/40169

There are two places where we time out circuits: `circuit_expire_building()` and `circuit_build_times_handle_completed_hop()`. `circuit_expire_building()` is filled with 19 years of cruft and complexity, and it only operates at *second* resolution instead of milliseconds.
These probably only affect timeouts in rare cases: https://gitlab.torproject.org/tpo/core/tor/-/issues/40157 seems to show that with the fixes from https://gitlab.torproject.org/tpo/core/tor/-/issues/40168 we get very close to the target 20% timeout rate. But there is so much old cruft here that we should clean it up anyway; in some edge cases it might hurt UX badly.
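For context, the Circuit Build Timeout subsystem aims its cutoff at a quantile of observed build times (the quantile defaults to 80%, which matches the ~20% target rate above). The following is a rough millisecond-resolution sketch of such a cutoff; note that C-tor actually fits a Pareto model to the observed times, so the raw empirical quantile here is a simplification for illustration only:

```python
# Rough sketch of a quantile-based circuit build timeout at millisecond
# resolution. C-tor fits a Pareto model to observed build times; taking
# the raw empirical quantile is a simplification for illustration.

def build_timeout_ms(build_times_ms, quantile=0.8):
    # With quantile=0.8, roughly 20% of future circuits are expected to
    # exceed the timeout, matching the target timeout rate.
    ordered = sorted(build_times_ms)
    idx = min(int(len(ordered) * quantile), len(ordered) - 1)
    return ordered[idx]

timeout = build_timeout_ms([250, 300, 400, 550, 800, 1200, 2000, 3500, 5000, 9000])
```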
This is especially true for onion services, which rely primarily on `circuit_expire_building()`; this likely has many bad performance consequences for them.
Label: Sponsor 61 - Making the Tor network faster & more reliable for users in Internet-repressive places | Assignee: Mike Perry

Issue #19162: Make it even harder to become HSDir
Author: George Kadianakis | Updated: 2023-03-13T09:57:24Z
https://gitlab.torproject.org/tpo/core/tor/-/issues/19162

In legacy/trac#8243 we started requiring the `Stable` flag for becoming an HSDir, but this is still not hard enough for motivated adversaries. Hence we need to make it even harder for a relay to become an HSDir, so that only relays that have been around for long get the flag. After prop224 is deployed, there will be less incentive for adversaries to become HSDirs, since they won't be able to harvest onion addresses.
Until then, our current plan is to increase the bandwidth and uptime required to become an HSDir to something almost unreasonable: for example, requiring an uptime of over 6 months, or perhaps requiring that the relay is in the top quarter of uptimes on the network.
Milestone: Tor: unspecified | Assignee: Roger Dingledine

Issue #40767: Investigate high circuit build error rates in simulation
Author: gabi-250 | Updated: 2023-04-12T14:46:43Z
https://gitlab.torproject.org/tpo/core/tor/-/issues/40767

We ran some Shadow simulations to debug/reproduce the issue from #40570, and @jnewsome noticed that the onion service clients have consistently high [circuit build failure rates](https://gitlab.torproject.org/tpo/core/tor/-/issues/40570#note_2883257).
We should figure out what causes these circuit build failures.

Issue #40766: Introduce additional HS client timeouts
Author: gabi-250 | Updated: 2023-04-12T14:46:37Z
https://gitlab.torproject.org/tpo/core/tor/-/issues/40766

Today tor terminates any circuit that takes too long to build (`circuit_build_times_handle_completed_hop()`, `circuit_expire_building()`). In addition to this circuit build timeout, we might want to introduce timeouts for circuits that were built successfully but are stuck waiting for:
* `INTRODUCE_ACK` (for intro circuits)
* `RENDEZVOUS_ESTABLISHED` (for rend circuits)
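A minimal sketch of what such a post-build timeout could look like; all names here are hypothetical, and the real change would live in the C-tor (or arti) HS client code:

```python
import time

# Hypothetical sketch of a post-build timeout: once a circuit is fully
# built, it must receive its awaited cell (INTRODUCE_ACK or
# RENDEZVOUS_ESTABLISHED) within `timeout` seconds, or be expired.

class PendingCellWatchdog:
    def __init__(self, timeout=30.0, clock=time.monotonic):
        self.timeout = timeout
        self.clock = clock
        self.waiting = {}  # circuit id -> deadline

    def circuit_built(self, circ_id):
        # Start the clock once the circuit is open but still awaiting its ack.
        self.waiting[circ_id] = self.clock() + self.timeout

    def cell_received(self, circ_id):
        # The awaited INTRODUCE_ACK / RENDEZVOUS_ESTABLISHED arrived in time.
        self.waiting.pop(circ_id, None)

    def expire_stuck(self):
        # Return (and forget) circuits whose awaited cell never arrived.
        now = self.clock()
        stuck = [c for c, dl in self.waiting.items() if dl <= now]
        for c in stuck:
            del self.waiting[c]
        return stuck
```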
cc @dgoulet, who suggested this potential improvement for c-tor/arti.

Issue #40717: Additional metricsport stats for various stages of onionservice handshake
Author: Mike Perry | Updated: 2023-12-07T14:41:35Z
https://gitlab.torproject.org/tpo/core/tor/-/issues/40717

If we export additional onion service metrics, such as time measurements for the HSDIR, INTRO, and REND stages of circuit setup on both the client and service side, plus the number of timeouts/failures there, it would help uncover the root cause of issues like https://gitlab.torproject.org/tpo/core/tor/-/issues/40570 and related reliability and connectivity issues with onion services.
We can also export congestion control info from https://gitlab.torproject.org/tpo/core/tor/-/issues/40708 to the onion service metrics set, which can help us tune congestion control for onion services.
We can then hook up the OnionPerf onion service instances to our Grafana dashboard and gather more detailed stats that way, as a supplement to the metrics that get graphed on the metrics website.

Issue #40716: Implement conflux for onion services
Author: Mike Perry | Updated: 2022-11-28T14:01:05Z
https://gitlab.torproject.org/tpo/core/tor/-/issues/40716

Conflux is traffic splitting: it will increase throughput and reduce latency for onion services after a connection has been established, by routing traffic over multiple paths, or via the lowest-latency path to a service.
This ticket is for the onion service pieces of conflux; the main conflux ticket is https://gitlab.torproject.org/tpo/core/tor/-/issues/40593, and we will not be implementing the onion service pieces as part of that ticket. It can be done later, if any onion service sponsors care about latency or throughput.
The pieces for onion services are:
- **Negotiation**
- [ ] Protover Advertisement for Onions (24h)
- [ ] Rend circuit linking (40h)
This is specified in https://gitlab.torproject.org/tpo/core/torspec/-/blob/main/proposals/329-traffic-splitting.txt, but we probably want to allow onion services to configure their scheduler by manually choosing either BLEST or LowRTT, since different kinds of onion services may want to optimize for either throughput or latency.
There may be some additional work wrt making sure linked edge conns work properly, if they are handled differently for the onion service case.
Also, some Shadow validation and performance testing will be needed: maybe 40h or so of dev time (though much longer wall-clock time).

Issue #40702: Single Onion Service Rends become 7 hop after retry
Author: Mike Perry | Updated: 2023-02-09T16:22:00Z
https://gitlab.torproject.org/tpo/core/tor/-/issues/40702

In `retry_service_rendezvous_point()`, if a rend connect fails for a non-anonymous rend, we promote it to a 7-hop slow rend for some reason.
This will impact non-anonymous onions that want performance, especially during the DoS.
David notes that this decision to fall back to full anonymous mode in the event of timeout or failure was explicitly written just in case a non-anonymous onion service was also behind a restrictive firewall, and that firewall was the thing that happened to cause a timeout. There is also a comment that explains this, believe it or not. Back then, decision making in C-Tor was a bit more...special.
I bet if we get funders who actually care about single onion performance, they would prefer that their single onions not randomly double in latency on a timeout or failure, just to support the case where some single onion out there might be behind a firewall that they don't know about. Such a funder might suggest that we provide some other option for people behind firewalls to use, instead of this madness.
But I look forward to more research.

Issue #40543: Onion Service dies randomly after ~1-5 days
Author: maqp | Updated: 2022-10-24T20:47:49Z
https://gitlab.torproject.org/tpo/core/tor/-/issues/40543

### Summary

An Onion Service created with Stem stays up for a seemingly random 1-5 days, then goes permanently down.
### Steps to reproduce:
1. Use Tor+Stem+Flask to start an ephemeral Onion Service web server
2. Wait up to 5 days
3. See that at some point it's no longer possible to connect to the Onion Service
I wrote a [script](https://gist.github.com/maqp/0e5dcf542ebb97baf98d198115e931ea) that reproduces the issue, but per the current discussion on the `tor-talk` mailing list (Tor users from Finland jumped from 25 000 to 200 000), the script may have had an unintentional side-effect of creating new users, so running it as-is may mess up the metrics graphs. It therefore definitely needs review (and most likely revision) before use.
### What is the current bug behavior?
Onion Service goes down unpredictably and does not recover.
### What is the expected behavior?
Onion Service stays up until it's turned off by the host.
### Environment
OS
- Ubuntu 21.04
- Lubuntu 21.04
Tor
- Tor version: 0.4.5.6
- Tor install method: APT
Stem
- Stem version: 1.8.0
- Stem install method: PIP
### Relevant logs and/or screenshots:
Below are the logs from two instances of the bug-reproducing script. As the delay before the service goes permanently down is random, I had the script collect data about the server up-times. Note that the timestamp of the `Main: Client reports` line is intentionally always one hour after the associated `Server last seen` timestamp.
Two things stand out here. First, the Onion Service stays up for a seemingly random 1-5 days; since it's more or less random, it's hard to be sure about the range. Second, and more interesting, the "Server last seen" timestamps often fall exactly on the hour, but not always (I didn't test general online connectivity, so it's hard to say whether those anomalies are related to DSL availability).
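A small stdlib-only helper for eyeballing that on-the-hour pattern; the timestamp format and sample values are taken from the logs below:

```python
from datetime import datetime

# Classify "Server last seen" timestamps as exactly on the hour or not.
# Timestamp format copied from this report's logs: DD-MM-YY HH:MM:SS.
def on_the_hour(stamp):
    t = datetime.strptime(stamp, "%d-%m-%y %H:%M:%S")
    return t.minute == 0 and t.second == 0

last_seen = [
    "20-12-21 00:00:00",
    "21-12-21 03:00:02",
    "26-12-21 15:00:00",
    "27-12-21 17:40:43",
]
flags = [on_the_hour(s) for s in last_seen]
```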
Test instance 1
18-12-21 22:02:02 - Main: Starting bug testing.
20-12-21 01:00:52 - Main: Client reports
Server first seen : 18-12-21 22:02:10
Server last seen : 20-12-21 00:00:00
Server's been down for : 01h 00m 37.7s
Restarting with new Onion Service
21-12-21 04:02:10 - Main: Client reports
Server first seen : 20-12-21 01:01:02
Server last seen : 21-12-21 03:00:02
Server's been down for : 01h 00m 06.6s
Restarting with new Onion Service
26-12-21 16:01:15 - Main: Client reports
Server first seen : 21-12-21 04:02:18
Server last seen : 26-12-21 15:00:00
Server's been down for : 01h 00m 02.3s
Restarting with new Onion Service
27-12-21 18:42:49 - Main: Client reports
Server first seen : 26-12-21 16:01:22
Server last seen : 27-12-21 17:40:43
Server's been down for : 01h 00m 04.2s
Restarting with new Onion Service
28-12-21 22:01:05 - Main: Client reports
Server first seen : 27-12-21 18:42:56
Server last seen : 28-12-21 21:00:00
Server's been down for : 01h 00m 01.6s
Restarting with new Onion Service
30-12-21 01:01:20 - Main: Client reports
Server first seen : 28-12-21 22:01:11
Server last seen : 30-12-21 00:00:00
Server's been down for : 01h 00m 02.7s
Restarting with new Onion Service
31-12-21 03:18:37 - Main: Client reports
Server first seen : 30-12-21 01:01:27
Server last seen : 31-12-21 02:18:24
Server's been down for : 01h 00m 06.3s
Restarting with new Onion Service
05-01-22 17:01:00 - Main: Client reports
Server first seen : 31-12-21 03:18:55
Server last seen : 05-01-22 16:00:00
Server's been down for : 01h 00m 04.2s
Restarting with new Onion Service
06-01-22 20:00:58 - Main: Client reports
Server first seen : 05-01-22 17:01:07
Server last seen : 06-01-22 19:00:01
Server's been down for : 01h 00m 21.0s
Restarting with new Onion Service
08-01-22 00:00:33 - Main: Client reports
Server first seen : 06-01-22 20:01:15
Server last seen : 07-01-22 23:00:00
Server's been down for : 01h 00m 16.7s
Restarting with new Onion Service
09-01-22 03:00:56 - Main: Client reports
Server first seen : 08-01-22 00:00:45
Server last seen : 09-01-22 02:00:01
Server's been down for : 01h 00m 03.4s
Restarting with new Onion Service
---
Test instance 2
18-12-21 22:02:03 - Main: Starting bug testing.
20-12-21 01:00:24 - Main: Client reports
Server first seen : 18-12-21 22:02:14
Server last seen : 20-12-21 00:00:00
Server's been down for : 01h 00m 03.1s
Restarting with new Onion Service
21-12-21 04:00:45 - Main: Client reports
Server first seen : 20-12-21 01:00:33
Server last seen : 21-12-21 03:00:01
Server's been down for : 01h 00m 02.7s
Restarting with new Onion Service
26-12-21 16:00:41 - Main: Client reports
Server first seen : 21-12-21 04:00:57
Server last seen : 26-12-21 15:00:01
Server's been down for : 01h 00m 06.1s
Restarting with new Onion Service
27-12-21 19:17:31 - Main: Client reports
Server first seen : 26-12-21 16:00:51
Server last seen : 27-12-21 17:57:09
Server's been down for : 01h 01m 40.2s
Restarting with new Onion Service
28-12-21 23:00:48 - Main: Client reports
Server first seen : 27-12-21 19:17:41
Server last seen : 28-12-21 22:00:00
Server's been down for : 01h 00m 02.2s
Restarting with new Onion Service
30-12-21 02:02:41 - Main: Client reports
Server first seen : 28-12-21 23:00:57
Server last seen : 30-12-21 01:00:01
Server's been down for : 01h 01m 32.1s
Restarting with new Onion Service
31-12-21 17:14:59 - Main: Client reports
Server first seen : 30-12-21 02:02:50
Server last seen : 31-12-21 16:13:50
Server's been down for : 01h 00m 55.9s
Restarting with new Onion Service
01-01-22 20:01:21 - Main: Client reports
Server first seen : 31-12-21 17:15:31
Server last seen : 01-01-22 19:00:02
Server's been down for : 01h 00m 25.6s
Restarting with new Onion Service
02-01-22 23:01:32 - Main: Client reports
Server first seen : 01-01-22 20:01:54
Server last seen : 02-01-22 22:00:03
Server's been down for : 01h 00m 46.2s
Restarting with new Onion Service
04-01-22 02:01:17 - Main: Client reports
Server first seen : 02-01-22 23:01:44
Server last seen : 04-01-22 01:00:00
Server's been down for : 01h 00m 27.1s
Restarting with new Onion Service
05-01-22 05:01:01 - Main: Client reports
Server first seen : 04-01-22 02:01:31
Server last seen : 05-01-22 04:00:00
Server's been down for : 01h 00m 24.5s
Restarting with new Onion Service
10-01-22 17:02:04 - Main: Client reports
Server first seen : 05-01-22 05:01:12
Server last seen : 10-01-22 16:00:03
Server's been down for : 01h 00m 15.1s
Restarting with new Onion Service
11-01-22 20:01:28 - Main: Client reports
Server first seen : 10-01-22 17:02:16
Server last seen : 11-01-22 19:00:00
Server's been down for : 01h 01m 04.7s
Restarting with new Onion Service
12-01-22 23:02:50 - Main: Client reports
Server first seen : 11-01-22 20:02:08
Server last seen : 12-01-22 22:00:40
Server's been down for : 01h 00m 49.0s
Restarting with new Onion Service
### Possible fixes:
No idea; this might be a bug in Tor core, or it might have something to do with Stem.

Issue #40475: integrate v3 onion service key generation into tor
Author: Nathan Freitas | Updated: 2024-03-05T19:11:18Z
https://gitlab.torproject.org/tpo/core/tor/-/issues/40475

Currently, a separate openssl command is needed to generate the keys used for v3 onion service client authentication. This makes it difficult for a mobile application, which generally doesn't have the ability to run openssl commands in a shell, to enable client authentication on the v3 onions it is hosting.
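Until such integration lands, the keypair can be generated with any X25519 implementation. The stdlib-only sketch below shows private-key generation and the base32 encoding tor expects; deriving the matching public key needs a curve25519 library, so that step is omitted. The file-layout comment is an assumption based on the usual `ClientOnionAuthDir` description, not taken from this ticket:

```python
import base64
import os

# Sketch: generate a raw X25519 private key and encode it the way tor's
# client-auth files expect (uppercase base32, no '=' padding). Deriving
# the matching public key needs a curve25519 implementation, which the
# stdlib lacks, so that step is omitted here.

def new_x25519_private_key():
    key = bytearray(os.urandom(32))
    # Standard X25519 clamping (RFC 7748).
    key[0] &= 248
    key[31] &= 127
    key[31] |= 64
    return bytes(key)

def b32(key):
    return base64.b32encode(key).decode("ascii").rstrip("=")

priv = new_x25519_private_key()
encoded = b32(priv)
# A client-side file in ClientOnionAuthDir would then look roughly like:
#   <56-char-onion-address>:descriptor:x25519:<encoded private key>
```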
This is specifically relevant to work on Onion Share mobile apps for iOS and Android.
Please let me know what options we might have in C, Rust, or Go to generate these keys, or whether it is likely/possible to integrate this step into tor itself, as it was with v2 keys.

Issue #40108: ADD_ONION command returns "Missing 'Port' argument" when null terminator is present at end
Author: richard | Updated: 2022-09-01T21:42:49Z
https://gitlab.torproject.org/tpo/core/tor/-/issues/40108

So I naively sent:
"ADD_ONION ED25519-V3:$(base64PrivateKey)\0"
over the control port, including the NULL terminator from my C string. Rather than yelling at me for sending non-printable characters (or something similar), the command returned "Missing 'Port' argument".
Seems weird.
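On the controller side, one defensive option is to validate command strings before they reach the control port. This is a hypothetical client-side helper, not a description of tor's behavior:

```python
# Sketch: reject control-port commands containing non-printable bytes
# (such as a stray C-string NUL) before they reach tor, instead of
# letting tor mis-parse them. Hypothetical client-side helper.

def validate_control_command(cmd: str) -> str:
    if any(ord(ch) < 0x20 and ch not in "\r\n" for ch in cmd):
        raise ValueError("control command contains non-printable characters")
    # Control-port commands are CRLF-terminated.
    return cmd if cmd.endswith("\r\n") else cmd + "\r\n"

ok = validate_control_command("ADD_ONION NEW:ED25519-V3 Port=80")
```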
EDIT:
tor --version -> "Tor version 0.4.3.6"

Issue #40090: ONION_CLIENT_AUTH_ADD persistence error unhelpfully vague
Author: Damian Johnson | Updated: 2022-02-28T19:41:25Z
https://gitlab.torproject.org/tpo/core/tor/-/issues/40090

I tried to add an integ test for persisting hidden service credentials to disk (calling ONION_CLIENT_AUTH_ADD with a "Permanent" flag), but the error response I received from tor is unhelpfully vague...
> Unable to store creds for "yvhz3ofkv7gwf5hpzqvhonpr3gbax2cc7dee3xcnt7dmtlx2gu7vyvid"
It's possible that there's an issue on my end, or that the feature doesn't work. Unfortunately this response is too nebulous for me to troubleshoot.

Issue #40064: hs: Implement self reachability test
Author: David Goulet (dgoulet@torproject.org) | Updated: 2021-09-27T16:40:27Z
https://gitlab.torproject.org/tpo/core/tor/-/issues/40064

I have not thought about the anonymity problems this could bring, if any; this is more a pragmatic take on how it could be useful.
Onion services can sometimes find themselves unable to publish a descriptor, for many different reasons, or even due to bugs that we keep finding. Even under severe DDoS they become unavailable without knowing it.
What if they regularly ran self reachability tests? On failure, they could take some "recovery actions" and also dump their state so the situation is more easily debuggable.
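The loop described above could be sketched roughly like this; `probe` and `recover` are hypothetical stand-ins for a real reachability check and a recovery/state-dump action:

```python
# Sketch of a self-reachability watchdog: probe our own onion address via
# an injected `probe` callable; after `max_failures` consecutive failures,
# run the recovery callback. All names here are hypothetical.

class ReachabilityWatchdog:
    def __init__(self, probe, recover, max_failures=3):
        self.probe = probe
        self.recover = recover
        self.max_failures = max_failures
        self.failures = 0

    def tick(self):
        # Called periodically, e.g. once per consensus period.
        if self.probe():
            self.failures = 0
            return False
        self.failures += 1
        if self.failures >= self.max_failures:
            self.recover()  # e.g. re-upload descriptor, dump state
            self.failures = 0
            return True
        return False

results = iter([True, False, False, False])
recoveries = []
w = ReachabilityWatchdog(lambda: next(results),
                         lambda: recoveries.append("dump state"))
ticks = [w.tick() for _ in range(4)]
```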
They could also export that status to something like tor#40063, which could very quickly inform the operator, who could in turn correlate it with their own stats to tell whether this is a tor problem, a DDoS, a network issue, or anything to that end.

Issue #33704: Understand code performance of onion services under DoS
Author: George Kadianakis | Updated: 2022-10-17T19:25:12Z
https://gitlab.torproject.org/tpo/core/tor/-/issues/33704

We need to do the following experiments to understand more about the performance of Tor under simulated DoS conditions:
1) Get vanilla profile (for legacy/trac#30221) [VANILLA profile] [Also get with INTRO2 rate limiting]
2) See effect of DoS on intro and guard [VANILLA profile] [Also with INTRO2 rate limiting]
3) Investigate control port experiment (with STREAM events enabled and trying to kill circs with CLOSECIRCUIT) [VANILLA profile]
wrt https://lists.torproject.org/pipermail/tor-dev/2019-December/014097.html
NEED: Controller script that logs circuit events and tries to kill some circuits [asn]
4) Investigate size of replay cache (legacy/trac#26294) [VANILLA profile] [asn]
5) Investigate capacity of reestablish intro circuit (legacy/trac#26294) [VANILLA profile]
6) Compare intro/rend profiles (value of prop255 / legacy/trac#17254) [VANILLA profile]
7~) Investigate horizontal scaling with OB (scale to 2/4/8 instances, extrapolate onwards.) [OBv3 profile]
8~) Investigate pinned paths with HSLayer2Node HSLayer3Node [Vanguard profile]

Issue #33129: Tor node that is not part of the consensus should not be used as rendezvous point with the onion service
Author: cypherpunks | Updated: 2022-10-24T20:53:07Z
https://gitlab.torproject.org/tpo/core/tor/-/issues/33129

According to this article, an attacker is able to choose a server that is running Tor but is not part of the Tor network as a rendezvous point with the onion service, so that he can discover which family the onion service's guard node belongs to, and then use that information to DDoS the Tor nodes in that family so that the onion service drops that guard node and instead chooses his Tor node as a guard node.
https://www.hackerfactor.com/blog/index.php?/archives/868-Deanonymizing-Tor-Circuits.html

Issue #31632: hs-v3: Service doesn't re-upload descriptor on circuit failure
Author: David Goulet (dgoulet@torproject.org) | Updated: 2021-06-23T17:19:23Z
https://gitlab.torproject.org/tpo/core/tor/-/issues/31632

I'm observing, quite often actually, a service posting its descriptor to an HSDir but the circuit collapsing due to remote reason `CHANNEL_CLOSED`.
This is possible for many reasons where a link between two relays failed/disconnected/closed/...
However, we do not retry the upload after that, which means we can end up with HSDirs that lack our descriptor even though we think it is there.
The solution is unclear, but it appears that we probably want to hook this case into `hs_circ_cleanup()`, which is called from the mark-for-close function.

Issue #31223: Research approaches for improving the availability of services under DoS
Author: George Kadianakis | Updated: 2022-10-17T19:25:12Z
https://gitlab.torproject.org/tpo/core/tor/-/issues/31223

We've been improving the health of the network during onion service DoS, but not the onion service availability. This is a task for looking at that angle.
During the related Stockholm session we looked into various approaches that could help us towards that goal. Here are some of them:
- Introducing application-layer anonymous tokens that allow legit clients to get priority over DoS attacker
- PoW approaches like argon2
- CAPTCHA approaches like introducing a token server giving reCAPTCHA tokens
- Hiding introduction points by rate limiting how quickly clients can find them. Valet nodes?
- Having intros check that clients don't use the same IP over and over. Proof-of-existence?
- Pay bitcoin to introduce
Each of the above solutions has problems; this ticket is to investigate at least the most promising of them and attempt to move forward with something.

Issue #30482: unexpected warning: Invalid signature for service descriptor signing key: expired
Author: toralf | Updated: 2022-09-01T21:39:46Z
https://gitlab.torproject.org/tpo/core/tor/-/issues/30482

I do wonder about
```
# tail -n 2 /tmp/notice2.log
May 12 10:42:13.000 [notice] DoS mitigation since startup: 10 circuits killed with too many cells. 13604 circuits rejected, 12 marked addresses. 106 connections closed. 1917 single hop clients refused.
May 12 14:30:03.000 [warn] Invalid signature for service descriptor signing key: expired
```
b/c it looks ok:
```
# tor --key-expiration sign -f /etc/tor/torrc2
May 12 16:27:26.845 [notice] Tor 0.4.0.5 running on Linux with Libevent 2.1.8-stable, OpenSSL LibreSSL 2.8.3, Zlib 1.2.11, Liblzma 5.2.4, and Libzstd N/A.
May 12 16:27:26.845 [notice] Tor can't help you if you use it wrong! Learn how to be safe at https://www.torproject.org/download/download#warning
May 12 16:27:26.845 [notice] Read configuration file "/etc/tor/torrc2".
May 12 16:27:26.849 [notice] Included configuration file or directory at recursion level 1: "/etc/tor/torrc.d/00_common".
May 12 16:27:26.849 [notice] Based on detected system memory, MaxMemInQueues is set to 8192 MB. You can override this by setting MaxMemInQueues by hand.
May 12 16:27:26.858 [notice] We were built to run on a 64-bit CPU, with OpenSSL 1.0.1 or later, but with a version of OpenSSL that apparently lacks accelerated support for the NIST P-224 and P-256 groups. Building openssl with such support (using the enable-ec_nistp_64_gcc_128 option when configuring it) would make ECDH much faster.
May 12 16:27:26.973 [notice] Your Tor server's identity key fingerprint is 'zwiebeltoralf2 509EAB4C5D10C9A9A24B4EA0CE402C047A2D64E6'
May 12 16:27:26.973 [notice] The signing certificate stored in /var/lib/tor/data2/keys/ed25519_signing_cert is valid until 2019-08-10 04:00:00.
signing-cert-expiry: 2019-08-10 04:00:00
```

Issue #29927: Tor protocol errors causing silent dropped cells
Author: Mike Perry | Updated: 2022-09-01T21:39:46Z
https://gitlab.torproject.org/tpo/core/tor/-/issues/29927

While testing vanguards, I've got some mystery cases client-side where circuits are getting closed with END_CIRC_REASON_TORPROTOCOL, but Tor is not emitting any log lines that correspond to this, even at debug level.
This is happening for circuits with purpose CIRCUIT_PURPOSE_C_REND_READY_INTRO_ACKED. Additionally, all circuits seem able to fail during construction with END_CIRC_REASON_TORPROTOCOL, with no Tor log messages even at debug loglevel. Possibly more ntor handshake failures, similar to legacy/trac#29700?
Finally, CIRCUIT_PURPOSE_C_INTRODUCE_ACKED circuits are getting closed with END_CIRC_REASON_FINISHED after receiving an invalid cell, seemingly after they are done being used.
See also https://github.com/mikeperry-tor/vanguards/issues/37
The vanguards addon now outputs this bug number at INFO log level when this happens.

Issue #29802: Document the v3 onion service key files in the tor man page
Author: teor | Updated: 2022-09-01T21:29:27Z
https://gitlab.torproject.org/tpo/core/tor/-/issues/29802

The tor man page is missing the names of the key files for v3 onion services.

Issue #27299: hsv3: Clarify timing sources around hsv3 code
Author: George Kadianakis | Updated: 2022-02-07T19:38:03Z
https://gitlab.torproject.org/tpo/core/tor/-/issues/27299

A big source of bugs and confusion (e.g. legacy/trac#26980, legacy/trac#26930) in the HSv3 code stems from the fact that it uses various timing sources to compute time periods, SRVs, etc. Some parts of the code use `time(NULL)`, others use the current consensus valid-after, and others use the voting schedule.
The code is currently not clear about which timing source is used in each case. For example, some functions take `now` as input but only use it to fetch a live consensus whose valid-after is the real time source; this may mislead a reader into thinking `now` itself is the time source (e.g. `should_rotate_descriptors()`, which caused the legacy/trac#26930 confusion).
We should try to clarify and improve the function signatures around the HSv3 codebase in this regard.
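As an illustration of the kind of signature cleanup meant here, sketched in Python for brevity (the real fix would be to C-tor function signatures; the time-period arithmetic follows the rend-spec-v3 description, with a 12-hour rotation offset):

```python
# Sketch: make the timing source explicit in the signature instead of a
# bare `now` that is silently replaced by the consensus valid-after.
# Names are hypothetical; the real change would be to C-tor signatures.

def time_period_num(consensus_valid_after, period_length_minutes=1440):
    # The caller must pass the consensus valid-after (unix seconds)
    # explicitly, so readers cannot mistake wall-clock time for the
    # source of truth.
    minutes = int(consensus_valid_after) // 60
    # HSv3 time periods are offset by 12 hours from the epoch, per
    # rend-spec-v3.
    minutes -= 12 * 60
    return minutes // period_length_minutes
```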