Trac issues
https://gitlab.torproject.org/legacy/trac/-/issues

Issue #27167: track "first" OR_CONN
https://gitlab.torproject.org/legacy/trac/-/issues/27167
Reported by Taylor Yu. Updated 2020-06-13. Milestone: Tor: 0.4.0.x-final. Assignee: Taylor Yu.

Right now the first stages of the "first" OR_CONN get reported as `BOOTSTRAP_STATUS_CONN_DIR` and `BOOTSTRAP_STATUS_HANDSHAKE` (the latter is a special bootstrap phase that gets translated into `BOOTSTRAP_STATUS_HANDSHAKE_DIR` or `BOOTSTRAP_STATUS_HANDSHAKE_OR` depending on how much progress was previously reported). The logic in the functions that report these events should be moved up into a new abstraction, so that lower-level code has to track less high-level state.
This also eliminates some logic in `control_event_bootstrap()` that tries to figure out whether a given handshake attempt corresponds to a directory connection or an application circuit connection.

Issue #28591: Accept a future consensus for bootstrap
https://gitlab.torproject.org/legacy/trac/-/issues/28591
Reported by teor. Updated 2020-06-13. Milestone: Tor: 0.3.5.x-final. Assignee: teor.

#24661 allows tor to bootstrap when the client's clock is ahead of the network by up to 1 day.
But clients can't bootstrap when the client's clock is behind the network by more than a few hours:
https://trac.torproject.org/projects/tor/ticket/24661#comment:18

Issue #24804: Run an opt-in process for relay operators to become fallbacks in 2018
https://gitlab.torproject.org/legacy/trac/-/issues/24804
Reported by teor. Updated 2020-06-13. Milestone: Tor: 0.4.0.x-final. Assignee: Colin Childs.

This involves mailing tor-relays and asking if stable relay operators want to become fallbacks.

Issue #28654: Allow relays to serve future consensuses
https://gitlab.torproject.org/legacy/trac/-/issues/28654
Reported by teor. Updated 2020-06-13. Milestone: Tor: 0.4.0.x-final.

Like #28591 for clients, we should allow relays to serve future consensuses.

Issue #27169: monitor bootstrap directory info progress separately
https://gitlab.torproject.org/legacy/trac/-/issues/27169
Reported by Taylor Yu. Updated 2020-06-13. Milestone: Tor: 0.3.5.x-final.

Abstract out the current monitoring of bootstrap directory information progress, so we can track its state more independently. This allows us to defer reporting that we have sufficient directory information until we know that we can actually connect to a relay or bridge at all.
This also allows us to eliminate or simplify special-case logic in `control_event_bootstrap()` that handles incremental progress during descriptor downloads.

Issue #28319: accept a reasonably live consensus for path selection
https://gitlab.torproject.org/legacy/trac/-/issues/28319
Reported by teor. Updated 2020-06-13. Milestone: Tor: 0.4.0.x-final. Assignee: teor.

When I fixed guard selection in #24661, tor said:
```
Nov 05 15:29:55.000 [notice] I learned some more directory information, but not enough to build a circuit: We have no recent usable consensus.
```
Maybe this is a logging issue, maybe it's another constraint we need to fix.
See the full log in:
https://trac.torproject.org/projects/tor/ticket/24661#comment:13

Issue #23605: expired consensus causes guard selection to stall at BOOTSTRAP PROGRESS=80
https://gitlab.torproject.org/legacy/trac/-/issues/23605
Reported by Taylor Yu. Updated 2020-06-13. Milestone: Tor: 0.4.0.x-final. Assignee: Taylor Yu.

Tor can report `BOOTSTRAP_STATUS_CONN_OR` (PROGRESS=80, "Connecting to the Tor network") when it actually can do no such thing. In some situations (e.g., clock skew) this causes progress to get stuck at 80% indefinitely, resulting in a very poor user experience.
Right now `update_router_have_minimum_dir_info()` reports the `BOOTSTRAP_STATUS_CONN_OR` event if there's a "reasonably live" consensus and enough descriptors downloaded. A client with a clock skewed several hours into the future can get stalled here indefinitely due to an inability to select a guard: if the client's clock is skewed, it will never have a live consensus. (Guard selection seems to require a non-expired consensus, rather than a reasonably live one, at least during bootstrap.)
We should either relax the guard selection consensus liveness requirement, or avoid reporting `BOOTSTRAP_STATUS_CONN_OR` when we have no reasonable chance of actually connecting to a guard for building application circuits.
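The gap between the two liveness notions can be sketched as follows. This is an illustrative model, not Tor's actual code: the function names and the 24-hour "reasonably live" grace period are assumptions (the real checks live in Tor's `networkstatus.c`).

```python
from datetime import datetime, timedelta

REASONABLY_LIVE_SLOP = timedelta(hours=24)  # assumed grace period

def is_live(valid_after, valid_until, now):
    """A consensus is live while 'now' falls inside its validity window."""
    return valid_after <= now <= valid_until

def is_reasonably_live(valid_after, valid_until, now):
    """'Reasonably live' tolerates some skew on both ends (assumed model)."""
    return (valid_after - REASONABLY_LIVE_SLOP <= now
            <= valid_until + REASONABLY_LIVE_SLOP)

# A client whose clock is 6 hours ahead, against a consensus valid for 3 hours:
va = datetime(2018, 11, 5, 12, 0)
vu = va + timedelta(hours=3)
skewed_now = va + timedelta(hours=6)

# Directory-info checks accept the consensus, so bootstrap reports progress,
# but guard selection (which needs a non-expired consensus) stalls.
print(is_reasonably_live(va, vu, skewed_now))  # True
print(is_live(va, vu, skewed_now))             # False
```

In this model, any fix has to make the two predicates agree about whether progress past 80% is actually possible.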
Arguably we shouldn't start downloading descriptors until we have a non-expired consensus either, because descriptor downloads are represented as a considerable chunk of the progress bar (40%->80%) in a way that could be misleading to a user. Making that change without additional work would cause bootstrap to get stuck at 40% instead of 80%, which might be an improvement. This can already happen if the client's clock is skewed several hours into the past.

Issue #28255: verify guard selection consensus expiry constraints
https://gitlab.torproject.org/legacy/trac/-/issues/28255
Reported by Taylor Yu. Updated 2020-06-13. Milestone: Tor: 0.4.0.x-final. Assignee: teor.

The hypothesis in #23605 is that bootstrapping can get stuck at 80% if there is enough clock skew for the consensus to be expired but still "reasonably live". Let's verify this and try to record more details.

Issue #27103: report initial OR_CONN as the earliest bootstrap phases
https://gitlab.torproject.org/legacy/trac/-/issues/27103
Reported by Taylor Yu. Updated 2020-06-13. Milestone: Tor: 0.4.0.x-final. Assignee: Taylor Yu.

We should always make the earliest bootstrap phases be our first connection to any OR, regardless of whether we already have enough directory info to start building circuits.
When starting to bootstrap with existing directory info, there might be no need to make an initial connection to a bridge or fallback directory server to download directory info. This means that the initial OR_CONN to a bridge or guard displays on a progress bar as 80%, when in fact a fairly "early" dependency (the initial connection to any OR) could be failing.
Intuitively, starting Tor Browser and seeing the progress bar hang at 80% for a very long time is frustrating and misleading. A user who sees the progress bar hang at 5% or 10% has a much better idea of what's going on.
Existing directory info can be reflected in the progress bar as a rapid jump after the initial OR_CONN succeeds. This seems less likely to frustrate users.

Issue #27308: report bootstrap phase when we actually start, not just unblock something
https://gitlab.torproject.org/legacy/trac/-/issues/27308
Reported by Taylor Yu. Updated 2020-06-13. Milestone: Tor: unspecified.

Right now many bootstrap events get reported when the preceding task has completed. This makes it somewhat harder to tell what has gone wrong if bootstrap progress stalls.
[edit: The following isn't necessarily the best way to fix this. It might be better to figure out how to report starting something when actually starting it.]
We should add completion milestones to bootstrap reporting. This makes bootstrap reporting more future-proof. If in the future we add a time-consuming task (with no bootstrap reporting) between two existing bootstrap tasks, it will be a little more obvious what's going on.
For example, say we have task X followed by task Z, but then we add a lengthy task Y without adding bootstrap reporting to it. In the old scheme without completion milestones, if Y stalls, the user sees:
* starting X
* starting Z
* [hang]
The user thinks Z has already started when no such thing has happened because Y is still in progress. If we add completion milestones, the user will see:
* starting X
* finished X
* starting Z
* finished Z
in a normal bootstrap. If something gets stuck in task Y, the user will see:
* starting X
* finished X
* [hang]
This will make it more clear that something got stuck in between tasks.
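The start/finish scheme above can be sketched as below. Names are illustrative only; Tor's actual bootstrap reporting goes through `control_event_bootstrap()` with numeric phases, not this API.

```python
events = []

def report(msg):
    # Stand-in for emitting a bootstrap status event to the controller.
    events.append(msg)

def run_task(name, fn):
    """Bracket each task with start/finish milestones, so a stall between
    tasks shows up as 'finished X' with no following 'starting ...'."""
    report(f"starting {name}")
    fn()
    report(f"finished {name}")

run_task("X", lambda: None)
# Imagine a lengthy unreported task Y runs here; if it stalls, the last
# event the user sees is "finished X", not a misleading "starting Z".
run_task("Z", lambda: None)

print(events)
# ['starting X', 'finished X', 'starting Z', 'finished Z']
```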
In a one-line display like Tor Launcher, the completion milestones will normally flash by quickly and not be very visible to users. Completion milestones might make the NOTICE logs a bit more verbose.

Issue #27102: gather feedback re decoupling bootstrap progress numbers from BOOTSTRAP_STATUS enum values
https://gitlab.torproject.org/legacy/trac/-/issues/27102
Reported by Taylor Yu. Updated 2020-06-13. Milestone: Tor: 0.4.0.x-final. Assignee: Taylor Yu.

If we start reporting intermediate bootstrap phases, for example when reporting PT status while connecting to the Tor network through a PT bridge (#25502), there aren't many numbers remaining to insert between some existing phases (if we stick to integers).
We should decouple these so we don't have to cram everything into a tiny portion of the progress bar. It also doesn't make sense to report progress phases that we will never need to execute.
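One possible shape for the decoupling: treat the phases as opaque tags and look their progress percentages up in a separate table, so inserting a new phase never requires renumbering an enum. A sketch with made-up tag names and percentages, not Tor's actual values:

```python
# Phases as symbolic tags; their order matters, their numeric values do not.
PHASES = ["CONN_PT", "CONN_DIR", "HANDSHAKE_DIR", "ONEHOP_CREATE",
          "REQUESTING_STATUS", "LOADING_STATUS"]

# Progress percentages live in a separate table keyed by tag, so adding an
# intermediate phase (e.g. a PT phase per #25502) is just a new row.
PROGRESS = {
    "CONN_PT": 1,
    "CONN_DIR": 5,
    "HANDSHAKE_DIR": 10,
    "ONEHOP_CREATE": 15,
    "REQUESTING_STATUS": 20,
    "LOADING_STATUS": 25,
}

def progress_for(tag):
    return PROGRESS[tag]

print(progress_for("CONN_PT"))  # 1
```

Phases that will never execute on a given run can simply be skipped, since nothing depends on hitting every numeric value in order.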
Alternatively, renumber the enums to give us more space toward the beginning of the progress bar.

Issue #27100: report connection to PT SOCKS proxy separately from OR connection
https://gitlab.torproject.org/legacy/trac/-/issues/27100
Reported by Taylor Yu. Updated 2020-06-13. Milestone: Tor: 0.4.0.x-final. Assignee: Taylor Yu.

Right now when acting as a PT client, we don't distinguish between connecting to the SOCKS port of a PT proxy and connecting to the OR that's behind the proxy. This means that we lose some intermediate progress reporting that could help users understand what might be going wrong.

Issue #26846: prop289: Leave unused random bytes in relay cell payload
https://gitlab.torproject.org/legacy/trac/-/issues/26846
Reported by David Goulet. Updated 2020-06-13. Milestone: Tor: 0.4.1.x-final. Assignee: Nick Mathewson.

This is section 3.3 of proposal 289, which is, in short, to add randomness to some relay cell payloads.

Issue #26842: prop289: Add consensus parameters to control new SENDME behavior
https://gitlab.torproject.org/legacy/trac/-/issues/26842
Reported by David Goulet. Updated 2020-06-13. Milestone: Tor: 0.4.1.x-final. Assignee: David Goulet.

This is phase two and phase three of proposal 289; that is, this ticket is for implementing those switches.
In phase two, we flip a switch in the consensus, and everybody starts sending payload version 1 sendmes. Payload version 0 sendmes are still accepted.
In phase three, we flip a different switch in the consensus, and everybody starts refusing payload version 0 sendmes.

Issue #26841: prop289: Have tor handle the new SENDME cell format and validate
https://gitlab.torproject.org/legacy/trac/-/issues/26841
Reported by David Goulet. Updated 2020-06-13. Milestone: Tor: 0.4.1.x-final. Assignee: David Goulet.

First, properly parse the cell; second, validate it against the expected digest (#26839).
In an initial deployment phase, only version 1 cells should be validated; version 0 cells are accepted as-is.

Issue #26840: prop289: Modify SENDME cell to have a version and payload
https://gitlab.torproject.org/legacy/trac/-/issues/26840
Reported by David Goulet. Updated 2020-06-13. Milestone: Tor: 0.4.1.x-final. Assignee: David Goulet.

To implement prop289, we need the SENDME cell (empty payload right now) to have a version and a payload for the bytes inserted into it.
1) We should have a trunnel definition with a proper specification of the cell.
2) Have the code interface to construct those new SENDME cells but don't put them on the wire just yet.
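For (1), a trunnel definition might look roughly like the following. This is a sketch of the idea only, not Tor's actual trunnel file; the field names and the version constraint are assumptions.

```
struct sendme_cell {
  /* Cell version: 0 = legacy empty payload, 1 = payload carries a digest. */
  u8 version IN [0, 1];
  /* Length of the payload data, followed by the data itself. */
  u16 data_len;
  u8 data[data_len];
}
```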
For (2), we could put them on the wire right now, but we should first make sure tor will accept them (validating an empty payload vs. not looking at the payload at all).

Issue #26839: prop289: Make a relay remember last cell digests before SENDME
https://gitlab.torproject.org/legacy/trac/-/issues/26839
Reported by David Goulet. Updated 2020-06-13. Milestone: Tor: 0.4.1.x-final. Assignee: David Goulet.

From proposal 289, this would be phase one. Quoting:
In phase one, both sides begin remembering their expected digests, and they learn how to parse sendme payloads. When they receive a sendme with payload version 1, they verify its digest and tear down the circuit if it's wrong. But they continue to send and accept payload version 0 sendmes.

Issue #26549: Revision counter for v3 ephemeral hidden service is lost
https://gitlab.torproject.org/legacy/trac/-/issues/26549
Reported by Trac. Updated 2020-06-13. Milestone: Tor: 0.3.5.x-final.

When a controller is using a client to provide two or more v3 ephemeral hidden services, with the private keys managed by the controller, and there's a client session where the controller activates one of the hidden services but not the others, the revision counters for the other hidden services are lost. This prevents the other services from being activated in future sessions, because their descriptors are rejected by the HSDirs.
This happens because `increment_descriptor_revision_counter()` in `hs_service.c` calls `update_revision_counters_in_state()`, which loops over all the services currently being provided by the client, saves their counters, and removes any other counters from the state file. Thus if any hidden service is activated during a session, the revision counters of any services not activated during that session are lost.
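A minimal model of the state-file behavior described above (illustrative Python, not the C code in `hs_service.c`):

```python
# The "state file" maps onion-service identifiers to revision counters.
state = {"svc1": 5, "svc2": 7}

def update_revision_counters_in_state(active_services):
    """Model of the buggy update: rewrite the counters section from the
    currently active services only, dropping everyone else's counters."""
    global state
    state = dict(active_services)

# A session where only svc1 is activated and bumps its counter:
update_revision_counters_in_state({"svc1": 6})

print(state)  # {'svc1': 6}; svc2's counter (7) has been dropped
```

A fix along these lines would merge the active services' counters into the existing state instead of replacing it wholesale.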
Steps to reproduce:
* Use `ADD_ONION NEW:ED25519-V3 ...` to create two hidden services
* Save the private keys
* Shut down and restart tor
* Use `ADD_ONION ED25519-V3:<private_key_1> ...` to activate the first service
* Shut down and restart tor
* Use `SETEVENTS HS_DESC` to register for HS descriptor events
* Use `ADD_ONION ED25519-V3:<private_key_1> ...` to activate the first service
* The descriptor should be published successfully
* Use `ADD_ONION ED25519-V3:<private_key_2> ...` to activate the second service
* The controller receives `HS_DESC_FAILED` events with `REASON=UPLOAD_REJECTED`
It looks like this bug is related to #25552. I don't know whether the solution to that ticket will fix it.
**Trac**:
**Username**: akwizgran

Issue #26532: Combine ipv4.h and ipv6.h into address.h?
https://gitlab.torproject.org/legacy/trac/-/issues/26532
Reported by Nick Mathewson. Updated 2020-06-13. Milestone: Tor: 0.3.5.x-final. Assignee: Nick Mathewson.

Suggested during a review. I'm not sure about this; I could go either way.

Issue #26526: Split all address.h functions that can invoke the resolver.
https://gitlab.torproject.org/legacy/trac/-/issues/26526
Reported by Nick Mathewson. Updated 2020-06-13. Milestone: Tor: 0.3.5.x-final. Assignee: Nick Mathewson.

These should have consistent names, and either have their own header or share resolve.h. Having them in the same place as functions that just do name parsing is not good practice.