The Tor Project issues (https://gitlab.torproject.org/groups/tpo/-/issues), feed updated 2024-01-31T21:37:28Z

## Rename OnionService/RunningOnionService
https://gitlab.torproject.org/tpo/core/arti/-/issues/1247 · gabi-250 · updated 2024-01-31T21:37:28Z · milestone: Arti: Onion service support · assignee: Nick Mathewson

We need to pick better names for `OnionService` and/or `RunningOnionService` (the current naming is provisional, and comes from #1227).

## OnionService API questions
https://gitlab.torproject.org/tpo/core/arti/-/issues/1227 · gabi-250 · updated 2024-01-22T19:04:10Z · milestone: Arti: Onion service support · assignee: gabi-250

I think we will need to rethink the `OnionService` API a bit:
* we will need some way of stopping a running service (and presumably also the tasks spawned by `launch`). Even if we implemented `stop()`, restarting the stopped service wouldn't be possible, because `launch()` consumes `unlaunched`
* the `onion_name()` function (which implements the `arti hss onion-name` command) needs to be moved to `OnionService`. Currently, constructing an `OnionService` requires providing some objects (e.g. a `Runtime`, a `StateMgr`) that aren't needed when building `OnionService` in the "unbootstrapped", unlaunchable "CLI mode"
* @Diziet notes that one possibility would be to rewrite `OnionService` using the [typestate pattern](https://gitlab.torproject.org/tpo/core/arti/-/merge_requests/1837#note_2977530)
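The typestate idea mentioned above might look roughly like this minimal sketch (hypothetical shapes, not arti's actual API): launching consumes the unlaunched service, and stopping consumes the running one and hands the unlaunched state back, so a stopped service can be relaunched.

```rust
// Hypothetical sketch of the typestate pattern for this API; none of this
// is arti's real code. "Launched" becomes part of the type, so you cannot
// call stop() on an unlaunched service or launch() a running one.
struct OnionService {
    nickname: String, // configuration carried across launch/stop
}

struct RunningOnionService {
    nickname: String, // real code would also own the spawned task handles
}

impl OnionService {
    // Consumes the unlaunched service; real code would spawn reactor tasks.
    fn launch(self) -> RunningOnionService {
        RunningOnionService { nickname: self.nickname }
    }
}

impl RunningOnionService {
    // Consumes the running service and returns the relaunchable state,
    // which addresses the "restarting wouldn't be possible" point above.
    fn stop(self) -> OnionService {
        OnionService { nickname: self.nickname }
    }
}
```

With this shape, stop/relaunch is just another `launch()` call on the value `stop()` returned, and the compiler rejects operations in the wrong state.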
cc @nickm @Diziet

## Reject inappropriate ipt_mgr configuration changes; Accept and implement the ones that we can
https://gitlab.torproject.org/tpo/core/arti/-/issues/1209 · Ian Jackson <iwj@torproject.org> · updated 2024-02-21T14:59:44Z · milestone: Arti: Onion service support · assignee: Ian Jackson

For example, changing the `state_dir` would result in terrible lossage. Some checks need to be added to the reconfigure logic.
MUST because unbounded lossage might result.
----
Expanding this ticket to note that there are also places where we _could_ implement certain changes at runtime, but we do not. I will note those with `TODO #1209` as well. We can be flexible about what we implement now and what we forbid, but we should make sure that every change is either supported or forbidden.
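The kind of check this ticket asks for might be sketched as follows (hypothetical config type and field names; arti's real reconfigure machinery differs): compare old and new configs, forbid the changes we cannot implement at runtime, and accept the rest.

```rust
// Hypothetical sketch only: reject runtime changes that would cause
// unbounded lossage (like state_dir), accept ones we can implement.
use std::path::PathBuf;

struct IptMgrConfig {
    state_dir: PathBuf,
    num_intro_points: u8, // an example of a change we could accept
}

fn reconfigure(old: &IptMgrConfig, new: &IptMgrConfig) -> Result<(), String> {
    // Changing state_dir on a running service would lose state: forbid it.
    if old.state_dir != new.state_dir {
        return Err("state_dir cannot be changed on a running service".into());
    }
    // Every other change is either implemented here at runtime or must be
    // rejected above -- nothing should be silently ignored.
    Ok(())
}
```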
—@nickm

## Remove support for configurable keystore directory
https://gitlab.torproject.org/tpo/core/arti/-/issues/1202 · Nick Mathewson · updated 2024-02-21T15:22:18Z · milestone: Arti: Onion service support · assignee: gabi-250

We eventually want to make our default keystore dependent on our state directory, not on ARTI_LOCAL_DATA. But the right ways to do so (see #1185) are a bit tricky and need more design. So until we have #1185 figured out, we should remove the `keystore_dir` configuration option.

## Probable change to HSS non-key storage layout
https://gitlab.torproject.org/tpo/core/arti/-/issues/1183 · Ian Jackson <iwj@torproject.org> · updated 2024-02-01T11:38:43Z · milestone: Arti: Onion service support · assignee: Ian Jackson

We have code for storing state for hidden services including IPT state, replay logs, and keys.
But the non-key state is a bit ad-hoc and unprincipled and has a strange filesystem layout (as seen in `find` on an arti state directory). See some of the notes in !1853. We want to improve this.
Probably this will be improved as a side effect of producing a new internal API, as requested by #1163.

## Something a bit like CheckedDir but a bit like StateMgr
https://gitlab.torproject.org/tpo/core/arti/-/issues/1163 · Ian Jackson <iwj@torproject.org> · updated 2024-02-08T17:02:34Z · milestone: Arti: Onion service support · assignee: Ian Jackson

Example use case. Replace these three arguments to `OnionService::new` with a single argument:
```
statemgr: S,
state_dir: &Path,
state_mistrust: &fs_mistrust::Mistrust,
```
`CheckedDir` has some of the necessary pieces. It is not suitable because it is quite unergonomic, and because it encapsulates a "needs to be private" boolean (which wants to vary according to the particular use, so ought not to be present here).
`tor_persist::FsStateMgr` also has some of the necessary pieces. But it is not suitable because it is just for the json file storage arrangements and doesn't give access to the underlying filesystem path, and because its locking behaviour is unsuitable.
The type to use here should have the following properties:
* It should embody a `PathBuf` (and possibly other info) and be `'static`.
* Using it should ensure that appropriate `fs_mistrust` checks are done (see below)
* Locking against concurrent use should be included as a feature, and exclusive access should be part of the type (not a runtime property)
* It should be possible to get a `DynStorageHandle` (or similar) within it.
* It should be possible to get `Path` and `PathBuf` within it.
* Probably it should be possible to get the inner `PathBuf` (maybe with an `_unchecked` name on the method).
* Presumably, the users are to supply the subpath or subdirectory, and are responsible for ensuring these subdirs are unique across every user. For example, it's the job of tor-hsservice to include both "hs" and the HS nickname in subpaths.
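Taken together, the properties above might be sketched as a type like this (entirely hypothetical names; nothing here is arti's real API): holding the value stands for "mistrust checks done and exclusive lock held", users supply their own unique sub-paths, and the raw path is available through an explicitly-named escape hatch.

```rust
// Hypothetical sketch of the desired type. The real version would run the
// fs_mistrust checks and acquire an exclusive lock in new(), so that
// possession of a StateDirectory *is* proof of checked, exclusive access.
use std::fs;
use std::io;
use std::path::{Path, PathBuf};

pub struct StateDirectory {
    dir: PathBuf,
    // real code would also hold the lock guard and mistrust configuration
}

impl StateDirectory {
    /// Construct from the configured state_dir (checks and locking elided).
    pub fn new(state_dir: PathBuf) -> io::Result<Self> {
        fs::create_dir_all(&state_dir)?;
        Ok(StateDirectory { dir: state_dir })
    }

    /// A sub-path supplied by the user, who is responsible for uniqueness
    /// (e.g. tor-hsservice would pass "hs/<nickname>").
    pub fn subdir(&self, relative: &str) -> PathBuf {
        self.dir.join(relative)
    }

    /// Escape hatch to the inner path, named to flag that it bypasses checks.
    pub fn path_unchecked(&self) -> &Path {
        &self.dir
    }
}
```

A `DynStorageHandle` accessor would sit alongside `subdir` in the same style; it is omitted here to keep the sketch small.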
Open questions:
1. Are we going to use this for `cache_dir` too? If so, should the type imply that the thing is, ultimately, `state_dir`? I.e., should there be a type parameter for that? Perhaps that's overkill and we can just have the different variable names `state_dir` and `cache_dir` (many users have little use for a `cache_dir`).
2. Should mistrust checking be done (a) when this thing is created from `[storage.permissions]` and `state_dir`, or (b) when it is used (or both)? I think I favour (a), which implies that the mistrust check is done during startup and individual sub-directories are not mistrust-checked; except that we need to handle needs-to-be-private directories too, which could only be done with (b).
3. Can we use this for the keymgr? I think using it for the keymgr might help with #1162, but certainly we should consider the interaction.

## Write an interim how-to for running an onion service
https://gitlab.torproject.org/tpo/core/arti/-/issues/1150 · Nick Mathewson · updated 2023-12-12T20:16:21Z · milestone: Arti: Onion service support · assignee: Nick Mathewson

We should have documentation somewhere about how to build Arti so you can run an onion service, how to run an onion service, what features there are (and are not), and what you should expect when you do this.

## crash: current wallclock time not within specified time period?!
https://gitlab.torproject.org/tpo/core/arti/-/issues/1143 · Ian Jackson <iwj@torproject.org> · updated 2023-12-11T11:26:37Z · milestone: Arti: Onion service support

I was trying to repro #1142 with the extra error message from !1780 and this happened:
`2023-11-30T10:39:20Z WARN tor_hsservice::svc::publish: the publisher reactor has shut down: error: Internal error: internal error (bug) at /volatile/rustcargo/Rustup/Arti/arti/crates/tor-hsservice/src/svc/publish/reactor.rs:1375:13: current wallclock time not within specified time period?!`
My system clock is ntp-synchronised and was yesterday too. My state directory was the one from yesterday. I will preserve its current state in case it's useful.
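The failing pattern, reconstructed from the backtrace with hypothetical names (the trace shows `Option::ok_or_else` turning a failed lookup into an internal-error `Bug` inside `generate_revision_counter`): the reactor looks for the time period containing the current wallclock time, so a stale set of cached time periods could trigger this even though the clock itself is correct.

```rust
// Editorial reconstruction, not arti's actual code: if none of the known
// time periods contains "now" -- e.g. because the cached periods are from
// yesterday's state and the netdir hasn't caught up -- the lookup misses
// and we hit the "bug" path seen in the log.
use std::ops::Range;
use std::time::SystemTime;

struct TimePeriod {
    range: Range<SystemTime>,
}

fn current_period(periods: &[TimePeriod], now: SystemTime) -> Result<&TimePeriod, String> {
    periods
        .iter()
        .find(|p| p.range.contains(&now))
        .ok_or_else(|| "current wallclock time not within specified time period?!".to_string())
}
```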
Full log below.
<details>
```
rustcargo@zealot:/volatile/rustcargo/Rustup/Arti/arti$ target/debug/arti -l debug proxy 2>&1 | tee log
2023-11-30T10:39:08Z INFO arti: Starting Arti 1.1.10 in SOCKS proxy mode on localhost port 9150 ...
2023-11-30T10:39:08Z DEBUG arti::process: Increased process file limit to 4096
2023-11-30T10:39:08Z DEBUG tor_guardmgr::sample: Guard set loaded. n_guards=50 n_confirmed=13
2023-11-30T10:39:08Z DEBUG tor_guardmgr::sample: Guard set loaded. n_guards=0 n_confirmed=0
2023-11-30T10:39:08Z DEBUG tor_guardmgr::sample: Guard set loaded. n_guards=1 n_confirmed=0
2023-11-30T10:39:08Z DEBUG tor_guardmgr::sample: Updated primary guards. old=[] new=[GuardId(RelayIds { ed_identity: Some(Ed25519Identity { owZGf6CpH56ez6MpXU0hWdPwgTwrcqsWQSLrFrJ1XHM }), rsa_identity: Some(RsaIdentity { $799ecf332deca02c49de21ff022f7e2dbecda771 }) }), GuardId(RelayIds { ed_identity: Some(Ed25519Identity { rSXeb/ZAJCmtsrw4nwox2x4T2geH0zRaFDu5WSdt5/8 }), rsa_identity: Some(RsaIdentity { $b13c2c569f3fd0c530b7d96e5ff7933df7a0e834 }) }), GuardId(RelayIds { ed_identity: Some(Ed25519Identity { DrtvSq5B9PjKix9I1b6OtcoZTD+BgFeMLWaNAXpd1k8 }), rsa_identity: Some(RsaIdentity { $fc6f665e3c0637976dff2e128e2da2684e6633aa }) })]
2023-11-30T10:39:08Z INFO arti_client::client: Using keystore from "/home/rustcargo/.local/share/arti/keystore"
2023-11-30T10:39:08Z DEBUG arti_client::status: 0%: connecting to the internet; not downloading
2023-11-30T10:39:08Z DEBUG arti_client::status: 0%: connecting to the internet; not downloading
2023-11-30T10:39:08Z DEBUG arti_client::status: 0%: connecting to the internet; not downloading
2023-11-30T10:39:08Z DEBUG tor_hsservice::ipt_mgr: Hs service ztest: IptLocalId(c74300e60a0f90232cc7f29c4b95ffd5108f1b930abae74a7bccaedd29afd09d) establishing previous IPT at relay ed25519:9mTkWlWndy+NAoChGXnUfP2g4E0zSGOxT0m0PWq+u5s $1d851c4bd54c5923328debd4ab1ff7640a2b4e54
2023-11-30T10:39:08Z DEBUG tor_hsservice::ipt_mgr: Hs service ztest: IptLocalId(cc5d8233adbbe3399e2305b3db11be44951fb5e018cfd1ad3fe2b4ebe6bf313a) establishing previous IPT at relay ed25519:esI8hyLcA3AaxHU3PCOHRlryg9m9uEjGnPv8xQBd0bw $ecab9a5832f7f2913ad8ea429685e10f7c035d06
2023-11-30T10:39:08Z DEBUG tor_hsservice::ipt_mgr: Hs service ztest: IptLocalId(f9055afd0854f07ef7906ecea0a633a7ce49f5b03f836cbdf3e7226b61cc8d85) establishing previous IPT at relay ed25519:nVr4hiv6nNH3ZTR9BUDpVRpTJchdAUx977CLqziHLmw $b7b35afd69a1f8c0453a45fdc92b28824f34f402
2023-11-30T10:39:08Z DEBUG tor_hsservice::ipt_mgr: Hs service ztest: IptLocalId(b8866c883a85c378c3a5556669532077280e8b4ffab3b784a9002aed489435b0) establishing previous IPT at relay ed25519:33GIrEt61yYIE/FHGBZIKcNyiQzfsv3IiIHCtPWyH4c $86c1b3da62eff05ad52040c6f569939319cadf26
2023-11-30T10:39:08Z DEBUG tor_hsservice::svc::publish::reactor: starting descriptor publisher reactor
2023-11-30T10:39:08Z DEBUG tor_hsservice::ipt_mgr: HS service ztest: no good IPTs
2023-11-30T10:39:08Z DEBUG arti::reload_cfg: Entering FS event loop
2023-11-30T10:39:08Z DEBUG arti_client::client: It appears we have the lock on our state files.
2023-11-30T10:39:08Z DEBUG tor_hsservice::ipt_mgr: HS service ztest: IptLocalId(c74300e60a0f90232cc7f29c4b95ffd5108f1b930abae74a7bccaedd29afd09d) status update IptStatus { status: Establishing, n_faults: 0, wants_to_retire: Ok(()) }
2023-11-30T10:39:08Z DEBUG tor_hsservice::ipt_mgr: HS service ztest: no good IPTs
2023-11-30T10:39:08Z DEBUG tor_hsservice::ipt_mgr: HS service ztest: IptLocalId(cc5d8233adbbe3399e2305b3db11be44951fb5e018cfd1ad3fe2b4ebe6bf313a) status update IptStatus { status: Establishing, n_faults: 0, wants_to_retire: Ok(()) }
2023-11-30T10:39:08Z DEBUG tor_hsservice::ipt_mgr: HS service ztest: no good IPTs
2023-11-30T10:39:08Z DEBUG tor_hsservice::ipt_mgr: HS service ztest: IptLocalId(f9055afd0854f07ef7906ecea0a633a7ce49f5b03f836cbdf3e7226b61cc8d85) status update IptStatus { status: Establishing, n_faults: 0, wants_to_retire: Ok(()) }
2023-11-30T10:39:08Z DEBUG tor_hsservice::ipt_mgr: HS service ztest: no good IPTs
2023-11-30T10:39:08Z DEBUG tor_hsservice::ipt_mgr: HS service ztest: IptLocalId(b8866c883a85c378c3a5556669532077280e8b4ffab3b784a9002aed489435b0) status update IptStatus { status: Establishing, n_faults: 0, wants_to_retire: Ok(()) }
2023-11-30T10:39:08Z DEBUG tor_hsservice::ipt_mgr: HS service ztest: no good IPTs
2023-11-30T10:39:08Z DEBUG tor_hsservice::ipt_mgr: HS service ztest: IptLocalId(c74300e60a0f90232cc7f29c4b95ffd5108f1b930abae74a7bccaedd29afd09d) status update IptStatus { status: Establishing, n_faults: 0, wants_to_retire: Ok(()) }
2023-11-30T10:39:08Z DEBUG tor_hsservice::ipt_mgr: HS service ztest: no good IPTs
2023-11-30T10:39:08Z DEBUG tor_hsservice::ipt_mgr: HS service ztest: no good IPTs
2023-11-30T10:39:09Z DEBUG arti::reload_cfg: Config reload event Rescan: reloading configuration.
2023-11-30T10:39:09Z INFO arti::reload_cfg: Successfully reloaded configuration.
2023-11-30T10:39:09Z DEBUG arti_client::status: 19%: connecting to the internet; directory is fetching authority certificates (0/8)
2023-11-30T10:39:09Z DEBUG arti_client::status: 27%: connecting to the internet; directory is fetching authority certificates (8/8)
2023-11-30T10:39:15Z DEBUG tor_dirmgr::state: Consensus now usable, with 0 microdescriptors missing. The current consensus is fresh until 2023-11-29 19:00:00.0 +00:00:00, and valid until 2023-11-29 21:00:00.0 +00:00:00. I've picked 2023-11-29 20:04:34.170147666 +00:00:00 as the earliest time to replace it.
2023-11-30T10:39:15Z INFO tor_dirmgr: Marked consensus usable.
2023-11-30T10:39:15Z DEBUG arti_client::status: 77%: connecting to the internet; directory is fetching microdescriptors (7787/7787)
2023-11-30T10:39:15Z INFO tor_dirmgr: Loaded a good directory from cache.
2023-11-30T10:39:15Z INFO arti: Sufficiently bootstrapped; system SOCKS now functional.
2023-11-30T10:39:15Z INFO arti::socks: Listening on [::1]:9150.
2023-11-30T10:39:15Z INFO arti::socks: Listening on 127.0.0.1:9150.
2023-11-30T10:39:15Z DEBUG tor_chanmgr::factory: Attempting to open a new channel to [94.23.172.32:444 ed25519:owZGf6CpH56ez6MpXU0hWdPwgTwrcqsWQSLrFrJ1XHM $799ecf332deca02c49de21ff022f7e2dbecda771]
2023-11-30T10:39:15Z DEBUG tor_chanmgr::transport::default: Connecting to 94.23.172.32:444
2023-11-30T10:39:15Z DEBUG arti_client::status: 84%: handshaking with Tor relays; directory is fetching microdescriptors (7787/7787)
2023-11-30T10:39:15Z DEBUG tor_proto::channel::handshake: Chan 0: starting Tor handshake with Direct([94.23.172.32:444])
2023-11-30T10:39:15Z DEBUG tor_proto::channel::handshake: Chan 0: Completed handshake with owZGf6CpH56ez6MpXU0hWdPwgTwrcqsWQSLrFrJ1XHM [$799ecf332deca02c49de21ff022f7e2dbecda771]
2023-11-30T10:39:15Z DEBUG arti_client::status: 92%: connecting successfully; directory is fetching microdescriptors (7787/7787)
2023-11-30T10:39:16Z INFO tor_guardmgr::guard: We have found that guard [94.23.172.32:444 ed25519:owZGf6CpH56ez6MpXU0hWdPwgTwrcqsWQSLrFrJ1XHM $799ecf332deca02c49de21ff022f7e2dbecda771] is usable.
2023-11-30T10:39:16Z DEBUG arti_client::status: 92%: connecting successfully; directory is fetching microdescriptors (7787/7787)
2023-11-30T10:39:16Z DEBUG arti_client::status: 92%: connecting successfully; directory is fetching microdescriptors (7787/7787)
2023-11-30T10:39:16Z DEBUG tor_chanmgr::factory: Attempting to open a new channel to [85.208.144.164:443+ ed25519:rSXeb/ZAJCmtsrw4nwox2x4T2geH0zRaFDu5WSdt5/8 $b13c2c569f3fd0c530b7d96e5ff7933df7a0e834]
2023-11-30T10:39:16Z DEBUG tor_chanmgr::transport::default: Connecting to 85.208.144.164:443
2023-11-30T10:39:16Z DEBUG tor_proto::channel::handshake: Chan 1: starting Tor handshake with Direct([85.208.144.164:443])
2023-11-30T10:39:16Z DEBUG tor_dirmgr::state: Consensus now usable, with 0 microdescriptors missing. The current consensus is fresh until 2023-11-29 19:00:00.0 +00:00:00, and valid until 2023-11-29 21:00:00.0 +00:00:00. I've picked 2023-11-29 20:22:47.99599487 +00:00:00 as the earliest time to replace it.
2023-11-30T10:39:16Z INFO tor_dirmgr: Marked consensus usable.
2023-11-30T10:39:16Z DEBUG arti_client::status: 42%: connecting successfully; directory is fetching authority certificates (8/8)
2023-11-30T10:39:16Z INFO tor_dirmgr: Directory is complete. attempt=1
2023-11-30T10:39:16Z INFO tor_dirmgr::bootstrap: 1: Downloading a consensus. attempt=2
2023-11-30T10:39:17Z DEBUG tor_proto::channel::handshake: Chan 1: Completed handshake with rSXeb/ZAJCmtsrw4nwox2x4T2geH0zRaFDu5WSdt5/8 [$b13c2c569f3fd0c530b7d96e5ff7933df7a0e834]
2023-11-30T10:39:17Z INFO tor_guardmgr::guard: We have found that guard [85.208.144.164:443+ ed25519:rSXeb/ZAJCmtsrw4nwox2x4T2geH0zRaFDu5WSdt5/8 $b13c2c569f3fd0c530b7d96e5ff7933df7a0e834] is usable.
2023-11-30T10:39:17Z DEBUG arti_client::status: 42%: connecting successfully; directory is fetching authority certificates (8/8)
2023-11-30T10:39:17Z INFO tor_dirmgr: Applying a consensus diff
2023-11-30T10:39:18Z DEBUG tor_circmgr: Preeemptive circuit was created for Preemptive { port: Some(TargetPort { ipv6: false, port: 80 }), circs: 2 }
2023-11-30T10:39:18Z DEBUG tor_circmgr: Preeemptive circuit was created for Preemptive { port: Some(TargetPort { ipv6: false, port: 443 }), circs: 2 }
2023-11-30T10:39:18Z DEBUG tor_circmgr: Preeemptive circuit was created for Preemptive { port: None, circs: 2 }
2023-11-30T10:39:18Z DEBUG arti_client::status: 42%: connecting successfully; directory is fetching authority certificates (8/8)
2023-11-30T10:39:18Z DEBUG tor_hsservice::svc::ipt_establish: ztest: Successfully established introduction point with ed25519:33GIrEt61yYIE/FHGBZIKcNyiQzfsv3IiIHCtPWyH4c $86c1b3da62eff05ad52040c6f569939319cadf26
2023-11-30T10:39:18Z DEBUG tor_hsservice::ipt_mgr: HS service ztest: IptLocalId(b8866c883a85c378c3a5556669532077280e8b4ffab3b784a9002aed489435b0) status update IptStatus { status: Good(GoodIptDetails { link_specifiers: [EncodedLinkSpec { lstype: LinkSpecType(ORPORT_V4), body: [82, 165, 10, 171, 1, 187] }, EncodedLinkSpec { lstype: LinkSpecType(RSAID), body: [134, 193, 179, 218, 98, 239, 240, 90, 213, 32, 64, 198, 245, 105, 147, 147, 25, 202, 223, 38] }, EncodedLinkSpec { lstype: LinkSpecType(ED25519ID), body: [223, 113, 136, 172, 75, 122, 215, 38, 8, 19, 241, 71, 24, 22, 72, 41, 195, 114, 137, 12, 223, 178, 253, 200, 136, 129, 194, 180, 245, 178, 31, 135] }], ipt_kp_ntor: PublicKey(MontgomeryPoint([204, 182, 219, 177, 242, 128, 174, 179, 83, 113, 218, 181, 92, 55, 248, 226, 151, 247, 174, 132, 17, 175, 171, 236, 208, 22, 220, 252, 241, 217, 189, 1])) }), n_faults: 0, wants_to_retire: Ok(()) }
2023-11-30T10:39:18Z DEBUG tor_hsservice::ipt_mgr: HS service ztest: 1 good IPTs, < target 3, waiting up to 10081ms for IptLocalId(c74300e60a0f90232cc7f29c4b95ffd5108f1b930abae74a7bccaedd29afd09d)
2023-11-30T10:39:18Z DEBUG tor_hsservice::svc::ipt_establish: ztest: Successfully established introduction point with ed25519:nVr4hiv6nNH3ZTR9BUDpVRpTJchdAUx977CLqziHLmw $b7b35afd69a1f8c0453a45fdc92b28824f34f402
2023-11-30T10:39:18Z DEBUG tor_hsservice::ipt_mgr: HS service ztest: IptLocalId(f9055afd0854f07ef7906ecea0a633a7ce49f5b03f836cbdf3e7226b61cc8d85) status update IptStatus { status: Good(GoodIptDetails { link_specifiers: [EncodedLinkSpec { lstype: LinkSpecType(ORPORT_V4), body: [51, 159, 195, 41, 0, 143] }, EncodedLinkSpec { lstype: LinkSpecType(RSAID), body: [183, 179, 90, 253, 105, 161, 248, 192, 69, 58, 69, 253, 201, 43, 40, 130, 79, 52, 244, 2] }, EncodedLinkSpec { lstype: LinkSpecType(ED25519ID), body: [157, 90, 248, 134, 43, 250, 156, 209, 247, 101, 52, 125, 5, 64, 233, 85, 26, 83, 37, 200, 93, 1, 76, 125, 239, 176, 139, 171, 56, 135, 46, 108] }, EncodedLinkSpec { lstype: LinkSpecType(ORPORT_V6), body: [32, 1, 11, 200, 18, 1, 5, 18, 218, 94, 211, 255, 254, 108, 130, 65, 0, 143] }], ipt_kp_ntor: PublicKey(MontgomeryPoint([104, 119, 228, 216, 96, 36, 198, 173, 95, 139, 228, 85, 199, 215, 220, 40, 88, 136, 12, 153, 18, 53, 242, 103, 187, 47, 45, 40, 241, 200, 225, 87])) }), n_faults: 0, wants_to_retire: Ok(()) }
2023-11-30T10:39:18Z DEBUG tor_hsservice::ipt_mgr: HS service ztest: 2 good IPTs, < target 3, waiting up to 10081ms for IptLocalId(c74300e60a0f90232cc7f29c4b95ffd5108f1b930abae74a7bccaedd29afd09d)
2023-11-30T10:39:18Z DEBUG tor_hsservice::svc::ipt_establish: ztest: Successfully established introduction point with ed25519:9mTkWlWndy+NAoChGXnUfP2g4E0zSGOxT0m0PWq+u5s $1d851c4bd54c5923328debd4ab1ff7640a2b4e54
2023-11-30T10:39:18Z DEBUG tor_hsservice::ipt_mgr: HS service ztest: IptLocalId(c74300e60a0f90232cc7f29c4b95ffd5108f1b930abae74a7bccaedd29afd09d) status update IptStatus { status: Good(GoodIptDetails { link_specifiers: [EncodedLinkSpec { lstype: LinkSpecType(ORPORT_V4), body: [88, 151, 194, 12, 35, 41] }, EncodedLinkSpec { lstype: LinkSpecType(RSAID), body: [29, 133, 28, 75, 213, 76, 89, 35, 50, 141, 235, 212, 171, 31, 247, 100, 10, 43, 78, 84] }, EncodedLinkSpec { lstype: LinkSpecType(ED25519ID), body: [246, 100, 228, 90, 85, 167, 119, 47, 141, 2, 128, 161, 25, 121, 212, 124, 253, 160, 224, 77, 51, 72, 99, 177, 79, 73, 180, 61, 106, 190, 187, 155] }], ipt_kp_ntor: PublicKey(MontgomeryPoint([37, 218, 239, 208, 74, 104, 35, 68, 209, 217, 164, 239, 228, 77, 149, 203, 82, 18, 243, 26, 124, 16, 83, 223, 70, 81, 170, 47, 54, 215, 144, 90])) }), n_faults: 0, wants_to_retire: Ok(()) }
2023-11-30T10:39:18Z DEBUG tor_hsservice::ipt_mgr: HS service ztest: 3 good IPTs, >= target 3, publishing
2023-11-30T10:39:18Z DEBUG tor_hsservice::svc::ipt_establish: ztest: Successfully established introduction point with ed25519:esI8hyLcA3AaxHU3PCOHRlryg9m9uEjGnPv8xQBd0bw $ecab9a5832f7f2913ad8ea429685e10f7c035d06
2023-11-30T10:39:18Z DEBUG tor_hsservice::ipt_mgr: HS service ztest: IptLocalId(cc5d8233adbbe3399e2305b3db11be44951fb5e018cfd1ad3fe2b4ebe6bf313a) status update IptStatus { status: Good(GoodIptDetails { link_specifiers: [EncodedLinkSpec { lstype: LinkSpecType(ORPORT_V4), body: [185, 148, 1, 169, 3, 82] }, EncodedLinkSpec { lstype: LinkSpecType(RSAID), body: [236, 171, 154, 88, 50, 247, 242, 145, 58, 216, 234, 66, 150, 133, 225, 15, 124, 3, 93, 6] }, EncodedLinkSpec { lstype: LinkSpecType(ED25519ID), body: [122, 194, 60, 135, 34, 220, 3, 112, 26, 196, 117, 55, 60, 35, 135, 70, 90, 242, 131, 217, 189, 184, 72, 198, 156, 251, 252, 197, 0, 93, 209, 188] }], ipt_kp_ntor: PublicKey(MontgomeryPoint([229, 114, 109, 61, 249, 175, 154, 51, 61, 134, 254, 39, 172, 199, 210, 131, 95, 162, 208, 27, 34, 11, 42, 211, 40, 40, 40, 224, 89, 125, 119, 62])) }), n_faults: 0, wants_to_retire: Ok(()) }
2023-11-30T10:39:18Z DEBUG tor_hsservice::ipt_mgr: HS service ztest: 4 good IPTs, >= target 3, publishing
2023-11-30T10:39:19Z DEBUG tor_dirmgr::state: Consensus now usable, with 448 microdescriptors missing. The current consensus is fresh until 2023-11-30 11:00:00.0 +00:00:00, and valid until 2023-11-30 13:00:00.0 +00:00:00. I've picked 2023-11-30 12:13:57.742181929 +00:00:00 as the earliest time to replace it.
2023-11-30T10:39:19Z INFO tor_dirmgr: Marked consensus usable.
2023-11-30T10:39:19Z DEBUG arti_client::status: 42%: connecting successfully; directory is fetching authority certificates (8/8); next directory is fetching microdescriptors (7373/7800)
2023-11-30T10:39:19Z INFO tor_dirmgr::bootstrap: 1: Downloading microdescriptors (we are missing 427). attempt=2
2023-11-30T10:39:20Z WARN tor_hsservice::svc::publish: the publisher reactor has shut down: error: Internal error: internal error (bug) at /volatile/rustcargo/Rustup/Arti/arti/crates/tor-hsservice/src/svc/publish/reactor.rs:1375:13: current wallclock time not within specified time period?!
Captured( 0: tor_error::internal::ie_backtrace::capture
at crates/tor-error/src/internal.rs:23:18
1: tor_error::internal::Bug::new_inner
at crates/tor-error/src/internal.rs:107:24
2: tor_error::internal::Bug::new
at crates/tor-error/src/internal.rs:96:9
3: tor_hsservice::svc::publish::reactor::Reactor<R,M>::generate_revision_counter::{{closure}}
at crates/tor-hsservice/src/svc/publish/reactor.rs:1375:13
4: core::option::Option<T>::ok_or_else
at /rustc/489647f984b2b3a5ee6b2a0d46a527c8d926ceae/library/core/src/option.rs:1239:25
5: tor_hsservice::svc::publish::reactor::Reactor<R,M>::generate_revision_counter
at crates/tor-hsservice/src/svc/publish/reactor.rs:1374:22
6: tor_hsservice::svc::publish::reactor::Reactor<R,M>::upload_all::{{closure}}
at crates/tor-hsservice/src/svc/publish/reactor.rs:1000:36
7: tor_hsservice::svc::publish::reactor::Reactor<R,M>::run_once::{{closure}}
at crates/tor-hsservice/src/svc/publish/reactor.rs:665:39
8: tor_hsservice::svc::publish::reactor::Reactor<R,M>::run::{{closure}}
at crates/tor-hsservice/src/svc/publish/reactor.rs:600:68
9: tor_hsservice::svc::publish::Publisher<R,M>::launch::{{closure}}
at crates/tor-hsservice/src/svc/publish.rs:108:37
10: <futures_task::future_obj::LocalFutureObj<T> as core::future::future::Future>::poll
at /home/rustcargo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/futures-task-0.3.29/src/future_obj.rs:84:18
11: <futures_task::future_obj::FutureObj<T> as core::future::future::Future>::poll
at /home/rustcargo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/futures-task-0.3.29/src/future_obj.rs:127:9
12: tokio::runtime::task::core::Core<T,S>::poll::{{closure}}
at /home/rustcargo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/runtime/task/core.rs:328:17
13: tokio::loom::std::unsafe_cell::UnsafeCell<T>::with_mut
at /home/rustcargo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/loom/std/unsafe_cell.rs:16:9
tokio::runtime::task::core::Core<T,S>::poll
at /home/rustcargo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/runtime/task/core.rs:317:13
14: tokio::runtime::task::harness::poll_future::{{closure}}
at /home/rustcargo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/runtime/task/harness.rs:485:19
15: <core::panic::unwind_safe::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once
at /rustc/489647f984b2b3a5ee6b2a0d46a527c8d926ceae/library/core/src/panic/unwind_safe.rs:271:9
16: std::panicking::try::do_call
at /rustc/489647f984b2b3a5ee6b2a0d46a527c8d926ceae/library/std/src/panicking.rs:504:40
17: __rust_try
18: std::panicking::try
at /rustc/489647f984b2b3a5ee6b2a0d46a527c8d926ceae/library/std/src/panicking.rs:468:19
19: std::panic::catch_unwind
at /rustc/489647f984b2b3a5ee6b2a0d46a527c8d926ceae/library/std/src/panic.rs:142:14
20: tokio::runtime::task::harness::poll_future
at /home/rustcargo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/runtime/task/harness.rs:473:18
21: tokio::runtime::task::harness::Harness<T,S>::poll_inner
at /home/rustcargo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/runtime/task/harness.rs:208:27
22: tokio::runtime::task::harness::Harness<T,S>::poll
at /home/rustcargo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/runtime/task/harness.rs:153:15
23: tokio::runtime::task::raw::poll
at /home/rustcargo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/runtime/task/raw.rs:276:5
24: tokio::runtime::task::raw::RawTask::poll
at /home/rustcargo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/runtime/task/raw.rs:200:18
25: tokio::runtime::task::LocalNotified<S>::run
at /home/rustcargo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/runtime/task/mod.rs:408:9
26: tokio::runtime::scheduler::multi_thread::worker::Context::run_task::{{closure}}
at /home/rustcargo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/runtime/scheduler/multi_thread/worker.rs:576:13
27: tokio::runtime::coop::with_budget
at /home/rustcargo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/runtime/coop.rs:107:5
tokio::runtime::coop::budget
at /home/rustcargo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/runtime/coop.rs:73:5
tokio::runtime::scheduler::multi_thread::worker::Context::run_task
at /home/rustcargo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/runtime/scheduler/multi_thread/worker.rs:575:9
28: tokio::runtime::scheduler::multi_thread::worker::Context::run
at /home/rustcargo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/runtime/scheduler/multi_thread/worker.rs:538:24
29: tokio::runtime::scheduler::multi_thread::worker::run::{{closure}}::{{closure}}
at /home/rustcargo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/runtime/scheduler/multi_thread/worker.rs:491:21
30: tokio::runtime::context::scoped::Scoped<T>::set
at /home/rustcargo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/runtime/context/scoped.rs:40:9
31: tokio::runtime::context::set_scheduler::{{closure}}
at /home/rustcargo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/runtime/context.rs:176:26
32: std::thread::local::LocalKey<T>::try_with
at /rustc/489647f984b2b3a5ee6b2a0d46a527c8d926ceae/library/std/src/thread/local.rs:270:16
33: std::thread::local::LocalKey<T>::with
at /rustc/489647f984b2b3a5ee6b2a0d46a527c8d926ceae/library/std/src/thread/local.rs:246:9
34: tokio::runtime::context::set_scheduler
at /home/rustcargo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/runtime/context.rs:176:9
35: tokio::runtime::scheduler::multi_thread::worker::run::{{closure}}
at /home/rustcargo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/runtime/scheduler/multi_thread/worker.rs:486:9
36: tokio::runtime::context::runtime::enter_runtime
at /home/rustcargo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/runtime/context/runtime.rs:65:16
37: tokio::runtime::scheduler::multi_thread::worker::run
at /home/rustcargo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/runtime/scheduler/multi_thread/worker.rs:478:5
38: tokio::runtime::scheduler::multi_thread::worker::Launch::launch::{{closure}}
at /home/rustcargo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/runtime/scheduler/multi_thread/worker.rs:447:45
39: <tokio::runtime::blocking::task::BlockingTask<T> as core::future::future::Future>::poll
at /home/rustcargo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/runtime/blocking/task.rs:42:21
40: tokio::runtime::task::core::Core<T,S>::poll::{{closure}}
at /home/rustcargo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/runtime/task/core.rs:328:17
41: tokio::loom::std::unsafe_cell::UnsafeCell<T>::with_mut
at /home/rustcargo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/loom/std/unsafe_cell.rs:16:9
tokio::runtime::task::core::Core<T,S>::poll
at /home/rustcargo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/runtime/task/core.rs:317:13
42: tokio::runtime::task::harness::poll_future::{{closure}}
at /home/rustcargo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/runtime/task/harness.rs:485:19
43: <core::panic::unwind_safe::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once
at /rustc/489647f984b2b3a5ee6b2a0d46a527c8d926ceae/library/core/src/panic/unwind_safe.rs:271:9
44: std::panicking::try::do_call
at /rustc/489647f984b2b3a5ee6b2a0d46a527c8d926ceae/library/std/src/panicking.rs:504:40
45: __rust_try
46: std::panicking::try
at /rustc/489647f984b2b3a5ee6b2a0d46a527c8d926ceae/library/std/src/panicking.rs:468:19
47: std::panic::catch_unwind
at /rustc/489647f984b2b3a5ee6b2a0d46a527c8d926ceae/library/std/src/panic.rs:142:14
48: tokio::runtime::task::harness::poll_future
at /home/rustcargo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/runtime/task/harness.rs:473:18
49: tokio::runtime::task::harness::Harness<T,S>::poll_inner
at /home/rustcargo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/runtime/task/harness.rs:208:27
50: tokio::runtime::task::harness::Harness<T,S>::poll
at /home/rustcargo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/runtime/task/harness.rs:153:15
51: tokio::runtime::task::raw::poll
at /home/rustcargo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/runtime/task/raw.rs:276:5
52: tokio::runtime::task::raw::RawTask::poll
at /home/rustcargo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/runtime/task/raw.rs:200:18
53: tokio::runtime::task::UnownedTask<S>::run
at /home/rustcargo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/runtime/task/mod.rs:445:9
54: tokio::runtime::blocking::pool::Task::run
at /home/rustcargo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/runtime/blocking/pool.rs:159:9
55: tokio::runtime::blocking::pool::Inner::run
at /home/rustcargo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/runtime/blocking/pool.rs:513:17
56: tokio::runtime::blocking::pool::Spawner::spawn_thread::{{closure}}
at /home/rustcargo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/runtime/blocking/pool.rs:471:13
57: std::sys_common::backtrace::__rust_begin_short_backtrace
at /rustc/489647f984b2b3a5ee6b2a0d46a527c8d926ceae/library/std/src/sys_common/backtrace.rs:154:18
58: std::thread::Builder::spawn_unchecked_::{{closure}}::{{closure}}
at /rustc/489647f984b2b3a5ee6b2a0d46a527c8d926ceae/library/std/src/thread/mod.rs:529:17
59: <core::panic::unwind_safe::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once
at /rustc/489647f984b2b3a5ee6b2a0d46a527c8d926ceae/library/core/src/panic/unwind_safe.rs:271:9
60: std::panicking::try::do_call
at /rustc/489647f984b2b3a5ee6b2a0d46a527c8d926ceae/library/std/src/panicking.rs:504:40
61: __rust_try
62: std::panicking::try
at /rustc/489647f984b2b3a5ee6b2a0d46a527c8d926ceae/library/std/src/panicking.rs:468:19
63: std::panic::catch_unwind
at /rustc/489647f984b2b3a5ee6b2a0d46a527c8d926ceae/library/std/src/panic.rs:142:14
std::thread::Builder::spawn_unchecked_::{{closure}}
at /rustc/489647f984b2b3a5ee6b2a0d46a527c8d926ceae/library/std/src/thread/mod.rs:528:30
64: core::ops::function::FnOnce::call_once{{vtable.shim}}
at /rustc/489647f984b2b3a5ee6b2a0d46a527c8d926ceae/library/core/src/ops/function.rs:250:5
65: <alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once
at /rustc/489647f984b2b3a5ee6b2a0d46a527c8d926ceae/library/alloc/src/boxed.rs:2007:9
<alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once
at /rustc/489647f984b2b3a5ee6b2a0d46a527c8d926ceae/library/alloc/src/boxed.rs:2007:9
std::sys::unix::thread::Thread::new::thread_start
at /rustc/489647f984b2b3a5ee6b2a0d46a527c8d926ceae/library/std/src/sys/unix/thread.rs:108:17
66: start_thread
at /build/glibc-6iIyft/glibc-2.28/nptl/pthread_create.c:486:8
67: clone
at /build/glibc-6iIyft/glibc-2.28/misc/../sysdeps/unix/sysv/linux/x86_64/clone.S:95
)
2023-11-30T10:39:20Z DEBUG tor_hsservice::svc::publish::reactor: reupload task channel closed!
2023-11-30T10:39:21Z INFO tor_dirmgr: Directory is complete. attempt=2
2023-11-30T10:39:21Z DEBUG arti_client::status: 100%: connecting successfully; directory is usable, fresh until 2023-11-30 11:00:00 UTC, and valid until 2023-11-30 13:00:00 UTC
^C
```
Arti: Onion service support · gabi-250

https://gitlab.torproject.org/tpo/core/arti/-/issues/1142
arti hss generates many hsdir 400 errors (2023-12-04T18:38:40Z, Ian Jackson <iwj@torproject.org>)

Empirically (using my wip IPT persistence branch, although that shouldn't matter for this) I sometimes get a lot of 400 errors from hsdirs, if I restart my instance of Arti. Log below the cut.
It seems likely that this is a bug in something we are doing.
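The publisher's retry behavior, visible below as `attempt=N can_retry=true` lines from `tor_hsservice::svc::publish::backoff`, looks like a capped exponential backoff. The following is a minimal illustrative sketch of that pattern only; the function name, base, and cap are hypothetical and not taken from Arti's actual implementation:

```rust
use std::time::Duration;

/// Hypothetical sketch of a capped exponential backoff schedule, of the
/// kind the `attempt=N can_retry=true` log lines suggest: each failed
/// upload attempt doubles the wait, up to a ceiling.
fn backoff_delay(attempt: u32, base: Duration, cap: Duration) -> Duration {
    // `attempt` is 1-based; clamp the exponent so the shift cannot overflow.
    let exp = attempt.saturating_sub(1).min(16);
    base.saturating_mul(1u32 << exp).min(cap)
}

fn main() {
    let base = Duration::from_secs(1);
    let cap = Duration::from_secs(64);
    assert_eq!(backoff_delay(1, base, cap), Duration::from_secs(1));
    assert_eq!(backoff_delay(3, base, cap), Duration::from_secs(4));
    assert_eq!(backoff_delay(10, base, cap), cap); // capped after ~7 doublings
}
```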
<details>
```
2023-11-29T18:59:21Z WARN tor_hsservice::svc::publish::reactor: failed to upload descriptor for service ztest (hsdir_id=$09bfe6a362d2ce435c45f99b56171b6da486d0f1, hsdir_rsa_id=$09bfe6a362d2ce435c45f99b56171b6da486d0f1): error: failed to publish a descriptor: Tried to upload a hidden service descriptor 3 times, but all attempts failed
Attempt 1: circuit failed: Circuit took too long to build
Attempt 2: circuit failed: Circuit took too long to build
Attempt 3: descriptor upload request failed: Request failed: HTTP status code 400
2023-11-29T18:59:21Z DEBUG tor_hsservice::svc::publish::backoff: failed to upload a hidden service descriptor: descriptor upload request failed attempt=2 can_retry=true
2023-11-29T18:59:21Z DEBUG tor_hsservice::svc::publish::reactor: successfully uploaded descriptor to HSDir nickname=ztest hsdir_id=$5d263037fc175596b3a344132b0b755eb8fb1d1c hsdir_rsa_id=$5d263037fc175596b3a344132b0b755eb8fb1d1c
2023-11-29T18:59:21Z DEBUG tor_hsservice::svc::publish::backoff: failed to upload a hidden service descriptor: descriptor upload request failed attempt=3 can_retry=true
2023-11-29T18:59:21Z WARN tor_hsservice::svc::publish::reactor: failed to upload descriptor for service ztest (hsdir_id=$c7643ef0bc0e452c293534d6429d1d7937776483, hsdir_rsa_id=$c7643ef0bc0e452c293534d6429d1d7937776483): error: failed to publish a descriptor: Tried to upload a hidden service descriptor 2 times, but all attempts failed
Attempt 1: circuit failed: Circuit took too long to build
Attempt 2: descriptor upload request failed: Request failed: HTTP status code 400
2023-11-29T18:59:22Z DEBUG tor_hsservice::svc::publish::backoff: failed to upload a hidden service descriptor: circuit failed attempt=3 can_retry=true
2023-11-29T18:59:22Z WARN tor_hsservice::svc::publish::reactor: failed to upload descriptor for service ztest (hsdir_id=$09bfe6a362d2ce435c45f99b56171b6da486d0f1, hsdir_rsa_id=$09bfe6a362d2ce435c45f99b56171b6da486d0f1): error: failed to publish a descriptor: Tried to upload a hidden service descriptor 3 times, but all attempts failed
Attempt 1: circuit failed: Circuit took too long to build
Attempt 2: circuit failed: Circuit took too long to build
Attempt 3: circuit failed: Circuit took too long to build
2023-11-29T18:59:23Z DEBUG tor_hsservice::svc::publish::backoff: failed to upload a hidden service descriptor: descriptor upload request failed attempt=3 can_retry=true
2023-11-29T18:59:23Z WARN tor_hsservice::svc::publish::reactor: failed to upload descriptor for service ztest (hsdir_id=$403d9e1ddd8e66fa8081deae17c94b2d1d1f6164, hsdir_rsa_id=$403d9e1ddd8e66fa8081deae17c94b2d1d1f6164): error: failed to publish a descriptor: Tried to upload a hidden service descriptor 2 times, but all attempts failed
Attempt 1: descriptor upload request failed: Request failed: HTTP status code 400
Attempt 2: descriptor upload request failed: Request failed: HTTP status code 400
2023-11-29T18:59:24Z DEBUG tor_proto::circuit::reactor: Circ 0.62: Truncated from hop #1. Reason: Circuit was destroyed without client truncate [DESTROYED]
2023-11-29T18:59:25Z WARN tor_hsservice::svc::publish::reactor: failed to upload descriptor for service ztest (hsdir_id=$b4f2c6b03ca4c0c551915d4ffc6ca67ee1b34130, hsdir_rsa_id=$b4f2c6b03ca4c0c551915d4ffc6ca67ee1b34130): error: failed to publish a descriptor: Tried to upload a hidden service descriptor 3 times, but all attempts failed
Attempt 1: circuit failed: Circuit took too long to build
Attempt 2: descriptor upload request failed: Request failed: HTTP status code 400
Attempt 3: descriptor upload request failed: Request failed: HTTP status code 400
2023-11-29T18:59:25Z DEBUG tor_hsservice::svc::publish::backoff: failed to upload a hidden service descriptor: descriptor upload request failed attempt=3 can_retry=true
2023-11-29T18:59:25Z WARN tor_hsservice::svc::publish::reactor: failed to upload descriptor for service ztest (hsdir_id=$21fff594cfe691a4a03b828e9597a9f74f878053, hsdir_rsa_id=$21fff594cfe691a4a03b828e9597a9f74f878053): error: failed to publish a descriptor: Tried to upload a hidden service descriptor 2 times, but all attempts failed
Attempt 1: circuit failed: Circuit took too long to build
Attempt 2: descriptor upload request failed: Request failed: HTTP status code 400
2023-11-29T18:59:25Z WARN tor_hsservice::svc::publish::reactor: failed to upload descriptor for service ztest (hsdir_id=$a15676f5f0f2ba7b1ca54446ddb46bee6f699a95, hsdir_rsa_id=$a15676f5f0f2ba7b1ca54446ddb46bee6f699a95): error: failed to publish a descriptor: Tried to upload a hidden service descriptor 3 times, but all attempts failed
Attempt 1: circuit failed: Circuit took too long to build
Attempt 2: descriptor upload request failed: Request failed: HTTP status code 400
Attempt 3: circuit failed: Circuit took too long to build
2023-11-29T18:59:25Z DEBUG tor_hsservice::svc::publish::backoff: failed to upload a hidden service descriptor: descriptor upload request failed attempt=3 can_retry=true
2023-11-29T18:59:26Z WARN tor_hsservice::svc::publish::reactor: failed to upload descriptor for service ztest (hsdir_id=$5d263037fc175596b3a344132b0b755eb8fb1d1c, hsdir_rsa_id=$5d263037fc175596b3a344132b0b755eb8fb1d1c): error: failed to publish a descriptor: Tried to upload a hidden service descriptor 2 times, but all attempts failed
Attempt 1: descriptor upload request failed: Request failed: HTTP status code 400
Attempt 2: descriptor upload request failed: Request failed: HTTP status code 400
2023-11-29T18:59:27Z DEBUG tor_hsservice::svc::publish::backoff: failed to upload a hidden service descriptor: descriptor upload request failed attempt=2 can_retry=true
2023-11-29T18:59:28Z DEBUG tor_hsservice::svc::publish::backoff: failed to upload a hidden service descriptor: descriptor upload request failed attempt=4 can_retry=true
2023-11-29T18:59:28Z WARN tor_hsservice::svc::publish::reactor: failed to upload descriptor for service ztest (hsdir_id=$1e073339919f3ad1e82755a909cf458ccc6252d1, hsdir_rsa_id=$1e073339919f3ad1e82755a909cf458ccc6252d1): error: failed to publish a descriptor: Tried to upload a hidden service descriptor 3 times, but all attempts failed
Attempt 1: descriptor upload request failed: Request failed: HTTP status code 400
Attempt 2: descriptor upload request failed: Request failed: HTTP status code 400
Attempt 3: descriptor upload request failed: Request failed: HTTP status code 400
2023-11-29T18:59:30Z DEBUG tor_hsservice::svc::publish::backoff: failed to upload a hidden service descriptor: descriptor upload request failed attempt=4 can_retry=true
2023-11-29T18:59:31Z DEBUG tor_hsservice::svc::publish::backoff: failed to upload a hidden service descriptor: descriptor upload request failed attempt=3 can_retry=true
2023-11-29T18:59:33Z WARN tor_hsservice::svc::publish::reactor: failed to upload descriptor for service ztest (hsdir_id=$b4f2c6b03ca4c0c551915d4ffc6ca67ee1b34130, hsdir_rsa_id=$b4f2c6b03ca4c0c551915d4ffc6ca67ee1b34130): error: failed to publish a descriptor: Tried to upload a hidden service descriptor 4 times, but all attempts failed
Attempt 1: circuit failed: Circuit took too long to build
Attempt 2: descriptor upload request failed: Request failed: HTTP status code 400
Attempt 3: descriptor upload request failed: Request failed: HTTP status code 400
Attempt 4: descriptor upload request failed: Request failed: HTTP status code 400
2023-11-29T18:59:34Z WARN tor_hsservice::svc::publish::reactor: failed to upload descriptor for service ztest (hsdir_id=$fe39f07ebe7870dce124ab30df3abd0700a43f75, hsdir_rsa_id=$fe39f07ebe7870dce124ab30df3abd0700a43f75): error: failed to publish a descriptor: Tried to upload a hidden service descriptor 4 times, but all attempts failed
Attempt 1: circuit failed: Circuit took too long to build
Attempt 2: circuit failed: Circuit took too long to build
Attempt 3: descriptor upload request failed: Request failed: HTTP status code 400
Attempt 4: descriptor upload request failed: Request failed: HTTP status code 400
2023-11-29T18:59:36Z WARN tor_hsservice::svc::publish::reactor: failed to upload descriptor for service ztest (hsdir_id=$21fff594cfe691a4a03b828e9597a9f74f878053, hsdir_rsa_id=$21fff594cfe691a4a03b828e9597a9f74f878053): error: failed to publish a descriptor: Tried to upload a hidden service descriptor 3 times, but all attempts failed
Attempt 1: circuit failed: Circuit took too long to build
Attempt 2: descriptor upload request failed: Request failed: HTTP status code 400
Attempt 3: descriptor upload request failed: Request failed: HTTP status code 400
```
Arti: Onion service support · gabi-250

https://gitlab.torproject.org/tpo/network-health/sbws/-/issues/40194
Add metric-lib to the documentation on all the repos to change when adding a header KeyValue
sbws: 1.9.x-final · juga

https://gitlab.torproject.org/tpo/network-health/sbws/-/issues/40193
Update `version` KeyValue header in the BandwidthFile (2023-12-11T09:58:46Z, juga)

We've been forgetting to update it on each BandwidthFile spec; with the last changes (tpo/core/torspec#241) it should now be `1.9.0`.
sbws: 1.9.x-final · juga

https://gitlab.torproject.org/tpo/tpa/team/-/issues/41399
New provider requirements for deliverability (2024-02-22T15:43:37Z, micah <micah@torproject.org>)

The new [google](https://blog.google/products/gmail/gmail-security-authentication-spam-protection/) etc. requirements that [impact deliverability](https://www.validity.com/blog/gmail-yahoo-users-will-be-able-to-stop-spam-emails-with-just-one-click/) are coming.
The [guidance that google provides](https://support.google.com/mail/answer/81126) must be satisfied by February of 2024.
There are a number of things on this that we do, and I think at least one thing that we do not do (one-click unsubscribe), but I am not the best at evaluating if we are going to be ok here.
improve mail services · anarcat

https://gitlab.torproject.org/tpo/network-health/sbws/-/issues/40192
Debian package: figure out why updating the package disables the unit and timers (2024-02-05T18:29:12Z, juga)

when they were already enabled by the operator.
`override_dh_installsystemd` has `--no-enable --no-start`, which shouldn't be the cause.
sbws: 1.9.x-final · juga

https://gitlab.torproject.org/tpo/core/arti/-/issues/1109
Tidy up our OpenSSH key format specifications and transfer them to torspec (2023-11-16T15:50:03Z, Ian Jackson <iwj@torproject.org>)

CC @gabi-250
Arti: Onion service support · Ian Jackson <iwj@torproject.org>

https://gitlab.torproject.org/tpo/tpa/team/-/issues/41364
TPA-RFC-63: consider next steps for the backup server (bungei) (2024-03-13T13:18:40Z, anarcat)

bungei filled up this week (#41361) and while we mitigated this by allocating more space to the logical volume, there is now very little space in the volume group to dodge similar bullets in the future ("only" 2.6TB):

```
root@bungei:~# vgs
VG #PV #LV #SN Attr VSize VFree
vg_bulk 1 2 0 wz--n- 72.60t 2.60t
```
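The capacity reasoning above ("only" 2.60t of VFree left to dodge future fill-ups) can be made explicit. This is a hypothetical helper, not an existing TPA tool: any further extension of a logical volume should leave enough free space in the volume group as a reserve.

```rust
/// Hypothetical capacity check: does growing a logical volume by
/// `extend_tib` still leave `reserve_tib` free in the volume group?
/// (`vg_free_tib` is the VFree column reported by `vgs`.)
fn extension_leaves_reserve(vg_free_tib: f64, extend_tib: f64, reserve_tib: f64) -> bool {
    vg_free_tib - extend_tib >= reserve_tib
}

fn main() {
    // With 2.60 TiB free, a 1 TiB extension keeps a 1 TiB reserve...
    assert!(extension_leaves_reserve(2.60, 1.0, 1.0));
    // ...but a 2 TiB extension does not.
    assert!(!extension_leaves_reserve(2.60, 2.0, 1.0));
}
```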
tasks:
- [x] review past tickets about bungei filling up
- [x] review last years disk stats to see if there's another anomaly
- [x] evaluate costs of a server replacement (see #41536)
- [x] adopt budget for new storage server: https://gitlab.torproject.org/tpo/tpa/team/-/wikis/policy/tpa-rfc-63-storage-server-budget
- [ ] build new storage server (spin out in a different issue?) (#41557)
- [ ] evaluate possible software replacements (see #40950 for PostgreSQL)

(next) cluster scaling · anarcat

https://gitlab.torproject.org/tpo/core/tor/-/issues/40876
Tor has extra guard connections (2023-11-09T17:11:33Z, Mike Perry)

We lowered the number of directory guards to 2 in part because I suspected it was causing extra guard connections to get made and kept open, leading to fingerprinting: https://gitlab.torproject.org/tpo/network-health/team/-/issues/325
However, a forum user pointed out that their Tor is using 3 guards still: https://forum.torproject.org/t/tor-browser-connecting-to-3-guard-relays-simultaneously/9819
I also just checked my Tor, and it is using 4 guards...
So there definitely is some problem with Tor opening too many guard connections, and then just keeping them open for as long as it wants.
Tor: 0.4.8.x-post-stable · Mike Perry

https://gitlab.torproject.org/tpo/network-health/sbws/-/issues/40189
Replace links to gitweb.tpo to gitlab.tpo (2023-11-27T13:19:52Z, juga)
sbws: 1.9.x-final · juga

https://gitlab.torproject.org/tpo/core/arti/-/issues/1071
API and CLI for obtaining K_hsid (2023-12-14T16:29:27Z, Ian Jackson <iwj@torproject.org>)

~~Probably this should be logged at level DEBUG at least. Without this, we'll not really be able to do an ad-hoc test since we won't know what `.onion` to try to connect to.~~
We log a newly-generated K_hsid since !1689, but there should be:
1. An API that lets you get the K_hsid from a running hidden service (if it knows it, which it might not...)
2. A CLI operation that does the above

Arti: Onion service support · gabi-250

https://gitlab.torproject.org/tpo/network-health/sbws/-/issues/40186
Document how to enable/start systemd timers, create https server for testing network and configure directory authorities (2023-11-02T18:20:11Z, juga)

Because all these questions came up in the last weeks.
sbws: 1.8.x-final · juga

https://gitlab.torproject.org/tpo/network-health/sbws/-/issues/40185
New release 1.8.1 (2023-11-02T18:19:51Z, juga)
sbws: 1.8.x-final · juga