The Tor Project issues
https://gitlab.torproject.org/groups/tpo/-/issues
(2023-10-26T14:20:06Z)

https://gitlab.torproject.org/tpo/core/torspec/-/issues/223
Convert specifications to mdbook (Nick Mathewson, 2023-10-26)

Per [proposal 345](https://gitlab.torproject.org/tpo/core/torspec/-/blob/main/proposals/345-specs-in-mdbook.md), I want to convert our specifications to markdown and render them in mdbook.
The end result of the migration will be that:
* The torspec repository looks more [like this](https://gitlab.torproject.org/nickm/torspec/-/tree/spec_conversion?ref_type=heads).
* The rendered torspec website looks more [like this](https://people.torproject.org/~nickm/volatile/mdbook-specs/index.html).
It looks like [spec.tpo](https://spec.torproject.org) is available as a target for this rendering; @weasel (the current spec.tpo maintainer) has approved on IRC. There is a [TPA ticket](https://gitlab.torproject.org/tpo/tpa/team/-/issues/41348) open for the admin side of the issue, and I've gotten some helpful advice there too.
On this ticket I'll be tracking the actual details of doing the migration. I won't do anything final without talking to more people, though.
Next steps here are:
* [x] Decide on the new layout we want for torspec.git.
* [x] Decide on the URL layout we want for the new spec.tpo website.
- Should we have a landing page, or should the mdbook content **be** the landing page?
- Should we leave a spot for RFCs?
* [ ] Work on [the migration scripts](https://gitlab.torproject.org/nickm/torspec-converter) and their configuration (including where to break the sections), until they produce output we like and give us the layout we want (a rough sketch of the kind of `SUMMARY.md` generation involved follows this list).
* [ ] Develop the CI process as needed to keep the site up to date.
- Probably, publish to gitlab pages at first.
* [ ] Figure out whose approval we need for this, and see what they think.
* [ ] Write documentation as needed to explain how to edit the spec.
* [ ] Decide how to maintain redirects and permalinks.
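To make the layout step a bit more concrete, here is a minimal, hypothetical sketch (explicitly not the actual torspec-converter) of generating the `SUMMARY.md` table of contents that mdbook needs from a tree of already-converted Markdown files; the `spec/<name>/<section>.md` layout and the title-extraction rule are assumptions, not the layout we have decided on.

```
#!/usr/bin/env python3
"""Hypothetical sketch: build an mdbook SUMMARY.md from converted spec files.

Assumes a layout like spec/<spec-name>/<section>.md, which is an assumption,
not what torspec-converter actually produces.
"""
from pathlib import Path

SRC = Path("spec")  # assumed source directory of converted Markdown files


def title_of(md_file: Path) -> str:
    """Use the first '# ' heading as the chapter title, else the file stem."""
    for line in md_file.read_text(encoding="utf-8").splitlines():
        if line.startswith("# "):
            return line[2:].strip()
    return md_file.stem


def main() -> None:
    lines = ["# Summary", ""]
    for spec_dir in sorted(p for p in SRC.iterdir() if p.is_dir()):
        index = spec_dir / "index.md"
        if index.exists():
            lines.append(f"- [{title_of(index)}]({index.relative_to(SRC)})")
        for section in sorted(spec_dir.glob("*.md")):
            if section.name == "index.md":
                continue
            lines.append(f"  - [{title_of(section)}]({section.relative_to(SRC)})")
    (SRC / "SUMMARY.md").write_text("\n".join(lines) + "\n", encoding="utf-8")


if __name__ == "__main__":
    main()
```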
After we've done this stuff I think we are ready to start the migration.

Assignee: Nick Mathewson

https://gitlab.torproject.org/tpo/team/-/issues/121
Coordinate onboarding of new people in the network-team (Gaba <gaba@torproject.org>, 2023-01-19)

There are 2 people starting in Q1 2022 to work on Onion Services. The first person will be starting on January 16th.
cc @ahf @ewyatt
Template for the meeting agenda: https://gitlab.torproject.org/tpo/team/-/wikis/OnBoardingAgendaTemplate

Assignee: Alexander Færøy <ahf@torproject.org> (2023-01-11)

https://gitlab.torproject.org/tpo/network-health/metrics/onionperf/-/issues/40037
Coordinate Onionperf monitoring alerts (Ana Custura, 2022-03-14)
Per @hiro's suggestion in irc, this is a ticket to review the Monit configuration we have for onionperf instances, to avoid duplicate checks.
At the moment, monit checks every 5 minutes for:
- whether the `onionperf measure` process exists
- whether the `tgen client` process exists
- whether the `tgen server` process exists
- whether the tor and tgen log files are older than 10 minutes
- whether the disk space is more than 80% used
- whether the instances can reach each other on port 8080
@hiro do we have equivalent checks in Prometheus? Do you think we could move to Prometheus and not use Monit?

Milestone: Metrics OKR Q1 - Q2 2022. Assignee: Hiro
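Not an answer to the question, but as a point of comparison, here is a hedged sketch of how roughly equivalent conditions could be checked against a Prometheus server over its HTTP query API. The server URL, thresholds, and all metric names (node_exporter / process-exporter / blackbox_exporter style, plus one invented `onionperf_log_mtime_seconds`) are assumptions about what is scraped, not a description of the current setup.

```
#!/usr/bin/env python3
"""Hedged sketch: rough Prometheus equivalents of the Monit checks.

Everything here is an assumption: the Prometheus URL, the exporters in use,
and the metric names.
"""
import json
import urllib.parse
import urllib.request

PROM = "http://localhost:9090"  # assumed Prometheus server

# Each entry: (description, PromQL expression that returns at least one
# sample when the condition is healthy, because comparisons filter series).
CHECKS = [
    ("onionperf measure process exists",
     'namedprocess_namegroup_num_procs{groupname="onionperf"} > 0'),
    ("tgen processes exist",
     'namedprocess_namegroup_num_procs{groupname="tgen"} > 0'),
    ("tor/tgen logs written in the last 10 minutes",
     'time() - onionperf_log_mtime_seconds < 600'),   # hypothetical metric
    ("disk less than 80% used",
     '(1 - node_filesystem_avail_bytes{mountpoint="/"} '
     '/ node_filesystem_size_bytes{mountpoint="/"}) < 0.8'),
    ("peer instance reachable on port 8080",
     'probe_success{job="blackbox", instance=~".*:8080"} == 1'),
]


def query(expr: str) -> list:
    url = PROM + "/api/v1/query?" + urllib.parse.urlencode({"query": expr})
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)["data"]["result"]


if __name__ == "__main__":
    for name, expr in CHECKS:
        ok = bool(query(expr))
        print(f"{'OK ' if ok else 'FAIL'} {name}")
```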
https://gitlab.torproject.org/tpo/applications/rbm/-/issues/40062
Copy input directories to containers recursively (Pier Angelo Vendrame, 2023-10-13)

Earlier I've run a `firefox-android` build with `RBM_VERBOSE_LOG=1`.
I found that indeed we spend a [very long time copying files](/uploads/974c39037b48eb7f94954e4ee18194c7/android-copies.txt).
They are about 1600 files for slightly less than 500MB, so spending this much time is quite surprising.
My hypothesis is that it takes this long because we copy the files one by one and adjust their owner after each copy.
I wonder if this adds a lot of overhead (we need to set up the container and chroot into it to run `chown`).
My proposal is that we first copy directories (such as the various `gradle-dependencies-N`) recursively, and that we apply the owner only at the end of the copy.
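To illustrate the proposed approach outside of rbm (this is a Python sketch, not rbm's actual code, and the paths and uid/gid below are placeholders): copy each input directory in a single recursive pass and adjust ownership once over the whole tree at the end, rather than after every file.

```
#!/usr/bin/env python3
"""Sketch of 'copy recursively, chown once at the end' (not actual rbm code)."""
import os
import shutil
from pathlib import Path


def copy_inputs(inputs: list[Path], dest_root: Path, uid: int, gid: int) -> None:
    for src in inputs:
        dest = dest_root / src.name
        if src.is_dir():
            # One recursive copy per input directory (e.g. gradle-dependencies-N).
            shutil.copytree(src, dest, symlinks=True)
        else:
            shutil.copy2(src, dest)
    # Adjust ownership once, over the whole tree, instead of per copied file.
    for path in [dest_root, *dest_root.rglob("*")]:
        os.lchown(path, uid, gid)


if __name__ == "__main__":
    copy_inputs([Path("gradle-dependencies-1")], Path("/tmp/container-input"), 1000, 1000)
```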
https://gitlab.torproject.org/tpo/tpa/team/-/issues/40941
Copy some files for me please (tb-build-05) (richard, 2022-10-26)

I need the directory referenced by @boklm here: https://gitlab.torproject.org/tpo/applications/tor-browser-build/-/issues/40628#note_2846845 (`/home/boklm/tor-browser-build/release/signed/11.5.5`) copied somewhere I can access it like in `/home/richard/temp`.

Thanks!

https://gitlab.torproject.org/tpo/community/policies/-/issues/11
Core contributors are not getting invited to Tor meetings anymore by default (Georg Koppen, 2024-03-25)
We learned that our Tor meeting invitation policy is not reflecting our membership document anymore. There it says (among other things):
```
Core Contributorship has the following privileges...
* Standing invitation to the periodic Tor Meetings.
```
But that's not the case anymore.
/cc @isabela

Assignee: Georg Koppen

https://gitlab.torproject.org/tpo/core/torspec/-/issues/56
Correct and update Prop324 based on things learned in prototyping (Mike Perry, 2021-07-20)
While testing congestion control over onion services, I noticed some omissions from the proposal that were present in the background material and literature, as well as some new heuristics I discovered from testing the prototype:
1. [x] The congestion window only should be updated with a congestion signal once per window
1. [x] If the local orconn is blocked, that should be an immediate congestion signal. Also doc that we may have too large a queue there
1. [x] If the edge connections do not have data to send, estimates of BDP should not be updated
1. [x] Westwood may have a runaway condition where max RTT continues to grow. We may want to reduce the max RTT measurement upon congestion
1. [x] I made a congestion control algorithm that directly uses the current BDP estimate as its current congestion window, and this works. We should spec it and evaluate it in Shadow. (A rough sketch of the idea follows this list.)
1. [x] Update the consensus parameter list and tuning experiments section
1. [x] BDP estimation algs

Milestone: Sponsor 61 - Making the Tor network faster & more reliable for users in Internet-repressive places. Assignee: Mike Perry
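For the "BDP estimate as congestion window" item in the checklist above, here is a rough, hypothetical sketch of the idea. It is not prop324 text and not the tor prototype; the RTT-ratio estimator and all constants are assumptions used only to keep the sketch self-contained.

```
"""Hypothetical sketch of 'use the current BDP estimate directly as the
congestion window'. Not prop324 pseudocode; estimator and constants are
assumptions for illustration only."""


class BdpCwnd:
    def __init__(self, cwnd_init: int = 100, cwnd_min: int = 10):
        self.cwnd = cwnd_init        # congestion window, in cells (arbitrary)
        self.cwnd_min = cwnd_min
        self.rtt_min = None          # lowest circuit RTT observed (ms)
        self.rtt_ewma = None         # smoothed current RTT (ms)

    def on_rtt_sample(self, rtt_ms: float, alpha: float = 0.1) -> None:
        self.rtt_min = rtt_ms if self.rtt_min is None else min(self.rtt_min, rtt_ms)
        self.rtt_ewma = rtt_ms if self.rtt_ewma is None else (
            alpha * rtt_ms + (1 - alpha) * self.rtt_ewma)

    def bdp_estimate(self) -> int:
        # A simple RTT-ratio estimator (an assumption, not the spec's):
        # with no queueing, rtt_ewma is close to rtt_min and BDP is close to
        # cwnd; as queueing delay grows, the estimate falls below cwnd.
        if not self.rtt_min or not self.rtt_ewma:
            return self.cwnd
        return int(self.cwnd * self.rtt_min / self.rtt_ewma)

    def update_cwnd(self) -> None:
        # The idea from the checklist item: set the window directly from the
        # BDP estimate, subject to a floor.
        self.cwnd = max(self.cwnd_min, self.bdp_estimate())
```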
https://gitlab.torproject.org/tpo/network-health/metrics/timeline/-/issues/5
Correct link to timeline project (Georg Koppen, 2021-06-30)

We moved the whole metrics namespace into network-health. We should fix the link to this project in the README.

Assignee: Georg Koppen

https://gitlab.torproject.org/tpo/applications/tor-browser-build/-/issues/40826
Correctly set appname_marfile for basebrowser in tools/signing/nightly/update-responses-base-config.yml (boklm, 2023-04-17)

Assignee: boklm

https://gitlab.torproject.org/tpo/tpa/team/-/issues/40682
corsicum IPv4 routing issues (anarcat, 2022-03-30)

nagios has been seeing corsicum as down for days now:
```
10:43:10 <nsa> tor-nagios: [corsicum] corsicum is DOWN: Date/Time: Fri Mar 11 15:42:50 UTC 2022
```
routing issues do not seem to be distributed equally: i can SSH into corsicum through the jump host (perdulce) and from home, both of which use IPv6, but not from my IPv4-only office. trying from home and perdulce over IPv4 also fails.

Assignee: anarcat

https://gitlab.torproject.org/tpo/tpa/team/-/issues/40011
corsicum should be upgraded to buster/Debian 10 (weasel (Peter Palfrader), 2020-06-23)

Assignee: weasel (Peter Palfrader)

https://gitlab.torproject.org/tpo/network-health/metrics/monitoring-and-alerting/-/issues/9
Count relays by flag (Hiro, 2022-02-25)
In the onionoo aggregated [network data script](https://gitlab.torproject.org/tpo/network-health/metrics/monitoring-and-alerting/-/blob/main/network/onionoo) we count relays and bridges by various attributes.
I think it would be interesting to start counting also by flag, so that we can monitor trends in Prometheus and, if needed, send an alert.

Assignee: Hiro
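As a starting point, here is a minimal sketch (not the existing network data script) of counting running relays per flag from the public Onionoo `details` endpoint; the output format is just one possible shape for feeding Prometheus, for example through a textfile collector, and is an assumption.

```
#!/usr/bin/env python3
"""Hedged sketch: count running relays per flag using Onionoo.

Not the existing monitoring-and-alerting script; only the Onionoo details
endpoint and its 'flags' field are relied on here.
"""
import json
import urllib.request
from collections import Counter

URL = "https://onionoo.torproject.org/details?type=relay&running=true&fields=flags"


def count_flags() -> Counter:
    with urllib.request.urlopen(URL, timeout=60) as resp:
        data = json.load(resp)
    counts = Counter()
    for relay in data.get("relays", []):
        counts.update(relay.get("flags", []))
    return counts


if __name__ == "__main__":
    for flag, n in sorted(count_flags().items()):
        # Printed in a form that could feed a Prometheus textfile collector.
        print(f'relays_with_flag{{flag="{flag}"}} {n}')
```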
https://gitlab.torproject.org/tpo/web/donate-static/-/issues/102
Counter on donate.tp.o not displaying numbers (mattlav, 2022-12-15)

The YEC campaign counter at [the donate page](https://donate.torproject.org/) isn't counting - in fact it's just showing a bunch of flashing green rectangles and zeroes. Beyond that I don't really know what's up, but it doesn't look great. Looks like a job for @kez !

Assignee: Jérôme Charaoui <lavamind@torproject.org>

https://gitlab.torproject.org/tpo/core/tor/-/issues/40478
Coverity warning in XON/XOFF handling (Nick Mathewson, 2021-10-05)

```
397 /* Adjust the token bucket of this edge connection with the drain rate in
398 * the XON. Rate is in bytes from kilobit (kpbs). */
>>> CID 1492322: Integer handling issues (OVERFLOW_BEFORE_WIDEN)
>>> Potentially overflowing expression "xon_cell_get_kbps_ewma(xon) * 1000U" with type "unsigned int" (32 bits, unsigned) is evaluated using 32-bit arithmetic, and then used in a context that expects an expression of type "uint64_t" (64 bits, unsigned).
399 uint64_t rate = xon_cell_get_kbps_ewma(xon) * 1000;
400 if (rate == 0 || INT32_MAX < rate) {
401 /* No rate. */
402 rate = INT32_MAX;
403 }
404 token_bucket_rw_adjust(&conn->bucket, (uint32_t) rate, (uint32_t) rate);
```
cc @dgoulet @mikeperry

Milestone: Tor: 0.4.7.x-freeze. Assignee: David Goulet <dgoulet@torproject.org>

https://gitlab.torproject.org/tpo/core/tor/-/issues/40257
Crash on MetricsPort when prematurely terminating socket (Neel Chauhan <neel@neelc.org>, 2021-02-08)
If I set up a `MetricsPort` and telnet into it, and then prematurely terminate the socket without doing anything, we get a crash:

```
Jan 23 14:03:51.000 [notice] Bootstrapped 100% (done): Done
Jan 23 14:03:56.000 [warn] conn_read_callback: Bug: Unhandled error on read for Metrics connection (fd 10); removing (on Tor 0.4.6.0-alpha-dev 878c124e0dda4cde)
Jan 23 14:03:56.000 [warn] tor_bug_occurred_: Bug: src/core/mainloop/mainloop.c:899: conn_read_callback: This line should not have been reached. (Future instances of this warning will be silenced.) (on Tor 0.4.6.0-alpha-dev 878c124e0dda4cde)
Jan 23 14:03:56.000 [warn] Bug: Tor 0.4.6.0-alpha-dev (git-878c124e0dda4cde): Line unexpectedly reached at conn_read_callback at src/core/mainloop/mainloop.c:899. Stack trace: (on Tor 0.4.6.0-alpha-dev 878c124e0dda4cde)
Jan 23 14:03:56.000 [warn] Bug: 0x130985c <log_backtrace_impl+0x5c> at /usr/home/neel/code/tor/tor/src/app/tor (on Tor 0.4.6.0-alpha-dev 878c124e0dda4cde)
Jan 23 14:03:56.000 [warn] Bug: 0x1317d91 <tor_bug_occurred_+0x1d1> at /usr/home/neel/code/tor/tor/src/app/tor (on Tor 0.4.6.0-alpha-dev 878c124e0dda4cde)
Jan 23 14:03:56.000 [warn] Bug: 0x116a843 <conn_read_callback+0x1021103> at /usr/home/neel/code/tor/tor/src/app/tor (on Tor 0.4.6.0-alpha-dev 878c124e0dda4cde)
Jan 23 14:03:56.000 [warn] Bug: 0x80140519d <event_base_assert_ok_nolock_+0xbfd> at /usr/local/lib/libevent-2.1.so.7 (on Tor 0.4.6.0-alpha-dev 878c124e0dda4cde)
Jan 23 14:03:56.000 [warn] Bug: 0x80140112c <event_base_loop+0x58c> at /usr/local/lib/libevent-2.1.so.7 (on Tor 0.4.6.0-alpha-dev 878c124e0dda4cde)
Jan 23 14:03:56.000 [warn] Bug: 0x116cbba <do_main_loop+0x12a> at /usr/home/neel/code/tor/tor/src/app/tor (on Tor 0.4.6.0-alpha-dev 878c124e0dda4cde)
Jan 23 14:03:56.000 [warn] Bug: 0x1155f1c <tor_run_main+0x12c> at /usr/home/neel/code/tor/tor/src/app/tor (on Tor 0.4.6.0-alpha-dev 878c124e0dda4cde)
Jan 23 14:03:56.000 [warn] Bug: 0x1154871 <tor_main+0x61> at /usr/home/neel/code/tor/tor/src/app/tor (on Tor 0.4.6.0-alpha-dev 878c124e0dda4cde)
^CJan 23 14:04:02.000 [notice] Interrupt: exiting cleanly.
neel@concorde:~/code/tor/tor %
```

Milestone: Tor: 0.4.5.x-stable. Assignee: David Goulet <dgoulet@torproject.org>

https://gitlab.torproject.org/tpo/core/arti/-/issues/1143
crash: current wallclock time not within specified time period?! (Ian Jackson <iwj@torproject.org>, 2023-12-11)
I was trying to repro #1142 with the extra error message from !1780 and this happened:
`2023-11-30T10:39:20Z WARN tor_hsservice::svc::publish: the publisher reactor has shut down: error: Internal error: internal error (bug) at /volatile/rustcargo/Rustup/Arti/arti/crates/tor-hsservice/src/svc/publish/reactor.rs:1375:13: current wallclock time not within specified time period?!`
My system clock is ntp-synchronised and was yesterday too. My state directory was the one from yesterday. I will preserve its current state in case it's useful.
Full log below.
<details>
```
rustcargo@zealot:/volatile/rustcargo/Rustup/Arti/arti$ target/debug/arti -l debug proxy 2>&1 | tee log
2023-11-30T10:39:08Z INFO arti: Starting Arti 1.1.10 in SOCKS proxy mode on localhost port 9150 ...
2023-11-30T10:39:08Z DEBUG arti::process: Increased process file limit to 4096
2023-11-30T10:39:08Z DEBUG tor_guardmgr::sample: Guard set loaded. n_guards=50 n_confirmed=13
2023-11-30T10:39:08Z DEBUG tor_guardmgr::sample: Guard set loaded. n_guards=0 n_confirmed=0
2023-11-30T10:39:08Z DEBUG tor_guardmgr::sample: Guard set loaded. n_guards=1 n_confirmed=0
2023-11-30T10:39:08Z DEBUG tor_guardmgr::sample: Updated primary guards. old=[] new=[GuardId(RelayIds { ed_identity: Some(Ed25519Identity { owZGf6CpH56ez6MpXU0hWdPwgTwrcqsWQSLrFrJ1XHM }), rsa_identity: Some(RsaIdentity { $799ecf332deca02c49de21ff022f7e2dbecda771 }) }), GuardId(RelayIds { ed_identity: Some(Ed25519Identity { rSXeb/ZAJCmtsrw4nwox2x4T2geH0zRaFDu5WSdt5/8 }), rsa_identity: Some(RsaIdentity { $b13c2c569f3fd0c530b7d96e5ff7933df7a0e834 }) }), GuardId(RelayIds { ed_identity: Some(Ed25519Identity { DrtvSq5B9PjKix9I1b6OtcoZTD+BgFeMLWaNAXpd1k8 }), rsa_identity: Some(RsaIdentity { $fc6f665e3c0637976dff2e128e2da2684e6633aa }) })]
2023-11-30T10:39:08Z INFO arti_client::client: Using keystore from "/home/rustcargo/.local/share/arti/keystore"
2023-11-30T10:39:08Z DEBUG arti_client::status: 0%: connecting to the internet; not downloading
2023-11-30T10:39:08Z DEBUG arti_client::status: 0%: connecting to the internet; not downloading
2023-11-30T10:39:08Z DEBUG arti_client::status: 0%: connecting to the internet; not downloading
2023-11-30T10:39:08Z DEBUG tor_hsservice::ipt_mgr: Hs service ztest: IptLocalId(c74300e60a0f90232cc7f29c4b95ffd5108f1b930abae74a7bccaedd29afd09d) establishing previous IPT at relay ed25519:9mTkWlWndy+NAoChGXnUfP2g4E0zSGOxT0m0PWq+u5s $1d851c4bd54c5923328debd4ab1ff7640a2b4e54
2023-11-30T10:39:08Z DEBUG tor_hsservice::ipt_mgr: Hs service ztest: IptLocalId(cc5d8233adbbe3399e2305b3db11be44951fb5e018cfd1ad3fe2b4ebe6bf313a) establishing previous IPT at relay ed25519:esI8hyLcA3AaxHU3PCOHRlryg9m9uEjGnPv8xQBd0bw $ecab9a5832f7f2913ad8ea429685e10f7c035d06
2023-11-30T10:39:08Z DEBUG tor_hsservice::ipt_mgr: Hs service ztest: IptLocalId(f9055afd0854f07ef7906ecea0a633a7ce49f5b03f836cbdf3e7226b61cc8d85) establishing previous IPT at relay ed25519:nVr4hiv6nNH3ZTR9BUDpVRpTJchdAUx977CLqziHLmw $b7b35afd69a1f8c0453a45fdc92b28824f34f402
2023-11-30T10:39:08Z DEBUG tor_hsservice::ipt_mgr: Hs service ztest: IptLocalId(b8866c883a85c378c3a5556669532077280e8b4ffab3b784a9002aed489435b0) establishing previous IPT at relay ed25519:33GIrEt61yYIE/FHGBZIKcNyiQzfsv3IiIHCtPWyH4c $86c1b3da62eff05ad52040c6f569939319cadf26
2023-11-30T10:39:08Z DEBUG tor_hsservice::svc::publish::reactor: starting descriptor publisher reactor
2023-11-30T10:39:08Z DEBUG tor_hsservice::ipt_mgr: HS service ztest: no good IPTs
2023-11-30T10:39:08Z DEBUG arti::reload_cfg: Entering FS event loop
2023-11-30T10:39:08Z DEBUG arti_client::client: It appears we have the lock on our state files.
2023-11-30T10:39:08Z DEBUG tor_hsservice::ipt_mgr: HS service ztest: IptLocalId(c74300e60a0f90232cc7f29c4b95ffd5108f1b930abae74a7bccaedd29afd09d) status update IptStatus { status: Establishing, n_faults: 0, wants_to_retire: Ok(()) }
2023-11-30T10:39:08Z DEBUG tor_hsservice::ipt_mgr: HS service ztest: no good IPTs
2023-11-30T10:39:08Z DEBUG tor_hsservice::ipt_mgr: HS service ztest: IptLocalId(cc5d8233adbbe3399e2305b3db11be44951fb5e018cfd1ad3fe2b4ebe6bf313a) status update IptStatus { status: Establishing, n_faults: 0, wants_to_retire: Ok(()) }
2023-11-30T10:39:08Z DEBUG tor_hsservice::ipt_mgr: HS service ztest: no good IPTs
2023-11-30T10:39:08Z DEBUG tor_hsservice::ipt_mgr: HS service ztest: IptLocalId(f9055afd0854f07ef7906ecea0a633a7ce49f5b03f836cbdf3e7226b61cc8d85) status update IptStatus { status: Establishing, n_faults: 0, wants_to_retire: Ok(()) }
2023-11-30T10:39:08Z DEBUG tor_hsservice::ipt_mgr: HS service ztest: no good IPTs
2023-11-30T10:39:08Z DEBUG tor_hsservice::ipt_mgr: HS service ztest: IptLocalId(b8866c883a85c378c3a5556669532077280e8b4ffab3b784a9002aed489435b0) status update IptStatus { status: Establishing, n_faults: 0, wants_to_retire: Ok(()) }
2023-11-30T10:39:08Z DEBUG tor_hsservice::ipt_mgr: HS service ztest: no good IPTs
2023-11-30T10:39:08Z DEBUG tor_hsservice::ipt_mgr: HS service ztest: IptLocalId(c74300e60a0f90232cc7f29c4b95ffd5108f1b930abae74a7bccaedd29afd09d) status update IptStatus { status: Establishing, n_faults: 0, wants_to_retire: Ok(()) }
2023-11-30T10:39:08Z DEBUG tor_hsservice::ipt_mgr: HS service ztest: no good IPTs
2023-11-30T10:39:08Z DEBUG tor_hsservice::ipt_mgr: HS service ztest: no good IPTs
2023-11-30T10:39:09Z DEBUG arti::reload_cfg: Config reload event Rescan: reloading configuration.
2023-11-30T10:39:09Z INFO arti::reload_cfg: Successfully reloaded configuration.
2023-11-30T10:39:09Z DEBUG arti_client::status: 19%: connecting to the internet; directory is fetching authority certificates (0/8)
2023-11-30T10:39:09Z DEBUG arti_client::status: 27%: connecting to the internet; directory is fetching authority certificates (8/8)
2023-11-30T10:39:15Z DEBUG tor_dirmgr::state: Consensus now usable, with 0 microdescriptors missing. The current consensus is fresh until 2023-11-29 19:00:00.0 +00:00:00, and valid until 2023-11-29 21:00:00.0 +00:00:00. I've picked 2023-11-29 20:04:34.170147666 +00:00:00 as the earliest time to replace it.
2023-11-30T10:39:15Z INFO tor_dirmgr: Marked consensus usable.
2023-11-30T10:39:15Z DEBUG arti_client::status: 77%: connecting to the internet; directory is fetching microdescriptors (7787/7787)
2023-11-30T10:39:15Z INFO tor_dirmgr: Loaded a good directory from cache.
2023-11-30T10:39:15Z INFO arti: Sufficiently bootstrapped; system SOCKS now functional.
2023-11-30T10:39:15Z INFO arti::socks: Listening on [::1]:9150.
2023-11-30T10:39:15Z INFO arti::socks: Listening on 127.0.0.1:9150.
2023-11-30T10:39:15Z DEBUG tor_chanmgr::factory: Attempting to open a new channel to [94.23.172.32:444 ed25519:owZGf6CpH56ez6MpXU0hWdPwgTwrcqsWQSLrFrJ1XHM $799ecf332deca02c49de21ff022f7e2dbecda771]
2023-11-30T10:39:15Z DEBUG tor_chanmgr::transport::default: Connecting to 94.23.172.32:444
2023-11-30T10:39:15Z DEBUG arti_client::status: 84%: handshaking with Tor relays; directory is fetching microdescriptors (7787/7787)
2023-11-30T10:39:15Z DEBUG tor_proto::channel::handshake: Chan 0: starting Tor handshake with Direct([94.23.172.32:444])
2023-11-30T10:39:15Z DEBUG tor_proto::channel::handshake: Chan 0: Completed handshake with owZGf6CpH56ez6MpXU0hWdPwgTwrcqsWQSLrFrJ1XHM [$799ecf332deca02c49de21ff022f7e2dbecda771]
2023-11-30T10:39:15Z DEBUG arti_client::status: 92%: connecting successfully; directory is fetching microdescriptors (7787/7787)
2023-11-30T10:39:16Z INFO tor_guardmgr::guard: We have found that guard [94.23.172.32:444 ed25519:owZGf6CpH56ez6MpXU0hWdPwgTwrcqsWQSLrFrJ1XHM $799ecf332deca02c49de21ff022f7e2dbecda771] is usable.
2023-11-30T10:39:16Z DEBUG arti_client::status: 92%: connecting successfully; directory is fetching microdescriptors (7787/7787)
2023-11-30T10:39:16Z DEBUG arti_client::status: 92%: connecting successfully; directory is fetching microdescriptors (7787/7787)
2023-11-30T10:39:16Z DEBUG tor_chanmgr::factory: Attempting to open a new channel to [85.208.144.164:443+ ed25519:rSXeb/ZAJCmtsrw4nwox2x4T2geH0zRaFDu5WSdt5/8 $b13c2c569f3fd0c530b7d96e5ff7933df7a0e834]
2023-11-30T10:39:16Z DEBUG tor_chanmgr::transport::default: Connecting to 85.208.144.164:443
2023-11-30T10:39:16Z DEBUG tor_proto::channel::handshake: Chan 1: starting Tor handshake with Direct([85.208.144.164:443])
2023-11-30T10:39:16Z DEBUG tor_dirmgr::state: Consensus now usable, with 0 microdescriptors missing. The current consensus is fresh until 2023-11-29 19:00:00.0 +00:00:00, and valid until 2023-11-29 21:00:00.0 +00:00:00. I've picked 2023-11-29 20:22:47.99599487 +00:00:00 as the earliest time to replace it.
2023-11-30T10:39:16Z INFO tor_dirmgr: Marked consensus usable.
2023-11-30T10:39:16Z DEBUG arti_client::status: 42%: connecting successfully; directory is fetching authority certificates (8/8)
2023-11-30T10:39:16Z INFO tor_dirmgr: Directory is complete. attempt=1
2023-11-30T10:39:16Z INFO tor_dirmgr::bootstrap: 1: Downloading a consensus. attempt=2
2023-11-30T10:39:17Z DEBUG tor_proto::channel::handshake: Chan 1: Completed handshake with rSXeb/ZAJCmtsrw4nwox2x4T2geH0zRaFDu5WSdt5/8 [$b13c2c569f3fd0c530b7d96e5ff7933df7a0e834]
2023-11-30T10:39:17Z INFO tor_guardmgr::guard: We have found that guard [85.208.144.164:443+ ed25519:rSXeb/ZAJCmtsrw4nwox2x4T2geH0zRaFDu5WSdt5/8 $b13c2c569f3fd0c530b7d96e5ff7933df7a0e834] is usable.
2023-11-30T10:39:17Z DEBUG arti_client::status: 42%: connecting successfully; directory is fetching authority certificates (8/8)
2023-11-30T10:39:17Z INFO tor_dirmgr: Applying a consensus diff
2023-11-30T10:39:18Z DEBUG tor_circmgr: Preeemptive circuit was created for Preemptive { port: Some(TargetPort { ipv6: false, port: 80 }), circs: 2 }
2023-11-30T10:39:18Z DEBUG tor_circmgr: Preeemptive circuit was created for Preemptive { port: Some(TargetPort { ipv6: false, port: 443 }), circs: 2 }
2023-11-30T10:39:18Z DEBUG tor_circmgr: Preeemptive circuit was created for Preemptive { port: None, circs: 2 }
2023-11-30T10:39:18Z DEBUG arti_client::status: 42%: connecting successfully; directory is fetching authority certificates (8/8)
2023-11-30T10:39:18Z DEBUG tor_hsservice::svc::ipt_establish: ztest: Successfully established introduction point with ed25519:33GIrEt61yYIE/FHGBZIKcNyiQzfsv3IiIHCtPWyH4c $86c1b3da62eff05ad52040c6f569939319cadf26
2023-11-30T10:39:18Z DEBUG tor_hsservice::ipt_mgr: HS service ztest: IptLocalId(b8866c883a85c378c3a5556669532077280e8b4ffab3b784a9002aed489435b0) status update IptStatus { status: Good(GoodIptDetails { link_specifiers: [EncodedLinkSpec { lstype: LinkSpecType(ORPORT_V4), body: [82, 165, 10, 171, 1, 187] }, EncodedLinkSpec { lstype: LinkSpecType(RSAID), body: [134, 193, 179, 218, 98, 239, 240, 90, 213, 32, 64, 198, 245, 105, 147, 147, 25, 202, 223, 38] }, EncodedLinkSpec { lstype: LinkSpecType(ED25519ID), body: [223, 113, 136, 172, 75, 122, 215, 38, 8, 19, 241, 71, 24, 22, 72, 41, 195, 114, 137, 12, 223, 178, 253, 200, 136, 129, 194, 180, 245, 178, 31, 135] }], ipt_kp_ntor: PublicKey(MontgomeryPoint([204, 182, 219, 177, 242, 128, 174, 179, 83, 113, 218, 181, 92, 55, 248, 226, 151, 247, 174, 132, 17, 175, 171, 236, 208, 22, 220, 252, 241, 217, 189, 1])) }), n_faults: 0, wants_to_retire: Ok(()) }
2023-11-30T10:39:18Z DEBUG tor_hsservice::ipt_mgr: HS service ztest: 1 good IPTs, < target 3, waiting up to 10081ms for IptLocalId(c74300e60a0f90232cc7f29c4b95ffd5108f1b930abae74a7bccaedd29afd09d)
2023-11-30T10:39:18Z DEBUG tor_hsservice::svc::ipt_establish: ztest: Successfully established introduction point with ed25519:nVr4hiv6nNH3ZTR9BUDpVRpTJchdAUx977CLqziHLmw $b7b35afd69a1f8c0453a45fdc92b28824f34f402
2023-11-30T10:39:18Z DEBUG tor_hsservice::ipt_mgr: HS service ztest: IptLocalId(f9055afd0854f07ef7906ecea0a633a7ce49f5b03f836cbdf3e7226b61cc8d85) status update IptStatus { status: Good(GoodIptDetails { link_specifiers: [EncodedLinkSpec { lstype: LinkSpecType(ORPORT_V4), body: [51, 159, 195, 41, 0, 143] }, EncodedLinkSpec { lstype: LinkSpecType(RSAID), body: [183, 179, 90, 253, 105, 161, 248, 192, 69, 58, 69, 253, 201, 43, 40, 130, 79, 52, 244, 2] }, EncodedLinkSpec { lstype: LinkSpecType(ED25519ID), body: [157, 90, 248, 134, 43, 250, 156, 209, 247, 101, 52, 125, 5, 64, 233, 85, 26, 83, 37, 200, 93, 1, 76, 125, 239, 176, 139, 171, 56, 135, 46, 108] }, EncodedLinkSpec { lstype: LinkSpecType(ORPORT_V6), body: [32, 1, 11, 200, 18, 1, 5, 18, 218, 94, 211, 255, 254, 108, 130, 65, 0, 143] }], ipt_kp_ntor: PublicKey(MontgomeryPoint([104, 119, 228, 216, 96, 36, 198, 173, 95, 139, 228, 85, 199, 215, 220, 40, 88, 136, 12, 153, 18, 53, 242, 103, 187, 47, 45, 40, 241, 200, 225, 87])) }), n_faults: 0, wants_to_retire: Ok(()) }
2023-11-30T10:39:18Z DEBUG tor_hsservice::ipt_mgr: HS service ztest: 2 good IPTs, < target 3, waiting up to 10081ms for IptLocalId(c74300e60a0f90232cc7f29c4b95ffd5108f1b930abae74a7bccaedd29afd09d)
2023-11-30T10:39:18Z DEBUG tor_hsservice::svc::ipt_establish: ztest: Successfully established introduction point with ed25519:9mTkWlWndy+NAoChGXnUfP2g4E0zSGOxT0m0PWq+u5s $1d851c4bd54c5923328debd4ab1ff7640a2b4e54
2023-11-30T10:39:18Z DEBUG tor_hsservice::ipt_mgr: HS service ztest: IptLocalId(c74300e60a0f90232cc7f29c4b95ffd5108f1b930abae74a7bccaedd29afd09d) status update IptStatus { status: Good(GoodIptDetails { link_specifiers: [EncodedLinkSpec { lstype: LinkSpecType(ORPORT_V4), body: [88, 151, 194, 12, 35, 41] }, EncodedLinkSpec { lstype: LinkSpecType(RSAID), body: [29, 133, 28, 75, 213, 76, 89, 35, 50, 141, 235, 212, 171, 31, 247, 100, 10, 43, 78, 84] }, EncodedLinkSpec { lstype: LinkSpecType(ED25519ID), body: [246, 100, 228, 90, 85, 167, 119, 47, 141, 2, 128, 161, 25, 121, 212, 124, 253, 160, 224, 77, 51, 72, 99, 177, 79, 73, 180, 61, 106, 190, 187, 155] }], ipt_kp_ntor: PublicKey(MontgomeryPoint([37, 218, 239, 208, 74, 104, 35, 68, 209, 217, 164, 239, 228, 77, 149, 203, 82, 18, 243, 26, 124, 16, 83, 223, 70, 81, 170, 47, 54, 215, 144, 90])) }), n_faults: 0, wants_to_retire: Ok(()) }
2023-11-30T10:39:18Z DEBUG tor_hsservice::ipt_mgr: HS service ztest: 3 good IPTs, >= target 3, publishing
2023-11-30T10:39:18Z DEBUG tor_hsservice::svc::ipt_establish: ztest: Successfully established introduction point with ed25519:esI8hyLcA3AaxHU3PCOHRlryg9m9uEjGnPv8xQBd0bw $ecab9a5832f7f2913ad8ea429685e10f7c035d06
2023-11-30T10:39:18Z DEBUG tor_hsservice::ipt_mgr: HS service ztest: IptLocalId(cc5d8233adbbe3399e2305b3db11be44951fb5e018cfd1ad3fe2b4ebe6bf313a) status update IptStatus { status: Good(GoodIptDetails { link_specifiers: [EncodedLinkSpec { lstype: LinkSpecType(ORPORT_V4), body: [185, 148, 1, 169, 3, 82] }, EncodedLinkSpec { lstype: LinkSpecType(RSAID), body: [236, 171, 154, 88, 50, 247, 242, 145, 58, 216, 234, 66, 150, 133, 225, 15, 124, 3, 93, 6] }, EncodedLinkSpec { lstype: LinkSpecType(ED25519ID), body: [122, 194, 60, 135, 34, 220, 3, 112, 26, 196, 117, 55, 60, 35, 135, 70, 90, 242, 131, 217, 189, 184, 72, 198, 156, 251, 252, 197, 0, 93, 209, 188] }], ipt_kp_ntor: PublicKey(MontgomeryPoint([229, 114, 109, 61, 249, 175, 154, 51, 61, 134, 254, 39, 172, 199, 210, 131, 95, 162, 208, 27, 34, 11, 42, 211, 40, 40, 40, 224, 89, 125, 119, 62])) }), n_faults: 0, wants_to_retire: Ok(()) }
2023-11-30T10:39:18Z DEBUG tor_hsservice::ipt_mgr: HS service ztest: 4 good IPTs, >= target 3, publishing
2023-11-30T10:39:19Z DEBUG tor_dirmgr::state: Consensus now usable, with 448 microdescriptors missing. The current consensus is fresh until 2023-11-30 11:00:00.0 +00:00:00, and valid until 2023-11-30 13:00:00.0 +00:00:00. I've picked 2023-11-30 12:13:57.742181929 +00:00:00 as the earliest time to replace it.
2023-11-30T10:39:19Z INFO tor_dirmgr: Marked consensus usable.
2023-11-30T10:39:19Z DEBUG arti_client::status: 42%: connecting successfully; directory is fetching authority certificates (8/8); next directory is fetching microdescriptors (7373/7800)
2023-11-30T10:39:19Z INFO tor_dirmgr::bootstrap: 1: Downloading microdescriptors (we are missing 427). attempt=2
2023-11-30T10:39:20Z WARN tor_hsservice::svc::publish: the publisher reactor has shut down: error: Internal error: internal error (bug) at /volatile/rustcargo/Rustup/Arti/arti/crates/tor-hsservice/src/svc/publish/reactor.rs:1375:13: current wallclock time not within specified time period?!
Captured( 0: tor_error::internal::ie_backtrace::capture
at crates/tor-error/src/internal.rs:23:18
1: tor_error::internal::Bug::new_inner
at crates/tor-error/src/internal.rs:107:24
2: tor_error::internal::Bug::new
at crates/tor-error/src/internal.rs:96:9
3: tor_hsservice::svc::publish::reactor::Reactor<R,M>::generate_revision_counter::{{closure}}
at crates/tor-hsservice/src/svc/publish/reactor.rs:1375:13
4: core::option::Option<T>::ok_or_else
at /rustc/489647f984b2b3a5ee6b2a0d46a527c8d926ceae/library/core/src/option.rs:1239:25
5: tor_hsservice::svc::publish::reactor::Reactor<R,M>::generate_revision_counter
at crates/tor-hsservice/src/svc/publish/reactor.rs:1374:22
6: tor_hsservice::svc::publish::reactor::Reactor<R,M>::upload_all::{{closure}}
at crates/tor-hsservice/src/svc/publish/reactor.rs:1000:36
7: tor_hsservice::svc::publish::reactor::Reactor<R,M>::run_once::{{closure}}
at crates/tor-hsservice/src/svc/publish/reactor.rs:665:39
8: tor_hsservice::svc::publish::reactor::Reactor<R,M>::run::{{closure}}
at crates/tor-hsservice/src/svc/publish/reactor.rs:600:68
9: tor_hsservice::svc::publish::Publisher<R,M>::launch::{{closure}}
at crates/tor-hsservice/src/svc/publish.rs:108:37
10: <futures_task::future_obj::LocalFutureObj<T> as core::future::future::Future>::poll
at /home/rustcargo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/futures-task-0.3.29/src/future_obj.rs:84:18
11: <futures_task::future_obj::FutureObj<T> as core::future::future::Future>::poll
at /home/rustcargo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/futures-task-0.3.29/src/future_obj.rs:127:9
12: tokio::runtime::task::core::Core<T,S>::poll::{{closure}}
at /home/rustcargo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/runtime/task/core.rs:328:17
13: tokio::loom::std::unsafe_cell::UnsafeCell<T>::with_mut
at /home/rustcargo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/loom/std/unsafe_cell.rs:16:9
tokio::runtime::task::core::Core<T,S>::poll
at /home/rustcargo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/runtime/task/core.rs:317:13
14: tokio::runtime::task::harness::poll_future::{{closure}}
at /home/rustcargo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/runtime/task/harness.rs:485:19
15: <core::panic::unwind_safe::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once
at /rustc/489647f984b2b3a5ee6b2a0d46a527c8d926ceae/library/core/src/panic/unwind_safe.rs:271:9
16: std::panicking::try::do_call
at /rustc/489647f984b2b3a5ee6b2a0d46a527c8d926ceae/library/std/src/panicking.rs:504:40
17: __rust_try
18: std::panicking::try
at /rustc/489647f984b2b3a5ee6b2a0d46a527c8d926ceae/library/std/src/panicking.rs:468:19
19: std::panic::catch_unwind
at /rustc/489647f984b2b3a5ee6b2a0d46a527c8d926ceae/library/std/src/panic.rs:142:14
20: tokio::runtime::task::harness::poll_future
at /home/rustcargo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/runtime/task/harness.rs:473:18
21: tokio::runtime::task::harness::Harness<T,S>::poll_inner
at /home/rustcargo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/runtime/task/harness.rs:208:27
22: tokio::runtime::task::harness::Harness<T,S>::poll
at /home/rustcargo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/runtime/task/harness.rs:153:15
23: tokio::runtime::task::raw::poll
at /home/rustcargo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/runtime/task/raw.rs:276:5
24: tokio::runtime::task::raw::RawTask::poll
at /home/rustcargo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/runtime/task/raw.rs:200:18
25: tokio::runtime::task::LocalNotified<S>::run
at /home/rustcargo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/runtime/task/mod.rs:408:9
26: tokio::runtime::scheduler::multi_thread::worker::Context::run_task::{{closure}}
at /home/rustcargo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/runtime/scheduler/multi_thread/worker.rs:576:13
27: tokio::runtime::coop::with_budget
at /home/rustcargo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/runtime/coop.rs:107:5
tokio::runtime::coop::budget
at /home/rustcargo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/runtime/coop.rs:73:5
tokio::runtime::scheduler::multi_thread::worker::Context::run_task
at /home/rustcargo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/runtime/scheduler/multi_thread/worker.rs:575:9
28: tokio::runtime::scheduler::multi_thread::worker::Context::run
at /home/rustcargo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/runtime/scheduler/multi_thread/worker.rs:538:24
29: tokio::runtime::scheduler::multi_thread::worker::run::{{closure}}::{{closure}}
at /home/rustcargo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/runtime/scheduler/multi_thread/worker.rs:491:21
30: tokio::runtime::context::scoped::Scoped<T>::set
at /home/rustcargo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/runtime/context/scoped.rs:40:9
31: tokio::runtime::context::set_scheduler::{{closure}}
at /home/rustcargo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/runtime/context.rs:176:26
32: std::thread::local::LocalKey<T>::try_with
at /rustc/489647f984b2b3a5ee6b2a0d46a527c8d926ceae/library/std/src/thread/local.rs:270:16
33: std::thread::local::LocalKey<T>::with
at /rustc/489647f984b2b3a5ee6b2a0d46a527c8d926ceae/library/std/src/thread/local.rs:246:9
34: tokio::runtime::context::set_scheduler
at /home/rustcargo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/runtime/context.rs:176:9
35: tokio::runtime::scheduler::multi_thread::worker::run::{{closure}}
at /home/rustcargo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/runtime/scheduler/multi_thread/worker.rs:486:9
36: tokio::runtime::context::runtime::enter_runtime
at /home/rustcargo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/runtime/context/runtime.rs:65:16
37: tokio::runtime::scheduler::multi_thread::worker::run
at /home/rustcargo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/runtime/scheduler/multi_thread/worker.rs:478:5
38: tokio::runtime::scheduler::multi_thread::worker::Launch::launch::{{closure}}
at /home/rustcargo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/runtime/scheduler/multi_thread/worker.rs:447:45
39: <tokio::runtime::blocking::task::BlockingTask<T> as core::future::future::Future>::poll
at /home/rustcargo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/runtime/blocking/task.rs:42:21
40: tokio::runtime::task::core::Core<T,S>::poll::{{closure}}
at /home/rustcargo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/runtime/task/core.rs:328:17
41: tokio::loom::std::unsafe_cell::UnsafeCell<T>::with_mut
at /home/rustcargo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/loom/std/unsafe_cell.rs:16:9
tokio::runtime::task::core::Core<T,S>::poll
at /home/rustcargo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/runtime/task/core.rs:317:13
42: tokio::runtime::task::harness::poll_future::{{closure}}
at /home/rustcargo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/runtime/task/harness.rs:485:19
43: <core::panic::unwind_safe::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once
at /rustc/489647f984b2b3a5ee6b2a0d46a527c8d926ceae/library/core/src/panic/unwind_safe.rs:271:9
44: std::panicking::try::do_call
at /rustc/489647f984b2b3a5ee6b2a0d46a527c8d926ceae/library/std/src/panicking.rs:504:40
45: __rust_try
46: std::panicking::try
at /rustc/489647f984b2b3a5ee6b2a0d46a527c8d926ceae/library/std/src/panicking.rs:468:19
47: std::panic::catch_unwind
at /rustc/489647f984b2b3a5ee6b2a0d46a527c8d926ceae/library/std/src/panic.rs:142:14
48: tokio::runtime::task::harness::poll_future
at /home/rustcargo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/runtime/task/harness.rs:473:18
49: tokio::runtime::task::harness::Harness<T,S>::poll_inner
at /home/rustcargo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/runtime/task/harness.rs:208:27
50: tokio::runtime::task::harness::Harness<T,S>::poll
at /home/rustcargo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/runtime/task/harness.rs:153:15
51: tokio::runtime::task::raw::poll
at /home/rustcargo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/runtime/task/raw.rs:276:5
52: tokio::runtime::task::raw::RawTask::poll
at /home/rustcargo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/runtime/task/raw.rs:200:18
53: tokio::runtime::task::UnownedTask<S>::run
at /home/rustcargo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/runtime/task/mod.rs:445:9
54: tokio::runtime::blocking::pool::Task::run
at /home/rustcargo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/runtime/blocking/pool.rs:159:9
55: tokio::runtime::blocking::pool::Inner::run
at /home/rustcargo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/runtime/blocking/pool.rs:513:17
56: tokio::runtime::blocking::pool::Spawner::spawn_thread::{{closure}}
at /home/rustcargo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/runtime/blocking/pool.rs:471:13
57: std::sys_common::backtrace::__rust_begin_short_backtrace
at /rustc/489647f984b2b3a5ee6b2a0d46a527c8d926ceae/library/std/src/sys_common/backtrace.rs:154:18
58: std::thread::Builder::spawn_unchecked_::{{closure}}::{{closure}}
at /rustc/489647f984b2b3a5ee6b2a0d46a527c8d926ceae/library/std/src/thread/mod.rs:529:17
59: <core::panic::unwind_safe::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once
at /rustc/489647f984b2b3a5ee6b2a0d46a527c8d926ceae/library/core/src/panic/unwind_safe.rs:271:9
60: std::panicking::try::do_call
at /rustc/489647f984b2b3a5ee6b2a0d46a527c8d926ceae/library/std/src/panicking.rs:504:40
61: __rust_try
62: std::panicking::try
at /rustc/489647f984b2b3a5ee6b2a0d46a527c8d926ceae/library/std/src/panicking.rs:468:19
63: std::panic::catch_unwind
at /rustc/489647f984b2b3a5ee6b2a0d46a527c8d926ceae/library/std/src/panic.rs:142:14
std::thread::Builder::spawn_unchecked_::{{closure}}
at /rustc/489647f984b2b3a5ee6b2a0d46a527c8d926ceae/library/std/src/thread/mod.rs:528:30
64: core::ops::function::FnOnce::call_once{{vtable.shim}}
at /rustc/489647f984b2b3a5ee6b2a0d46a527c8d926ceae/library/core/src/ops/function.rs:250:5
65: <alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once
at /rustc/489647f984b2b3a5ee6b2a0d46a527c8d926ceae/library/alloc/src/boxed.rs:2007:9
<alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once
at /rustc/489647f984b2b3a5ee6b2a0d46a527c8d926ceae/library/alloc/src/boxed.rs:2007:9
std::sys::unix::thread::Thread::new::thread_start
at /rustc/489647f984b2b3a5ee6b2a0d46a527c8d926ceae/library/std/src/sys/unix/thread.rs:108:17
66: start_thread
at /build/glibc-6iIyft/glibc-2.28/nptl/pthread_create.c:486:8
67: clone
at /build/glibc-6iIyft/glibc-2.28/misc/../sysdeps/unix/sysv/linux/x86_64/clone.S:95
)
2023-11-30T10:39:20Z DEBUG tor_hsservice::svc::publish::reactor: reupload task channel closed!
2023-11-30T10:39:21Z INFO tor_dirmgr: Directory is complete. attempt=2
2023-11-30T10:39:21Z DEBUG arti_client::status: 100%: connecting successfully; directory is usable, fresh until 2023-11-30 11:00:00 UTC, and valid until 2023-11-30 13:00:00 UTC
^C
```

</details>

Milestone: Arti: Onion service support. Assignee: gabi-250

https://gitlab.torproject.org/tpo/core/arti/-/issues/495
Crates in Cargo.toml are no longer topologically sorted (Nick Mathewson, 2022-06-09)

A fair amount of our release machinery relied on being able to get a topologically sorted list of crates by running `maint/list-crates`, which just lists the crates in the order that they appear in Cargo.toml. The topological-sorting property was enforced by `maint/check_toposort`.
With bfd41ddb5fefe8808c33669c508dde4325808e35 (from !549) it appears that the crates are now sorted lexically. But the following comment still appears:
```
# Please keep this list topologically sorted by dependency relation, so
# that every crate appears _before_ any other crate that depends on it.
```
We have two choices:
1. Revert the sorting of bfd41ddb5fefe8808c33669c508dde4325808e35 as it applies to the top-level Cargo.toml. Possibly run `check_toposort` in our CI so this can't happen again.
2. Replace `list-crates` and `check_toposort` with a script that performs the topological sort (a rough sketch of such a script follows below).
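If we went with option 2, the script could be fairly small. Here is a hedged sketch using `cargo metadata`; it is not meant as a drop-in replacement for `maint/list-crates` or `maint/check_toposort`, and it only prints the sorted workspace crate names.

```
#!/usr/bin/env python3
"""Sketch for option 2: topologically sort workspace crates via `cargo metadata`.

A rough illustration, not a drop-in replacement for maint/list-crates or
maint/check_toposort.
"""
import json
import subprocess
from graphlib import TopologicalSorter


def workspace_crates_toposorted() -> list[str]:
    meta = json.loads(subprocess.run(
        ["cargo", "metadata", "--format-version", "1"],
        check=True, capture_output=True, text=True).stdout)
    member_ids = set(meta["workspace_members"])
    members = {pkg["name"] for pkg in meta["packages"] if pkg["id"] in member_ids}
    # Map each crate to the workspace crates it depends on, so that every
    # crate appears after its dependencies in the output.
    graph = {
        pkg["name"]: {d["name"] for d in pkg["dependencies"] if d["name"] in members}
        for pkg in meta["packages"] if pkg["name"] in members
    }
    return list(TopologicalSorter(graph).static_order())


if __name__ == "__main__":
    print("\n".join(workspace_crates_toposorted()))
```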
We should solve this in the next month, so that I can do the next release. :)
cc @Diziet

Assignee: Ian Jackson <iwj@torproject.org>

https://gitlab.torproject.org/tpo/network-health/team/-/issues/237
Create `load_tcp_exhaustion_total` and `load_oom_bytes_total` panels on relay-01 dashboard (Georg Koppen, 2022-05-30)

@hiro created a dashboard with some panels for relay-01 (thanks!). We should add the missing panels.

Assignee: Georg Koppen

https://gitlab.torproject.org/tpo/tpa/team/-/issues/41330
Create a `lox` user on rdsys-frontend-01 (Cecylia Bocovich, 2023-10-23)

On the rdsys-frontend-01 machine, we're going with the plan to create a user per service and set up systemd for that user (see https://gitlab.torproject.org/tpo/anti-censorship/rdsys/-/issues/167#note_2943424). We're planning to deploy the lox distributor and would like a user for that service.
cc @meskio @onyinyang

Assignee: anarcat

https://gitlab.torproject.org/tpo/tpa/team/-/issues/40819
create a dev VM for GitLab (gitlab-dev-01) (anarcat, 2022-06-28)

i'm going to [hack at gitlab](https://hackweek.onionize.space/hackweek/talk/3PNGB8/) during the [hackweek](https://gitlab.torproject.org/tpo/community/hackweek/) but i don't want to break gitlab, so i need a spare VM.

Assignee: anarcat