# The Tor Project issues
https://gitlab.torproject.org/groups/tpo/-/issues (2024-03-22)

## Provide links to support channels and offline documentation
https://gitlab.torproject.org/tpo/applications/vpn/-/issues/48 (micah, 2024-03-22)

When users have issues with TorVPN that they cannot resolve on their own, they should easily find a help menu with documentation and links to support channels so they can resolve their issues.
~~The help documentation should be available offline, so it's possible to debug issues without needing to connect to the Internet in the clear to access it.~~
(The offline documentation portion of this issue will be moved to the cost extension.)

(Milestone: VPN pre-alpha 06; assigned: cyberta)

## TPA-RFC-33: monitoring system upgrade or replacement
https://gitlab.torproject.org/tpo/tpa/team/-/issues/40755 (anarcat, 2024-03-19)

in #29864, we've gone pretty deep in comparisons between prometheus and icinga and how the former could replace the latter.
but now we're stuck at "i like this one better than the other" because we don't have a clear set of requirements.
the task here is to write a set of requirements for the new alerting system and, ultimately, make a proposal for the replacement of the deprecated Icinga 1 deployment we have now.
* [ ] establish requirements
* [ ] approve requirements
* if replacing icinga:
  * [ ] review #29864 for ideas and tasks
  * [ ] decide whether we keep the prometheus1/2 distinction
  * [ ] deploy alertmanager on prometheus1
  * [ ] reimplement the Nagios alerting commands (optional?)
  * [ ] send Nagios alerts through the alertmanager (optional?)
  * [ ] rewrite (non-NRPE) commands (9) as Prometheus alerts
  * [ ] scrape the NRPE metrics from Prometheus (optional)
  * [ ] create a dashboard and/or alerts for the NRPE metrics (optional)
  * [ ] review the NRPE commands (300+) to see which ones to rewrite as Prometheus alerts
  * [ ] turn off the Icinga server
  * [ ] remove all traces of NRPE on all nodes
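As a concrete illustration of the "rewrite commands as Prometheus alerts" step, a Nagios-style disk check might become a rule like the following. This is only a sketch: the group name, alert name, and threshold are invented, and the metric names assume node_exporter is being scraped.

```yaml
groups:
  - name: tpa-basic
    rules:
      # Hypothetical replacement for a Nagios disk-usage check,
      # based on node_exporter's filesystem metrics.
      - alert: DiskAlmostFull
        expr: |
          node_filesystem_avail_bytes{fstype!~"tmpfs|overlay"}
            / node_filesystem_size_bytes < 0.10
        for: 30m
        labels:
          severity: warning
        annotations:
          summary: "{{ $labels.instance }}: {{ $labels.mountpoint }} has less than 10% free"
```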
* if keeping icinga:
  * [ ] review work from @weasel done on DSA's Puppet/Icinga integration
  * [ ] deploy that module or another icinga module inside puppet
  * [ ] rewrite all the checks from the `nagios-master.cfg` file into puppet (300+)
  * [ ] rebuild a new Icinga 2 server
  * [ ] retire the old Icinga 1 server

(Milestone: old service retirement 2023; assigned: anarcat)

## TPA-RFC-11: SVN retirement
https://gitlab.torproject.org/tpo/tpa/team/-/issues/40260 (anarcat, 2024-02-20)

Draft a proposal to retire SVN altogether. This was somewhat agreed on in #17202, and there are discussions on how exactly to do this in #32273 (how to archive it) and #32025 (how to stop corpsvn specifically, the remaining live repo), but this was all done before we came up with the TPA-RFC process.
Now that we have that process, it seems logical to go through with it explicitly so that all stakeholders can express their concerns about the change. I specifically plan on having a live call with sue about this, since she's the most impacted by this.
I created this ticket to track that proposal because #17202 has been sitting in the icebox forever, and it's kind of hard to "grab" because it feels so big. Having "write a proposal" as a first step seems more accessible.
The draft proposal is at https://gitlab.torproject.org/tpo/tpa/team/-/wikis/policy/tpa-rfc-11-svn-retirement
Next steps:
- [x] @susan draft requirements for the sensitive file storage service
- [x] @anarcat research whether we can restrict external sharing in Nextcloud
- [x] @anarcat schedule a meeting in late june to revise the above

(Milestone: old service retirement 2023; assigned: anarcat; 2024-01-19)

## archive private information from SVN
https://gitlab.torproject.org/tpo/tpa/team/-/issues/32273 (anarcat, 2022-12-06)

a common problem in the internal and corp SVN repository shutdown is "what do we do with all that stuff now". for example, the internal repository is shutdown now (#15949) but there is still information there that is valuable. or not. we're not sure. we think so, but maybe some of it should be destroyed.
so we need to answer the following questions:
1. which data from the repositories should be kept, and which destroyed?
2. where should it be kept?
so far, I operated under the assumption that the answers were:
1. keep everything
2. in nextcloud
but it seems this might not be exactly right.

(Milestone: old service retirement 2023)

## Stop using corpsvn and disable it as a service
https://gitlab.torproject.org/tpo/tpa/team/-/issues/32025 (Roger Dingledine, 2022-12-06)

In legacy/trac#17202 we're going to decommission the server that runs our various svn services.
We have a plan for the public svn.tpo service: legacy/trac#15948
and we are making a plan for svninternal: legacy/trac#15949
That leaves corpsvn, which I think is still the most actively used -- for example our accounting folks use it. This ticket is about making and finishing the plan for shutting down the corpsvn service.

(Milestone: old service retirement 2023)

## Shut down SVN and decommission the host (gayi)
https://gitlab.torproject.org/tpo/tpa/team/-/issues/17202 (Nick Mathewson, 2022-12-06)

It is now 2015. Let us not have an SVN server running in 2016.
-- And it is now 2020 and we are finally trying to shut this down. Modifying this ticket to add the plan suggested by arma (with a few modifications by me).
(1) Freeze corpsvn (i.e. make it read-only), and make a full checkout
of it somewhere, and have that accessible.
(2) Use Nextcloud for any other file people may need to save. *Not* move all the old files there, or at least not by default.
(3) Put together a strike team to look at the frozen corpsvn checkout,
plus the frozen internalsvn checkout. Build a list of categories (HR,
finance, grantwriting, grant manager, etc), and sort the files into
these categories, discarding as many files as possible. Figure out
where else people are storing these files currently (granthub? google
docs? their hard drive?). Make a comprehensive plan for how files of each category should be stored, and who should have read or write access per category. For example, there's no reason that HR documents should go into the same database, or even the same storage service, as grant proposals. Process started in https://bugs.torproject.org/32273
Update, 2021: there's a "forest" of tickets surrounding this, as the "tree" was lost in the Trac migration, i'll try to reconstruct related tickets:
* SVN/host shutdown (this ticket)
* [ ] #32273 - archive private information from SVN: determine what moves to where (presumably: "everything, to nextcloud")
* [x] #15949 - shutdown SVN internal (done, but the repository is still on gayi, and not archived anywhere else)
* [ ] #32025 - stop using corpsvn and disable it (still open, blocked mostly on @sue iirc)
* [x] #33537 - audit SVN accesses (led to the [access control document](https://gitlab.torproject.org/tpo/tpa/team/-/wikis/howto/svn/#access-control) and a private audit email with one minor remaining task, `Message-ID: 871rq02rvt.fsf@curie.anarc.at`, can probably be just closed)
* [x] #15948 - public SVN retirement (done, moved to the static site mirror system (#32031) and archive.org)
* [x] #31686 - textile retirement (done)
* [ ] #40260 - actual proposal (next step, blocker)
It seems the next step here is to write a policy proposal to make sure we're all on the same page ("let's move to Nextcloud") and schedule a call with Sue to make sure it works in her workflow.

(Milestone: old service retirement 2023; assigned: anarcat)

## Keep tags across family members in sync
https://gitlab.torproject.org/tpo/network-health/metrics/tagtor/-/issues/26 (Georg Koppen, 2024-01-16)

We got families included in our tagging process in #20, but we should make sure that the families stay in sync tag-wise. That is: between different tagging sessions family members might not be the same. If the family loses members then that's not a big deal. However, newly added relays should have all the tags their other family members have once I resume tagging relays at a later time.

(TagTor is completed for its scope in Sponsor 112)

## Add filtering options for relays one is interested in
https://gitlab.torproject.org/tpo/network-health/metrics/tagtor/-/issues/25 (Georg Koppen, 2024-01-16)

For bad-relay work or general tagging help it would be nice if we could filter relays, e.g. wrt AS, uptime, first_seen dates etc.

(TagTor is completed for its scope in Sponsor 112)

## Sort entries by consensus weight by default
https://gitlab.torproject.org/tpo/network-health/metrics/tagtor/-/issues/23 (Georg Koppen, 2024-01-16)

Given that it would help with the tagging workflow @arma described (see: #19), we should sort the entries according to consensus weight by default.

(TagTor is completed for its scope in Sponsor 112; assigned: Hiro)

## Group TagTor entries by family where possible
https://gitlab.torproject.org/tpo/network-health/metrics/tagtor/-/issues/22 (Georg Koppen, 2024-01-16)

As said in #20, for tagging purposes the particular relays are not too important; we want to have the relays grouped so they match to operators, which we could have met, talked to etc. So, instead of a per-relay view we want to have a per-family one if possible. (If there are only single relays or bridges per operator then, of course, those are single relay/bridge entries only and that's fine, too.)

(TagTor is completed for its scope in Sponsor 112; assigned: Hiro)

## Add total bw and per relay bw and consensus weight to the routers display
https://gitlab.torproject.org/tpo/network-health/metrics/tagtor/-/issues/21 (Georg Koppen, 2024-03-28)

We want to know how much weight relays have and their advertised bandwidth. Those could be additional columns. Additionally, I think it's worth having the advertised bw visible for all the relays under a particular filter, which could be displayed below the router/bridges table.

(TagTor is completed for its scope in Sponsor 112; assigned: Hiro)

## Interactive guided tagging user flow
https://gitlab.torproject.org/tpo/network-health/metrics/tagtor/-/issues/19 (Roger Dingledine, 2024-01-16)

It looks like the initial tagtor flow is designed around a "user realizes they want to tag a specific relay, goes to tagtor, finds the relay, tags it" flow.
To answer overall-network analysis questions, we need another flow which aims at more comprehensive tagging.
I described one approach to that flow in my July 2020 tor-relays@ post (https://lists.torproject.org/pipermail/tor-relays/2020-July/018669.html):
"the next step is to figure out the workflow for annotating relays. I
had originally imagined some sort of web-based UI where it leads me
through constructing and maintaining a list of fingerprints that I have
annotated as 'known' and a list annotated as 'unknown', and it shows
me how my lists have been doing over time, and presents me with new
not-yet-annotated relays.
[...]
One of the central functions in those scripts would be to sort the
annotated relays by network impact (some function of consensus weight,
bandwidth carried, time in network, etc), so it's easy to identify the
not-yet-annotated ones that will mean the biggest shifts. Maybe this
ordered list is something we can teach onionoo to output, and then all the
local scripts need to do is go through each relay in the onionoo list,
look them up in the local annotations list to see if they're already
annotated, and present the user with the unannotated ones."
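The sorting described in the post above can be sketched in a few lines. Here families are assumed to be a mapping from a family id to (fingerprint, consensus weight) pairs, with a set of already-tagged family ids; both data structures are illustrative, not tagtor's actual model.

```python
def rank_untagged_families(families, tagged):
    """Return untagged family ids, highest total consensus weight first.

    families: mapping of family id -> list of (fingerprint, consensus_weight)
    tagged:   set of family ids the operator has already annotated

    Both shapes are illustrative; tagtor's real data model may differ.
    """
    # Keep only families the operator has not annotated yet.
    untagged = {fid: members for fid, members in families.items()
                if fid not in tagged}
    # Rank by the sum of consensus weights of the family's members.
    return sorted(untagged,
                  key=lambda fid: sum(weight for _, weight in untagged[fid]),
                  reverse=True)
```

Secondary metrics like "what fraction of the network is tagged" then fall out of the same data: sum the weights of the tagged families and divide by the total.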
That is, the flow I am imagining is to have tagtor *sort* the relay families by importance to the network, and then present to me the top n families that I haven't yet tagged as roger-known or roger-unknown. Then I can do a bit of work at a time, put it down, come back later, and at any moment I can be answering the highest impact questions about the network.
Then there are some secondary metrics that would be good to hear, which should pop out of the sorting, such as "what fraction of the network have I tagged in some way", "what fraction is roger-known", "what fraction is roger-unknown".
The sorting function can start really simple, like "sum of consensus weights of members of family", but we can imagine fancier ones later, like putting higher priority on big relay groups that *other people* haven't tagged yet either.

(TagTor is completed for its scope in Sponsor 112; assigned: Hiro)

## port puppet-managed configs to Debian bullseye
https://gitlab.torproject.org/tpo/tpa/team/-/issues/40723 (anarcat, 2023-09-26)

as part of the %"Debian 11 bullseye upgrade", we have a bunch of (sometimes really old) configuration that needs to be ported to the new stuff that's in bullseye. I have identified, so far:
* [ ] `/etc/apt/apt.conf.d/50unattended-upgrades`: to be investigated. probably mostly whitespace changes, but also possibly missing features. complicated by the fact that this is a third-party Puppet module that would require significant work to catch up with the Debian package
* [ ] `/etc/unbound/unbound.conf`: switch to `include-toplevel` after the fleet is upgraded (does not work in buster)
* [x] `/etc/sudoers`: use `@include` instead of `#include` (the former only exists in bullseye and later). should be split out into a `sudoers.d` file to avoid future conflicts and, generally, split into snippets per service instead of this monolithic file
* [ ] `/etc/syslog-ng/syslog-ng.conf`: silly version number logic in the template, needs to be ported to newer config or replaced with rsyslog or journald
* [x] ~~`/etc/ferm/ferm.conf`: `web-cymru-01` had diffs pending from the previous upgrade (presumably?), might be worth catching up to *buster*, that is, unless we just ditch ferm completely (#40554)~~ the latter
* [ ] `/etc/lvm/lvm.conf`: same as above
* [x] ~~`/etc/bind/named.conf.options`: TBD, on fallax~~ fallax retired
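For the unbound item above, the post-upgrade config might reduce to a single toplevel include; the snippet directory path here is an assumption, not the actual deployed layout.

```
# /etc/unbound/unbound.conf on bullseye and later; buster's unbound
# does not support include-toplevel, hence waiting for the fleet upgrade.
include-toplevel: "/etc/unbound/unbound.conf.d/*.conf"
```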
if a file is added in the above list, do not forget to add it to the [conflicts resolution list](https://gitlab.torproject.org/tpo/tpa/team/-/wikis/howto/upgrades/bullseye#conflicts-resolution) in the upgrade procedure.
more such issues could come up, but that's what I've got for now. the diff for those has been minimized as much as possible, and the proposed version from the Debian package should generally be ignored.

(Milestone: Debian 11 bullseye upgrade)

## upgrade or rebuild hetzner-hel1-01 (nagios/icinga)
https://gitlab.torproject.org/tpo/tpa/team/-/issues/40695 (anarcat, 2023-11-21)

Nagios is going to be a particularly tricky bullseye upgrade, so it's not part of the large bullseye upgrade batches (#40690 or #40692).
We need to decide whether we keep icinga around at all or replace it with Prometheus (https://gitlab.torproject.org/tpo/tpa/team/-/issues/29864). if we do keep icinga, we need to decide whether we keep the current "push to git to rebuild the config" model or "puppetize the setup" (https://gitlab.torproject.org/tpo/tpa/team/-/issues/32901).

(Milestone: Debian 11 bullseye upgrade)

## upgrade eugeni to bullseye
https://gitlab.torproject.org/tpo/tpa/team/-/issues/40694 (anarcat, 2023-11-15)

eugeni is going to be a tricky bullseye upgrade, so it's not part of the large bullseye upgrade batches (#40690 or #40692).
we might want to decide what to do with mailman (https://gitlab.torproject.org/tpo/tpa/team/-/issues/40471) and schleuder (https://gitlab.torproject.org/tpo/tpa/team/-/issues/40564) *before* we do the upgrade. mailman 2, in particular, is EOL so we *will* need to upgrade or replace it.
we might also want to consider the impact of the %"improve mail services" roadmap here. it's possible we might want to completely rebuild eugeni in different components instead of upgrading it.

(Milestone: Debian 11 bullseye upgrade)

## Better monitoring for webserver response times
https://gitlab.torproject.org/tpo/tpa/team/-/issues/40568 (Jérôme Charaoui, 2022-07-26)

In the wake of tpo/tpa/team#40566, it was shown that our monitoring infrastructure isn't sufficiently sensitive with respect to web server response times. We had an ongoing DoS on the static mirror hosts for days and we only noticed when the response times consistently surpassed 10 seconds.
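A response-time check could be sketched as a Prometheus alerting rule, assuming the hosts are probed with the blackbox exporter (whether such a check would land in Icinga or Prometheus is exactly the open question of TPA-RFC-33; the alert name, job label, and threshold are placeholders):

```yaml
groups:
  - name: web-latency
    rules:
      - alert: SlowWebResponse
        # probe_duration_seconds comes from the blackbox exporter's HTTP probe
        expr: probe_duration_seconds{job="blackbox-http"} > 1
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "{{ $labels.instance }} took more than 1s to respond"
```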
We should probably modify the existing checks or add new ones that will monitor whether the static mirror host (or even any web host) is serving pages within an acceptable delay, say 1 second.

(Milestone: Debian 11 bullseye upgrade; assigned: Jérôme Charaoui)

## upgrade mailman to mailman 3
https://gitlab.torproject.org/tpo/tpa/team/-/issues/40471 (anarcat, 2024-02-20)

Mailman 2 was removed from Debian bullseye, we need to either upgrade to Mailman 3 or get rid of it. This is part of the 2022-Q1/Q2 OKRs and the %"Debian 11 bullseye upgrade" milestone.
upgrade procedure: https://docs.mailman3.org/en/latest/migration.html

(Milestone: Debian 11 bullseye upgrade)

## automate/puppetize Nagios installs
https://gitlab.torproject.org/tpo/tpa/team/-/issues/32901 (anarcat, 2024-01-19)

one part of our install process is to configure Nagios, by hand, in the git repository. I usually do this by copy-pasting some similar blob of config from a possibly similar machine and hope for the best.
this is a manual step, and as part of the automation of the install process, it should be made automatic.
one way this could (and probably should) be done is by making Puppet automatically add its nodes into Nagios. this can be done using the [icinga2 module](https://github.com/Icinga/puppet-icinga2), for example. care should be taken to do a smooth transition, keeping existing configurations and just adding the Puppet ones on top, for new machines.
but this could (eventually) be retroactively added to all nodes, removing all manual configuration.
checklist:
1. [x] audit and import the module in our monorepo
1. [x] ~~enable on the nagios server, without writing any config (hopefully a noop)~~ not possible, config is overwritten by module, instead...
1. [ ] move the base configuration (`config/static`) from git into Puppet (mostly icinga.cfg and so on, because they are overwritten by the module)
1. [ ] enable a single config from puppet, as a test
1. [ ] add a new host check configuration
1. [ ] add a new service check configuration
1. [ ] add all *base* service checks for the new host (e.g. the services defined for the `computers` hostgroup, equivalent of pieces of `from-git/generated/auto-services.cfg`)
1. ~~[ ] convert legacy config into puppet (at this stage we only have the old hosts as legacy config)~~ done in third step
1. [ ] convert NRPE service definitions (`puppet:///modules/nagios/tor-nagios/generated/nrpe_tor.cfg`, generated from the git repo)
1. [ ] remove NRPE config sync from nagios to Puppet (the rsync to `pauli` in `config/Makefile`)
1. [ ] convert old hosts checks into puppet
1. [ ] convert old services checks into puppet
1. [ ] remove git hook receiver on nagios server (`/etc/ssh/userkeys/nagiosadm` key, which calls `/home/nagiosadm/bin/from-git-rw`)
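For the "add a new host check configuration" step above, the host objects could in principle be generated from Puppet's own node data using exported resources. A rough sketch, assuming the upstream puppet-icinga2 module's `icinga2::object::host` type (parameter names and the target path should be checked against that module's documentation):

```puppet
# On every managed node: export a host object describing this machine.
@@icinga2::object::host { $facts['networking']['fqdn']:
  address       => $facts['networking']['ip'],
  check_command => 'hostalive',
  target        => '/etc/icinga2/zones.d/master/hosts.conf',
}

# On the icinga server: collect all exported host objects.
Icinga2::Object::Host <<| |>>
```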
It's a long way there, but getting to the state where *new* hosts are covered would already be a great improvement.

(Milestone: Debian 11 bullseye upgrade)

## Tor 0.4.7.13 died: Caught signal 11
https://gitlab.torproject.org/tpo/core/tor/-/issues/40788 (toralf, 2023-06-05)

A bug happened at a newly deployed Tor bridge (Debian 11, Hetzner VPS arm64) with obfs4:
```
May 06 17:19:53.000 [notice] Performing bandwidth self-test...done.
May 06 17:34:28.000 [notice] Received reload signal (hup). Reloading config and resetting internal state.
May 06 17:34:28.000 [notice] Read configuration file "/usr/share/tor/tor-service-defaults-torrc".
May 06 17:34:28.000 [notice] Read configuration file "/etc/tor/torrc".
May 06 17:34:28.000 [notice] Tor 0.4.7.13 opening log file.
============================================================ T= 1683394481
Tor 0.4.7.13 died: Caught signal 11
/usr/bin/tor(+0xec4e4)[0xaaaab60ac4e4]
linux-vdso.so.1(__kernel_rt_sigreturn+0x0)[0xffffa48b67d0]
/lib/aarch64-linux-gnu/libc.so.6(+0x752ec)[0xffffa40eb2ec]
/lib/aarch64-linux-gnu/libc.so.6(+0x77b60)[0xffffa40edb60]
/lib/aarch64-linux-gnu/libc.so.6(__libc_malloc+0x198)[0xffffa40eebf8]
/lib/aarch64-linux-gnu/libc.so.6(+0x6bf00)[0xffffa40e1f00]
/lib/aarch64-linux-gnu/libc.so.6(__vasprintf_chk+0x34)[0xffffa4157544]
/usr/bin/tor(tor_vasprintf+0x58)[0xaaaab60c73f8]
/usr/bin/tor(smartlist_add_asprintf+0x94)[0xaaaab60989c4]
/usr/bin/tor(entry_guards_update_state+0x10c)[0xaaaab619dc1c]
/usr/bin/tor(+0xbdd04)[0xaaaab607dd04]
/usr/bin/tor(+0x676d8)[0xaaaab60276d8]
/usr/bin/tor(+0x857cc)[0xaaaab60457cc]
/lib/aarch64-linux-gnu/libevent-2.1.so.7(+0x23600)[0xffffa476c600]
/lib/aarch64-linux-gnu/libevent-2.1.so.7(event_base_loop+0x50c)[0xffffa476cf84]
/usr/bin/tor(do_main_loop+0xec)[0xaaaab602b7b0]
/usr/bin/tor(tor_run_main+0x1c0)[0xaaaab6026d94]
/usr/bin/tor(tor_main+0x54)[0xaaaab6023114]
/usr/bin/tor(main+0x20)[0xaaaab6022c00]
/lib/aarch64-linux-gnu/libc.so.6(__libc_start_main+0xe8)[0xffffa4096e18]
/usr/bin/tor(+0x62c88)[0xaaaab6022c88]
May 06 17:34:41.000 [notice] Tor 0.4.7.13 opening log file.
```
Before this happened, Tor + obfs4 were installed, but the wrong systemd service was restarted ("tor@default.service" instead of the correct one: "tor"):
```
May 06 17:19:52.000 [warn] Server managed proxy encountered a method error. (obfs4 listen tcp 0.0.0.0:443: bind: permission denied)
May 06 17:19:52.000 [warn] Managed proxy '/usr/bin/obfs4proxy' was spawned successfully, but it didn't launch any pluggable transport listeners!
May 06 17:19:52.000 [warn] Pluggable Transport process terminated with status code 65280
```

(Milestone: Tor: 0.4.8.x-freeze)

## Metricsport metrics for conflux
https://gitlab.torproject.org/tpo/core/tor/-/issues/40784 (Mike Perry, 2023-10-19)

We should add metricsport metrics for conflux, to help with monitoring things like switch frequency, reason for switching (cwnd, alg, orconn block), leg failure rates, set failure rates, leg reattachment, etc.
I think this is likely to be a combination of @dgoulet and me doing this. I can do the ones for the algs, and he can do the ones for the pool. However, I am assigning it to myself for now so I don't lose it.
We don't need it by the first alpha, but we should definitely have it by the stable release.

(Milestone: Tor: 0.4.8.x-freeze; assigned: Mike Perry)
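Since the MetricsPort already emits metrics in the Prometheus text exposition format, the conflux counters could plausibly surface like this. Every metric name below is invented for illustration; the actual names and labels would be chosen during implementation.

```
# HELP tor_conflux_leg_switch_total Conflux leg switches by reason (hypothetical metric)
# TYPE tor_conflux_leg_switch_total counter
tor_conflux_leg_switch_total{reason="cwnd"} 42
tor_conflux_leg_switch_total{reason="orconn_block"} 3
# HELP tor_conflux_leg_failure_total Conflux leg failures (hypothetical metric)
# TYPE tor_conflux_leg_failure_total counter
tor_conflux_leg_failure_total 7
```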