# The Tor Project issues

Issues from <https://gitlab.torproject.org/groups/tpo/-/issues>, feed last updated 2024-02-08.

## automate major upgrades ([#41485](https://gitlab.torproject.org/tpo/tpa/team/-/issues/41485))

anarcat, last updated 2024-02-08. Milestone: Debian 13 trixie upgrade.
we currently have automated upgrades for the day-to-day debian package upgrades, through unattended-upgrades (#31957). but major upgrades are not scripted, other than ad-hoc commands copy-pasted from an otherwise excellent wiki page.
we should automate this.
during the %"Debian 12 bookworm upgrade", tor weather suffered a catastrophic failure (#41388) due to a flaw in the postgresql upgrade procedure, so that should probably be our first target: automate that procedure, which should keep that kind of problem from occurring again (scripted steps can do error checking more consistently than commands pasted by hand).
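a minimal sketch of what the scripted, error-checked postgres step could look like, using Debian's stock postgresql-common tools; the version numbers, cluster name, and backup path are assumptions, not the actual wiki procedure:

```
#!/bin/sh
# sketch: error-checked postgres major upgrade, assuming a 13 -> 15 jump
# (bullseye -> bookworm) and the default "main" cluster; adjust as needed
set -eu

# refuse to run if the target cluster already exists
if pg_lsclusters -h | awk '$1 == "15" && $2 == "main"' | grep -q .; then
    echo "cluster 15/main already exists, aborting" >&2
    exit 1
fi

# keep a logical backup around in case the upgrade eats the data (#41388)
pg_dumpall --cluster 13/main > /var/backups/pre-upgrade-13-main.sql

# upgrade; set -e aborts the script if this fails
pg_upgradecluster 13 main

# drop the old cluster only once the new one answers on the usual port
pg_isready -p 5432
pg_dropcluster 13 main --stop
```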
but ideally, we'd automate the entire procedure. See also <https://wiki.debian.org/AutomatedUpgrade>.

## move ooni.torproject.org to our mirrors and/or fix CAA hardening for subdomain ([#41455](https://gitlab.torproject.org/tpo/tpa/team/-/issues/41455))

anarcat, last updated 2024-01-23.
In #41386, we have tried to harden our CAA records, but this impacted the OONI folks who couldn't renew their certificates. A workaround was deployed on the subdomain, but we'd like to re-harden this bit by either:
1. make the ooni.torproject.org redirects part of our normal "vanity hosts" redirections on the static mirror system; or
2. restrict the CAA record to a specific (set of?) let's encrypt accounts
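for option 2, CAA account binding (RFC 8657) would look something like this in the zone, assuming Let's Encrypt's documented support for the `accounturi` parameter; the account URI below is a made-up placeholder, not OONI's real ACME account:

```
; sketch: pin issuance for the subdomain to a single ACME account (RFC 8657);
; the accounturi below is a placeholder, not OONI's real account
ooni.torproject.org. IN CAA 128 issue "letsencrypt.org; accounturi=https://acme-v02.api.letsencrypt.org/acme/acct/123456"
```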
@art, which one should it be, and what timeline should we aim for?

## evaluate gitlab optimisations for large / monorepos ([#41453](https://gitlab.torproject.org/tpo/tpa/team/-/issues/41453))

anarcat, last updated 2024-01-22.
While looking at GitLab backups (#40518), I stumbled upon this page:

https://docs.gitlab.com/ee/user/project/repository/monorepos/
It has interesting recommendations for "monorepos", by which they really mean "large repositories". We should look into those directives and see what optimizations we could make.
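for instance, one of the recommendations on that page is Gitaly's pack-objects cache, which in omnibus terms is a one-line toggle; a sketch only, and note the exact key name has moved around between GitLab releases:

```
# /etc/gitlab/gitlab.rb -- sketch; key name varies across GitLab versions,
# check the docs for the release we actually run
gitaly['pack_objects_cache_enabled'] = true
# then: gitlab-ctl reconfigure
```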
This is mostly for the applications' team repositories, of course, so /cc @richard.

## datacenter evacuation / replacement options ([#41448](https://gitlab.torproject.org/tpo/tpa/team/-/issues/41448))

anarcat, last updated 2024-01-19. Milestone: (next) cluster scaling. Assigned: anarcat.

First off, we are *not* currently planning to migrate, replace, or evacuate our presence at Hetzner or any other provider. That is a massive undertaking that we would not want to embark on without a significant cost/benefit analysis. The last time we evaluated this (#41374), we decided to stay.
That being said, it seems to me worthwhile to keep an eye out for other ... *opportunities* in hosting servers, specifically in Europe, but this could also include locations in Asia or South America. The point is to have diversity here.
Our [hardware requirements](https://gitlab.torproject.org/tpo/tpa/team/-/wikis/doc/hardware-requirements) have been expanded to cover for hosting requirements found during the Cymru migration (#40897).
So this issue is to keep track of such ideas as they come up. Ideas should be documented as (possibly internal) comments.

## track SSH logins by SSH key instead of usernames ([#41447](https://gitlab.torproject.org/tpo/tpa/team/-/issues/41447))

anarcat, last updated 2023-12-14.

We have a handful of SSH services that all operate on the same UNIX users: `git@git.tpo` is the typical one, but I believe this also applies to `git@gitlab.tpo`. It certainly applies to root accounts as well.
Normally, when you log in to a server, PAM adds an entry to the `utmp` "log", keeping track of your terminal, IP address, username, and how long you're logged in (in `wtmp`). For those servers, this information is close to useless, and it makes audits cumbersome because you actually need to go through `auth.log` and reverse-map SSH keys instead.
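for the record, that reverse-mapping is roughly this dance (a sketch, assuming sshd's default `LogLevel INFO`, which logs the key fingerprint on each accepted login):

```
# sketch: map logged fingerprints back to authorized_keys entries
# (assumes sshd logs "Accepted publickey ... SHA256:..." at LogLevel INFO)
grep 'Accepted publickey for root' /var/log/auth.log |
  awk '{ print $NF }' | sort -u |
  while read -r fp; do
    # print the matching key line, with its comment, for each fingerprint
    ssh-keygen -lf /root/.ssh/authorized_keys | grep -F "$fp"
  done
```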
Friends wrote the [ssh-key-wtmp](https://git.autistici.org/ai3/tools/ssh-key-wtmp) PAM plugin, which does exactly this. It's not packaged in Debian; it's a bunch of Go that *might* be packageable, however, even though it vendors a bit of code.
The way that thing works is that it hooks into PAM and writes better logs to a separate log file. It also logs the IP address used in the connection, alongside a MaxMind GeoIP and Tor exit list lookup.

## torspec (and other repos?) forks with disabled "use shared runners" setting ([#41395](https://gitlab.torproject.org/tpo/tpa/team/-/issues/41395))

Ian Jackson <iwj@torproject.org>, last updated 2023-11-13.

As far as we can tell:
* In the past, the `tpo/core/torspec` repo had "use shared runners" disabled in the repo gitlab config
* Now it has CI, and the shared runners are enabled.
* By default, users who forked the tree in the past *don't* have "use shared runners"; nor do they have any of their own runners, obviously. So the CI for them is broken.
* The effect is that when we get MRs from those users we have to ask them to fiddle with obscure gitlab settings.
* It appears that other users' `torspec` clones are probably affected. At a guess, this problem will arise a further 10-20 times for `torspec` MRs.
* Possibly repos other than `torspec` may be affected.
It would be desirable to identify the users who:
* Don't have "use shared runners"
* Don't have any runners of their own (as a proxy for detecting anyone who might have deliberately disabled the shared runners, although we don't think there are any)
and set their "use shared runners" flag.
I'm told that this could be done with a script (great). I'm filing this ticket to note down our discoveries and intentions, pending someone having time to work on it. (Neither I, nor the TPA team, do, right now.)
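a sketch of what such a script could look like against the standard GitLab REST API; the admin token, jq filter, and per_page cap are assumptions:

```
# sketch: enable shared runners on torspec forks that have them disabled
# and that have no project-specific runners of their own ($TOKEN: admin PAT)
api=https://gitlab.torproject.org/api/v4
curl -s -H "PRIVATE-TOKEN: $TOKEN" \
  "$api/projects/tpo%2Fcore%2Ftorspec/forks?per_page=100" |
  jq -r '.[] | select(.shared_runners_enabled == false) | .id' |
  while read -r id; do
    # skip forks that registered their own runners on purpose
    n=$(curl -s -H "PRIVATE-TOKEN: $TOKEN" \
      "$api/projects/$id/runners?type=project_type" | jq length)
    [ "$n" -eq 0 ] || continue
    curl -s -X PUT -H "PRIVATE-TOKEN: $TOKEN" \
      "$api/projects/$id" --data shared_runners_enabled=true
  done
```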
## Upgrade survey-01 to PHP 8.2 ([#41375](https://gitlab.torproject.org/tpo/tpa/team/-/issues/41375))

Jérôme Charaoui <lavamind@torproject.org>, last updated 2023-11-01. Assigned: Jérôme Charaoui.

Currently, LimeSurvey supports PHP up to 8.1, whereas Debian bookworm ships (and supports) only version 8.2.
One [forum post](https://forums.limesurvey.org/forum/installation-a-update-issues/142839-limesurvey-6-2-and-php-8-2-support#251573) suggests PHP 8.2 will be supported with the next major release, which should be LimeSurvey 7.
For the moment, we've kept PHP 7.4 on `survey-01`, supported by bullseye-security releases.

## port puppet-managed configuration files to Debian bookworm ([#41337](https://gitlab.torproject.org/tpo/tpa/team/-/issues/41337))

anarcat, last updated 2024-02-08. Milestone: Debian 12 bookworm upgrade.

During the first batch of bookworm upgrades (#41251), we found a few issues with the Puppet configs that should probably be tweaked before the next batch to remove noise.
### NTP configuration drift

We have some slight diffs in our Puppet-managed NTP configuration:
```
Notice: /Stage[main]/Ntp/File[/etc/ntpsec/ntp.conf]/content:
--- /etc/ntpsec/ntp.conf 2023-09-26 14:41:08.648258079 +0000
+++ /tmp/puppet-file20230926-35001-x7hntz 2023-09-26 14:47:56.547991158 +0000
@@ -4,13 +4,13 @@
# /etc/ntp.conf, configuration for ntpd; see ntp.conf(5) for help
-driftfile /var/lib/ntpsec/ntp.drift
+driftfile /var/lib/ntp/ntp.drift
# Leap seconds definition provided by tzdata
leapfile /usr/share/zoneinfo/leap-seconds.list
# Enable this if you want statistics to be logged.
-#statsdir /var/log/ntpsec/
+#statsdir /var/log/ntpstats/
statistics loopstats peerstats clockstats
filegen loopstats file loopstats type day enable
Notice: /Stage[main]/Ntp/File[/etc/ntpsec/ntp.conf]/content: content changed '{sha256}c5d627a596de1c67aa26dfbd472a4f07039f4664b1284cf799d4e1eb43c92c80' to '{sha256}18de87983c2f8491852390acc21c466611d6660083b0d0810bb6509470949be3'
Notice: /Stage[main]/Ntp/File[/etc/ntpsec/ntp.conf]/mode: mode changed '0644' to '0444'
Info: /Stage[main]/Ntp/File[/etc/ntpsec/ntp.conf]: Scheduling refresh of Exec[service ntpsec restart]
Info: /Stage[main]/Ntp/File[/etc/ntpsec/ntp.conf]: Scheduling refresh of Exec[service ntpsec restart]
Notice: /Stage[main]/Ntp/File[/etc/default/ntpsec]/content:
--- /etc/default/ntpsec 2023-07-29 20:51:53.000000000 +0000
+++ /tmp/puppet-file20230926-35001-d4tltp 2023-09-26 14:47:56.579990910 +0000
@@ -1,9 +1 @@
-NTPD_OPTS="-g -N"
-
-# Set to "yes" to ignore DHCP servers returned by DHCP.
-IGNORE_DHCP=""
-
-# If you use certbot to obtain a certificate for ntpd, provide its name here.
-# The ntpsec deploy hook for certbot will handle copying and permissioning the
-# certificate and key files.
-NTPSEC_CERTBOT_CERT_NAME=""
+NTPD_OPTS='-g'
Notice: /Stage[main]/Ntp/File[/etc/default/ntpsec]/content: content changed '{sha256}26bcfca8526178fc5e0df1412fbdff120a0d744cfbd023fef7b9369e0885f84b' to '{sha256}1bb4799991836109d4733e4aaa0e1754a1c0fee89df225598319efb83aa4f3b1'
Notice: /Stage[main]/Ntp/File[/etc/default/ntpsec]/mode: mode changed '0644' to '0444'
Info: /Stage[main]/Ntp/File[/etc/default/ntpsec]: Scheduling refresh of Exec[service ntpsec restart]
Info: /Stage[main]/Ntp/File[/etc/default/ntpsec]: Scheduling refresh of Exec[service ntpsec restart]
Notice: /Stage[main]/Ntp/Exec[service ntpsec restart]: Triggered 'refresh' from 4 events
```
Note that this is a "reverse diff": Puppet is restoring the old bullseye config, so we should apply the reverse of this in Puppet.
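concretely, that means adopting the bookworm paths and defaults on the Puppet side; a sketch, assuming our ntp class exposes puppetlabs/ntp-style parameters:

```
# sketch: adopt bookworm's ntpsec paths (assumes puppetlabs/ntp-style params)
class { 'ntp':
  driftfile => '/var/lib/ntpsec/ntp.drift',
}

# /etc/default/ntpsec is not templated by the module; pin the bookworm
# package default directly (explanatory comments from the package file omitted)
file { '/etc/default/ntpsec':
  mode    => '0444',
  content => "NTPD_OPTS=\"-g -N\"\nIGNORE_DHCP=\"\"\nNTPSEC_CERTBOT_CERT_NAME=\"\"\n",
}
```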
### sudo configuration lacks limits.conf?
Just noticed this diff on all hosts:
```
--- /etc/pam.d/sudo 2021-12-14 19:59:20.613496091 +0000
+++ /etc/pam.d/sudo.dpkg-dist 2023-06-27 11:45:00.000000000 +0000
@@ -1,12 +1,8 @@
-##
-## THIS FILE IS UNDER PUPPET CONTROL. DON'T EDIT IT HERE.
-##
#%PAM-1.0
-# use the LDAP-derived password file for sudo access
-auth requisite pam_pwdfile.so pwdfile=/var/lib/misc/thishost/sudo-passwd
+# Set up user limits from /etc/security/limits.conf.
+session required pam_limits.so
-# disable /etc/password for sudo authentication, see #6367
-#@include common-auth
+@include common-auth
@include common-account
@include common-session-noninteractive
```
Why don't we have `pam_limits` set up? Historical oddity? To investigate.

## bookworm upgrades, third batch: High complexity ([#41321](https://gitlab.torproject.org/tpo/tpa/team/-/issues/41321))

anarcat, last updated 2024-02-01. Milestone: Debian 12 bookworm upgrade. Due: 2024-02-14.

upgrade the following servers, if they still exist, one by one, carefully.
- [x] `alberti.torproject.org` - ~~bullseye upgrade was #40693~~ done in one shot, oops!
- [ ] `eugeni.torproject.org` - bullseye upgrade was #40694
- [ ] `hetzner-hel1-01.torproject.org` - bullseye upgrade was #40695
- [ ] `pauli.torproject.org` - bullseye upgrade was #40696, puppetdb upgraded separately in #41341
Note that some of those machines might not be running bullseye yet, see the related tickets for more information. An upgrade to bullseye MUST be performed first, even if we batch-upgrade them to bookworm, as there are significant issues with skipping the bullseye upgrade. Those were found while accidentally upgrading `alberti` from buster to bookworm, see #40693.
In [TPA-RFC-57](https://gitlab.torproject.org/tpo/tpa/team/-/wikis/policy/tpa-rfc-57-bookworm-upgrades#high-complexity-individually-done) this also included the ganeti cluster upgrades, but those have been split up into separate tickets (#41254 and #41253).

## automate deployment of grafana dashboards ([#41312](https://gitlab.torproject.org/tpo/tpa/team/-/issues/41312))

anarcat, last updated 2024-03-06.

Right now, grafana dashboards are managed in a rather haphazard way: some dashboards are managed by Puppet and "provisioned" (ie. deployed automatically), which makes them uneditable from the web UI. To save changes, you need to save the JSON file, put it in the right location in Puppet, commit, push, and have puppet run on the host again.
To make matters worse, only some dashboards are provisioned, and those are published in a personal repo of mine: <https://gitlab.com/anarcat/grafana-dashboards>
Surely there must be a better way. I had high hopes that <https://github.com/Beam-Connectivity/grafana-dashboard-manager> could solve this problem, but it doesn't seem very well maintained: a bunch of bugfix PRs have been waiting in the queue for more than a year, and it may be outright incompatible with recent Grafana versions. The [gdg](https://github.com/esnet/gdg) project may be a better alternative. [grizzly](https://grafana.github.io/grizzly/) and others take the inverse approach of writing dashboards as code and loading them into grafana, but I think that's much harder.
Finally, we now have *lots* of dashboards, and it's really hard to find "the right one". We should use the folder structure to sort through those (or possibly labels?), but we can't actually move provisioned dashboards directly; at least, I failed to do so right off the bat...
In general, we should probably push our configuration management of Grafana a little further. There Must Be A Better Way.
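one partial improvement worth noting: Grafana's file provisioning can be told to accept UI edits, which would at least soften the "uneditable from the web UI" pain point; a sketch, with assumed paths and provider name:

```
# /etc/grafana/provisioning/dashboards/tpa.yaml -- sketch; names assumed
apiVersion: 1
providers:
  - name: tpa                # hypothetical provider name
    folder: TPA              # provisioned dashboards land in this folder
    type: file
    allowUiUpdates: true     # accept edits from the web UI (they still need
                             # exporting back to git/Puppet to persist)
    options:
      path: /var/lib/grafana/dashboards
```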
Requirements:
* [ ] automatically save dashboards to configuration management or at least git versioning (fix the "oh, darn it, i need to save this dashboard to puppet" and "oops, this is not versioned in git" pain points)
* [x] sort dashboards through folders (or labels?)
* [x] cover the grafana2 server and allow collaboration with service admins
Nice to have:
* [x] public repository for our dashboards, to share with others: done in https://gitlab.torproject.org/tpo/tpa/grafana-dashboards
* [ ] automatically upgrade dashboard versions to newer grafana release (to reduce diff noise like https://gitlab.com/anarcat/grafana-dashboards/-/commit/22640fff18ef3235130d74456e4c3eb75863f44d)
* [x] figure out the "datasource mess", where the datasource fields get recursively expanded as some dashboards are saved
* [ ] remove the duplicate data sources (we have *three* Prometheus datasources, all pointing to the same server, on grafana1)
Next steps:
- [x] evaluate grafana dashboard manager project
- [x] review upstream literature on how to provision / version dashboards
- [x] ask around in the community for tips and ideas

## convert Puppet's cron resources into systemd timers ([#41303](https://gitlab.torproject.org/tpo/tpa/team/-/issues/41303))

anarcat, last updated 2023-08-25. Milestone: cleanup and publish the sysadmin codebase.

Puppet's built-in cron resource is kind of crap:
1. if you first set a parameter like `hour` and then remove it, it doesn't turn it into `*`, e.g. if you have `cron { 'foo': hour => 5 }` and turn that into `cron { 'foo': }`, that's a noop but should logically turn it into a job that runs every minute
2. if you add a resource to a manifest, then remove it, it doesn't get removed from the host, e.g. if you remove the `Cron['foo']` resource above, it will stay in the crontab
3. it uses the `/var/spool/crontabs/root` resource instead of the more readable and intelligible `/etc/cron.d`
4. it was removed from core puppet and moved to a contrib module (see the [deprecation notice](https://www.puppet.com/docs/puppet/6/release_notes_puppet#new_features_puppet_x-0-0-select-moved-modules-types) and tpo/tpa/team#41285)
There are two options here.
1. voxpupuli maintains what seems to look like an [excellent cron module](https://github.com/voxpupuli/puppet-cron)
2. just ditch cron and turn everything into a systemd timer (that is [what wikimedia did](https://phabricator.wikimedia.org/T273673))
The former is easier: the [`cron::job`](https://github.com/voxpupuli/puppet-cron#cronjob) resource looks backwards compatible with the old `cron` type, except that it creates the resource in `/etc/cron.d` instead of `/var/...`
But I would very much like to use systemd timers instead: they provide built-in monitoring, as failing timers raise an alarm in systemd's internal status, which then triggers monitoring (as opposed to sending us email). It could also drastically reduce the amount of noise we're going through each morning, although *that* might be a problem if we actually rely on that output. We would probably need to go through each resource by hand to evaluate anyway.
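for illustration, one of the cron jobs counted below could become a timer/service pair along these lines (a sketch; the unit names and script path are made up):

```
# /etc/systemd/system/prometheus-lvm-prom-collector.timer -- sketch
[Unit]
Description=Collect LVM metrics for the Prometheus node exporter

[Timer]
OnCalendar=*:0/15
RandomizedDelaySec=60

[Install]
WantedBy=timers.target

# /etc/systemd/system/prometheus-lvm-prom-collector.service -- sketch
[Unit]
Description=Collect LVM metrics for the Prometheus node exporter

[Service]
Type=oneshot
ExecStart=/usr/local/sbin/lvm-prom-collector
```

a failed run then shows up in `systemctl --failed` and `systemctl list-timers`, and the output lands in the journal instead of mail.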
Wikimedia has this trick to list hosts with the given resource:
```
cumin R:cron
```
Obviously, all hosts currently have a cron resource. But it's not as much work as I'd imagined:
```
puppetdb=# SELECT count(*),title FROM catalog_resources WHERE type = 'Cron' GROUP BY title ORDER by count(*) DESC;
count | title
-------+---------------------------------
87 | puppet-cleanup-clientbucket
81 | prometheus-lvm-prom-collector-
9 | prometheus-postfix-queues
6 | docker-clear-old-images
5 | docker-clear-nightly-images
5 | docker-clear-cache
5 | docker-clear-dangling-images
2 | collector-service
2 | onionoo-bin
2 | onionoo-network
2 | onionoo-service
2 | onionoo-web
2 | podman-clear-cache
2 | podman-clear-dangling-images
2 | podman-clear-nightly-images
2 | podman-clear-old-images
1 | update rt-spam-blocklist hourly
1 | update torexits for apache
1 | metrics-web-service
1 | metrics-web-data
1 | metrics-web-start
1 | metrics-web-start-rserve
1 | metrics-network-data
1 | rt-externalize-attachments
1 | tordnsel-data
1 | tpo-gitlab-backup
1 | tpo-gitlab-registry-gc
1 | update KAM ruleset
(28 rows)
```
that's 28 distinct resources to update, and many of them are basically the same (e.g. all the `podman` stuff is similar). some already *must* be moved out of cron to be run as normal services (e.g. metrics stuff).
i doubt we need the output in *any* of those, and it would be logged in journald anyway. in fact, it might even allow us to log *more* things, as we wouldn't have to deal with the resulting email...

## upgrade gnt-fsn Ganeti cluster to bookworm ([#41254](https://gitlab.torproject.org/tpo/tpa/team/-/issues/41254))

anarcat, last updated 2023-09-14. Milestone: Debian 12 bookworm upgrade.

upgrade the gnt-fsn cluster to bookworm, which means those nodes:
- [ ] fsn-node-01
- [ ] fsn-node-02
- [ ] fsn-node-03
- [ ] fsn-node-04
- [ ] fsn-node-05
- [ ] fsn-node-06
- [ ] fsn-node-07
- [ ] fsn-node-08
mind the [Ganeti upgrade procedure](https://gitlab.torproject.org/tpo/tpa/team/-/wikis/howto/upgrades/bookworm#ganeti-upgrades), which still needs to be updated from the bullseye procedure. also note that [this patch](https://github.com/ganeti/ganeti/pull/1694) is necessary... be careful with this one, it might not be ready yet.
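whatever the final procedure looks like, a per-node sanity loop along these lines is cheap insurance (a sketch; these are standard Ganeti/htools commands, not the wiki procedure itself):

```
# sketch: run before and after each node upgrade
gnt-cluster verify     # catches config and DRBD inconsistencies early
gnt-node list          # confirm the node is back and has capacity
hbal -L -X             # rebalance instances once the node returns
```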
possibly consider upgrading the (smaller?) gnt-dal cluster (#41253) first.

## upgrade gnt-dal Ganeti cluster to bookworm ([#41253](https://gitlab.torproject.org/tpo/tpa/team/-/issues/41253))

anarcat, last updated 2023-09-14. Milestone: Debian 12 bookworm upgrade.

upgrade the gnt-dal cluster to bookworm, which means those nodes:
- [ ] dal-node-01
- [ ] dal-node-02
- [ ] dal-node-03
mind the [Ganeti upgrade procedure](https://gitlab.torproject.org/tpo/tpa/team/-/wikis/howto/upgrades/bookworm#ganeti-upgrades), which still needs to be updated from the bullseye procedure. also note that [this patch](https://github.com/ganeti/ganeti/pull/1694) is necessary... be careful with this one, it might not be ready yet.

## retire vineale ([#41218](https://gitlab.torproject.org/tpo/tpa/team/-/issues/41218))

anarcat, last updated 2024-03-26. Milestone: legacy Git infrastructure retirement (TPA-RFC-36). Due: 2024-06-08.

Once all legacy Git repositories have been migrated to GitLab (#41215) and the redirections moved to the static mirror system (#41216), retire `vineale`.

## move legacy git redirections to the static mirror infrastructure ([#41216](https://gitlab.torproject.org/tpo/tpa/team/-/issues/41216))

anarcat, last updated 2024-03-27. Milestone: legacy Git infrastructure retirement (TPA-RFC-36). Due: 2024-03-08.

Once all legacy Git repositories have been migrated to GitLab (#41215), the redirections can be moved from gitweb (`vineale`) to the static mirror system.

## forcibly migrate remaining Gitolite repositories to GitLab ([#41215](https://gitlab.torproject.org/tpo/tpa/team/-/issues/41215))

anarcat, last updated 2024-03-26. Milestone: legacy Git infrastructure retirement (TPA-RFC-36). Due: 2024-03-08.

As part of the Gitolite retirement procedure (TPA-RFC-36, #41180), migrate remaining repositories from Gitolite to GitLab.
The goal is to complete this within 12 months of the announcement, or 3 months from the original due date of this ticket, that is, before 2024-06-08, at which point the servers are supposed to be retired.
The actual list of repositories to migrate should probably be added here and users warned about the impending change one last time. An inventory will have previously been made in #41214.
In TPA-RFC-36, the following was established:
> ## Per-repository particularities
>
> This section documents the fate of some repositories we are aware
> of. If you can think of specific changes that need to happen to
> repositories that are unusual, please do report them to TPA so they
> can be included in this proposal.
>
> ### idle repositories
>
> Repositories that did not have any new commit in the last two years
> are considered "idled" and should be migrated or archived to GitLab by
> their owners. Failing that, TPA will *archive* the repositories in the
> GitLab `legacy/` namespace before final deadline.
>
> ### user repositories
>
> There are 358 repositories under the `user/` namespace, owned by 70
> distinct users.
>
> Those repositories must be migrated to their corresponding user on the
> GitLab side.
>
> If the Gitolite user does not have a matching user on GitLab, their
> repositories will be moved under the `legacy/gitolite/user/` namespace
> in GitLab, owned by the GitLab admin doing the migration.
>
> ### "mirror" and "extern" repositories
>
> Those repositories will be migrated to, and archived in, GitLab within
> a month of the adoption of this proposal.
>
> ### Applications team repositories
>
> [See tpo/tpa/team#41181.]
So, as a task list, per repo category:
- [ ] 5 `mirror` or `extern` (migrate and archive)
- [ ] 12 TPA (see #41219)
- [x] 49 migrated
- [ ] 87 `Attic` (migrate and archive)
- [ ] 97 `Other` (migrate, archive "idle" repositories)
- [ ] 288 `user` (migrate and archive)
See https://gitlab.torproject.org/tpo/tpa/team/-/issues/41214#note_2983291 for how those numbers were established.
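the mechanical part of each move is just a mirror clone and push; a sketch, with a hypothetical `user/foo` repository (archiving then happens on the GitLab side):

```
# sketch: migrate one gitolite repo to gitlab (repo names hypothetical)
git clone --mirror git@git.torproject.org:user/foo.git
cd foo.git
git push --mirror git@gitlab.torproject.org:legacy/gitolite/user/foo.git
```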
## Update GitLab to 17.0 ([#41198](https://gitlab.torproject.org/tpo/tpa/team/-/issues/41198))

Jérôme Charaoui <lavamind@torproject.org>, last updated 2023-05-30. Due: 2024-06-06.

According to the maintenance policy, GitLab should release the next major version towards the end of May of next year.
Because of the apt-pinning configuration implemented in #40769 (closed), unattended-upgrades will not automatically install this update.
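for context, such a pin is the usual apt preferences mechanism; a sketch of the general shape only, not the literal contents of #40769:

```
# /etc/apt/preferences.d/gitlab -- sketch, not the actual #40769 file;
# pins the 16.x series so 17.0 needs a deliberate, manual bump
Package: gitlab-ce
Pin: version 16.*
Pin-Priority: 1001
```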
This is a reminder ticket for the planning, announcement, and execution work that must happen manually.
A new ticket like this should be created, with a due date 12 months later, when this ticket is due.

## re-evaluate our certificate authorities and pinning at Mozilla / Google Chrome ([#41175](https://gitlab.torproject.org/tpo/tpa/team/-/issues/41175))

anarcat, last updated 2023-05-16. Due: 2025-05-16.

In two years from now, look at which certificate authorities we use and how that affects the pins we have in Mozilla Firefox and Google Chrome; see #41154 for background and the previous instance of this.

## chi-node-14 remote LUKS unlock failure ([#41120](https://gitlab.torproject.org/tpo/tpa/team/-/issues/41120))

anarcat, last updated 2023-04-05.

In the chi-node-14 move, I failed to remotely unlock the LUKS partition. Because the network was misconfigured (because of the move!), neither the initrd nor the mandos method worked, so I was expecting at least the BIOS remote console to give me a prompt, but it didn't.
The SOL console *also* didn't work, which meant I basically had to recover from a rescue environment. (It turns out that I could just remove the `console=ttyS0` bit from the grub command line, but I didn't find that out until much later.)
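for the record, making that fix persistent is the standard grub dance (a sketch; check which `GRUB_CMDLINE_*` variable actually carries the option on that host):

```
# sketch: drop console=ttyS0 (and any baud suffix) from the kernel
# command line persistently, then regenerate the grub config
sed -i 's/console=ttyS0[^ "]*//' /etc/default/grub
update-grub
```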
In any case, it seems to me this should be fixed so we can recover this box if the initrd unlock fails.

## Evaluate the use of Big Blue Button hosted by meet.coop ([#41059](https://gitlab.torproject.org/tpo/tpa/team/-/issues/41059))

Gaba <gaba@torproject.org>, last updated 2024-02-29.

We have been using Big Blue Button hosted by meet.coop for 2 years. An evaluation of the service is due. When we were evaluating different tools hosted by third parties (https://gitlab.torproject.org/tpo/community/sponsor-9/-/issues/30) we used a set of criteria that I can not find anywhere :|
From the ticket https://gitlab.torproject.org/tpo/community/sponsor-9/-/issues/30, I understand that it was something like this:
1. Regarding privacy, autonomy, and support: I'm assuming we were considering services that were not tracking our users.
2. User accounts: we can add user accounts without an extra cost.
3. Max participants: it can handle at least 40 participants per call.
4. Support: it has tech support available
5. Infra: not sure what this item was about.
@gus do you have more details about the criteria we were using somewhere?
Anything else we should consider?