# TPA issues

https://gitlab.torproject.org/groups/tpo/tpa/-/issues (feed generated 2023-12-06T13:15:36Z)

## build and host the dangerzone docker image ourselves
Kez | https://gitlab.torproject.org/tpo/tpa/dangerzone-webdav-processor/-/issues/23 | 2023-12-06T13:15:36Z

the `flmcode/dangerzone` docker image that we use is out of date by several months, and is vulnerable to the (low risk) [CVE-2023-39342](https://github.com/freedomofpress/dangerzone/security/advisories/GHSA-pvwq-6vpp-2632). i think this is due to FPF adopting dangerzone, and the freelookmedia team no longer maintaining the code.
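If we end up building and hosting the image ourselves, as the rest of this ticket suggests, a scheduled GitLab CI job could look roughly like this (hypothetical sketch: the job name, tag, and schedule rule are assumptions; the registry variables are GitLab's predefined ones):

```yaml
# .gitlab-ci.yml sketch -- assumes a checkout of the dangerzone source with
# its Dockerfile at the repo root, and a pipeline schedule to keep it fresh
build-dangerzone-image:
  image: docker:24
  services:
    - docker:24-dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE/dangerzone:latest" .
    - docker push "$CI_REGISTRY_IMAGE/dangerzone:latest"
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule"'
```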
[FPF doesn't publish a docker image for dangerzone](https://github.com/freedomofpress/dangerzone/issues/152#issuecomment-1088085408), so we would need to manually build and host the image ourselves. now that we have a container registry, that's something we can actually do! this particular image is pretty big (over a gigabyte), so perhaps a TPA-RFC is the right way to go with this ticket.

## Build our own image
micah | https://gitlab.torproject.org/tpo/tpa/renovate-cron/-/issues/2 | 2023-05-11T18:50:32Z

When we have docker-in-docker capability (DinD) in our runners, we can build our own renovate repository to potentially avoid supply-chain issues, as detailed in [gitlab's renovate process](https://gitlab.com/gitlab-org/frontend/renovate-gitlab-bot/-/blob/main/docs/process.md).

## Build Windows/Mac CI infrastructure that is usable by all teams in the near future
Alexander Færøy | https://gitlab.torproject.org/tpo/tpa/team/-/issues/40095 | 2022-12-07T15:24:31Z

The projects we have in Tor currently utilize a mixture of different CI systems to ensure some form of quality assurance as part of the software development process:
- Jenkins (provided by TPA)
- Gitlab CI (currently Docker builders kindly provided by the FDroid project via Hans from The Guardian Project)
- Travis CI (used by some of our projects such as tpo/core/tor.git for Linux and MacOS builds)
- Appveyor (used by tpo/core/tor.git for Windows builds)
One big benefit that we have seen with Gitlab CI is how easy it is for each project to initially configure CI for their respective project and maintain it without sysadmin/CI-admin(?) involvement. I believe this is an important requirement here, to distribute the workload of actually setting this up.
None of the goals of this ticket will solve the issue that Apple has recently announced the M1 processor and we have no way of virtualizing/emulating ARM64 macOS builds yet. This will have to be something we look into in the future. Other organizations will have this problem too, so we might be able to piggy-back on them.
Jenkins has been hard for the network team to maintain and weasel has been a great help there. I am not sure how Jenkins is used by other teams right now, except that I know the web teams are utilizing it to publish changes to our websites to the production servers.
Travis CI recently announced a new scheme where MacOS builds will become a more scarce resource on their platform. This, mixed with the wish to have faster builds for the network team, is what triggered this post. We are already on some "free software beneficial plan" where they support us with more points, but it won't be enough for the network team to go through a month of MacOS builds for our needs, unfortunately.
Appveyor is very slow, and it often leads to frustrations amongst the network team members.
It would be awesome if we could somehow reserve two (ideally) "fast" Debian-based machines on TPO infrastructure to build the following:
- Run Gitlab CI runners via KVM (initially with focus on Windows x86-64 and macOS x86-64). This will replace the need for Travis CI and Appveyor. This should allow both the network team, application team, and anti-censorship team to test software on these platforms (either by building in the VMs or by fetching cross-compiled binaries on the hosts via the Gitlab CI pipeline feature). Since none(?) of our engineering staff are working full-time on MacOS and Windows, we rely quite a bit on this for QA.
- Run Gitlab CI runners via KVM for the BSDs. Same argument as above, but this is much less urgent.
- Spare capacity (once we have measured it) can be used as a generic Gitlab CI Docker runner in addition to the FDroid builders.
- The faster the CPU the faster the builds.
- Lots of RAM allows us to do things such as having CoW filesystems in memory for the ephemeral builders and should speed up builds due to faster I/O.
I am by no means an expert on this, but I don't believe these machines can be virtual machines, as we need to spawn other virtual machines using the "full virtualization" that is provided by "modern" x86-64 CPUs. It might be that nested ("recursive") virtualization works (some cloud providers have that), but I have no idea what the implications of that are, especially with the cluster management software we use for other physical hosts in TPO.
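Whether a given host (or cloud VM) can run nested guests is at least easy to probe; a small sketch (Linux-only; the sysfs paths are the ones the kvm modules expose):

```python
from pathlib import Path

# Nested virtualization must be enabled on the host to run "VMs inside VMs";
# the KVM modules expose this as a sysfs parameter (Intel and AMD paths differ).
def nested_virt_status() -> str:
    for module in ("kvm_intel", "kvm_amd"):
        param = Path(f"/sys/module/{module}/parameters/nested")
        if param.exists():
            value = param.read_text().strip()
            return f"{module}: nested={value}"  # "Y" or "1" means enabled
    return "kvm module not loaded (cannot check on this host)"

print(nested_virt_status())
```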
Please let me know if I need to add more details here :-)
I have no idea what label to put this in, so folks from TPA who organize these things are welcome to figure out where this belongs best.

Assigned to Alexander Færøy.

## Bulk approval seems to have stopped working
Nick Mathewson | https://gitlab.torproject.org/tpo/tpa/gitlab-lobby/-/issues/18 | 2023-08-10T19:10:13Z

I just tried to approve some accounts from the admin page, and nothing changed. I had to approve them one by one from the individual request pages instead.
This was roughly 1 minute ago, in case the timestamp is helpful. :)

## Calculate estimated and spent time automatically for tickets with task lists
Georg Koppen | https://gitlab.torproject.org/tpo/tpa/gitlab/-/issues/43 | 2022-05-30T19:08:43Z

We work around the unavailability of the Epics feature by using task
lists to denote parent/child relationships. One of the things that is
missing in that model is an update of the estimated/spent time in the
"parent" if things change on any issue listed in the task list. I am not
sure if that works for Epics but for us it would definitely be good to
have because some of us are trying to use parent tickets and task lists
to have tickets effectively on different milestones (the ticket itself
on milestone A while the parent ticket with ticket A on the task list on
milestone B) and we want to have proper time tracking for all of our
milestones.
I've not looked closely how we could solve this issue but maybe there is
a hook/plugin we can write that could help. The amount of dependent
tasks and their open/close status are already tracked automatically,
which is good and might provide us some insight on how to bolt the
timetracking onto that.
FWIW: This is not to say that those parent tickets should only reflect
the time tracking information for their issues in the task list. It
should be possible to add additional time spent etc. Just that the
figures can't be below the sum of the respective fields of the child issues.
Nested lists should be taken into account as well. :)
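A hook could, for instance, parse the task-list items out of the parent's description and sum up the children's time figures; a rough sketch (plain text parsing only, with hypothetical helper data, not the real GitLab API):

```python
import re

# Match GitLab task-list items that reference an issue, e.g. "- [x] #123"
TASK_RE = re.compile(r"^\s*[-*]\s*\[(?: |x)\]\s*#(\d+)", re.MULTILINE)

def child_issue_ids(description: str) -> list[int]:
    """Extract referenced issue IDs from a parent issue's task list."""
    return [int(m) for m in TASK_RE.findall(description)]

def minimum_spent(description: str, spent_by_issue: dict[int, int]) -> int:
    """Lower bound for the parent's "spent" figure: the sum of its children.

    spent_by_issue maps issue ID -> seconds spent (as a time-tracking API
    would report it); unknown children count as zero.
    """
    return sum(spent_by_issue.get(i, 0) for i in child_issue_ids(description))

parent = """Tracking ticket
- [x] #101
- [ ] #102
- [ ] some non-issue task
"""
print(minimum_spent(parent, {101: 3600, 102: 1800}))  # 5400
```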
@doulget, @gaba, and @sysrqb for visibility, as this came up yesterday
during work on label clean-up.

## Can we preserve or extract link targets?
Nick Mathewson | https://gitlab.torproject.org/tpo/tpa/dangerzone-webdav-processor/-/issues/16 | 2022-06-21T18:41:26Z

Hi! I'm trying to review some candidate resumes, and some of them have links to github repositories or similar. But I can't click on the links, and I don't see a way to extract them.
I guess that disabling links could be a security feature? But in this case, I really would like to be able to visit candidates' github repositories. Maybe this could be an option?

## can't send email to state.gov
anarcat | https://gitlab.torproject.org/tpo/tpa/team/-/issues/40202 | 2024-01-22T16:34:28Z

writing to USER@state.gov gives us this error:
```
<REDACTED@state.gov>: TLSA lookup error for christopher-ew.state.gov:25
```
it's actually from multiple endpoints, my home server and riseup also see this, so this is actually an error with state.gov, i would argue... still worth taking a look.
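For the "bypass DNSSEC checks" option in the battle plan below, Postfix can downgrade a single destination from DANE to plain mandatory TLS via a per-destination policy map (sketch; `smtp_tls_policy_maps` is standard Postfix, the file paths and the specific entry are assumptions):

```
# /etc/postfix/main.cf (excerpt)
smtp_tls_policy_maps = hash:/etc/postfix/tls_policy

# /etc/postfix/tls_policy -- "encrypt" skips DANE/TLSA lookups for this
# destination while still requiring TLS; run postmap(1) after editing
state.gov    encrypt
```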
/cc @gaba
battle plan:
* [x] <del>confirm with state.gov folks that emails are failing because they check the eugeni TLS cert</del> state.gov is unwilling to provide more information, but we'll just go with that assertion, as it seems fair that our MX should provide publicly verifiable certificates in the standard CA infrastructure (on top of DNSSEC checks)
* [ ] if so, establish a plan to rebuild a MX with "real" TLS certificates, which is now documented in the [roadmap](https://gitlab.torproject.org/tpo/tpa/team/-/wikis/roadmap/2021)
* [ ] bypass DNSSEC checks for state.gov so *we* can send mail there
* [ ] bring up their misconfiguration on DNSSEC forums (optional)

Milestone: improve mail services. Assigned to anarcat.

## Cannot open any ticket "Server Error (500)".
cypherpunks | https://gitlab.torproject.org/tpo/tpa/anon_ticket/-/issues/66 | 2024-03-27T00:04:15Z

Same problem as in tickets #61 and #63 (they should be merged, along with this one). I can confirm that this problem has existed for at least 3 months.

As such, it is impossible to comment and give feedback on any ticket through the anon system.
Tested 4 projects (anon, TBO, core ) and none of the tickets I clicked showed as an anon user. The gitlab link works correctly.
It does not matter if you reach the ticket from search or list; the error is the same, as the ticket link is the same.
Nor does it matter if it was created by you, another anon, or a normal gitlab user. I also tested a different Anonymous Identifier.
Example link with error 500, using a random Anonymous Identifier to include publicly in this ticket:
https://anonticket.onionize.space/user/vehicular-renegade-uncommon-tyke-mower-imprint/projects/snowflake/issues/40347/details/1/
If more info is requested, I will create more tickets.

## check dead links during CI builds
Kez | https://gitlab.torproject.org/tpo/tpa/ci-templates/-/issues/14 | 2024-03-01T04:50:25Z

in tpo/web/donate-static#93 mattlav reported a broken link on one of our pages. we can test for broken links pretty easily using something like python's html.parser
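Collecting the link targets with the stdlib could look like this (a sketch of the parsing half only; actually verifying each target would need HTTP requests on top):

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect href/src targets from a page so they can be checked later."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in ("href", "src") and value:
                self.links.append(value)

page = '<a href="https://example.com">x</a> <img src="/logo.png">'
collector = LinkCollector()
collector.feed(page)
print(collector.links)  # ['https://example.com', '/logo.png']
```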
one of the cons is that we'd have to have CI make HTTP requests for every single link, which could be a decent amount of traffic with how often we run CI builds

Assigned to anarcat.

## check SPF/DKIM/DMARC records on incoming mail
anarcat | https://gitlab.torproject.org/tpo/tpa/team/-/issues/40539 | 2023-08-08T14:33:50Z

as part of the %"improve mail services" roadmap, I have realized that we should not only publish SPF/DKIM records (and sign outgoing mail), we should also *check* incoming mail. This is becoming critically important because Hetzner are becoming unhappy with us backscatter-spamming people through Mailman (see message `[AbuseID:998963:1A]`).
That was a Spamcop complaint about a user that was receiving backscatter bounce through Mailman. Specifically a message that was marked "too big" and "held for moderation". That specific instance would have been solved by an SPF check, because there are fairly strict ones on that specific victim's email server:
```
account.co.za. 9476 IN TXT "v=spf1 +a +mx +ip4:136.243.12.222 +ip4:5.9.29.165 +ip4:5.9.29.168 -all"
```
So it seems like part of the roadmap should also include checking incoming email, if only to limit the spam we relay (which then hurts our reputation).
First step would be to check SPF, but we should also probably check DMARC since it may influence SPF. DKIM would be second.
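On Debian, the usual starting point for the SPF part is `postfix-policyd-spf-python`; wiring it in looks roughly like this (a sketch of that package's stock documented configuration, not something we run today):

```
# /etc/postfix/master.cf -- SPF policy daemon
policyd-spf  unix  -  n  n  -  0  spawn
    user=policyd-spf argv=/usr/bin/policyd-spf

# /etc/postfix/main.cf (excerpt)
policyd-spf_time_limit = 3600
smtpd_recipient_restrictions =
    permit_mynetworks,
    reject_unauth_destination,
    check_policy_service unix:private/policyd-spf
```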
* [ ] SPF checks
* [ ] DMARC checks? if necessary for SPF, definitely needed for DKIM...
* [ ] DKIM checks

Milestone: improve mail services.

## chi-node-14 remote LUKS unlock failure
anarcat | https://gitlab.torproject.org/tpo/tpa/team/-/issues/41120 | 2023-04-05T20:02:12Z

In the chi-node-14 move, I have failed to remotely unlock the LUKS partition. Because the network was misconfigured (because of the move!), neither the initrd nor mandos methods worked, so I was expecting at least the BIOS remote console to give me a prompt, but it didn't.
The SOL console *also* didn't work which meant I had to basically recover from a rescue environment. (It turns out that I could just remove the console=ttyS0 bit from the grub commandline, but I didn't find that out until much later.)
In any case, it seems to me this should be fixed so we can recover this box if the initrd unlock fails.

## Clean up dist.tpo once
Linus Nordberg | https://gitlab.torproject.org/tpo/tpa/team/-/issues/29418 | 2021-03-29T14:37:15Z

cf https://trac.torproject.org/projects/tor/wiki/org/meetings/2019BrusselsAdminTeamMinutes#Cleaningunusedpackagesondist.tpo

## cleanup the postfix code in puppet
anarcat | https://gitlab.torproject.org/tpo/tpa/team/-/issues/40626 | 2022-04-06T20:53:24Z

our postfix configuration in Puppet is problematic in a few ways:
* main.cf hardcodes a lot of things (like `smtp_dns_support_level`, certificates, digest fingerprints, mynetworks, and more)
* master.cf and main.cf have host-specific configurations (e.g. eugeni, polyanthum) or role-specific ones (email::submission), instead of having this be part of hiera
* transport maps are hardcoded in the module instead of (say) in a profile
* access control is difficult: it's unclear how to block a given email address (e.g. on RT/rude)
* the code is specific to our project and not reusable, and is therefore basically unmaintained (we should use an existing module instead)
so we should look at refactoring this to make it easier to expand and tweak.
The following projects are similar and we might want to collaborate with those in the future.
* [voxpupuli/postfix](https://github.com/voxpupuli/puppet-postfix) - multiple issues, see below
* [shared-puppet-modules-group/postfix](https://gitlab.com/shared-puppet-modules-group/postfix) - marked "LEGACY"
* [cirrax/postfix](https://github.com/cirrax/puppet-postfix) - used by tails/puscii
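For comparison, moving (say) a transport map out of our hardcoded templates and into module-managed resources would look something like this with the voxpupuli module (hedged sketch in that module's documented style; the hostname and destination here are placeholders, not our actual config):

```puppet
# hypothetical sketch using voxpupuli/puppet-postfix conventions
include postfix

postfix::transport { 'rt.torproject.org':
  destination => 'smtp:[rt.torproject.org]',
}
```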
There are multiple issues with the camptocamp (now voxpupuli) module that seemed to be
deal-breakers during a quick evaluation:
* [Changes to postfix::files causes a restart and reload](https://github.com/camptocamp/puppet-postfix/issues/134) - this
is a performance concern: could be trouble for large servers
* ~~[Do not manage /etc/mailname](https://github.com/camptocamp/puppet-postfix/issues/186) - we *remove* this file in our
configuration, so that's in direct conflict~~ fixed!
* [init.pp: use the postfix default for mydestination](https://github.com/camptocamp/puppet-postfix/pull/256) - this is a
default that could be worked around (and we could just use a
template for `main.cf`)
* [main.cf empty](https://github.com/camptocamp/puppet-postfix/blob/master/files/main.cf) - main.cf is completely empty by default, which
is a major change from Debian (for example)
In general, it's unclear whether the camptocamp module brings enough benefit to
justify switching to it at this stage. But since it's the most popular
module and is actively maintained, it might be worth biting the
bullet and adapting it to our needs instead of reinventing the wheel.
We should definitely look at the cirrax module, in any case, as we heard good things about it from fellow sysadmins.

Milestone: cleanup and publish the sysadmin codebase.

## clear out unowned files on servers
anarcat | https://gitlab.torproject.org/tpo/tpa/team/-/issues/29987 | 2022-04-07T16:28:19Z

there is a significant number of unowned files on the servers. this is generally because a user was removed without the associated user being purged as well, but there are also odd corner cases like backup restores and so on.
In legacy/trac#29682, I have done the following Cumin run to find such files, expecting to find only problems with the Munin user/group I had just removed, but instead found many more cases, mostly (300,000) surrounding deleted users:
```
cumin -p 0 -b 5 --force -o txt '*' 'find / -ignore_readdir_race -path /proc -prune -nouser -o -nogroup' | tee unowned-files
```
Next step is to decide what to do with the leftover files and document this as part of the user retirement process.

Assigned to anarcat.

## consider bcrypt or yescrypt for password hashing after bullseye upgrade
anarcat | https://gitlab.torproject.org/tpo/tpa/team/-/issues/40492 | 2024-02-08T16:21:09Z

in #30608 we were forced to downgrade to SHA for hashing our (mail) passwords. that's really too bad, and it's basically only because `crypt(3)` doesn't support bcrypt or better (yescrypt!) in Debian buster.
once we're upgraded (basically everywhere, but we could do it only on the submission server for starters), implement the logic to build bcrypt-specific (or yescrypt?) hashes in userdir-ldap-cgi. the caller is in `update.cgi` (grep for `Salt`) and the definition is in `Util.pm`. we should probably create a new function for more complex salts like bcrypt and yescrypt, because the actual "settings" (what comes after `$y$`) are not quite the same as for md5/sha (e.g. salts are separated from the hashed password with `$` in SHA, but not in bcrypt, from what i understand).
in any case, this needs experimentation. this is the code i had for bcrypt:
    # requires Digest::Bcrypt from CPAN; rand_bits() comes from
    # Data::Entropy::Algorithms (but see the caveat below) and
    # must produce exactly 16 bytes of salt for bcrypt
    my $bcrypt = Digest->new('Bcrypt', cost => 12, salt => rand_bits(16 * 8));
    my $hashed_password = crypt($password, $bcrypt->settings());
note that I don't actually *trust* `rand_bits` anymore, after reading the [Data::Entropy::Algorithms](https://metacpan.org/pod/Data::Entropy::Algorithms) documentation. turns out it relies on [Data::Entropy](https://metacpan.org/pod/Data::Entropy) and *that* says:
> If nothing is done to set a source then it defaults to the use of Rijndael (AES) in counter mode (see Data::Entropy::RawSource::CryptCounter and Crypt::Rijndael), keyed using Perl's built-in rand function. This gives a data stream that looks like concentrated entropy, but really only has at most the entropy of the rand seed. Within a single run it is cryptographically difficult to detect the correlation between parts of the pseudo-entropy stream. If more true entropy is required then it is necessary to configure a different entropy source.
And *then* [rand()](https://perldoc.perl.org/functions/rand) says:
> rand is not cryptographically secure. You should not rely on it in security-sensitive situations. As of this writing, a number of third-party CPAN modules offer random number generators intended by their authors to be cryptographically secure, including: Data::Entropy, Crypt::Random, Math::Random::Secure, and Math::TrulyRandom.
and now we have inception. brilliant.

Milestone: Debian 12 bookworm upgrade.

## Consider changing project location for non-tpa projects
micah | https://gitlab.torproject.org/tpo/tpa/team/-/issues/41405 | 2024-02-19T15:05:27Z

We have a few projects, and are likely to get more, that are missing a good place to call home in gitlab, because they don't have a better place to go. Because of this, they end up as projects in tpo/tpa:
https://gitlab.torproject.org/tpo/tpa/triage-ops
https://gitlab.torproject.org/tpo/tpa/renovate-cron
https://gitlab.torproject.org/tpo/tpa/base-images(?)
As part of the Hackweek Collaborative editing project, @meskio made https://gitlab.torproject.org/meskio/archivist and it also needs a home outside of his personal project space. In thinking about where it could go, I started to wonder if there might be a better place than just tossing all these projects into the tpa space, and pinky promising that they aren't TPA's responsibility, even though they are there.
What if we made a different group for these projects, under `/tpo`, and made that the home for this kind of stuff instead? Possible names could be `/tpo/automation`, `/tpo/bots` `/tpo/ai`, `/tpo/robotinvasion`, or something more clever that you come up with :grinning:
Curious to hear what tpa's thoughts are on this, or should we just push the @meskio project into `/tpa/archivist`?

Assigned to anarcat.

## consider deploying kernel hardening features
anarcat | https://gitlab.torproject.org/tpo/tpa/team/-/issues/40963 | 2022-11-16T22:51:31Z

we do have some minimal hardening stuff (like disabling module loading after boot and disabling user namespaces) but not much. let's see what we could improve to reduce the attack surface on our servers.
a good place to start would be this list:
https://git.autistici.org/ai3/float/-/blob/master/roles/float-base/templates/sysctl.conf.j2
... and talking to fellow sysadmins about what they do in production as well. this should obviously be rolled out progressively. also note that our userns clone hack might be removed from debian, see https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1024186

Assigned to anarcat.

## consider disabling read/write work queues on SSD devices
anarcat | https://gitlab.torproject.org/tpo/tpa/team/-/issues/40405 | 2024-02-08T16:21:05Z

seems like we could get a significant (twofold, [according to cloudflare](https://blog.cloudflare.com/speeding-up-linux-disk-encryption/)) performance improvement on SSD drives if we disable "work queues" in dm-crypt, by specifying `no-read-workqueue` and `no-write-workqueue` in `/etc/crypttab`. this is available with kernels starting with Linux 5.9, so maybe this needs to wait until the bullseye upgrade, however.
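In crypttab terms, that would be something like (sketch; the mapping name and UUID are placeholders, and it needs Linux 5.9+ plus a cryptsetup/systemd new enough to understand these options):

```
# /etc/crypttab sketch -- placeholders, not a real device
croot UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx none luks,discard,no-read-workqueue,no-write-workqueue
```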
The [arch wiki](https://wiki.archlinux.org/) has [good documentation on how to enable this][docs].
[docs]: https://wiki.archlinux.org/title/Dm-crypt/Specialties#Disable_workqueue_for_increased_solid_state_drive_(SSD)_performance

Milestone: Debian 12 bookworm upgrade.

## consider replacing our bespoke postgresql backup system
anarcat | https://gitlab.torproject.org/tpo/tpa/team/-/issues/40950 | 2024-03-13T13:15:10Z

i have just found out about [barman](https://pgbarman.org/), a PostgreSQL backup system which is pretty close to the bespoke system we're using at TPA. Except it's actively developed, commercially supported, packaged in Debian, and generally pretty damn solid.
Consider replacing our tool with this. Not sure what process we should use, but i would probably need to set up a must-have/nice-to-have/non-goal spec and, yes, another damn RFC.
For now, I've just documented various tools I found yesterday searching around the interweb in the wiki here:
https://gitlab.torproject.org/tpo/tpa/team/-/wikis/howto/postgresql#backup-systems

Assigned to anarcat.

## consider retiring build boxes
anarcat | https://gitlab.torproject.org/tpo/tpa/team/-/issues/40580 | 2022-04-07T16:21:19Z

in the jenkins retirement (#40218) we decided to keep a few build boxes with sbuild on them (three machines with debian_build_box, one of which is also a CI runner). even though we have retired Jenkins, which was their primary consumer, users like @weasel and @kez may still require those boxes for two use cases:
* @kez doesn't run Debian and might need a place to build random Debian packages not currently in GitLab (which could be fixed by moving those package builds inside GitLab)
* @weasel has a similar use case, although he obviously runs debian; he also needs access to the ARM builder. it's unclear whether he still requires access to the build box in the long term, or why (sorry, my memory fails me here)