# TPA team issues
https://gitlab.torproject.org/tpo/tpa/team/-/issues (updated 2022-06-03)

## investigate kreb's advice on DNS hijacking
https://gitlab.torproject.org/tpo/tpa/team/-/issues/33062 (anarcat, updated 2022-06-03)

After reviewing [this article about recent DNS hijacking incidents](https://krebsonsecurity.com/2019/02/a-deep-dive-on-the-recent-widespread-dns-hijacking-attacks/), I think it might be worth reviewing the recommendations given in the article, which are basically:
1. [x] use DNSSEC
2. [ ] Use registration features like Registry Lock that can help protect domain names records from being changed
3. [ ] Use access control lists for applications, Internet traffic and monitoring
4. [ ] Use 2-factor authentication, and require it to be used by all relevant users and subcontractors
5. [x] In cases where passwords are used, pick unique passwords and consider password managers
6. [ ] Review accounts with registrars and other providers
7. [ ] Monitor certificates by monitoring, for example, Certificate Transparency Logs (#40677)
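For illustration, item 7 could start out as a small filter over Certificate Transparency records; crt.sh returns JSON in roughly this shape, but the sample data and issuer allowlist below are entirely made up:

```
# Sketch of a CT-log check that could back a Nagios alert: flag any
# certificate whose issuer is not in our allowlist. Sample data is made up.
EXPECTED_ISSUERS = {"C=US, O=Let's Encrypt, CN=R3"}

def unexpected_certs(records, expected=EXPECTED_ISSUERS):
    return [r for r in records if r["issuer_name"] not in expected]

sample = [
    {"common_name": "www.torproject.org",
     "issuer_name": "C=US, O=Let's Encrypt, CN=R3"},
    {"common_name": "www.torproject.org",
     "issuer_name": "C=XX, O=Some Unexpected CA, CN=CA1"},
]
print(unexpected_certs(sample))  # only the unexpected-CA record
```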
Some of those are impractical: for example 2FA will not work for us if we have one shared account with a provider.
Others have already been done: we have a good DNSSEC deployment and manage passwords properly.
Mainly, I'm curious about investigating Registry Lock and CT log monitoring; the latter could maybe be added as a Nagios check.

## automate/puppetize Nagios installs
https://gitlab.torproject.org/tpo/tpa/team/-/issues/32901 (anarcat, updated 2024-01-19)

one part of our install process is to configure Nagios, by hand, in the git repository. I usually do this by copy-pasting some similar blob of config from a possibly similar machine and hoping for the best.
this is a manual step, and as part of the automation of the install process, it should be made automatic.
one way this could (and probably should) be done is by making Puppet automatically add its nodes into Nagios. this can be done using the [icinga2 module](https://github.com/Icinga/puppet-icinga2), for example. care should be taken to do a smooth transition, keeping existing configurations and just adding the Puppet ones on top, for new machines.
but this could (eventually) be retroactively added to all nodes, removing all manual configuration.
checklist:
1. [x] audit and import the module in our monorepo
1. [x] ~~enable on the nagios server, without writing any config (hopefully a noop)~~ not possible, config is overwritten by module, instead...
1. [ ] move the base configuration (`config/static`) from git into Puppet (mostly icinga.cfg and so on, because they are overwritten by the module)
1. [ ] enable a single config from puppet, as a test
1. [ ] add a new host check configuration
1. [ ] add a new service check configuration
1. [ ] add all *base* service checks for the new host (e.g. the services defined for the `computers` hostgroup, equivalent of pieces of `from-git/generated/auto-services.cfg`)
1. ~~[ ] convert legacy config into puppet (at this stage we only have the old hosts as legacy config)~~ done in third step
1. [ ] convert NRPE service definitions (`puppet:///modules/nagios/tor-nagios/generated/nrpe_tor.cfg`, generated from the git repo)
1. [ ] remove NRPE config sync from nagios to Puppet (the rsync to `pauli` in `config/Makefile`)
1. [ ] convert old hosts checks into puppet
1. [ ] convert old services checks into puppet
1. [ ] remove git hook receiver on nagios server (`/etc/ssh/userkeys/nagiosadm` key, which calls `/home/nagiosadm/bin/from-git-rw`)
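To make the "add a new host check configuration" step concrete, here is a sketch of the kind of host blob Puppet would render; the fields and values are illustrative, not our actual configuration:

```
def nagios_host_block(hostname, address, hostgroups=("computers",)):
    """Render a minimal Nagios host definition, the kind of blob we
    currently copy-paste by hand and Puppet could emit instead."""
    lines = [
        "define host {",
        f"    host_name  {hostname}",
        f"    address    {address}",
        f"    hostgroups {','.join(hostgroups)}",
        "    use        generic-host",
        "}",
    ]
    return "\n".join(lines)

print(nagios_host_block("example.torproject.org", "192.0.2.10"))
```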
It's a long way there, but getting to the state where *new* hosts are covered would already be a great improvement.

Milestone: Debian 11 bullseye upgrade

## TPA-RFC-47: clarify what happens to email when we retire a user
https://gitlab.torproject.org/tpo/tpa/team/-/issues/32558 (anarcat, updated 2023-09-20)

As part of improving the offboarding process (legacy/trac#32519), we should especially look at how email works.

Right now, when we [retire a user](https://help.torproject.org/tsa/howto/retire-a-user/), their account is first "locked", which means their access to various services is disabled. But their email still works for 186 days (~6 months). After that date, in theory, their email aliases start dropping email completely (this needs to be confirmed).
It's unclear if that's the right policy to follow. Some people feel that an email alias should stay around forever, as it is an inalienable human right.
Others feel that certain administrative roles should be forwarded when a person leaves. Say "Alice" (a fictitious name) was doing fundraising using `alice@torproject.org` for that work. When they leave, should we forward `alice@` to `fundraising@torproject.org`?
But then what if Alice was also using their work email for private correspondence? Maybe the fundraising team shouldn't be able to see *those* communications.
One proposal could be that the default policy is this:
1. email @torproject.org is "function" email and is destined only for torproject.org-related work
2. when a person leaves their position, that email gets deactivated after a 6-month delay
3. in extreme cases, some forward may be *temporarily* enabled to reset accesses or re-establish contacts with a provider or third-party
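For reference, the grace period above is easy to compute; a sketch, where the 186-day figure comes from the current retirement process and the function name is made up:

```
from datetime import date, timedelta

GRACE_PERIOD = timedelta(days=186)  # ~6 months, per the current retirement process

def email_cutoff(lock_date: date) -> date:
    """Date after which a retired user's aliases stop forwarding email."""
    return lock_date + GRACE_PERIOD

print(email_cutoff(date(2023, 1, 1)))  # 2023-07-06
```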
It is also possible that there could be *two* policies, one for TPI employees and one for other TPO people.

Assignee: anarcat; due 2023-10-18

## review our ssl ciphers suite
https://gitlab.torproject.org/tpo/tpa/team/-/issues/32351 (anarcat, updated 2024-03-25)

We currently use a magic incantation from the Mozilla SSL observatory in our Apache (and now nginx, see legacy/trac#32239) installations. We should review it and see if it's still relevant. It seems we're using the suites as per the Mozilla observatory, but since we're upgrading to buster, it might be worth upgrading our suite a little.
The documentation in the file mentions those URLs:
https://mozilla.github.io/server-side-tls/ssl-config-generator/?server=apache-2.4.25&openssl=1.0.2l&hsts=yes&profile=intermediate
https://mozilla.github.io/server-side-tls/ssl-config-generator/?server=apache-2.4.25&openssl=1.1.0&hsts=no&profile=intermediate
But that's *two* lists... maybe what we have is the merged one?
In any case, this probably needs a kick. The list was created in 2014 and last touched in 2018, according to the comments in the apache config.
Unless we have per openssl-version configs, this will have to wait until legacy/trac#29399 is done at least.
List of places we need to fix this:
- [ ] apache (watch out for WKD and GnuPG on windows, see #33751)
- [ ] nginx (`modules/profile/manifests/nginx.pp`, `modules/profile/files/gitlab/gitlab.torproject.org.conf`, see #40481)
- [ ] postfix (watch out for #33413)
- [ ] haproxy (configured in `modules/roles/templates/onionoo/haproxy.cfg.erb` but maybe other places)
- [ ] ipsec?
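While reviewing, Python's `ssl` module gives a quick local view of what the system OpenSSL enables by default. This is a rough probe only; the ssllabs test below remains the real check:

```
import ssl

# Inspect the local defaults: minimum TLS version and the cipher suites a
# server using this context would offer.
ctx = ssl.create_default_context()
print("minimum TLS version:", ctx.minimum_version.name)
ciphers = [c["name"] for c in ctx.get_ciphers()]
print(len(ciphers), "ciphers enabled, e.g.", ciphers[0])
```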
Todo list:
- [ ] review https://cipherli.st/
- [ ] test with https://www.ssllabs.com/ssltest/
- [ ] test mail servers with swaks
- [ ] set a baseline of supported clients
- [ ] update https://help.torproject.org/tsa/howto/tls/ with changes
- [ ] compliance monitoring, maybe with [zlint](https://github.com/zmap/zlint)

Assignee: anarcat

## archive private information from SVN
https://gitlab.torproject.org/tpo/tpa/team/-/issues/32273 (anarcat, updated 2022-12-06)

a common problem in the internal and corp SVN repository shutdown is "what do we do with all that stuff now". for example, the internal repository is shut down now (#15949) but there is still information there that is valuable. or not. we're not sure. we think so, but maybe some of it should be destroyed.
so we need to answer the following questions:
1. which data should be kept and destroyed from the repositories?
2. where should it be kept?
so far, I went under the assumption that the answers were:
1. keep everything
2. in nextcloud
but it seems this might not be exactly right.

Milestone: old service retirement 2023

## Add CORS wildcard to check API
https://gitlab.torproject.org/tpo/tpa/team/-/issues/32168 (Trac, updated 2021-03-29)

Hello,
The check API doesn't have CORS set, specifically this URL:
https://check.torproject.org/api/ip
If "Access-Control-Allow-Origin: *" could be added that would be great.
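For illustration, such a header can be added generically in front of any WSGI application; the check service's actual stack is likely different, so this is only a sketch:

```
def cors_wildcard(app):
    """WSGI middleware that adds Access-Control-Allow-Origin: * to every
    response passing through it."""
    def wrapped(environ, start_response):
        def cors_start_response(status, headers, exc_info=None):
            return start_response(
                status,
                list(headers) + [("Access-Control-Allow-Origin", "*")],
                exc_info)
        return app(environ, cors_start_response)
    return wrapped
```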
Monroe Clinton
**Trac**:
**Username**: monroeclinton

## Stop using corpsvn and disable it as a service
https://gitlab.torproject.org/tpo/tpa/team/-/issues/32025 (Roger Dingledine, updated 2022-12-06)

In legacy/trac#17202 we're going to decommission the server that runs our various svn services.
We have a plan for the public svn.tpo service: legacy/trac#15948
and we are making a plan for svninternal: legacy/trac#15949
That leaves corpsvn, which I think is the most actively used still -- for example our accounting folks use it. This ticket is about making and finishing the plan for shutting down the corpsvn service.

Milestone: old service retirement 2023

## deploy a puppet dashboard
https://gitlab.torproject.org/tpo/tpa/team/-/issues/31969 (anarcat, updated 2023-11-23)

it would be useful to have a way to browse reports and facts in the cluster. there's a lot of information in the PuppetDB that's only visible when you inspect the database, and it would help to have a way to browse this and diagnose issues with puppet.

## publish HTML documentation of our puppet source
https://gitlab.torproject.org/tpo/tpa/team/-/issues/31633 (anarcat, updated 2022-04-06)

there are ways of generating HTML versions of Puppet source code, based on the docstrings littering the source code. i've done some tentative runs of this and it looks ... interesting. the utility of this is currently limited by the fact that only 35% of the source is documented, according to `puppet strings`, but i figured I would document the efforts I've done so far already.
Koumbit uses the following Rakefile to generate the docs for their monorepo:
```
#require 'bundler/gem_tasks'

task :default do
  # nothing
  puts('no action')
end

task :doc do
  require 'puppet-strings/tasks/generate'
  # This doesn't seem to really process node files, but
  # an exclude of manifests/ might be interesting.
  Rake::Task['strings:generate'].invoke(
    # This list of included files was taken from
    # https://github.com/puppetlabs/puppet-strings#generating-documentation-with-puppet-strings
    # and should correspond to what puppet-strings does by default, but spanned
    # over all of the code directories in the control repos.
    # It's possible that some directories might include .rb files that were not
    # specified. We'll have to fix this if we ever encounter such an issue.
    '**/manifests/**/*.pp **/functions/**/*.pp **/types/**/*.pp **/tasks/**/*.pp **/lib/**/*.rb',
    'false',
    'false',
    'markdown'
  )
end

# Generate documentation only for manifests in site/
# This will help to verify if there's anything in our own code that's missing
# comments for documentation. The run will be faster and less noisy than when
# we generate everything.
# Note, though, that it will create an index only for things in site/
task :doc_site do
  require 'puppet-strings/tasks/generate'
  # This doesn't seem to really process node files, but
  # an exclude of manifests/ might be interesting.
  Rake::Task['strings:generate'].invoke(
    'site/**/*.pp site/**/*.rb',
    'false',
    'false',
    'markdown'
  )
end

task :doc_clean do
  system("rm -rf doc")
end

task :doc_upload, [:ftp_host, :ftp_port, :ftp_user, :ftp_pass, :ftp_dir] do |t, args|
  puts "lftp -e \"mirror -R doc #{args[:ftp_dir]}\" -u #{args[:ftp_user]},#{args[:ftp_pass]} -p #{args[:ftp_port]} #{args[:ftp_host]}"
  system("lftp -e \"mirror -R doc #{args[:ftp_dir]}; quit\" -u #{args[:ftp_user]},#{args[:ftp_pass]} -p #{args[:ftp_port]} #{args[:ftp_host]}")
end
```
Notice the two different jobs for `site` (private) and `modules` (public).

## automate installs
https://gitlab.torproject.org/tpo/tpa/team/-/issues/31239 (anarcat, updated 2024-01-19)

right now, installing machines is mostly a manual, or semi-manual process: we install debian, preferably with crypto, and then do stuff on top.
some of it is done by hand, some is done in puppet.
we should have a standardized install process that gives us a reproducible, identical install across platforms. then Puppet is what customizes the machine on top of that.
this ticket aims at documenting what we already have and where we could possibly go. this is one of the questions we answered "no" to in the "ops questionnaire" in legacy/trac#30881. see also the automated upgrade part in legacy/trac#31957.
When we started this work, the installer had this many manual steps:
* new-machine (common trunk): 14 steps
* new-machine-hetzner-robot: +43 steps (57 total)
* new-machine-hetzner-cloud: +21 steps (35 total)
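A sketch of how the common trunk could be reduced to a scripted sequence of named steps, leaving only the important questions for the operator; every step name and command below is a placeholder, not a real tool invocation:

```
# Sketch: the install "common trunk" as an ordered sequence of named steps.
# Every command here is a placeholder, not a real tool invocation.
def common_trunk(hostname: str) -> list:
    return [
        ("ldap-bootstrap", f"add-host-to-ldap {hostname}"),
        ("puppet-firewall", f"open-puppet-firewall {hostname}"),
        ("puppet-bootstrap", f"bootstrap-puppet {hostname}"),
        ("reboot", f"reboot {hostname}"),
        ("nagios", f"add-to-nagios {hostname}"),
    ]

steps = common_trunk("test-01.torproject.org")
print([name for name, _ in steps])
```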
Ideally, all this would be done through an automated process, or at least scripted so that only important questions (say "hostname" and "purpose") would be answered by an operator. The plan right now is to do this with [fabric](https://gitlab.torproject.org/tpo/tpa/team/-/wikis/howto/fabric/). This is a checklist of things to do to fully automate our known install processes:
* [ ] new-machine common trunk
1. [ ] add to spreadsheet (deprecate? see https://gitlab.torproject.org/tpo/tpa/team/-/issues/29816)
2. [ ] clone ~~tsa-misc~~ fabric-tasks repo (#41484)
3. [ ] LDAP bootstrap
4. [ ] open firewall on Puppet
5. [ ] bootstrap puppet
6. [ ] reboot
7. [ ] add to nagios (puppetize, see https://gitlab.torproject.org/tpo/tpa/team/-/issues/32901)
8. [ ] add to dnswl (skip?)
* [ ] new-machine-hetzner-robot remaining:
* [ ] order server (skip?)
* [ ] extra fingerprint from email (skip?)
* [ ] document root password in `tor-passwords` (skip: switch to Trocla instead: #33332)
* [ ] new-machine-mandos (#40096)
* [ ] reboot
* [x] new-machine-hetzner-robot automated with Fabric:
1. [x] SSH with fingerprint
2. [x] set hostname
3. [x] partition disks (with fai-setup-storage)
4. [x] install system (with grml-debootstrap)
5. [x] dropbear-initramfs setup (in a grml-debootstrap hook, could be moved to fabric)
6. [x] review crypttab configuration (skip)
7. [x] review network configuration (skipped, moved to new-machine)
8. [x] rebuilt initramfs (in a grml-debootstrap hook, could also be moved to fabric)
9. [x] unmount everything
10. [x] close everything
* [ ] new-machine-hetzner-cloud remaining:
* [ ] order server
* [ ] reboot in rescue
* [ ] export SSH keys
* [ ] reboot
* [ ] add to tor-passwords
* [ ] dropbear disk unlock
* [ ] new-machine-mandos (#40096)
* [ ] reverse DNS
* [ ] new-machine-hetzner-cloud to automate with Fabric:
* [x] SSH with fingerprint (implemented in fabric!)
* [ ] partition disks (with kpartx, but could be done with fai-setup-storage?)
* [ ] setup fstab (move to grml-debootstrap?)
* [x] setup /etc/hosts (skip, move to common trunk)
* [ ] figure out why we `touch etc/udev/rules.d/75-persistent-net-generator.rules`
* [x] setup /etc/network/interfaces (skip, move to common trunk?)
* [x] setup /etc/resolv.conf (skip, move to common trunk?)
* [ ] install some more base packages (merge with grml-debootstrap? installed packages are: `isc-dhcp-client locales-all net-tools iproute2 ifupdown dialog vim netbase udev psmisc usbutils pciutils iputils-ping telnet bind9-host cryptsetup systemd systemd-sysv initscripts kbd console-setup dropbear-initramfs busybox-static linux-image-amd64 grub2 ssh acpi-support-base lldpd libpam-systemd dbus cron logrotate rsyslog`)
* [ ] figure out `etc/initramfs-tools/scripts/init-premount/local-hetzner-default-gw` hack
* [ ] generate and set root and LUKS password (move to grml-debootstrap?)
* [x] setup unattended-upgrades (skip: moved to puppet)

Milestone: (next) cluster scaling

## add validation checks in puppet
https://gitlab.torproject.org/tpo/tpa/team/-/issues/31226 (anarcat, updated 2022-12-20)

we often do "YOLO" (You Only Live Once) commits in Puppet because of silly syntax errors and typos that could be caught by automated systems. even just a simple git hook checking for syntax errors in manifests would be an improvement, but we could also run tests and so on.

Milestone: Puppet CI

## audit account-keyring
https://gitlab.torproject.org/tpo/tpa/team/-/issues/31214 (anarcat, updated 2023-02-02)

Look at all the keys in account-keyring, for each key:
1. if the account is locked in LDAP, remove the key
2. if the key is expired, consider locking it in LDAP
Consider automating this, or at least make it so automation wouldn't be harder, see legacy/trac#29671.

## Use narrowly-scoped signing keys in instructions for using torproject apt repository
https://gitlab.torproject.org/tpo/tpa/team/-/issues/31032 (dkg, updated 2022-06-27)

https://2019.www.torproject.org/docs/debian.html.en engages in a number of suboptimal practices. In particular, it should not encourage users to use `apt-key add` with an OpenPGP certificate that is not expected to certify all repositories on the machine.
See https://wiki.debian.org/DebianRepository/UseThirdParty for reasonable guidance on setting up third party APT repositories.
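Following that guidance, an apt source using a dedicated keyring might look like this; the keyring path and suite name are illustrative:

```
# /etc/apt/sources.list.d/torproject.list (illustrative)
deb [signed-by=/usr/local/share/keyrings/tor-project-archive.gpg] https://deb.torproject.org/torproject.org bookworm main
```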
(at the very least: place the key someplace like `/usr/local/share/keyrings/tor-project-archive.gpg` and then use a `signed-by` directive in the apt repository configuration)

Assignee: weasel (Peter Palfrader)

## Ask holder of torproject.be to stop serving the zone
https://gitlab.torproject.org/tpo/tpa/team/-/issues/30672 (Linus Nordberg <linus@torproject.org>, updated 2022-04-07)

We've asked the holder of torproject.be to stop serving the zone in ticket legacy/trac#27951.
Tracking progress here.

## Ask holder of torproject.fr to stop serving the zone
https://gitlab.torproject.org/tpo/tpa/team/-/issues/30671 (Linus Nordberg <linus@torproject.org>, updated 2022-04-07)

We've asked the holder of torproject.fr to stop serving the zone in legacy/trac#27951.
Tracking progress here.

## improve inventory of hardware resources
https://gitlab.torproject.org/tpo/tpa/team/-/issues/30273 (anarcat, updated 2023-11-09)

We currently have a few hosting providers and locations where we have "stuff":
* virtual machines
* colocated servers
* Raspberry Pi under a desk
* routers
* "cloud" things (like AWS)
* test machines
* etc
TPO machines are currently documented in LDAP. But they are also in Puppet. And there's a spreadsheet (which we want to replace with something else, probably a grafana dashboard, in legacy/trac#29816). And there are many things (like AWS) which are not really tracked formally anywhere that I am aware of.
So this project is about establishing a clearer process to keep such an inventory. It should at least cover the following, TPO-managed infrastructure:
* physical servers
* virtual machines on those physical servers *or* on other cloud providers
Ideally, we would also have a unified view of this for all machines paid for by TPI, regardless of the team.
Each machine should have documentation on:
* remote console access or control panel
* cost
* location
* responsible team
* purpose
* age and lifecycle (see parent legacy/trac#29304)
The last bit is of course related to another problem, which is lifecycle management (see parent ticket legacy/trac#29304).
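Those per-machine fields map naturally onto a small record type; a sketch, with field names taken from the list above and all values invented:

```
from dataclasses import dataclass

@dataclass
class Machine:
    # Fields follow the documentation list above; values used later
    # are invented examples.
    name: str
    console: str        # remote console access or control panel
    cost: str
    location: str
    team: str           # responsible team
    purpose: str
    commissioned: int   # age/lifecycle: year put into service

inventory = [
    Machine("example-01.torproject.org", "https://console.example/",
            "EUR 40/month", "example colo", "TPA", "test machine", 2023),
]
print(inventory[0].name)
```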
A lot of that stuff is currently in LDAP and maybe it should just be added there. But I wonder if it would be useful to create another system (which might eventually supersede LDAP) that would be more flexible. If that process happened at all, we would first need to thoroughly document how hosts are integrated into LDAP and so on, of course.

Milestone: cleanup and publish the sysadmin codebase

## Write down a policy for dist.tpo
https://gitlab.torproject.org/tpo/tpa/team/-/issues/30268 (boklm, updated 2020-09-28)

As discussed in legacy/trac#30204, we should have a policy of what stays on dist, and what gets deleted.
We can put this policy on https://trac.torproject.org/projects/tor/wiki/org/operations/Infrastructure/dist.torproject.org.

## switch from our custom YAML implementation to Hiera
https://gitlab.torproject.org/tpo/tpa/team/-/issues/30020 (anarcat, updated 2023-10-18)

We currently use a custom-made YAML database for assigning roles to servers and other metadata. I started using Hiera for some hosts and it seems to be working well.
Hiera is officially supported in Puppet and shipped by default in Puppet 5 and later. It's the standard way of specifying metadata and class parameters for hosts. I suspect it covers most of our needs in terms of metadata and should cover most if not all of what we're currently doing with the YAML stuff in Puppet.
We should therefore switch to using Hiera instead of our homegrown solution.
This involves converting:
* [x] `if has_role('foo') { include foo }` into `classes: [ 'foo' ]` in hiera (DONE!)
* [x] the `$roles` array into Hiera (DONE!)
* [x] the `$localinfo` into Hiera (assuming all the data is there) (DONE!)
* [x] ~~hardcoded macros in the ferm module's `me.conf.erb` into exported resources (DONE, except for HOST_TPO)~~
* [x] ~~templates looping over `$allnodeinfo` into exported resources~~
* [x] ~~the `$nodeinfo` and `$allnodeinfo` arrays into Hiera (assuming we can switch from LDAP for host inventory)~~
* [ ] `./modules/torproject_org/misc/hoster.yaml`
* [x] `./modules/torproject_org/misc/local.yaml`
* [x] `./modules/ipsec/misc/config.yaml`
* [x] `./modules/roles/misc/static-components.yaml`
* [x] `./modules/roles/files/spec/spec-redirects.yaml`
Ideally, all YAML data should end up in the hiera/ directory somehow. This is the first step in making our repository public (#29387) but also using Hiera as a more elaborate inventory system (#30273).
The idea of switching from LDAP to Hiera for host inventory will definitely need to be evaluated more thoroughly before going ahead with that part of the conversion, but YAML stuff in Puppet should definitely be converted.
The general goal of this is both to allow for a better inventory system but also make it easier for people to get onboarded with Puppet. By using community standards like Hiera, we make it easier for new people to get familiar with the puppet infrastructures and do things meaningfully.
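For illustration, the `classes` mechanism from the checklist above looks roughly like this in a per-host Hiera data file; the hostname and class name are hypothetical:

```
# hiera/nodes/web-01.torproject.org.yaml (hypothetical)
classes:
  - roles::static_mirror
```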
Update: `get_roles()`, `has_role()`, `yamlinfo()` and `local.yaml` are *all* gone! The main chunks remaining are now `nodeinfo()`, `allnodeinfo()`, `$nodeinfo` and `hoster.yaml`. A plan has been laid out for that replacement below. Obviously, the ipsec, static components and redirects YAML files could use a transition into Hiera as well, but those are lower priority.

Milestone: cleanup and publish the sysadmin codebase

## clear out unowned files on servers
https://gitlab.torproject.org/tpo/tpa/team/-/issues/29987 (anarcat, updated 2022-04-07)

there is a significant number of unowned files on the servers. this is generally because a user was removed without the associated files being purged as well, but there are also odd corner cases like backup restores and so on.

In legacy/trac#29682, I have done the following Cumin run to find such files, expecting to find only problems with the Munin user/group I had just removed, but instead found many more cases, mostly (300,000) surrounding deleted users:
```
cumin -p 0 -b 5 --force -o txt '*' 'find / -ignore_readdir_race -path /proc -prune -nouser -o -nogroup' | tee unowned-files
```
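The same check can be expressed in Python, which may help when deciding what to do with the results; a sketch equivalent to find's `-nouser -o -nogroup`, Unix-only:

```
import grp
import os
import pwd

def find_unowned(root, known_uids=None, known_gids=None):
    """Walk root and yield paths whose uid or gid no longer maps to a
    known user or group, like find's -nouser / -nogroup."""
    if known_uids is None:
        known_uids = {p.pw_uid for p in pwd.getpwall()}
    if known_gids is None:
        known_gids = {g.gr_gid for g in grp.getgrall()}
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.lstat(path)
            except OSError:
                continue  # entry vanished, like -ignore_readdir_race
            if st.st_uid not in known_uids or st.st_gid not in known_gids:
                yield path
```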
Next step is to decide what to do with the leftover files and document this as part of the user retirement process.

Assignee: anarcat

## evaluate password management options
https://gitlab.torproject.org/tpo/tpa/team/-/issues/29677 (anarcat, updated 2024-03-15)

during the [org/meetings/2017Montreal/Notes/BusFactor](https://gitlab.torproject.org/legacy/trac/-/wikis/org/meetings/2017Montreal/Notes/BusFactor) session, one of the things that was discussed was the password management system that is (was?) stored in SVN. Specifically:
* We need a better password management solution than the one we have in corporate SVN right now.
* We should look over if the password's in this database should be rotated.
* Figure out if the passwords for paypal have been rotated by Jon et al and ensure that it will be put in the password database. We should also look into the "paypal dongle" or 2-step authentication?
I have some experience reviewing password managers, so I might be able to provide some advice here if someone expands on the requirements and problems with the current approach.
Here are the known password managers currently in use:
* TPA has a `tor-passwords` repository which uses [weasel's pwstore](https://github.com/weaselp/pwstore/)
* administration also stores passwords in SVN
* Puppet generates passwords on the fly using a puppet-specific token (this might get replaced by trocla eventually, see #30009)
* Tor browser team's "military-grade post-quantum encrypted point-to-point subspace transmission"
* each worker probably has their own individual password managers, brains, and post-it notes on screens (hopefully not!) which we don't exactly know about
Possible replacements:
* [password-store](https://www.passwordstore.org/) AKA `pass` AKA OpenPGP encrypted files in a git repository, replacement for pwstore
* [trocla](https://github.com/duritong/trocla) - already used in Puppet, see #30009
* [hiera-eyaml](https://github.com/voxpupuli/hiera-eyaml) - pluggable encryption for Hiera keys (includes optional GPG support, PKCS#7 by default)
* [arver](https://code.immerda.ch/immerda/apps/arver/) - "tool to manage luks devices and maintain the access of users"
* [rotx](https://rotx.dev/) - very new player, interesting cleanroom implementation
* [bitwarden](https://en.wikipedia.org/wiki/Bitwarden) - open core, client/server model, would be more fit as a organisation-wide service
Next steps:
* [x] replace pwstore with password-store (#41522)
* [x] replace hkdf() by trocla in Puppet (#30009)
* [ ] move root passwords to trocla (#33332)?
* [ ] move LUKS passwords to Arver or keep in pwstore?
* [ ] consider deploying an organisation-wide password manager (testing vaultwarden in #41541)

Assignee: anarcat; due 2024-02-15