Verified Commit c8d23b69 authored by Jérôme Charaoui

howto/upgrades: update postgresql procedure (team#41401)

parent b2010bc7
@@ -67,7 +67,13 @@ See the "conflicts resolution" section below for how to handle
dpkg --get-selections "*" > /var/backups/dpkg-selections-pre-bookworm.txt &&
debconf-get-selections > /var/backups/debconf-selections-pre-bookworm.txt
) &&
( puppet agent --test || true )&&
: lock down puppet-managed postgresql version &&
(
if jq -re '.resources[] | select(.type=="Class" and .title=="Profile::Postgresql") | .title' < /var/lib/puppet/client_data/catalog/$(hostname -f).json; then
echo "tpa_preupgrade_pg_version_lock: '$(/usr/share/postgresql-common/supported-versions)'" > /etc/facter/facts.d/tpa_preupgrade_pg_version_lock.yaml; fi
) &&
: pre-upgrade puppet run &&
( puppet agent --test || true ) &&
apt-mark showhold &&
dpkg --audit &&
echo look for dkms packages and make sure they are relevant, if not, purge. &&
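The lock-down snippet added above writes an external Facter fact, presumably consumed by the Puppet profile so the PostgreSQL packages stay on the pre-upgrade major version until the cluster itself is migrated. As an illustrative check only (not part of the procedure; on a bullseye host `supported-versions` typically prints `13`):

    # show which version the fact pins, and confirm the fact file was written;
    # the file should contain something like: tpa_preupgrade_pg_version_lock: '13'
    /usr/share/postgresql-common/supported-versions
    cat /etc/facter/facts.d/tpa_preupgrade_pg_version_lock.yaml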
@@ -149,7 +155,11 @@ See the "conflicts resolution" section below for how to handle
printf "End of Step 7\a\n" &&
shutdown -r +1 "bookworm upgrade step 7: removing old kernel image"
8. Post-upgrade checks:
8. PostgreSQL upgrade
If the server is hosting a PostgreSQL instance, see [#postgresql-upgrades](#postgresql-upgrades).
9. Post-upgrade cleanup:
export LC_ALL=C.UTF-8 &&
sudo ttyrec -a -e screen /var/log/upgrade-bookworm.ttyrec
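The rest of step 9 is collapsed in this hunk; per the note further down, this is also the step where obsolete packages (including the old PostgreSQL ones) end up being purged. A minimal sketch for listing purge candidates, assuming apt >= 2.0 pattern support (not part of the original checklist):

    # list installed packages that no longer exist in any configured repository
    # after the upgrade; review the output before purging anything
    apt list '?obsolete'
    # aptitude equivalent, if installed: aptitude search '~o'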
@@ -168,7 +178,7 @@ See the "conflicts resolution" section below for how to handle
echo review installed kernels: &&
dpkg -l 'linux-image*' | less &&
printf "End of Step 8\a\n" &&
shutdown -r +1 "bookworm upgrade step 8: testing reboots one final time"
shutdown -r +1 "bookworm upgrade step 9: testing reboots one final time"
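As a side note on the kernel review above, a quick way to confirm the machine came back up on the expected kernel (a hypothetical check, not part of the original checklist):

    # compare the running kernel against the installed kernel images
    uname -r
    dpkg -l 'linux-image*' | grep '^ii'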
IMPORTANT: make sure you test the services at this point, or at least
notify the admins responsible for the service so they do so. This will
@@ -207,15 +217,6 @@ WARNING: this section needs to be updated for bookworm.
## PostgreSQL upgrades
Note: *before* doing the entire major upgrade procedure, it is worth
considering upgrading PostgreSQL to "backports". There are no official
"Debian backports" of PostgreSQL, but there is an
<https://apt.postgresql.org/> repo which is *supposedly* compatible
with the official Debian packages. The only (currently known) problem
with that repo is that it doesn't use the tilde (`~`) version number,
so when you eventually do the major upgrade, you need to
manually upgrade those packages as well.
PostgreSQL is special and needs to be upgraded manually.
1. make a full backup of the old cluster:
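The backup command itself is collapsed out of this hunk; as a sketch, it presumably mirrors the invocation shown for the new cluster in step 7 further down, run from the backup storage host (`bungei`), with `meronense` as the example database server:

    # run on the backup storage host: take a one-off base backup of the old
    # cluster before anything is touched (same tooling as step 7 below)
    ssh -tt bungei.torproject.org 'sudo -u torbackup postgres-make-one-base-backup $(grep ^meronense.torproject.org $(which postgres-make-base-backups ))'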
@@ -265,9 +266,7 @@ PostgreSQL is special and needs to be upgraded manually.
else
pg_dropcluster --stop 15 main &&
pg_upgradecluster -m upgrade -k 13 main &&
for cluster in `ls /etc/postgresql/13/`; do
mv /etc/postgresql/13/$cluster/conf.d/* /etc/postgresql/15/$cluster/conf.d/
done
rm -f /etc/facter/facts.d/tpa_preupgrade_pg_version_lock.yaml
fi
Yes, that implies DESTROYING the *NEW* version but the point is we
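Once `pg_upgradecluster` has run, it is worth confirming that only the new cluster remains and that it is online; a quick check with the standard `postgresql-common` tooling (not part of the original procedure):

    # lists every cluster with version, port and status: expect a single
    # 15/main cluster reported as "online" and no 13/main entry left
    pg_lsclusters
    # optionally confirm the server answers queries on the new version
    sudo -u postgres psql -c 'SELECT version();'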
@@ -276,30 +275,13 @@ PostgreSQL is special and needs to be upgraded manually.
TODO: this whole procedure needs to be moved into fabric, for
sanity.
4. change the cluster target in the backup system, in `tor-puppet`,
for example:
--- a/modules/postgres/manifests/backup_source.pp
+++ b/modules/postgres/manifests/backup_source.pp
@@ -30,7 +30,7 @@ class postgres::backup_source {
# this block is to allow different cluster versions to be backed up,
# or to turn off backups on some hosts
case $::hostname {
- 'materculae': {
+ 'materculae', 'bacula-director-01': {
postgres::backup_cluster { $::hostname:
pg_version => '13',
}
... and run Puppet on the server and the storage server (currently
`bungei`). Update: this change should normally not be necessary as
we have version-specific logic now.
4. if services were stopped on step 3, restart them, e.g.:
service bacula-director start
5. change the postgres version in `tor-nagios` as well:
4. run puppet on the server and on the storage server to update backup
configuration files; this should also restart any services stopped at step 1
puppet agent --enable && pat
ssh bungei.torproject.org pat
6. change the postgres version in `tor-nagios` as well:
--- a/config/nagios-master.cfg
+++ b/config/nagios-master.cfg
@@ -313,22 +295,21 @@ PostgreSQL is special and needs to be upgraded manually.
# bacula storage
6. make a new full backup of the new cluster:
7. make a new full backup of the new cluster:
ssh -tt bungei.torproject.org 'sudo -u torbackup postgres-make-one-base-backup $(grep ^meronense.torproject.org $(which postgres-make-base-backups ))'
7. make sure you check for gaps in the write-ahead log, see
8. make sure you check for gaps in the write-ahead log, see
[tpo/tpa/team#40776](https://gitlab.torproject.org/tpo/tpa/team/-/issues/40776) for an example of that problem and [the
WAL-MISSING-AFTER PostgreSQL playbook](howto/postgresql#wal-missing-after) for recovery (a quick way to spot such a gap is sketched after this list).
8. once everything works okay, remove the old packages:
apt purge postgresql-13 postgresql-client-13
9. purge the old backups directory after 3 weeks:
ssh bungei.torproject.org "echo 'rm -r /srv/backups/pg/meronense-13/' | at now + 21day"
The old PostgreSQL packages will be automatically cleaned up and purged at step
9 of the general upgrade procedure.
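As referenced in step 8 above, a gap in the write-ahead log shows up as a missing segment in the otherwise strictly sequential WAL file names on the backup host. A rough sketch for eyeballing this (the archive path is hypothetical and needs adjusting to the actual layout under `/srv/backups/pg/`):

    # WAL segment names are 24 hex characters and must form a contiguous
    # sequence; look for jumps in the sorted listing (path is an example only)
    ls /srv/backups/pg/meronense/ | grep -E '^[0-9A-F]{24}' | sort | less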
It is also wise to read the [release notes](https://www.postgresql.org/docs/release/) for the relevant
release to see if there are any specific changes that are needed at
the application level, for service owners. In general, the above