LVM cheat sheet

Caching

Create the VG consisting of two block devices (a slow one and a fast one)

apt install lvm2 &&
vg="vg_$(hostname)" &&
cat /proc/partitions &&
echo -n 'slow disk: ' && read slow &&
echo -n 'fast disk: ' && read fast &&
vgcreate "$vg" "$slow" "$fast"

Create the srv LV, but leave a few extents (around 50) free on the slow disk. (lvconvert needs this extra free space later; that's probably a bug.)

pvdisplay &&
echo -n "#extents: " && read extents &&
lvcreate -l "$extents" -n srv "$vg" "$slow"
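
If you prefer not to count extents by hand, the free extent count can be scraped from pvdisplay. A rough sketch, assuming the 50-extent reserve mentioned above (free_pe is a throwaway variable, not anything LVM defines):

# read the number of free extents on the slow PV, then leave 50 of them unallocated
free_pe="$(pvdisplay "$slow" | awk '/Free PE/ {print $3}')" &&
lvcreate -l "$((free_pe - 50))" -n srv "$vg" "$slow"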

The srv-cache-meta LV should be about 1/1000 the size of the srv-cache LV. (If it is slightly larger, that shouldn't hurt.)

lvcreate -L 100M -n srv-cache-meta "$vg" "$fast" &&
lvcreate -l '100%FREE' -n srv-cache "$vg" "$fast"
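
If you would rather derive the metadata size from the fast disk than hard-code 100M, something along these lines should work. This is only a sketch: it takes roughly 1/1000 of the fast PV's free space and enforces an 8M floor (commonly cited as the minimum for cache pool metadata); fast_mb and meta_mb are throwaway variables:

# free space on the fast PV, in whole MiB
fast_mb="$(pvs --noheadings --nosuffix --units m -o pv_free "$fast" | cut -d. -f1 | tr -d ' ')" &&
# reserve ~1/1000 of it for metadata, but never less than 8 MiB
meta_mb=$(( fast_mb / 1000 > 8 ? fast_mb / 1000 : 8 )) &&
lvcreate -L "${meta_mb}M" -n srv-cache-meta "$vg" "$fast" &&
lvcreate -l '100%FREE' -n srv-cache "$vg" "$fast"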

set up caching

lvconvert --type cache-pool --cachemode writethrough --poolmetadata "$vg"/srv-cache-meta "$vg"/srv-cache

lvconvert --type cache --cachepool "$vg"/srv-cache "$vg"/srv
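
To check that the conversion worked, list all LVs in the VG, including the hidden cache pool internals (the exact attribute flags vary with the LVM version):

lvs -a "$vg"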

Disabling / Recovering from a cache failure

If for some reason the cache LV is destroyed or lost (typically by naive operator error), it might be possible to restore the original LV functionality with:

lvconvert --uncache vg_colchicifolium/srv

Resizing

Assume we want to grow this partition to take the available free space in the PV:

root@vineale:/srv# lvs
  LV   VG         Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  srv  vg_vineale -wi-ao---- 35,00g                                                    
root@vineale:/srv# pvs
  PV         VG         Fmt  Attr PSize  PFree
  /dev/sdb   vg_vineale lvm2 a--  40,00g 5,00g
root@vineale:~# pvdisplay 
  --- Physical volume ---
  PV Name               /dev/sdb
  VG Name               vg_vineale
  PV Size               40,00 GiB / not usable 4,00 MiB
  Allocatable           yes 
  PE Size               4,00 MiB
  Total PE              10239
  Free PE               1279
  Allocated PE          8960
  PV UUID               CXKO15-Wze1-xY6y-rOO6-Tfzj-cDSs-V41mwe
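
In this example the PV has 1279 free extents of 4 MiB each, i.e. 1279 × 4 MiB ≈ 5 GiB of free space, which matches the 5,00g PFree reported by pvs above.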

extend the volume group

The procedures below assume there is free space in the volume group for the operation. If there isn't, you will need to add a disk: create a physical volume on it and extend the volume group. For example:

pvcreate /dev/md123
vgextend vg_vineale /dev/md123
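
Afterwards, vgs (or pvs) should show the additional free space in the volume group:

vgs vg_vineale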

If the underlying disk was grown magically without your intervention, which happens in virtual hosting environments, you can also just extend the physical volume:

pvresize /dev/sdb

Note that if there's an underlying crypto layer, it needs to be resized as well:

cryptsetup resize $DEVICE_LABEL

In this case, $DEVICE_LABEL is the mapping name from /etc/crypttab, not the underlying device name. For example, it would be /dev/mapper/crypt_sdb, not /dev/sdb.
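
When the physical volume sits on top of such a crypto mapping, the order matters: grow the mapping first, then the PV on top of it. A sketch, using the hypothetical crypt_sdb mapping from above:

# grow the crypto mapping to the new size of the underlying disk
cryptsetup resize crypt_sdb
# then grow the PV that lives on top of the mapping
pvresize /dev/mapper/crypt_sdb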

See also the upstream documentation.

online procedure (ext3 and later)

Online resizing has been possible ever since ext3 came out and is considered reliable enough for use. If you are unsure you can trust that procedure, or if you have an ext2 filesystem, do not use it; see the ext2 procedure below instead.

To resize the partition to take up all available free space, you should do the following:

  1. extend the partition, in case of a logical volume:

    lvextend vg_vineale/srv -L +5G

    This might miss some extents, however. You can use the extent notation to take up all free space instead:

    lvextend vg_vineale/srv -l +1279

    If the partition sits directly on disk, use parted's resizepart command or fdisk to resize that first.

    To resize to take all available free space:

    lvextend vg_vineale/srv -l '+100%FREE'
  2. resize the filesystem:

    resize2fs /dev/mapper/vg_vineale-srv

That's it! The resize2fs program automatically determines the size of the underlying "partition" (the logical volume, in most cases) and fixes the filesystem to fill the space.

Note that the resize process can take a while. Growing an active 20TB partition to 30TB took about 5 minutes, for example. The -p flag that could show progress only works in the "offline" procedure (below).
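
Once the resize returns, the new size should be visible both on the LV and on the filesystem (assuming the volume is mounted at /srv):

lvs vg_vineale/srv
df -h /srv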

offline procedure (ext2)

To resize the partition to take up all available free space, you should do the following:

  1. stop services and processes using the partition (will obviously vary):

    service apache2 stop
  2. unmount the filesystem:

    umount /srv
  3. check the filesystem:

    fsck -y -f /dev/mapper/vg_vineale-srv
  4. extend the filesystem using the extent notation to take up all available space:

    lvextend vg_vineale/srv -l +1279
  5. grow the filesystem (-p is for "show progress"):

    resize2fs -p /dev/mapper/vg_vineale-srv
  6. recheck the filesystem:

    fsck  -f -y /dev/mapper/vg_vineale-srv
  7. remount the filesystem and start processes:

    mount /srv
    service apache2 start

Shrinking

Shrinking the filesystem is also possible, but it is riskier. It is very important to reduce the size of the filesystem before reducing the logical volume, so the order of the steps below is critical. (A one-command alternative that enforces this order is sketched after the list.)

  1. unmount the filesystem:

    umount /mnt

    If that's not possible because the filesystem is in use, you'll need to stop the processes using it. If even that is not possible (for example when resizing /), you'll need to reboot into a separate operating system first.

  2. forcibly check the filesystem:

    e2fsck -fy /dev/sdX
  3. shrink the filesystem:

    resize2fs /dev/sdX 5G
  4. shrink the logical volume, to reduce to 5G:

    lvreduce -L 5G vg/mnt

    to reduce by 5G:

    lvreduce -L -5G vg/mnt

    WARNING: make sure the resulting size exactly matches the one given to resize2fs above. For more precision, resize2fs and lvreduce both accept sizes with an s suffix, in 512-byte sectors.

  5. tell ext to resize the filesystem again:

    resize2fs /dev/sdX

    NOTE: no size argument here!

  6. check the filesystem again:

    e2fsck -fy /dev/sdX
  7. if you want to resize the underlying device (for example, if this is LVM inside a virtual machine on top of another LVM), you can also shrink the parent logical volume, physical volume, and crypto device (if relevant) at this point.

    lvreduce -L 5G vg/hostname
    pvresize /dev/sdY
    cryptsetup resize DEVICE_LABEL

    WARNING: this last step has not been tested.
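
Alternatively, lvreduce can drive the filesystem shrink itself through fsadm, which keeps the filesystem and LV sizes in lockstep and should replace steps 3 to 5 above. A minimal sketch for the same 5G target (the filesystem still needs to be unmounted and checked):

lvreduce --resizefs -L 5G vg/mnt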

Creating

New logical volume with bind mount

Assume a situation where the root LV is full and we want to add a new volume and move part of the existing filesystem into it. We also assume there is free space available in the existing volume group.

Create the new LV; we'll call it srv since it will be mounted at /srv:

lvcreate -n srv -L 50G vg_ganeti
mkfs -t ext4 /dev/vg_ganeti/srv

Mount the new volume at a temporary location:

mkdir /mnt/srv
mount /dev/vg_ganeti/srv /mnt/srv

Then stop any services that might be using the data that we're about to move to its new home:

systemctl stop gitlab-runner docker

Move the data, recreate the mount point, and unmount the temporary location:

mv /var/lib/docker /mnt/srv
mkdir /var/lib/docker
umount /mnt/srv

Adjust /etc/fstab:

echo "UUID=$(blkid /dev/vg_ganeti/srv -o value -s UUID) /srv    ext4    defaults        1       2" >> /etc/fstab
echo "/srv/docker     /var/lib/docker none    bind    0       0" >> /etc/fstab

Reload systemd and local-fs.target to regenerate .mount services:

systemctl daemon-reload
systemctl reload local-fs.target
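
If the new mounts do not come up on their own, mounting them by hand from the fstab entries also works:

mount /srv
mount /var/lib/docker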

Verify that the new volume is mounted correctly and restart services:

findmnt
systemctl start docker gitlab-runner