- To create a new box, follow [[new-machine-hetzner-robot]] but change
  the following settings:

  * Server: [PX62-NVMe](https://www.hetzner.com/dedicated-rootserver/px62-nvme?country=OTHER)
  * Location: `FSN1`
  * Operating system: Rescue
  * Additional drives: 2x10TB
  * Add in the comment form that the server needs to be in the same
    datacenter as the other machines (FSN1-DC13, but double-check)
- Make sure all nodes have the same LVM and network setup.  The network
  setup uses openvswitch; see `/etc/network/interfaces` on host
  `fsn-node-01` for reference, and the sketch after this list.

- Prepare all the nodes by configuring them in Puppet.  Nodes that are
  part of the fsn cluster should be in the `roles::ganeti::fsn` class.
  If you make a new cluster, create a new role and add the nodes to it.
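
As a rough illustration, an openvswitch setup in
`/etc/network/interfaces` looks something like the following sketch
(interface names and addresses are made up; the authoritative version
is the one deployed on `fsn-node-01`):

    allow-ovs br0
    iface br0 inet static
        ovs_type OVSBridge
        ovs_ports eno1
        address 192.0.2.10
        netmask 255.255.255.0
        gateway 192.0.2.1

    allow-br0 eno1
    iface eno1 inet manual
        ovs_bridge br0
        ovs_type OVSPort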

Note: we considered experimenting with the new AX line
([AX51-NVMe](https://www.hetzner.com/dedicated-rootserver/ax51-nvme?country=OTHER)) but in the past DSA had problems live-migrating (it
wouldn't immediately fail but there were "issues" after). So we might
need to [failover](http://docs.ganeti.org/ganeti/2.15/man/gnt-instance.html#failover) instead of migrate between those parts of the
cluster. There are also doubts that the Linux kernel supports those
shiny new processors at all: similar processors had [trouble booting
before Linux 5.5](https://www.phoronix.com/scan.php?page=news_item&px=Threadripper-3000-MCE-5.5-Fix) for example, so it might be worth waiting a
little before switching to that new platform, even if it's
cheaper. See the cluster configuration section below for a larger
discussion of CPU emulation.

## New cluster

To create the fsn master, we added fsngnt to DNS, then ran

    gnt-cluster init \
      --master-netdev vlan-gntbe \
      --vg-name vg_ganeti \
      --secondary-ip 172.30.135.1 \
      --enabled-hypervisors kvm \
      --nic-parameters link=br0,vlan=4000 \
      --mac-prefix 00:66:37 \
      --no-ssh-init \
      --no-etc-hosts \
      fsngnt.torproject.org

## Add a new node

We ran the following on fsn-node-01 to add the second node:

    gnt-node add \
      --secondary-ip 172.30.135.2 \
      --no-ssh-key-check \
      --no-node-setup \
      fsn-node-02.torproject.org

## Cluster configuration

These could probably be merged into the cluster init, but they are listed here to document what has been done:

    gnt-cluster modify --reserved-lvs vg_ganeti/root,vg_ganeti/swap
    gnt-cluster modify -H kvm:kernel_path=,initrd_path=,
    gnt-cluster modify -H kvm:security_model=pool
    gnt-cluster modify -H kvm:kvm_extra='-device virtio-rng-pci\,bus=pci.0\,addr=0x1e\,max-bytes=1024\,period=1000'
    gnt-cluster modify -H kvm:disk_cache=none
    gnt-cluster modify -H kvm:disk_discard=unmap
    gnt-cluster modify -H kvm:scsi_controller_type=virtio-scsi-pci
    gnt-cluster modify -H kvm:disk_type=scsi-hd
    gnt-cluster modify --uid-pool 4000-4019
    gnt-cluster modify --nic-parameters mode=openvswitch,link=br0,vlan=4000
    gnt-cluster modify -D drbd:c-plan-ahead=0,disk-custom='--c-plan-ahead 0'
    gnt-cluster modify -H kvm:migration_bandwidth=950
    gnt-cluster modify -H kvm:migration_downtime=500

Note that we might want to tweak the `cpu_type` parameter. By default,
QEMU emulates a generic CPU, which means a lot of work is emulated
that could be delegated to the host CPU instead. If we use
`kvm:cpu_type=host`, each node will tailor the emulated CPU to the
processor of the node it runs on. But that might make live migration
more brittle: VMs or processes can crash after a live migration
because of a slightly different configuration (microcode, CPU, kernel
and QEMU versions all play a role). So we need to find the lowest
common denominator among the CPU families in the cluster. The list of
families supported by QEMU varies between releases, but is visible
with:

    # qemu-system-x86_64 -cpu help
    Available CPUs:
    x86 486
    x86 Broadwell             Intel Core Processor (Broadwell)
    [...]
    x86 Skylake-Client        Intel Core Processor (Skylake)
    x86 Skylake-Client-IBRS   Intel Core Processor (Skylake, IBRS)
    x86 Skylake-Server        Intel Xeon Processor (Skylake)
    x86 Skylake-Server-IBRS   Intel Xeon Processor (Skylake, IBRS)
    [...]

The current PX62 line is based on the [Coffee Lake](https://en.wikipedia.org/wiki/Coffee_Lake) Intel
micro-architecture. The closest matching family would be
`Skylake-Server` or `Skylake-Server-IBRS`, [according to wikichip](https://en.wikichip.org/wiki/intel/microarchitectures/coffee_lake#Compiler_support).
Note that newer QEMU releases (4.2, currently in unstable) have more
supported features.
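
If we do pick a common family cluster-wide, the change would
presumably be a hypervisor parameter tweak along these lines (untested;
`Skylake-Server-IBRS` is only an assumption for the right common
denominator):

    gnt-cluster modify -H kvm:cpu_type=Skylake-Server-IBRS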

In that context, of course, supporting different CPU manufacturers
(say AMD vs Intel) is impractical: they will have totally different
families that are not compatible with each other. This will break live
migration, which can trigger crashes and problems in the migrated
virtual machines.

If there are problems live-migrating between machines, it is still
possible to "failover" (`gnt-instance failover` instead of `migrate`),
which shuts down the machine, fails over its disks, and starts it on
the other side. That's not such a big problem: we often need to reboot
the guests when we reboot the hosts anyway. But it does complicate
our work. Of course, it's also possible that live migration works fine
if *no* `cpu_type` at all is specified in the cluster, but that needs
to be verified.
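
For example, failing over a single instance by hand would look like
this (the instance name is just an illustration):

    gnt-instance failover test01.torproject.org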

Nodes could also be [grouped](http://docs.ganeti.org/ganeti/2.15/man/gnt-group.html) to limit (automated) live migration to a
subset of nodes.
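
A rough sketch of what that could look like, with a made-up group name
(see the [gnt-group manual](http://docs.ganeti.org/ganeti/2.15/man/gnt-group.html) before doing this for real):

    gnt-group add coffee-lake
    gnt-group assign-nodes coffee-lake fsn-node-01.torproject.org fsn-node-02.torproject.org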

References:

 * <https://dsa.debian.org/howto/install-ganeti/>
 * <https://qemu.weilnetz.de/doc/qemu-doc.html#recommendations_005fcpu_005fmodels_005fx86>

### Network configuration

IP allocation is managed by Ganeti through the `gnt-network(8)`
system. Say we have `192.0.2.0/24` reserved for the cluster, with
the host IP `192.0.2.100` and the gateway on `192.0.2.1`. You will
create this network with:

    gnt-network add --network 192.0.2.0/24 --gateway 192.0.2.1 --network6 2001:db8::/32 --gateway6 fe80::1 example-network

Then we associate the new network to the default node group:

    gnt-network connect --nic-parameters=link=br0,vlan=4000,mode=openvswitch example-network default

The arguments to `--nic-parameters` come from the values configured in
the cluster, above. The current values can be found with `gnt-cluster
info`.
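
To double-check the result, the network can also be inspected directly
(using the example network name from above):

    gnt-network list
    gnt-network info example-network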

TODO: create a private network.

## Listing instances and nodes

    gnt-instance list
    gnt-node list
    watch -n5 -d 'gnt-instance list -o pnode,name,be/vcpus,be/memory,status,disk_template  |  sort; echo; gnt-node list'

# Instance Operations

## Adding a new instance

This command creates a new guest, or "instance" in Ganeti's
vocabulary:

    gnt-instance add \
      -o debootstrap+buster \
      -t drbd --no-wait-for-sync \
      --disk 0:size=10G \
      --disk 1:size=2G,name=swap \
      --disk 2:size=20G \
      --disk 3:size=800G,vg=vg_ganeti_hdd \
      --backend-parameters memory=8g,vcpus=2 \
      --net 0:ip=pool,network=gnt-fsn \
      --no-name-check \
      --no-ip-check \
      static-master-fsn.torproject.org
TODO: the above doesn't include the private network configuration.

WARNING: there is a bug in `ganeti-instance-debootstrap` which
misconfigures `ping` (among other things), see [bug #31781](https://bugs.torproject.org/31781). It's
currently patched on the two known Ganeti nodes, but that patch might
disappear if there's a spurious upgrade of the package.

This configures the following:

 * redundant disks in a DRBD mirror; use `-t plain` instead of `-t drbd` for
   tests, as that avoids syncing the disks and speeds things up considerably
   (even with `--no-wait-for-sync` there are some operations that block on
   synced mirrors).  Only one node should then be provided as the argument
   to `--node`.
 * four disks: a 10GB root disk and a 20GB disk on the default VG (SSD),
   an 800GB disk on the HDD VG (`vg_ganeti_hdd`), and a 2GB swap device on
   the default VG.  If you don't specify a swap device, a 512MB swapfile
   is created at `/swapfile`.
 * 8GB of RAM with 2 virtual CPUs
 * an IP allocated from the public gnt-fsn pool:
   `gnt-instance add` will print the IPv4 address it picked to stdout.  The
   IPv6 address can be found in `/var/log/ganeti/os/` on the primary node
   of the instance, see below.
 * with the `static-master-fsn.torproject.org` hostname

To find the root password, ssh host key fingerprints, and the IPv6
address, run this **on the node where the instance was created**, for
example:

    egrep 'root password|configured eth0 with|SHA256' $(ls -tr /var/log/ganeti/os/* | tail -1) | grep -v $(hostname)

Note that you need to use the `--node` parameter to pick which machines
the instance ends up on; otherwise Ganeti will choose for you. Use, for
example, `--node fsn-node-01:fsn-node-02` to use `node-01` as primary
and `node-02` as secondary. It might be better to let the Ganeti
allocator do its job, since it will eventually do this during cluster
rebalancing anyway.

We copy root's authorized keys into the new instance, so you should be
able to log in with your token.  You will be required to change the
root password immediately.  Pick something nice and document it in
`tor-passwords`.

Also set reverse DNS for both IPv4 and IPv6 in [Hetzner's robot](https://robot.your-server.de/).

Then follow [[new-machine]].

## Adding and removing addresses on instances

Say you created an instance but forgot to assign a private IP. You can
still do so with:

    gnt-instance modify --net -1:add,ip=172.30.135.3,network=internal test01.torproject.org

TODO: the internal network hasn't been created yet.

## Destroying an instance

This totally deletes the instance, including all mirrors and
everything; be very careful with it:

    gnt-instance remove test01.torproject.org

## Accessing serial console

Our instances have a serial console enabled, starting in GRUB.  To access it, run

    gnt-instance console test01.torproject.org

To exit, use `^]` -- that is, Control and the closing square bracket (`]`).

## Disk operations (DRBD)

Instances should be set up using the DRBD backend, in which case you
should probably take a look at [[drbd]] if you have problems with
it. Ganeti handles most of the logic there, so that should generally
not be necessary.
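
If you do need to look at disk state by hand, something like this is a
reasonable starting point (the instance name is just an example):

    gnt-cluster verify-disks
    gnt-instance info test01.torproject.org
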
## Rebooting

Those hosts need special care, as we can accomplish zero-downtime
reboots on them. There's a script (`ganeti-reboot-cluster`) deployed
in the ganeti cluster that can be run on the master to migrate all
instances around and perform a clean reboot.

Such a reboot should be run interactively, inside a `tmux` or `screen`
session. It currently takes over 15 minutes to complete, but the
duration depends on the size of the cluster (in terms of core memory
usage).
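
In practice, that means something like this on the master (assuming
the script is in the `PATH`):

    tmux new-session -s reboot
    ganeti-reboot-cluster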

Once the reboot is completed, all instances might end up on a single
machine, and the cluster might need to be rebalanced. This is
automatically scheduled by the `ganeti-reboot-cluster` script and will
be done within 30 minutes of the reboot.

## Rebalancing a cluster

After a reboot or an outage, all instances might end up on the same
machine. This is normally handled by the reboot script, but it might
be desirable to do this by hand if there was a crash or another
special condition. This can easily be corrected with this command,
which will spread instances around the cluster to balance it:

    hbal -L -C -v -X

This will automatically move the instances around and rebalance the
cluster. Here's an example run on a small cluster:

    root@fsn-node-01:~# gnt-instance list
    Instance                          Hypervisor OS                 Primary_node               Status  Memory
    loghost01.torproject.org          kvm        debootstrap+buster fsn-node-02.torproject.org running   2.0G
    onionoo-backend-01.torproject.org kvm        debootstrap+buster fsn-node-02.torproject.org running  12.0G
    static-master-fsn.torproject.org  kvm        debootstrap+buster fsn-node-02.torproject.org running   8.0G
    web-fsn-01.torproject.org         kvm        debootstrap+buster fsn-node-02.torproject.org running   4.0G
    web-fsn-02.torproject.org         kvm        debootstrap+buster fsn-node-02.torproject.org running   4.0G
    root@fsn-node-01:~# hbal -L -X
    Loaded 2 nodes, 5 instances
    Group size 2 nodes, 5 instances
    Selected node group: default
    Initial check done: 0 bad nodes, 0 bad instances.
    Initial score: 8.45007519
    Trying to minimize the CV...
        1. onionoo-backend-01 fsn-node-02:fsn-node-01 => fsn-node-01:fsn-node-02   4.98124611 a=f
        2. loghost01          fsn-node-02:fsn-node-01 => fsn-node-01:fsn-node-02   1.78271883 a=f
    Cluster score improved from 8.45007519 to 1.78271883
    Solution length=2
    Got job IDs 16345
    Got job IDs 16346
    root@fsn-node-01:~# gnt-instance list
    Instance                          Hypervisor OS                 Primary_node               Status  Memory
    loghost01.torproject.org          kvm        debootstrap+buster fsn-node-01.torproject.org running   2.0G
    onionoo-backend-01.torproject.org kvm        debootstrap+buster fsn-node-01.torproject.org running  12.0G
    static-master-fsn.torproject.org  kvm        debootstrap+buster fsn-node-02.torproject.org running   8.0G
    web-fsn-01.torproject.org         kvm        debootstrap+buster fsn-node-02.torproject.org running   4.0G
    web-fsn-02.torproject.org         kvm        debootstrap+buster fsn-node-02.torproject.org running   4.0G