title: TPA-RFC-40: Cymru migration budget pre-approval
costs: 12k$/year hosting, 5-7 weeks staff
approval: TPA, accounting, ED
deadline: ASAP, accounting/ed: end of week/month
status: obsolete
discussion: https://gitlab.torproject.org/tpo/tpa/team/-/issues/40897
Summary: broadly approve the idea of buying three large servers to migrate services from Cymru to a trusted colocation facility. Hardware: 40k$ ± 5k$ amortized over 5-7 years, colocation fees: 600$/mth.
Note: this is a huge document. The executive summary is above; for more details on the proposals, jump to the "Proposal" section below. A copy of this document is available in the TPA wiki:
https://gitlab.torproject.org/tpo/tpa/team/-/wikis/policy/tpa-rfc-40-cymru-migration
Here's a table of contents as well:
- Background
- Proposal
- Costs
- Status
- References
Background
We have decided to move all services away from Team Cymru infrastructure.
This proposal discusses various alternatives which can be regrouped in three big classes:
- self-hosting: we own hardware (buy it or donated) and have someone set it up in a colo facility
- dedicated hosting: we rent hardware, someone else manages it to our spec
- cloud hosting: we don't bother with hardware at all and move everything into virtual machine hosting managed by someone else
Some services (web mirrors) were already moved (to OVH cloud) and might require a second move (back into an eventual new location). That's considered out of scope for now, but we do take those resources into account in the planning.
Inventory
gnt-chi
In the Ganeti (gnt-chi) cluster, we have 12 machines hosting about 17 virtual machines, of which 14 must absolutely be migrated.
Those machines count for:
- memory: 262GB used out of 474GB allocated to VMs, including 300GB for a single runner
- CPUs: 78 vcores allocated
- Disk: 800GB disk allocated on SAS disks, about 400GB allocated on the SAN
- SAN: basically 1TB used, mostly for the two mirrors
- a /24 of IP addresses
- unlimited gigabit
- 2 private VLANs for management and data
This does not include:
- shadow simulator: 40 cores + 1.5TB RAM (chi-node-14)
- moly: another server considered negligible in terms of hardware (3 small VMs, one to rebuild)
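For reference, the memory and CPU totals above can be re-derived from the detailed gnt-chi inventory in the appendix; a minimal sketch, assuming Ganeti's default output format (be/memory printed with a G suffix, as shown there):

gnt-instance list --no-headers -o be/vcpus,be/memory \
  | awk '{ vcpus += $1; sub(/G$/, "", $2); mem += $2 }
         END { printf "%d vCPUs, %dG RAM allocated\n", vcpus, mem }'

Run on a gnt-chi node, this should reproduce the 78 vCPUs and 474GB figures above.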
gnt-fsn
While we are not looking at replacing the existing gnt-fsn cluster, it's still worthwhile to look at the capacity and usage there, in case we need to replace that cluster as well, or grow the gnt-chi cluster to similar usage.
- gnt-fsn has 4x10TB + 1x5TB HDD and 8x1TB NVMe (after RAID), according to gnt-node list-storage, for a total of 45TB HDD and 8TB NVMe after RAID
- out of that, around 17TB is in use (basically: ssh fsn-node-02 gnt-node list-storage --no-headers | awk '{print $5}' | sed 's/T/G * 1000/;s/G/Gbyte/;s/$/ + /' | qalc), 13TB of which on HDD
- memory: ~500GB (8*62GB = 496GB), out of this 224GB is allocated
- cores: 48 (8*12 = 96 threads), out of this 107 vCPUs are allocated
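A rough equivalent of the storage-sum pipeline above, without qalc, summing the Used column (column 5, suffixed G or T; fsn-node-02 and the column layout are as shown in the gnt-fsn inventory appendix):

ssh fsn-node-02 gnt-node list-storage --no-headers \
  | awk '{ v = $5; unit = (v ~ /T$/) ? 1000 : 1; sub(/[GT]$/, "", v); used += v * unit }
         END { printf "%.1f TB used\n", used / 1000 }'

Against the appendix data this comes out to roughly the 17TB quoted above.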
Colocation specifications
These are the specifications we are looking for in a colocation provider:
- 4U rack space
- enough power to feed four machines: the three specified below and chi-node-14 (Dell PowerEdge R640)
- 1 or ideally 10gbit uplink, unlimited
- IPv4: /24, or at least a /27 in the short term
- IPv6: we currently only have a /64
- out of band access (IPMI or serial)
- rescue systems (e.g. PXE booting)
- remote hands SLA ("how long to replace a broken hard drive?")
- private VLANs
- ideally not in Europe (where we already have lots of resources)
Proposal
After evaluating the costs, TPA believes the infrastructure hosted at Cymru should be rebuilt in a new Ganeti cluster at a trusted colocation facility that is still to be determined.
This will require a significant capital expenditure (around 40,000$, still to be clarified) that could be subsidized. Amortized over 7 to 8 years, it is actually cheaper, per month, than moving to the cloud.
Migration labor costs are also smaller; we could be up and running in as little as two weeks of full time work. Lead time for server delivery and data transfers will prolong this significantly, with total migration times from 4 to 8 weeks.
The actual proposal here is, formally, to approve the acquisition of three physical servers, and the monthly cost of hosting them at a colocation facility.
The price breakdown is as follows:
- hardware: 40k$ ± 5k$, 8k$/year over 5 years, 6k$/year over 7 years, or about 500-700$/mth, most likely 600$/mth (about 6 years amortization)
- colo: 600$/mth (4U at 150$/mth)
- total: 1100-1300$/mth, most likely 1200$/mth
- labor: 5-7 weeks full time
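As a quick check of the amortization figures, using qalc as elsewhere in this document (plain division, ignoring interest and the ± 5k$ margin):

qalc '40000 / 5 / 12'   # ≈ 667$/mth over 5 years
qalc '40000 / 7 / 12'   # ≈ 476$/mth over 7 years
qalc '40000 / 6 / 12'   # ≈ 556$/mth over ~6 years, close to the 600$/mth figure used above

Adding the 600$/mth colocation fee gives the 1100-1300$/mth total.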
Scope
This proposal doesn't detail exactly how the migration will happen, or exactly where. That discussion happens in a subsequent RFC, TPA-RFC-43.
This proposal was established to quickly examine various ideas and validate a general direction with accounting and the executive director.
Goals
No must/nice/non-goals were actually set in this proposal, because it was established in a rush.
Risks
Costs
This is the least expensive option, but possibly riskier cost-wise in the long term, as a complete hardware failure could bring services down and require a costly replacement.
There's also a risk of extra labor required in migrating the services around. We believe the risk of migrating to the cloud or another hosted service is actually higher, however, because we wouldn't control the mechanics of the hosting as well as with the proposed colo providers.
In effect, we are betting that the cloud will not provide us with the cost savings it promises, because we have massive CPU/memory (shadow), and storage (GitLab, metrics, mirrors) requirements.
We may be miscalculating because we assume the worst-case scenario of full-time shadow simulations and CPU/memory usage; on the other hand, we haven't explicitly accounted for storage usage in the cloud solution, so we might be underestimating costs there as well.
Censorship and surveillance
There is a risk we might get censored more easily at a specialized provider than at a general hosting provider like Hetzner, Amazon, or OVH.
We balance that risk with the risk of increased surveillance and lack of trust in commercial providers.
If push comes to shove, we can still spin up mirrors or services in the cloud. And indeed, the anti-censorship and metrics teams are already doing so.
Costs
This section evaluates the cost of the three options, in broad terms. More specific estimates will be established as we go along. For now, this broad budget is the actual proposal, and the costs below should be considered details of it.
Self-hosting: ~12k$/year, 5-7 weeks
With this option, TPI buys hardware and has it shipped to a colocation facility (or has the colo buy and deploy the hardware).
A new Ganeti cluster is built from those machines, and the current virtual machines are mass-migrated to the new cluster.
The risk of this procedure is that the mass migration fails and the virtual machines need to be rebuilt from scratch, in which case the labor costs increase.
Hardware: ~10k/year
We would buy 3 big servers, each with:
- at least two NICs (one public, one internal), 10gbit
- 25k$ AMD Ryzen 64 cores, 512GB RAM, chassis, 20 bays (16 SATA, 4 NVMe)
- 2k$ 2xNVMe 1TB, 2 free slots
- 6k$ 6xSSD 2TB, 12 free slots
- hyper-convergent (i.e. we keep the current DRBD setup)
- total storage per node, post-RAID: 7TB (1TB NVMe + 6TB SSD)
- total per server: ~33kCAD or 25kUSD ± 5k$
- total for 3 servers: 75kUSD ± 15k$
- total capacity:
- CPUs 192 cores (384 threads)
- 1.5TB RAM
- 21TB storage, half of that for redundancy
We would amortize this expense over 7-8 years, so around 10k$/year for hardware, assuming we would buy something similar (but obviously probably better by then) every 7 to 8 years.
Updated server spec: 42kUSD, ~8k$/yr over 5 years, 6k$/yr for 7yrs
Here's a more precise quote established on 2022-10-06 by lavamind:
Based on the server builder at http://interpromicro.com, a supplier Riseup has used in the past. Here's what I was able to find out. We're able to cram our base requirements into a SuperMicro 1U package with the following specs:
- SuperMicro 1114CS-THR 1U
- AMD Milan (EPYC) 7713P 64C/128T @ 2.00Ghz 256M cache
- 512G DDR4 RAM (8x64G)
- 6x Intel S4510 1.92T SATA3 SSD
- 2x Intel DC P4610 1.60T NVMe SSD
- AOC NIC 2x10GbE SFP+
- Quote: 13,645.25$USD
For three such servers, we have:
- 192 cores, 384 threads
- 1536GB RAM (1.5TB)
- 34.56TB SSD storage (17TB after RAID-1)
- 9.6TB NVMe storage (4.8TB after RAID-1)
- Total: 40,936$USD
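A quick sanity check of those totals against the single-server quote, again with qalc (plain arithmetic, no volume discount assumed):

qalc '3 * 13645.25'   # ≈ 40,936$USD for three servers
qalc '3 * 8 * 64'     # = 1536GB RAM
qalc '3 * 6 * 1.92'   # = 34.56TB raw SATA SSD (~17TB after RAID-1)
qalc '3 * 2 * 1.60'   # = 9.6TB raw NVMe (4.8TB after RAID-1)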
At this price range we could likely afford to throw in a few extras:
- Double amount of RAM (1T total) +2,877
- Double SATA3 SSD capacity with 3.84T drives +2,040
- Double NVMe SSD capacity with 3.20T drives +814
- Switch to faster AMD Milan (EPYC) 75F3 32C/64T @ 2.95Ghz +186
There are also comparable 2U chassis with 3.5" drive bays, but since we use only 2.5" drives it doesn't make much sense unless we really want a system with 2 CPU sockets. Such a system would cost an additional ~6,000$USD depending on the model of CPU we end up choosing, bringing us closer to the initial ballpark number above.
Considering that the base build would have enough capacity to host both gnt-chi (800GB) and gnt-fsn (17TB, including 13TB on HDD and 4TB on NVMe), it seems like a sufficient build.
Note that none of this takes into account DRBD replication, but neither does the original specification, so that is abstracted away.
Actual quotes
We have established prices from three providers:
- Provider D: 35,334$ (48,480$CAD = 3 x 16,160$CAD for SuperMicro 1114CS-THR 1U, AMD Milan (EPYC) 7713P 64C/128T @ 2.00Ghz 256M cache, 512G DDR4 RAM, 6x 1.92T SATA3 SSD, 2x 1.60T NVMe SSD, NIC 2x10GbE SFP+)
- Provider E: 36,450$ (3 x 12,150$USD for Super 1114CS-TNR, AMD Milan 7713P-2.0Ghz/64C/128T, 512GB DDR4 RAM, 6x 1.92T SATA3 SSD, 2x 1.60T NVMe SSD, NIC 2x 10GB/SFP+)
- Provider F: 35,470$ (48,680$CAD = 3 x 16,226$CAD for Supermicro 1U AS-1114CS-TNR, Milan 7713P UP 64C/128T 2.0G 256M, 8x 64GB DDR4-3200 RAM, 6x Intel D3 S4520 1.92TB SSD, 2x Intel D7-P5520 1.92TB NVMe, NIC 2-port 10G SFP+)
Colocation: 600$/mth
Exact prices are still to be determined. The 150$/U/mth figure (900$/mth for 6U, 600$/mth for 4U) is from [this source][] (confidential). There's [another quote][] at 350$/U/mth (1,400$/mth) that was brought down to match the other.
See also this comment for other colo resources.
Actual quotes
We have established prices from three providers:
- Provider A: 600$/mth (4 x 150$ per 1U, discounted from 350$)
- Provider B: 900$/mth (4 x 225$ per 1U)
- Provider C: 2300$/mth (20 x a1.xlarge + 1 x r6g.12xlarge at Amazon AWS, public prices extracted from https://calculator.aws, includes hardware)
Initial setup: one week
Ganeti cluster setup costs:
Task | Estimate | Uncertainty | Total | Notes |
---|---|---|---|---|
Node setup | 3 days | low | 3.3d | 1 d / machine |
VLANs | 1 day | medium | 1.5d | could involve IPsec |
Cluster setup | 0.5 day | low | 0.6d | |
Total | 4.5 days | | 5.4d | |
This gets us a basic cluster setup, into which virtual machines can be imported (or created).
Batch migration: 1-2 weeks, worst case full rebuild (4-6w)
We assume each VM will take 30 minutes of work to migrate, which, if all goes well, means we can basically migrate all the machines in one day of work.
Task | Estimate | Uncertainty | Total | Notes |
---|---|---|---|---|
research and testing | 1 day | extreme | 5d | half a day of this already spent |
total VM migration time | 1 day | extreme | 5d | |
Total | 2 days | extreme | 10 days | |
It might take more time to do the actual transfers, but the assumption is the work can be done in parallel and therefore transfer rates are non-blocking. So that "day" of work would actually be spread over a week of time.
There is a lot of uncertainty in this estimate. It's possible the migration procedure doesn't work at all; in fact, it proved problematic in our first tests. Further testing showed it was possible to migrate a virtual machine, so we believe we will be able to streamline this process.
It's therefore possible that we could batch migrate everything in one fell swoop. We would then just have to do manual changes in LDAP and inside the VM to reset IP addresses.
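For reference, the batch migration of a single VM would look roughly like the sketch below, using Ganeti's export/import mechanism. The new cluster's node names are placeholders, the export directory depends on the cluster configuration, and the exact import flags still need to be confirmed by the testing mentioned above:

# on the old cluster: export the instance to chi-node-01's export directory
gnt-backup export -n chi-node-01 ci-runner-01.torproject.org

# copy the export to a node of the new cluster (export path may differ)
ssh chi-node-01 rsync -a /var/lib/ganeti/export/ci-runner-01.torproject.org/ \
    new-node-01.torproject.org:/var/lib/ganeti/export/ci-runner-01.torproject.org/

# on the new cluster: import the instance, keeping the DRBD disk template,
# then adjust LDAP and the in-VM network configuration as noted above
gnt-backup import -t drbd -n new-node-01:new-node-02 --src-node=new-node-01 \
    ci-runner-01.torproject.org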
Worst case: full rebuild, 3.5-4.5 weeks
The worst case here is a fall back to the full rebuild case that we computed for the cloud, below.
To this, we need to add a "VM bootstrap" cost. I'd say 1 hour per VM, medium uncertainty in Ganeti, so 1.5h per VM or ~22h (~3 days).
Dedicated hosting: 2-6k$/mth, 7+ weeks
In this scenario, we rent machines from a provider (probably a commercial provider). It's unclear whether we would be able to reproduce the Ganeti setup the way we need to, as we do not always get the private VLAN we need to set up the storage backend. At Hetzner, for example, this setup is proving costly and complex.
OVH cloud: 2.6k$/mth
The Scale 7 server seems like it could fit well for both simulations and general-purpose hosting:
- AMD Epyc 7763 - 64c/128t - 2.45GHz/3.5GHz
- 2x SSD SATA 480GB
- 512GB RAM
- 2× 1.92TB SSD NVMe + 2× 6TB HDD SATA Soft RAID
- 1Gbit/s unmetered and guaranteed
- 6Gbit/s local network
- on back order in the Americas
- 1,192.36$CAD/mth (871$USD) with a 12-month commitment
- total, for 3 servers: 3,677$CAD or 2,615$USD/mth
Data packet: 6k$/mth
Data Packet also has AMD EPYC machines, see their pricing page:
- AMD EPYC 7702P 64 Cores, 128 Threads, 2 GHz
- 2x 2TB NVMe
- 512GB RAM
- 1gbps unmetered
- 2,020$USD/mth
- Ashburn, Virginia
- total, for 3 servers: ~6,000$USD/mth
Scaleway: 3k$/mth
Scaleway also has EPYC machines, but only in Europe:
- 2x AMD EPYC 7532 32C/64T - 2.4 GHz
- 1024 GB RAM
- 2 x 1.92 TB NVMe
- Up to 1 Gbps
- €1,039.99/month
- only available in Europe
- total, for 3 servers: ~3000USD/mth
Migration costs: 7+ weeks
We haven't estimated the migration costs specifically for this scenario, but we assume they will be similar to the self-hosting scenario, at the upper end of the uncertainty margin.
Cloud hosting: 2-4k$/mth, 5-11 weeks
In this scenario, each virtual machine is moved to the cloud. It's unclear how that would happen exactly, which is the main reason behind the wide-ranging time estimates.
In general, large simulations seem costly in this environment as well, at least if we run them full time.
Hardware costs: 2k-4k$/mth
Let's assume we need at minimum 80 vcores and 300GB of memory, with 1TB of storage. This is likely an underestimation, as we don't have proper per-VM disk storage details; producing those would require more estimation effort than seems warranted.
Note that most providers do not offer virtual machines large enough for the Shadow simulations, or if they do, they are too costly (e.g. Amazon), with Scaleway being an exception.
Amazon: 2k$/mth
- 20x a1.xlarge (4 cores, 8GB memory) 998.78 USD/mth
- large runners are ridiculous: 1x r6g.12xlarge (48 CPUs, 384GB) 1317.39USD (!!)
Extracted from https://calculator.aws/.
OVH cloud: 1.2k$/mth, small shadow
- 20x "comfort" (4 cores, 8GB, 28CAD/mth) = 80 cores, 160GB RAM, 400USD/mth
- 2x r2-240 (16 cores, 240GB, 1.1399$CAD/h) = 32 cores, 480GB RAM, 820USD/mth
- cannot fully replace large runners, missing CPU cores
Gandi VPS: 600$/mth, no shadow
- 20xV-R8 (4 cores, 8GB, 30EUR/mth) = 80 cores, 160GB RAM, ~600USD/mth
- cannot replace large runners at all
Scaleway: 3500$/mth
- 20x GP1-XS, 4 vCPUs, 16 GB, NVMe Local Storage or Block Storage on demand, 500 Mbit/s, From €0.08/hour, 1110USD/mth
- 1x ENT1-2XL: 96 cores, 384 GB RAM, Block Storage backend, Up to 20 Gbit/s BW, From €3.36/hour, 2333$USD/mth
Infomaniak: 950$USD/mth, no shadow
https://www.infomaniak.com/en/hosting/dedicated-and-cloud-servers/cloud-server
- 20x 4-CPU cloud servers, 12GB RAM each, 100GB SSD, no caps, 49.00€/mth: 980€/mth, ~950$USD/mth
- max: 32 cores, 96GB RAM, 230.00€/mth
- cannot fully replace large runners, missing CPU cores and memory
Base setup: 1-5 weeks
This involves creating 15 virtual machines in the cloud, which means learning a new platform and bootstrapping new tools. It could involve things like Terraform, or click-click-click in a new dashboard? Full unknown.
Let's say 2 hours per machine, or 28 hours, which means 4 days of 7 hours of work; with extreme uncertainty that gets multiplied by five, giving about 5 weeks.
This might be an over-estimation.
Base VM bootstrap cost: 2-10 days
We estimate setting up a machine takes a base time of 1 hour per VM, with extreme uncertainty, which means 1-5 hours each, so 15-75 hours total, or 2 to 10 days.
Full rebuild: 3-4 weeks
In this scenario, we need to reinstall the virtual machines from scratch, as we cannot use the export/import procedures Ganeti provides us. It's possible we could use a more standard export mechanism in Ganeti and have that adapted to the cloud, but this would also take some research and development time.
machine | estimate | uncertainty | total | notes |
---|---|---|---|---|
btcpayserver-02 | 1 day | low | 1.1 | |
ci-runner-01 | 0.5 day | low | 0.55 | |
ci-runner-x86-05 | 0.5 day | low | 0.55 | |
dangerzone-01 | 0.5 day | low | 0.55 | |
gitlab-dev-01 | 1 day | low | 1.1 | optional |
metrics-psqlts-01 | 1 day | high | 2 | |
moria-haven-01 | N/A | | | to be retired |
onionbalance-02 | 0.5 day | low | 0.55 | |
probetelemetry-01 | 1 day | low | 1.1 | |
rdsys-frontend-01 | 1 day | low | 1.1 | |
static-gitlab-shim | 0.5 day | low | 0.55 | |
survey-01 | 0.5 day | low | 0.55 | |
tb-pkgstage-01 | 1 day | high | 2 | (unknown) |
tb-tester-01 | 1 day | high | 2 | (unknown) |
telegram-bot-01 | 1 day | low | 1.1 | |
web-chi-03 | N/A | | | to be retired |
web-chi-04 | N/A | | | to be retired |
fallax | 3 days | medium | 4.5 | |
build-x86-05 | N/A | | | to be retired |
build-x86-06 | N/A | | | to be retired |
Total | | | 19.3 | |
That's 15 VMs to migrate, 5 to be destroyed (total 20).
This is almost four weeks of full time work, generally low uncertainty. This could possibly be reduced to 14 days (about three weeks) if jobs are parallelized and if uncertainty around tb* machines is reduced.
Status
This proposal is currently in the obsolete state. It has been broadly accepted, but the details of the budget were not accurate enough and will be clarified in TPA-RFC-43.
References
See tpo/tpa/team#40897 for the discussion ticket.
gnt-chi detailed inventory
Hosted VMs
root@chi-node-01:~# gnt-instance list --no-headers -o name | sed 's/.torproject.org//'
btcpayserver-02
ci-runner-01
ci-runner-x86-05
dangerzone-01
gitlab-dev-01
metrics-psqlts-01
moria-haven-01
onionbalance-02
probetelemetry-01
rdsys-frontend-01
static-gitlab-shim
survey-01
tb-pkgstage-01
tb-tester-01
telegram-bot-01
web-chi-03
web-chi-04
root@chi-node-01:~# gnt-instance list --no-headers | wc -l
17
Resources used
root@chi-node-01:~# gnt-instance list -o name,be/vcpus,be/memory,disk_usage,disk_template
Instance ConfigVCPUs ConfigMaxMem DiskUsage Disk_template
btcpayserver-02.torproject.org 2 8.0G 82.4G drbd
ci-runner-01.torproject.org 8 64.0G 212.4G drbd
ci-runner-x86-05.torproject.org 30 300.0G 152.4G drbd
dangerzone-01.torproject.org 2 8.0G 12.2G drbd
gitlab-dev-01.torproject.org 2 8.0G 0M blockdev
metrics-psqlts-01.torproject.org 2 8.0G 32.4G drbd
moria-haven-01.torproject.org 2 8.0G 0M blockdev
onionbalance-02.torproject.org 2 2.0G 12.2G drbd
probetelemetry-01.torproject.org 8 4.0G 62.4G drbd
rdsys-frontend-01.torproject.org 2 8.0G 32.4G drbd
static-gitlab-shim.torproject.org 2 8.0G 32.4G drbd
survey-01.torproject.org 2 8.0G 32.4G drbd
tb-pkgstage-01.torproject.org 2 8.0G 112.4G drbd
tb-tester-01.torproject.org 2 8.0G 62.4G drbd
telegram-bot-01.torproject.org 2 8.0G 0M blockdev
web-chi-03.torproject.org 4 8.0G 0M blockdev
web-chi-04.torproject.org 4 8.0G 0M blockdev
root@chi-node-01:~# gnt-node list-storage | sort
Node Type Name Size Used Free Allocatable
chi-node-01.torproject.org lvm-vg vg_ganeti 464.7G 447.1G 17.6G Y
chi-node-02.torproject.org lvm-vg vg_ganeti 464.7G 387.1G 77.6G Y
chi-node-03.torproject.org lvm-vg vg_ganeti 464.7G 457.1G 7.6G Y
chi-node-04.torproject.org lvm-vg vg_ganeti 464.7G 104.6G 360.1G Y
chi-node-06.torproject.org lvm-vg vg_ganeti 464.7G 269.1G 195.6G Y
chi-node-07.torproject.org lvm-vg vg_ganeti 1.4T 239.1G 1.1T Y
chi-node-08.torproject.org lvm-vg vg_ganeti 464.7G 147.0G 317.7G Y
chi-node-09.torproject.org lvm-vg vg_ganeti 278.3G 275.8G 2.5G Y
chi-node-10.torproject.org lvm-vg vg_ganeti 278.3G 251.3G 27.0G Y
chi-node-11.torproject.org lvm-vg vg_ganeti 464.7G 283.6G 181.1G Y
SAN storage
root@chi-node-01:~# tpo-show-san-disks
Storage Array chi-san-01
|- Total Unconfigured Capacity (20.911 TB)
|- Disk Groups
| |- Disk Group 2 (RAID 5) (1,862.026 GB)
| | |- Virtual Disk web-chi-03 (500.000 GB)
| | |- Free Capacity (1,362.026 GB)
Storage Array chi-san-02
|- Total Unconfigured Capacity (21.820 TB)
|- Disk Groups
| |- Disk Group 1 (RAID 1) (1,852.026 GB)
| | |- Virtual Disk telegram-bot-01 (150.000 GB)
| | |- Free Capacity (1,702.026 GB)
| |- Disk Group 2 (RAID 1) (1,852.026 GB)
| | |- Virtual Disk gitlab-dev-01 (250.000 GB)
| | |- Free Capacity (1,602.026 GB)
| |- Disk Group moria-haven-01 (RAID 1) (1,852.026 GB)
| | |- Virtual Disk moria-haven-01 (1,024.000 GB)
| | |- Free Capacity (828.026 GB)
Storage Array chi-san-03
|- Total Unconfigured Capacity (32.729 TB)
|- Disk Groups
| |- Disk Group 0 (RAID 1) (1,665.726 GB)
| | |- Virtual Disk web-chi-04 (500.000 GB)
| | |- Free Capacity (1,165.726 GB)
moly inventory
instance | memory | vCPU | disk |
---|---|---|---|
fallax | 512MiB | 1 | 4GB |
build-x86-05 | 14GB | 6 | 90GB |
build-x86-06 | 14GB | 6 | 90GB |
gnt-fsn inventory
root@fsn-node-02:~# gnt-instance list -o name,be/vcpus,be/memory,disk_usage,disk_template
Instance ConfigVCPUs ConfigMaxMem DiskUsage Disk_template
alberti.torproject.org 2 4.0G 22.2G drbd
bacula-director-01.torproject.org 2 8.0G 262.4G drbd
carinatum.torproject.org 2 2.0G 12.2G drbd
check-01.torproject.org 4 4.0G 32.4G drbd
chives.torproject.org 1 1.0G 12.2G drbd
colchicifolium.torproject.org 4 16.0G 734.5G drbd
crm-ext-01.torproject.org 2 2.0G 24.2G drbd
crm-int-01.torproject.org 4 8.0G 164.4G drbd
cupani.torproject.org 2 2.0G 144.4G drbd
eugeni.torproject.org 2 4.0G 99.4G drbd
gayi.torproject.org 2 2.0G 74.4G drbd
gettor-01.torproject.org 2 1.0G 12.2G drbd
gitlab-02.torproject.org 8 16.0G 1.2T drbd
henryi.torproject.org 2 1.0G 32.4G drbd
loghost01.torproject.org 2 2.0G 61.4G drbd
majus.torproject.org 2 1.0G 32.4G drbd
materculae.torproject.org 2 8.0G 174.5G drbd
media-01.torproject.org 2 2.0G 312.4G drbd
meronense.torproject.org 4 16.0G 524.4G drbd
metrics-store-01.torproject.org 2 2.0G 312.4G drbd
neriniflorum.torproject.org 2 1.0G 12.2G drbd
nevii.torproject.org 2 1.0G 24.2G drbd
onionoo-backend-01.torproject.org 2 16.0G 72.4G drbd
onionoo-backend-02.torproject.org 2 16.0G 72.4G drbd
onionoo-frontend-01.torproject.org 4 4.0G 12.2G drbd
onionoo-frontend-02.torproject.org 4 4.0G 12.2G drbd
palmeri.torproject.org 2 1.0G 34.4G drbd
pauli.torproject.org 2 4.0G 22.2G drbd
perdulce.torproject.org 2 1.0G 524.4G drbd
polyanthum.torproject.org 2 4.0G 84.4G drbd
relay-01.torproject.org 2 8.0G 12.2G drbd
rude.torproject.org 2 2.0G 64.4G drbd
static-master-fsn.torproject.org 2 16.0G 832.5G drbd
staticiforme.torproject.org 4 6.0G 322.5G drbd
submit-01.torproject.org 2 4.0G 32.4G drbd
tb-build-01.torproject.org 8 16.0G 612.4G drbd
tbb-nightlies-master.torproject.org 2 2.0G 142.4G drbd
vineale.torproject.org 4 8.0G 124.4G drbd
web-fsn-01.torproject.org 2 4.0G 522.5G drbd
web-fsn-02.torproject.org 2 4.0G 522.5G drbd
root@fsn-node-02:~# gnt-node list-storage | sort
Node Type Name Size Used Free Allocatable
fsn-node-01.torproject.org lvm-vg vg_ganeti 893.1G 469.6G 423.5G Y
fsn-node-01.torproject.org lvm-vg vg_ganeti_hdd 9.1T 1.9T 7.2T Y
fsn-node-02.torproject.org lvm-vg vg_ganeti 893.1G 495.2G 397.9G Y
fsn-node-02.torproject.org lvm-vg vg_ganeti_hdd 9.1T 4.4T 4.7T Y
fsn-node-03.torproject.org lvm-vg vg_ganeti 893.6G 333.8G 559.8G Y
fsn-node-03.torproject.org lvm-vg vg_ganeti_hdd 9.1T 2.5T 6.6T Y
fsn-node-04.torproject.org lvm-vg vg_ganeti 893.6G 586.3G 307.3G Y
fsn-node-04.torproject.org lvm-vg vg_ganeti_hdd 9.1T 3.0T 6.1T Y
fsn-node-05.torproject.org lvm-vg vg_ganeti 893.6G 431.5G 462.1G Y
fsn-node-06.torproject.org lvm-vg vg_ganeti 893.6G 446.1G 447.5G Y
fsn-node-07.torproject.org lvm-vg vg_ganeti 893.6G 775.7G 117.9G Y
fsn-node-08.torproject.org lvm-vg vg_ganeti 893.6G 432.2G 461.4G Y
fsn-node-08.torproject.org lvm-vg vg_ganeti_hdd 5.5T 1.3T 4.1T Y