# TPA issues

Feed of https://gitlab.torproject.org/groups/tpo/tpa/-/issues

## [#40966: order and ship servers for gnt-dal cluster in new datacenter](https://gitlab.torproject.org/tpo/tpa/team/-/issues/40966)

*anarcat · assigned to Jérôme Charaoui (lavamind@torproject.org) · milestone: trusted high performance cluster (gnt-dal migration) · due 2023-01-07 · updated 2023-01-09*

as per [TPA-RFC-43](https://gitlab.torproject.org/tpo/tpa/team/-/wikis/policy/tpa-rfc-43-cymru-migration-plan)

## [#40991: create milestones for the 2023 roadmap](https://gitlab.torproject.org/tpo/tpa/team/-/issues/40991)

*anarcat · assigned to anarcat · due 2023-01-15 · updated 2022-12-20*

see wiki-replica@e0b193d77325bd25d4bab3f7399dae4f304543be

## [#40892: TPA-RFC-46: enforce 2FA for TPA members](https://gitlab.torproject.org/tpo/tpa/team/-/issues/40892)

*anarcat · assigned to anarcat · due 2023-01-17 · updated 2023-01-17*

i would like us to enforce 2fa for the tpo/tpa group. there is a setting in this group here that says:

> All users in this group must set up two-factor authentication
there is also a field that sets a grace period (default 48h) to enforce it. i would like to push that button. maybe i should make an ~RFC but for now a ticket will do.
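For illustration, a minimal sketch of flipping that switch through the GitLab REST API instead of the web UI, assuming a personal access token with `api` scope (the token is a placeholder; the attribute names come from the public Groups API):

```python
import requests

# Sketch only: require 2FA on the tpo/tpa group, with the 48h grace period
# mentioned above. Not TPA's actual tooling.
resp = requests.put(
    "https://gitlab.torproject.org/api/v4/groups/tpo%2Ftpa",
    headers={"PRIVATE-TOKEN": "<token>"},  # placeholder
    data={
        "require_two_factor_authentication": True,
        "two_factor_grace_period": 48,  # hours, the UI default
    },
)
resp.raise_for_status()
```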
## [#40967: get access to the new colocation facility](https://gitlab.torproject.org/tpo/tpa/team/-/issues/40967)

*anarcat · assigned to anarcat · milestone: trusted high performance cluster (gnt-dal migration) · due 2023-01-19 · updated 2023-01-25*

as per [TPA-RFC-43](https://gitlab.torproject.org/tpo/tpa/team/-/wikis/policy/tpa-rfc-43-cymru-migration-plan), contract the new colocation facility, to:

1. [x] get credentials for OOB management
1. [x] get address to ship servers
1. [x] get emergency/support contact information

## [#41043: remove the chi-node-14-verylarge runner from the shared pool](https://gitlab.torproject.org/tpo/tpa/team/-/issues/41043)

*anarcat · assigned to Jérôme Charaoui (lavamind@torproject.org) · due 2023-01-31 · updated 2023-01-31*

we've had multiple cases of users abusing our runners (e.g. https://gitlab.torproject.org/tpo/tpa/team/-/issues/41032) which wouldn't be *that* bad if it wasn't blocking production for our users. i was under the impression that a single job wasn't supposed to block the runner, which is why it was acceptable to have it in the shared pool, but because this is blocking urgent production work for @mikeperry and others, we should, as a stopgap, remove it from the shared pool.
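One possible stopgap, sketched here for illustration only (runner ID and token are placeholders): pausing the runner through the Runners API takes it out of rotation without deleting it.

```python
import requests

# Illustrative: pause the runner so it stops picking up shared jobs.
resp = requests.put(
    "https://gitlab.torproject.org/api/v4/runners/<runner-id>",  # placeholder
    headers={"PRIVATE-TOKEN": "<admin-token>"},  # placeholder
    data={"paused": True},
)
resp.raise_for_status()
```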
/cc @lavamind @ahf

## [#41046: Deploy the bridge scanner](https://gitlab.torproject.org/tpo/tpa/team/-/issues/41046)

*juga · assigned to Jérôme Charaoui (lavamind@torproject.org) · due 2023-02-15 · updated 2023-03-10*

We need to deploy [onbasca](https://gitlab.torproject.org/tpo/network-health/onbasca), a bridge scanner that communicates with rdsys via Web and might replace bridgestrap in the future (https://gitlab.torproject.org/tpo/anti-censorship/rdsys/-/issues/150). It needs python, other python packages and postgres.
If deployed in a different VM than polyanthum, we might need to create a tunnel, because at the moment there's no authentication mechanism.
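A hedged sketch of that interim tunnel idea, assuming ssh access between the two hosts (port and host below are placeholders, not the actual rdsys/onbasca configuration):

```python
import subprocess

# Illustrative only: forward a local port to the service on polyanthum over
# ssh, so nothing unauthenticated is exposed on the network.
subprocess.run(
    ["ssh", "-N", "-L", "127.0.0.1:8000:127.0.0.1:8000",
     "polyanthum.torproject.org"],
    check=True,
)
```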
## [#40969: gnt-dal cluster physical setup and burn in](https://gitlab.torproject.org/tpo/tpa/team/-/issues/40969)

*anarcat · assigned to anarcat · milestone: trusted high performance cluster (gnt-dal migration) · due 2023-02-17 · updated 2023-02-16*

as per [TPA-RFC-43](https://gitlab.torproject.org/tpo/tpa/team/-/wikis/policy/tpa-rfc-43-cymru-migration-plan), once we have contact at the new colo (#40967), once the servers are ordered and shipped (#40966), we need to connect them together correctly and do a basic burn in again (boot stressant, see if everything works; a generic sketch follows the checklist below).

* [x] OOB access confirmation
* [x] BIOS password reset
* [x] BIOS setup
* [x] live system boot
* [x] dal-node-01 burn in
* [x] dal-node-02 burn in
* [x] dal-node-03 burn in
* [x] disk configuration problem (SSD disks not detected?)
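The sketch promised above: a generic burn-in pass, using stress-ng as a stand-in since the exact stressant invocation isn't given in this ticket.

```python
import subprocess

# Stand-in burn-in: hammer CPU, memory and disk for an hour and fail loudly
# on any error. stress-ng is illustrative; the ticket's actual tool is
# stressant.
subprocess.run(
    ["stress-ng", "--cpu", "0", "--vm", "2", "--hdd", "1",
     "--timeout", "1h", "--metrics-brief"],
    check=True,
)
```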
this does *not* include the base Debian install (#40970)

## [#41054: Add mentors to gsoc at torproject dot org if we get approved](https://gitlab.torproject.org/tpo/tpa/team/-/issues/41054)

*Gaba (gaba@torproject.org) · due 2023-02-23 · updated 2023-05-08*

If Tor gets accepted into the Google Summer of Code 2023 program, then we need to add the following people to the alias gsoc at torproject dot org: @hiro, @gk (already in the alias), @juga, @nickm, @diziet, @raya (and other co-mentors that raya will choose).
On **February 22nd** @smith will let you all know if we got accepted into the GSoC program.

## [#41079: Support for private GitLab Pages](https://gitlab.torproject.org/tpo/tpa/team/-/issues/41079)

*Silvio Rhatto · assigned to anarcat · due 2023-02-28 · updated 2023-02-27*

### Description
Right now it seems that Tor's GitLab instance does not have an option to have private GitLab Pages.

Having this feature would be handy for those like me who have a (still) private repository that needs a private GitLab static site.
### Steps to reproduce
1. Access a private repository on [Tor's GitLab](https://gitlab.torproject.org) and on [GitLab.com](https://gitlab.com).
2. Go to "Settings > General" and expand "Visibility, project features, permissions" on each repository.
### Current result
On GitLab.com we have an option for controlling Pages visibility:
![2023-02-16-10_21_12_799x145](/uploads/849cdf7ee3b41bb10ae14517da3abffb/2023-02-16-10_21_12_799x145.png)
On Tor's GitLab we don't.
### Expected result
Having an option for controlling Pages visibility on Tor's GitLab.
### Suggested fix
The [GitLab Pages access control](https://docs.gitlab.com/ee/user/project/pages/pages_access_control.html) docs say:
> [...] If you don’t see the toggle [Pages visibility] button, that means it isn’t enabled. Ask your administrator to [enable it][].
I could not check whether the GitLab version currently running at https://gitlab.torproject.org has this option, nor whether it runs the same version as GitLab.com (probably not), but it's possible that this feature is already available in recent GitLab versions.
I'm also unaware of the performance, labor and complexity impact of this request.
[enable it]: https://docs.gitlab.com/ee/administration/pages/index.html#access-control
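If the feature is there, it should also be visible per project through the API; a minimal sketch, assuming the standard `pages_access_level` project attribute (project path and token are placeholders):

```python
import requests

# Sketch: check whether Pages visibility is exposed on a given project.
# "private" restricts Pages to project members.
resp = requests.get(
    "https://gitlab.torproject.org/api/v4/projects/<namespace>%2F<project>",  # placeholder
    headers={"PRIVATE-TOKEN": "<token>"},  # placeholder
)
resp.raise_for_status()
print(resp.json().get("pages_access_level"))
```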
Thank you :)

## [#40970: gnt-dal cluster software setup](https://gitlab.torproject.org/tpo/tpa/team/-/issues/40970)

*anarcat · assigned to anarcat · milestone: trusted high performance cluster (gnt-dal migration) · due 2023-03-03 · updated 2023-03-02*

as per [TPA-RFC-43](https://gitlab.torproject.org/tpo/tpa/team/-/wikis/policy/tpa-rfc-43-cymru-migration-plan), once the servers are shipped (#40966) and burnt-in (#40969), we need to work on the base setup.
* [x] establish naming convention (gnt-dal? dal-node-XX?)
* [x] confirm public IP allocation for the new Ganeti cluster
* [x] establish reverse DNS delegation
* [x] basic Debian install
* [x] dal-node-01
* [x] dal-node-02
* [x] dal-node-03
* [x] mandos configuration (dal-node-01 done, needs to be tested)
* [x] VLAN configuration
* [x] establish private IP allocation for the backend network
* [x] cross connects verification
this plan is slightly reworked from TPA-RFC-43 to take into account VLANs and the base Debian install, which were somehow omitted from the original proposal.
this might be split in one ticket per machine. next step once this is done is the ganeti setup (#40971).

## [#41072: onionprobe exporter uses too much disk space](https://gitlab.torproject.org/tpo/tpa/team/-/issues/41072)

*anarcat · assigned to anarcat · due 2023-03-07 · updated 2023-03-15*

In #41070 we had a situation where the prometheus server was threatening to fill up its 160GB disk. part of the problem was the high cardinality of the systemd and logind components of the node exporter, which have been disabled, but we're still seeing high cardinality in another exporter, onionprobe.
It's unclear if we're still going to run out of disk space or when, so this is not an emergency *yet*, but it would be nice if the onionprobe folks (@rhatto) could look at this issue and see if we could reduce the cardinality in labels.
here's a part of the output of https://prometheus.torproject.org/classic/status (u: tor-guest, no password), copied here for convenience; a sketch of pulling the same report from the API follows the tables. you can see the `updated_at` label has a *lot* of instances. i suspect that's part of onionprobe, but haven't checked; it would be worth double-checking. you can definitely see it's rivaling the node exporter in terms of usage: the `job=onionprobe` pair appears almost as many times as `job=node`, and the latter runs on *every* TPA server (~100 machines). so that's a lot!
### Highest Cardinality Labels
| Name | Count |
|------|-------|
| updated_at | 50736 |
| hsdir | 1703 |
| **name** | 1594 |
| relname | 1018 |
| name | 659 |
| address | 411 |
| device | 326 |
| instance | 165 |
| grpc_method | 151 |
| endpoint | 149 |
### Highest Cardinality Metric Names
| Name | Count |
|------|-------|
| onion_service_descriptor_fetch_attempts | 6826 |
| onion_service_descriptor_reachable | 6798 |
| node_cpu_seconds_total | 6424 |
| onion_service_descriptor_latency | 6350 |
| onion_service_introduction_points_number | 6350 |
| onion_service_connection_attempts | 5984 |
| onion_service_reachable | 5984 |
| onion_service_status_code | 5864 |
| onion_service_latency | 5864 |
| node_scrape_collector_success | 3744 |
### Label Names With Highest Cumulative Label Value Length
| Name | Length |
|------|--------|
| updated_at | 856780 |
| hsdir | 88423 |
| **name** | 52941 |
| relname | 26462 |
| name | 19333 |
| filename | 11764 |
| address | 9867 |
| instance | 4997 |
| endpoint | 4923 |
| device | 3186 |
### Most Common Label Pairs
| Name | Count |
|------|-------|
| job=node | 98023 |
| alias=hetzner-nbg1-01.torproject.org | 52030 |
| job=onionprobe | 51473 |
| instance=hetzner-nbg1-01.torproject.org:9935 | 51473 |
| classes=role::undefined | 27916 |
| classes=role::ganeti::chi | 24791 |
| protocol=http | 24176 |
| port=80 | 24176 |
| classes=role::ganeti::fsn | 22911 |
| reachable=1 | 12214 |
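The tables above come from the status page; for reference, a minimal sketch of pulling the same cardinality report straight from Prometheus' TSDB status endpoint (the exact path on this instance is an assumption):

```python
import requests

# Sketch: fetch head-block cardinality statistics from the TSDB status API
# (available since Prometheus 2.15). Auth matches the note above.
resp = requests.get(
    "https://prometheus.torproject.org/api/v1/status/tsdb",  # path assumed
    auth=("tor-guest", ""),  # u: tor-guest, no password
    timeout=30,
)
resp.raise_for_status()
data = resp.json()["data"]

for section in ("labelValueCountByLabelName", "seriesCountByMetricName"):
    print(section)
    for entry in data[section]:
        print(f"  {entry['name']}: {entry['value']}")
```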
/cc @rhatto

## [#40971: gnt-dal cluster ganeti setup and burn-in](https://gitlab.torproject.org/tpo/tpa/team/-/issues/40971)

*anarcat · assigned to anarcat · milestone: trusted high performance cluster (gnt-dal migration) · due 2023-03-10 · updated 2023-03-23*

as per TPA-RFC-43, once we have contact at the new colo (#40967), the servers are ordered and shipped (#40966), burnt-in (#40969), and installed (#40970), we need to set up Ganeti on the new nodes.
1. [x] disk partitioning (RAID, LUKS, LVM...)
1. [x] install first node
1. [x] Ganeti cluster initialization (see the sketch after this list)
1. [x] install second node
1. [x] confirm DRBD networking and live migrations are operational
1. [x] VM migration "wet run" (try to migrate one VM from cymru and confirm it works)
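The sketch referenced above, for the cluster-initialization step: a hedged illustration assuming Ganeti 3.0's CLI, with flags and a cluster name that are placeholders rather than TPA's actual configuration.

```python
import subprocess

# Illustrative only: initialize the cluster on the first node, then verify.
subprocess.run(
    [
        "gnt-cluster", "init",
        "--enabled-hypervisors=kvm",
        "--enabled-disk-templates=drbd,plain",
        "gnt-dal.torproject.org",  # hypothetical cluster name
    ],
    check=True,
)
subprocess.run(["gnt-cluster", "verify"], check=True)
```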
once this is done, the cluster is operational and ready to accept VM migrations (#40972)

## [#40972: TPA-RFC-52: mass VM migration from gnt-chi to gnt-dal](https://gitlab.torproject.org/tpo/tpa/team/-/issues/40972)

*anarcat · assigned to anarcat · milestone: trusted high performance cluster (gnt-dal migration) · due 2023-03-24 · updated 2023-07-10*

as per TPA-RFC-43, once we have contact at the new colo (#40967), the servers are ordered and shipped (#40966), burnt-in (#40969), installed (#40970), and configured with ganeti (#40971), we need to migrate the entire gnt-chi cluster to it.
the plan is, roughly:
* [x] announce VM migration plan, see https://gitlab.torproject.org/tpo/tpa/team/-/wikis/policy/tpa-rfc-52-cymru-migration-timeline
* [x] mass VM migration setup (the [move-instance](https://docs.ganeti.org/docs/ganeti/3.0/html/move-instance.html) command), just missing patch deployment, probably debian packages; see the sketch after the VM list below
* [x] mass migration and renumbering, hopefully Monday 2023-03-20
VMs to migrate:
- [x] btcpayserver-02
- [x] ci-runner-x86-01, @lavamind has the idea of rebuilding the VM, but it was migrated to speed up the gnt-chi retirement and close this ticket.
- [x] dangerzone-01
- [x] gitlab-dev-01
- [x] metrics-psqlts-01
- [x] onionbalance-02
- [x] probetelemetry-01
- [x] rdsys-frontend-01
- [x] static-gitlab-shim
- [x] survey-01
- [x] tb-pkgstage-01
- [x] tb-tester-01
- [x] telegram-bot-01
- [x] test-01
- [x] tpa-bootstrap-01
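The sketch referenced above: a hedged illustration of the migration loop, assuming `move-instance` is already configured between the two clusters (cluster names are hypothetical, and real invocations need certificate and credential options omitted here):

```python
import subprocess

# Illustrative batch move of the VMs listed above, one at a time.
INSTANCES = [
    "btcpayserver-02", "ci-runner-x86-01", "dangerzone-01", "gitlab-dev-01",
    "metrics-psqlts-01", "onionbalance-02", "probetelemetry-01",
    "rdsys-frontend-01", "static-gitlab-shim", "survey-01", "tb-pkgstage-01",
    "tb-tester-01", "telegram-bot-01", "test-01", "tpa-bootstrap-01",
]

for vm in INSTANCES:
    subprocess.run(
        # hypothetical cluster names
        ["move-instance", "gnt-chi.torproject.org", "gnt-dal.torproject.org", vm],
        check=True,
    )
```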
once this is done, the gnt-chi cluster can be (almost) entirely retired, see #40973 and #40968.

## [#41106: make new mirrors in gnt-dal cluster (web-dal-07 and web-dal-08)](https://gitlab.torproject.org/tpo/tpa/team/-/issues/41106)

*anarcat · milestone: trusted high performance cluster (gnt-dal migration) · due 2023-04-05 · updated 2023-03-31*

in #40897, we decided to do an emergency migration of the Cymru web mirrors to OVH. but in #40971 we finally set up a cluster in an alternate location and in #40972 we migrated all VMs off of the Cymru server successfully.
Now is the time to decide what we do about the web-bhs-* mirrors. The plan was to create new VMs in the new cluster and shut down those at OVH, and I think it should be followed through. I probably won't have time to do this today, but it would be great if we could get to this next week.

## [#40968: ship chi-node-14 to new datacenter](https://gitlab.torproject.org/tpo/tpa/team/-/issues/40968)

*anarcat · assigned to anarcat · milestone: trusted high performance cluster (gnt-dal migration) · due 2023-04-05 · updated 2023-04-05*

as per [TPA-RFC-43](https://gitlab.torproject.org/tpo/tpa/team/-/wikis/policy/tpa-rfc-43-cymru-migration-plan), once we have contact at the new colo (#40967), ship chi-node-14 there, which involves:
1. [x] maintenance window announced to shadow people
1. [x] ~~server shutdown in preparation for shipping~~ looks like the server was shut down without prior warning
1. [x] server is shipped
1. [x] server arrives
1. [x] server is racked and connected
1. [x] server is renumbered and brought back online
1. [x] unpause chi-node-14 runner
1. [x] unpause chi-node-14-shadow runner
1. [x] end of the maintenance window

## [#41083: TPA-RFC-53: consider propagating 2FA everywhere, maybe at the April Tor Meeting](https://gitlab.torproject.org/tpo/tpa/team/-/issues/41083)

*anarcat · assigned to anarcat · due 2023-04-06 · updated 2023-05-23*

I've been meaning to do a training or work session or "EVERYONE GETS A YUBIKEY PARTY" thing for a while now. I don't know what it will look like. I don't quite know the requirements yet. But it feels like an opportunity.
/cc @shelikhoo @linus

## [#41119: brainstorm ideas for TPA in-person meeting](https://gitlab.torproject.org/tpo/tpa/team/-/issues/41119)

*anarcat · assigned to anarcat · due 2023-04-14 · updated 2023-09-14*

we'll have the chance to meet in person with a bunch of people, we should use it. we'll share "THE BAR" space with the ops team, but we can welcome other folks in our session as well.
Once settled, we should throw the results in https://nc.torproject.net/f/458264 (or the wiki? see also https://gitlab.torproject.org/tpo/team/-/wikis//2023-Tor-Meeting-Costa-Rica-Wiki#schedule)
I suggest we proceed by making one comment here per idea, and :+1: the ones we like, asynchronously.
/cc @gaba @lavamind @kez

## [#41111: retire web-bhs-* servers](https://gitlab.torproject.org/tpo/tpa/team/-/issues/41111)

*anarcat · assigned to Jérôme Charaoui (lavamind@torproject.org) · due 2023-04-14 · updated 2023-05-03*

now that we have new mirrors in the new Ganeti cluster (gnt-dal, #41106), we should (soon) be able to retire the web-bhs-* mirrors.
This must *not* be done before I approve of it, and absolutely not before Monday April 3rd.
1. [x] announcement (N/A)
2. [x] nagios
3. [x] retire the host in fabric
4. [x] remove from LDAP with `ldapvi`
5. [x] power-grep
6. [x] remove from tor-passwords
7. [x] remove from DNSwl (N/A)
8. [x] remove from docs
9. [x] remove from reverse DNS
10. [x] cancel servers with OVH

## [#41058: build a VPN / jumphost for the gnt-dal cluster](https://gitlab.torproject.org/tpo/tpa/team/-/issues/41058)

*anarcat · assigned to anarcat · milestone: trusted high performance cluster (gnt-dal migration) · due 2023-05-10 · updated 2023-04-18*

it seems our quintex PoP will require our own OOB network, as the one provided by upstream is not exactly to our liking. or, to be more specific, it's a combination of limitations in the BIOS (not possible to upload an image bigger than 1.44MB) and limitations in the VPN (not possible to serve files from the clients) that make booting a rescue system overly complicated.
furthermore, we feel this is a problem we often have to fix. bootstrapping and rescue on the cymru cluster was also hellish; having a remote box under our control would have facilitated this immensely.
@lavamind and i started working on a network design for this, documented in:
https://gitlab.torproject.org/tpo/tpa/team/-/wikis/howto/quintex#network-topology
basically, the idea is to have a dedicated machine hooked to the uplink, but also the OOB/IPMI/BIOS network *and* the internal management network (e.g. where DRBD lives), to offer PXE boot, since that shouldn't happen on the public uplink and *can't* happen on the OOB network. that can be done with a single network port with VLAN tagging, or (more simply) with three different ports on the device.
ideally, the device must be small to avoid any supplementary costs in rack space, and low power, to avoid costs in power. it should also be rugged to avoid requiring too much hardware maintenance (e.g. all solid state, ideally RAID-1).
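Since the design above hinges on the rescue box serving DHCP and iPXE on the management network, here is a minimal sketch of that piece; the subnet, addresses and file names are placeholders, not the actual dal-rescue-01 configuration:

```python
# Sketch: generate an isc-dhcp-server stanza that chain-loads an iPXE image
# over TFTP on the management VLAN. All values are hypothetical.
DHCPD_SNIPPET = """\
subnet 172.16.0.0 netmask 255.255.255.0 {
  range 172.16.0.100 172.16.0.200;
  next-server 172.16.0.1;   # dal-rescue-01, hypothetical address
  filename "ipxe.pxe";      # iPXE image served over TFTP
}
"""

with open("dhcpd.conf.snippet", "w") as f:
    f.write(DHCPD_SNIPPET)
```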
checklist:
- [x] hardware design
- [x] expense approval
- [x] hardware order
- [x] hardware shipped
- [x] machine name (`dal-rescue-01`?)
- [x] OS bootstrap
- [x] Puppet setup
- [x] ~~shoelaces install~~
- [x] DHCP server configuration
- [x] iPXE image build
- [x] grml image builds
- [x] DHCP / grml test boot on eth1
- [x] naming convention tweak (tpo/tpa/wiki-replica!39)
- [x] IP allocation
- [x] VLAN setup
- [x] label ports and dal-rescue-01
- [x] second dal-rescue-02 setup
- [x] label dal-rescue-02
- [x] ~~IP ACL~~ built-in firewall rules considered sufficient
- [ ] ~~ship dal-rescue-01~~
- [ ] ~~renumber iDRACs? (maybe split in another ticket?)~~ see #41135

## [#40973: gnt-chi cluster retirement](https://gitlab.torproject.org/tpo/tpa/team/-/issues/40973)

*anarcat · assigned to anarcat · milestone: trusted high performance cluster (gnt-dal migration) · due 2023-05-11 · updated 2023-10-04*

once we have done the mass VM migration from gnt-chi to gnt-dal (#40972), we should retire the old gnt-chi machines that are still around.
for each machine, we need to go through the `retire-a-host` procedure. special care should probably be taken with the SANs as well.
- [x] chi-node-01 (wiped at 67.84%)
- [x] chi-node-02 (wiped 100%)
- [x] chi-node-03 (wiped 100%)
- [x] chi-node-04 (wiped 100%)
- [x] ~~chi-node-05~~ (already done in #40738)
- [x] chi-node-06 (wiped 100%)
- [x] chi-node-07 (partially wiped)
- [x] chi-node-08 (partially wiped)
- [x] chi-node-09 (wiped 100%)
- [x] chi-node-10 (wiped 100%)
- [x] ~~chi-node-11~~ (already done in #41071)
- [x] chi-node-12 (wiped 100%)
- [x] chi-node-13 (wiped 100%)
- [x] ~~chi-node-14~~ (shipped to dallas, see #40968)
- [x] moly and peninsulare (see #29974)
each server will follow the normal retirement procedure except those steps which will be done in one batch:
* [x] nagios
* [x] power-grep
* [x] remove from tor-passwords
* [x] remove from DNSwl (N/A)
* [x] remove from docs
* [x] remove from reverse DNS
* [x] remove from racks (the wipe is done individually, but the unracking will be issued to cymru all at once). update: rack removal requested from cymru
SAN servers retirement:
- [x] chi-san-01
- [x] chi-san-02
- [x] chi-san-03
- [x] chi-san-04
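Finally, a purely illustrative sanity check for the batch steps above (the authoritative list is the `retire-a-host` procedure): confirm each retired node is really gone from DNS before the one-shot unracking request. Host names come from the lists above; everything else is hypothetical.

```python
import socket

# Hypothetical post-retirement check: a fully retired node should no longer
# resolve once removed from (reverse) DNS.
NODES = [
    f"chi-node-{n:02d}.torproject.org"
    for n in (1, 2, 3, 4, 6, 7, 8, 9, 10, 12, 13)
] + [f"chi-san-{n:02d}.torproject.org" for n in (1, 2, 3, 4)]

for host in NODES:
    try:
        socket.getaddrinfo(host, None)
        print(f"{host}: still resolves, not fully retired?")
    except socket.gaierror:
        print(f"{host}: gone from DNS")
```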