Commit c82270cd, authored by anarcat
expand on possible solutions

## Goals
TODO: document requirements
### Must have
* high availability: continue serving content even if one (or a few?)
servers go down
* atomicity: the deployed content must be coherent
* high performance: should be able to saturate a gigabit link and
  withstand simple DDoS attacks
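
The atomicity requirement is commonly met with a symlink-swap deploy:
build the new content into a fresh directory, then atomically repoint a
`current` symlink at it. A minimal sketch (the function and path names
here are hypothetical, not part of the current system):

```python
import os


def deploy_atomically(new_release_dir, current_link):
    """Repoint ``current_link`` at ``new_release_dir`` in one step.

    On POSIX, os.rename() over an existing path is atomic, so clients
    always see either the old tree or the new one, never a mix.
    """
    tmp = current_link + ".new"   # staged symlink, in the same directory
    if os.path.lexists(tmp):      # clean up after a failed earlier run
        os.remove(tmp)
    os.symlink(new_release_dir, tmp)
    os.rename(tmp, current_link)  # the atomic swap
```

A webserver configured to serve out of `current/` then switches between
whole releases in a single step, which also makes rollbacks trivial
(repoint the symlink at the previous release directory).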
### Nice to have
* cache-busting: changes to a CSS or JavaScript file must be
propagated to the client reasonably quickly
* possibly host Debian and RPM package repositories
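
Cache-busting is typically done by embedding a short content hash in the
asset filename, so a changed CSS or JavaScript file gets a new URL and
caches pick it up immediately instead of waiting for a TTL. A
hypothetical sketch of the renaming step:

```python
import hashlib
import os


def hashed_name(path, length=8):
    """Return a cache-busting filename like ``style.3b2a1c9d.css``,
    derived from a truncated SHA-256 of the file's contents."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()[:length]
    base, ext = os.path.splitext(os.path.basename(path))
    return "{}.{}{}".format(base, digest, ext)
```

The build step would rename assets this way and rewrite references in
the HTML, so the unhashed pages can keep short cache lifetimes while the
hashed assets are cached effectively forever.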
### Non-Goals
* implement our own global content distribution network
## Approvals required
Should be approved by TPA.
## Proposed Solution
TODO: propose improvements to the current static mirror system.
The static mirror system certainly has its merits: it's flexible,
powerful, and provides a reasonably easy-to-deploy, highly available
service, at the cost of some obscurity, complexity, and high disk space
requirements.
It should be possible, however, to progressively replace parts or the
entirety of the system. A few brainstormed ideas:
* replace the **source** hosts with GitLab CI/runners
* get rid of the **master** host altogether? does it become GitLab Pages?
* replace the **mirror** hosts with the caching system?
* the **mirror** hosts could be replaced by the [cache
  system](cache). This would possibly require shifting the web service
  from the **mirror** to the **master**, or at least some significant
  re-architecture.
* the **source** hosts could be replaced by some parts of the [GitLab
  Pages](https://docs.gitlab.com/ee/administration/pages/) system. Unfortunately, that system relies on a custom
  webserver, but it might be possible to bypass it and directly
  access the on-disk files produced by the CI.
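
The first idea above (building the site with GitLab CI runners) could
look roughly like the following `.gitlab-ci.yml` fragment. This is a
hypothetical sketch: the `hugo` build step and the `main` branch name
are assumptions, though the `pages` job name and `public` artifact path
are the standard GitLab Pages conventions:

```yaml
# Hypothetical CI job replacing the static "source" host: build the
# site in a runner and publish the result as a Pages artifact.
pages:
  stage: deploy
  image: debian:stable
  script:
    - apt-get update && apt-get install -y hugo   # assumed site generator
    - hugo --destination public
  artifacts:
    paths:
      - public
  rules:
    - if: $CI_COMMIT_BRANCH == "main"             # assumed default branch
```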
One concern with using GitLab Pages is that it uses a custom webserver
(to get and issue TLS certificates for the custom domains).

It also assumes the existence of a shared filesystem to deploy
content. GitLab.com uses NFS to decouple the Pages host from the main
GitLab host; maybe we could use CephFS instead? In any case, it's a
little clunky and doesn't immediately fulfill the high availability
requirement.

The other downside of this approach is increased dependency on GitLab
for deployments.
Next steps:
1. check if the GitLab Pages subsystem provides atomic updates
2. see how GitLab Pages can be distributed to multiple hosts and how
   scalable it actually is, or whether we'll need to run the cache
   frontend in front of it
## Cost