Please be aware of the [known upstream issues](#issues) that affect those
diagnostics as well.

To obtain a list of projects sorted by space usage, log on to GitLab using an
account with administrative privileges and open the [Projects page](https://gitlab.torproject.org/admin/projects?sort=storage_size_desc)
sorted by `Largest repository`. The total space consumed by each project is
displayed, and clicking on a specific project shows a breakdown of how this
space is consumed by the project's different components (repository, LFS, CI
artifacts, etc.).

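The same ranking can also be pulled from the command line. A sketch using the
GitLab REST API and `jq` (the `order_by=storage_size` and `statistics=true`
parameters are only honored for administrators, and the `GITLAB_TOKEN`
variable name is an assumption):

```shell
# Sketch: list the 20 largest projects by storage via the GitLab API.
# GITLAB_TOKEN is assumed to hold an administrator's personal access token.
curl --silent --header "PRIVATE-TOKEN: $GITLAB_TOKEN" \
  "https://gitlab.torproject.org/api/v4/projects?order_by=storage_size&sort=desc&statistics=true&per_page=20" \
  | jq -r '.[] | "\(.statistics.storage_size)\t\(.path_with_namespace)"'
```

This can be handier than the admin UI when auditing many projects at once.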
If a project is consuming an unexpected amount of space for artifacts, the
scripts from the [tpo/tpa/gitlab-tools](https://gitlab.torproject.org/tpo/tpa/gitlab-tools)
project can be used to obtain a breakdown of the space used by job logs and
artifacts, per job or per pipeline. These scripts can also be used to manually
remove such data; see the [gitlab-tools README](https://gitlab.torproject.org/tpo/tpa/gitlab-tools/README.md).

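For a quick per-job view without installing those scripts, the jobs API can be
queried directly. A sketch (the `PROJECT_ID` and `GITLAB_TOKEN` variables are
placeholders) that sums the sizes reported in each job's `artifacts` list:

```shell
# Sketch: artifact bytes per job for one project, largest first.
# PROJECT_ID and GITLAB_TOKEN are placeholders to fill in.
curl --silent --header "PRIVATE-TOKEN: $GITLAB_TOKEN" \
  "https://gitlab.torproject.org/api/v4/projects/$PROJECT_ID/jobs?per_page=100" \
  | jq -r '.[] | "\([.artifacts[]?.size] | add // 0)\t\(.id)\t\(.name)"' \
  | sort -rn
```

Note that this only covers the first page of jobs; for full coverage and for
actual cleanup, the gitlab-tools scripts are the better tool.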
It's also possible to compile some CI artifact usage statistics directly on the
GitLab server. To see if expiration policies work (or if "kept" artifacts or
old `job.log` files are a problem), use this command (which takes a while to
run):

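A plausible shape for that command, assuming it is simply the `job.log`
variant below minus the `-name` filter (the output file name is a guess), run
from the artifacts storage directory:

```shell
# Assumed reconstruction: total disk usage of artifact files older than
# 14 days; run from the artifacts directory. The log file name is a guess.
find -mtime +14 -print0 | du --files0-from=- -c -h | tee find-mtime+14-du.log
```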
To limit this to `job.log`, of course, you can do:

    find -name "job.log" -mtime +14 -print0 | du --files0-from=- -c -h | tee find-mtime+14-joblog-du.log

To produce more specific usage statistics and perform cleanups of artifacts
using the GitLab API, use the [gitlab-artifact-vacuum][] script.

[gitlab-artifact-vacuum]: https://gitlab.torproject.org/tpo/tpa/gitlab-tools#gitlab-artifact-vacuum

## Disaster recovery

In case the entire GitLab machine is destroyed, a new server should be