# TPA issues
https://gitlab.torproject.org/groups/tpo/tpa/-/issues

## improve inventory of hardware resources
https://gitlab.torproject.org/tpo/tpa/team/-/issues/30273 (anarcat, 2023-11-09T15:38:05Z)

We currently have a few hosting providers and locations where we have "stuff":
* virtual machines
* colocated servers
* raspberry pi under desk
* routers
* "cloud" things (like AWS)
* test machines
* etc
TPO machines are currently documented in LDAP. But they are also in Puppet. And there's a spreadsheet (which we want to replace with something else, probably a grafana dashboard, in legacy/trac#29816). And there are many things (like AWS) which are not really tracked formally anywhere that I am aware of.
So this project is about establishing a clearer process to keep such an inventory. It should at least cover the following, TPO-managed infrastructure:
* physical servers
* virtual machines on those physical servers *or* on other cloud providers
Ideally, we would also have a unified view of this for all machines paid for by TPI, regardless of the team.
Each machine should have documentation on:
* remote console access or control panel
* cost
* location
* responsible team
* purpose
* age and lifecycle (see parent legacy/trac#29304)
The last bit is of course related to another problem, which is lifecycle management (see parent ticket legacy/trac#29304).
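For illustration, a per-machine record covering those fields might look like this (the format and all values are made up; whatever system we pick would define its own schema):

```yaml
# hypothetical inventory record, one per machine
gitlab-02.torproject.org:
  console: https://oob.example.com/gitlab-02  # remote console / control panel
  cost: 120 EUR/month
  location: hetzner-hel1
  team: TPA
  purpose: GitLab server
  installed: 2020-03
  eol: 2025-03  # lifecycle, see parent legacy/trac#29304
```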
A lot of that stuff is currently in LDAP, and maybe it should just be added there. But I wonder if it would be useful to create another system (which might eventually supersede LDAP) that would be more flexible. If that transition were to happen at all, we would first need to thoroughly document how hosts are integrated into LDAP and so on, of course.

[cleanup and publish the sysadmin codebase]

## consider retiring build boxes
https://gitlab.torproject.org/tpo/tpa/team/-/issues/40580 (anarcat, 2022-04-07T16:21:19Z)

in the jenkins retirement (#40218) we decided to keep a few build boxes with sbuild on them (three machines with `debian_build_box`, one of which is also a CI runner). even though we have retired Jenkins, which was their primary consumer, users like @weasel and @kez may still require those boxes for two use cases:
* @kez doesn't run Debian and might need a place to build random Debian packages not currently in GitLab (which could be fixed by moving those package builds inside GitLab)
* @weasel has a similar use case: although he obviously runs Debian, he also needs access to the ARM builder. it's unclear whether he still requires access to the build box in the long term, or why (sorry, my memory fails me here)

## Nginx/GitLab Prometheus exporter should not be hardcoded
https://gitlab.torproject.org/tpo/tpa/team/-/issues/40077 (anarcat, 2022-04-06T20:54:12Z)

Hi!
I have noticed the services were "degraded" on `gitlab-02` this morning. After digging a little, it seemed the `prometheus-nginx-exporter` server was failing:
```
Nov 3 15:41:16 gitlab-02/gitlab-02 prometheus-nginx-exporter[2129]: 2020/11/03 15:41:16 Could not create Nginx Client: Failed to create NginxClient: failed to parse response body "<!DOCTYPE html>\n<html class=\"devise-layout-html\">\n<head prefix=\"og: http://ogp.me/ns#\">\n<meta charset=\"utf-8\">\n<link as=\"style\" href=\"http://127.0.0.1:8080/assets/application-bf1ba5d5d3395adc5bad6f17cc3cb21b3fb29d3e3471a5b260e0bc5ec7a57bc4.css\" rel=\"preload\">\n<link as=\"style\" href
```
... basically, it was hitting the GitLab frontpage instead of the `stub_status` page.
To clear the Nagios warning, I have added an Nginx `location` block in a new, dedicated `server` block that does only that service. But then I realized that nothing was actually scraping the exporter: the main Prometheus server does not have data on the Nginx server running on gitlab-02.
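For reference, such a minimal `server`/`location` block could look like the following (the port is a placeholder, not necessarily what was deployed on gitlab-02):

```nginx
# dedicated vhost exposing only stub_status for the exporter
server {
    listen 127.0.0.1:8090;
    location /stub_status {
        stub_status;
        allow 127.0.0.1;
        deny all;
    }
}
```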
<del>Maybe I missed something, but it seems to me this data should be scraped on the prometheus server.</del> As @hiro pointed out in the comments, the target is being correctly scraped. But it's done by hardcoding the configuration on the Prometheus server, instead of using exported resources. We should still fix this and contribute our precious puppet code back upstream to the prometheus module, where it belongs...
The way this is typically done is by including a `profile` (e.g. `profile::prometheus::apache_exporter`) which configures a class from the 3rdparty Prometheus module (e.g. `prometheus::apache_exporter`). Except in this case, the third-party prometheus module doesn't support the nginx exporter we're using: it only supports the `vts` one.
So one of two things:
1. write our own prometheus exporter wrapper in Puppet for the exporter we're using (and, of course, ideally, contribute it back upstream), or;
2. just hack something together in `profile::prometheus::nginx_exporter` (which is basically the same, except we can't rely on sharing this with the community in the future)
We could start with hacking something together real quick (option 2) and move the code to the 3rdparty module (option 1) once we're happy with the result. In any case, we need something better than the current situation, because as things are, the prometheus nginx exporter just does nothing. It just sits there waiting for a prometheus scraper that never comes. It's pretty harmless, but then we don't get precious nginx metrics that could help us diagnose performance issues in the future.
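A minimal sketch of what option 2 could look like, assuming the Debian `prometheus-nginx-exporter` package (the defaults-file handling and flag name are assumptions that would need checking against the actual package):

```puppet
# hypothetical profile::prometheus::nginx_exporter (option 2 sketch)
class profile::prometheus::nginx_exporter (
  String $scrape_uri = 'http://127.0.0.1:8090/stub_status',
) {
  package { 'prometheus-nginx-exporter':
    ensure => installed,
  }
  # point the exporter at the stub_status endpoint instead of
  # the GitLab frontpage it was hitting by default
  file { '/etc/default/prometheus-nginx-exporter':
    content => "ARGS=\"-nginx.scrape-uri=${scrape_uri}\"\n",
    notify  => Service['prometheus-nginx-exporter'],
  }
  service { 'prometheus-nginx-exporter':
    ensure  => running,
    enable  => true,
    require => Package['prometheus-nginx-exporter'],
  }
}
```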
assigned to @hiro because she built this, and working with the upstream prometheus puppet module would be an important thing to learn. let me know if you want me to do it or need help with this!

## move root passwords to trocla?
https://gitlab.torproject.org/tpo/tpa/team/-/issues/33332 (anarcat, 2022-04-07T15:58:50Z)

one manual step of our install process is to initialize the root password and set it in the password manager. that manual step could be completely skipped if we just set the root password in trocla.

## automate/puppetize (or replace) mandos installation
https://gitlab.torproject.org/tpo/tpa/team/-/issues/40096 (anarcat, 2023-03-30T01:37:58Z)

we use Mandos to unlock servers' LUKS-encrypted partitions on boot, but the [setup is done manually](https://gitlab.torproject.org/tpo/tpa/team/-/wikis/howto/new-machine-mandos/). that is error-prone and slow; it's actually one of the slowest parts of our install procedure.
in #31239, we identified the following steps to get this ball rolling:
* [x] export/import firewall rules (in `roles::fde`)
* [ ] generate and export new LUKS key in Puppet
* [ ] import new key on mandos server
* [ ] rebuild initramfs
We should also consider alternatives to Mandos, if this "Puppetization" is too complicated. Off the top of my head, there are also:
* [arver](https://code.immerda.ch/immerda/apps/arver/)
* secure enclaves and secure boot, e.g. [this](https://blog.habets.se/2015/03/How-to-boot-an-encrypted-system-safely.html), [this](https://blog.dowhile0.org/2017/10/18/automatic-luks-volumes-unlocking-using-a-tpm2-chip/), [this](https://safeboot.dev/), [this](https://github.com/xmikos/cryptboot), or [mortar](https://github.com/noahbliss/mortar)
* [clevis](https://github.com/latchset/clevis) (in Debian since buster), [introduced by Red Hat as NBDE](https://www.redhat.com/en/blog/easier-way-manage-disk-decryption-boot-red-hat-enterprise-linux-75-using-nbde)

[cleanup and publish the sysadmin codebase]

## track and respond to email spam complaints systematically
https://gitlab.torproject.org/tpo/tpa/team/-/issues/40168 (anarcat, 2022-04-06T21:00:58Z)

Right now we get complaints about spam to postmaster@tpo but do not necessarily act on them. Worse, there might be places where we just don't get notifications because we have not registered with other providers' interfaces.
Some ideas:
* subscribe to <https://fbl.returnpath.net/>
* register on [Google's postmaster tools](https://gmail.com/postmaster/)
* try to figure out whatever is going on with Outlook (see https://gitlab.torproject.org/tpo/tpa/team/-/issues/33037#note_2725160)
* use some automation to measure feedback, for example [feedback-loop](https://git.autistici.org/ai3/tools/feedback-loop)
We already have improved our Prometheus metrics and Grafana dashboards as part of #33037, so there's already that, but work remains to be done to ensure we have good delivery.
This is part of the 2021 roadmap.

[improve mail services]

## add validation checks in puppet
https://gitlab.torproject.org/tpo/tpa/team/-/issues/31226 (anarcat, 2022-12-20T19:56:26Z)

we often do "YOLO" (You Only Live Once) commits in Puppet because of silly syntax errors and typos that could be caught by automated systems. even just a simple git hook checking for syntax errors in manifests would be an improvement, but we could also run tests and so on.

[Puppet CI]

## switch from our custom YAML implementation to Hiera
https://gitlab.torproject.org/tpo/tpa/team/-/issues/30020 (anarcat, 2023-10-18T15:50:45Z)

We currently use a custom-made YAML database for assigning roles to servers and other metadata. I started using Hiera for some hosts and it seems to be working well.
Hiera is officially supported in Puppet and shipped by default in Puppet 5 and later. It's the standard way of specifying metadata and class parameters for hosts. I suspect it covers most of our needs in terms of metadata and should cover most if not all of what we're currently doing with the YAML stuff in Puppet.
We should therefore switch to using Hiera instead of our homegrown solution.
This involves converting:
* [x] `if has_role('foo') { include foo }` into `classes: [ 'foo' ]` in hiera (DONE!)
* [x] the `$roles` array into Hiera (DONE!)
* [x] the `$localinfo` into Hiera (assuming all the data is there) (DONE!)
* [x] ~~hardcoded macros in the ferm module's `me.conf.erb` into exported resources (DONE, except for HOST_TPO)~~
* [x] ~~templates looping over `$allnodeinfo` into exported resources~~
* [x] ~~the `$nodeinfo` and `$allnodeinfo` arrays into Hiera (assuming we can switch from LDAP for host inventory)~~
* [ ] `./modules/torproject_org/misc/hoster.yaml`
* [x] `./modules/torproject_org/misc/local.yaml`
* [x] `./modules/ipsec/misc/config.yaml`
* [x] `./modules/roles/misc/static-components.yaml`
* [x] `./modules/roles/files/spec/spec-redirects.yaml`
Ideally, all YAML data should end up in the hiera/ directory somehow. This is the first step in making our repository public (#29387), and also a step toward using Hiera as a more elaborate inventory system (#30273).
The idea of switching from LDAP to Hiera for host inventory will definitely need to be evaluated more thoroughly before going ahead with that part of the conversion, but YAML stuff in Puppet should definitely be converted.
The general goal of this is both to allow for a better inventory system and to make it easier for people to get onboarded with Puppet. By using community standards like Hiera, we make it easier for new people to get familiar with the Puppet infrastructure and do things meaningfully.
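As an illustration of the first conversion item, the standard Hiera pattern assigns classes as data in a per-node file (host and role names here are hypothetical, not our actual files):

```yaml
# hiera/nodes/build-x86-05.torproject.org.yaml (hypothetical example)
classes:
  - roles::buildbox
```

with `site.pp` then doing something like `lookup('classes', Array[String], 'unique', []).include` to pull in the listed classes.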
Update: `get_roles()`, `has_role()`, `yamlinfo()` and `local.yaml` are *all* gone! The main chunks remaining are now `nodeinfo()`, `allnodeinfo()`, `$nodeinfo` and `hoster.yaml`. A plan has been laid out for that replacement below. Obviously, the ipsec, static components and redirects YAML files could use a transition into Hiera as well, but those are lower priority.

[cleanup and publish the sysadmin codebase]

## audit access permissions in rt.torproject.org
https://gitlab.torproject.org/tpo/tpa/team/-/issues/34036 (anarcat, 2021-06-16T20:57:01Z)

there are a lot of users in rt, some of whom probably do not belong there:
https://rt.torproject.org/Admin/Users/
we need to perform an audit of who has access to RT and to which queues, and clean all that up.
ideally, users shouldn't be granted individual access to things; they should only be part of groups which, in turn, have the required access.
users should also be added/removed properly as part of the onboarding/offboarding procedures, but that's a question for legacy/trac#32519. for now, this ticket is just about playing catch-up.

Assignee: Jérôme Charaoui <lavamind@torproject.org>

## Send commits to mailing list(s)
https://gitlab.torproject.org/tpo/tpa/gitlab/-/issues/71 (Alexander Færøy <ahf@torproject.org>, 2022-10-31T14:01:56Z)

The browser folks want us to enable commit emails from fenix and other Tor Browser related repositories to their commit mailing list. We should find a way to do this in a structured way for the tpo/ namespace such that all our projects (also upcoming ones) get these hooks enabled.
For now, we need to get Fenix and Tor-Browser.

## Allowing pushing to protected branches enables the Merge button on MRs
https://gitlab.torproject.org/tpo/tpa/gitlab/-/issues/50 (Georg Koppen, 2022-05-30T19:08:43Z)

Previously (for some value of it) "Allowed to merge" controlled which users received a green Merge button on an MR, and "Allowed to push" controlled which users were allowed to push via `git push` to protected branches.
I think that worked reasonably, in the sense that we could not mistakenly merge to GitLab branches that were protected but that the torproject-pusher could sync over from git.tpo.
However, now it seems that the "Allowed to push" option is controlling the Merge button, too. Even if no one is allowed to merge on protected branches, I get the green Merge button on MRs as long as "Allowed to push" is enabled.
@sysrqb for notice, as we talked about that problem earlier.

## publish HTML documentation of our puppet source
https://gitlab.torproject.org/tpo/tpa/team/-/issues/31633 (anarcat, 2022-04-06T20:54:12Z)

there are ways of generating HTML versions of Puppet source code, based on the docstrings littering the source code. I've done some tentative runs of this and it looks ... interesting. the utility of this is currently limited by the fact that only 35% of the source is documented, according to `puppet strings`, but I figured I would document the efforts I've done so far already.
Koumbit uses the following Rakefile to generate the docs for their monorepo:
```ruby
# require 'bundler/gem_tasks'

task :default do
  # nothing
  puts('no action')
end

task :doc do
  require 'puppet-strings/tasks/generate'
  # This doesn't seem to really process node files, but
  # an exclude of manifests/ might be interesting.
  Rake::Task['strings:generate'].invoke(
    # This list of included files was taken from
    # https://github.com/puppetlabs/puppet-strings#generating-documentation-with-puppet-strings
    # and should correspond to what puppet-strings does by default, but spanned
    # over all of the code directories in the control repos.
    # It's possible that some directories might include .rb files that were not
    # specified. We'll have to fix this if we ever encounter such an issue.
    '**/manifests/**/*.pp **/functions/**/*.pp **/types/**/*.pp **/tasks/**/*.pp **/lib/**/*.rb',
    'false',
    'false',
    'markdown'
  )
end

# Generate documentation only for manifests in site/
# This will help to verify if there's anything in our own code that's missing
# comments for documentation. The run will be faster and less noisy than when
# we generate everything.
# Note, though, that it will create an index only for things in site/
task :doc_site do
  require 'puppet-strings/tasks/generate'
  # This doesn't seem to really process node files, but
  # an exclude of manifests/ might be interesting.
  Rake::Task['strings:generate'].invoke(
    'site/**/*.pp site/**/*.rb',
    'false',
    'false',
    'markdown'
  )
end

task :doc_clean do
  system("rm -rf doc")
end

task :doc_upload, [:ftp_host, :ftp_port, :ftp_user, :ftp_pass, :ftp_dir] do |t, args|
  puts "lftp -e \"mirror -R doc #{args[:ftp_dir]}\" -u #{args[:ftp_user]},#{args[:ftp_pass]} -p #{args[:ftp_port]} #{args[:ftp_host]}"
  system("lftp -e \"mirror -R doc #{args[:ftp_dir]}; quit\" -u #{args[:ftp_user]},#{args[:ftp_pass]} -p #{args[:ftp_port]} #{args[:ftp_host]}")
end
```
Notice the two different jobs for `site` (private) and `modules` (public).

## puppet: replace dsa_systemd with camptocamp systemd module
https://gitlab.torproject.org/tpo/tpa/team/-/issues/33449 (anarcat, 2023-12-19T20:23:01Z)

we currently have two systemd modules in Puppet: dsa_systemd (from the Debian sysadmins) and [camptocamp-systemd](https://github.com/camptocamp/puppet-systemd/), from the Puppet Forge.
the latter was imported as a dependency of the Prometheus module and it would be very hard to remove it from our codebase.
the latter was imported as a dependency of the Prometheus module and it would be very hard to remove it from our codebase.
we should look at whether we can replace the dsa_systemd module with the forge systemd module instead. this would allow us to collaborate with a broader community and remove duplicate code from our monorepo.
ideally, we'd also provide the good DSA folks a procedure on how to perform the migration, since we'll have to do it anyways.
so far, I've found this transition:
```
dsa_systemd::linger { 'bridgescan': }
```
... becomes:
```
loginctl_user { 'tordnsel':
linger => enabled,
}
```
we also use:
- [x] `dsa_systemd::override` (to replace with `systemd::dropin_file` with a possible notify)
- [ ] `dsa_systemd::mask`
- [ ] `dsa_systemd` class which deploys two `mask` resources, and a cleanup cronjob, to be reviewed
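For the `dsa_systemd::override` case, the camptocamp equivalent would presumably be along these lines (the unit and drop-in contents here are hypothetical examples):

```puppet
# sketch: dsa_systemd::override -> systemd::dropin_file
systemd::dropin_file { 'restart-override.conf':
  unit    => 'apache2.service',
  content => "[Service]\nRestart=on-failure\n",
  notify  => Service['apache2'],
}
```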
we already use the camptocamp:
* `systemd::tmpfile`
* `systemd::unit_file`
... and we have various systemd files manually deployed in `/lib` and `/etc`.

[cleanup and publish the sysadmin codebase]

## Periodically verify signatures in /dist
https://gitlab.torproject.org/tpo/tpa/team/-/issues/8689 (Moritz Bartl, 2020-09-28T16:02:41Z)

Given the recent bad signatures on some files in /dist, which only came to light after a user emailed the helpdesk, I wrote a bash script that I now run periodically on my dist mirror to verify the signatures. I think it's not a bad idea to run it on tpo.org as well.
As first argument, it takes the path to /dist. It uses a local independent public keyring I update from time to time. That path must be customized in the script.
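The script itself is not attached here; a Python equivalent might look like the following (a reconstruction, not the original bash script: the keyring path is a placeholder, and it assumes detached `.asc` signatures verified with `gpgv`):

```python
#!/usr/bin/env python3
"""Sketch of a periodic /dist signature check (reconstruction)."""
import os
import subprocess
import sys

# local, independently maintained keyring (customize this path)
KEYRING = "/srv/dist-check/keyring.gpg"

def signature_pairs(root, exclude=("manual",)):
    """Yield (signature, signed_file) pairs for every .asc under root,
    pruning excluded directories such as /dist/manual."""
    for dirpath, dirnames, filenames in os.walk(root):
        dirnames[:] = [d for d in dirnames if d not in exclude]
        for name in sorted(filenames):
            if name.endswith(".asc"):
                sig = os.path.join(dirpath, name)
                yield sig, sig[: -len(".asc")]

def check(root):
    """Return the list of signatures that are dangling or fail gpgv."""
    bad = []
    for sig, target in signature_pairs(root):
        if not os.path.exists(target):
            bad.append(sig)  # dangling signature
            continue
        r = subprocess.run(["gpgv", "--keyring", KEYRING, sig, target],
                           stdout=subprocess.DEVNULL,
                           stderr=subprocess.DEVNULL)
        if r.returncode != 0:
            bad.append(sig)
    return bad

if __name__ == "__main__" and len(sys.argv) > 1:
    failures = check(sys.argv[1])
    for sig in failures:
        print(f"BAD SIGNATURE: {sig}")
    sys.exit(1 if failures else 0)
```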
It currently excludes /dist/manual because that contains unsigned copies of the user manual.

## make wikis more editable
https://gitlab.torproject.org/tpo/tpa/gitlab/-/issues/76 (anarcat, 2022-10-13T13:17:43Z)

wikis can only be edited by members of the Developer group in a project, according to the [upstream documentation](https://docs.gitlab.com/ce/user/permissions.html#project-members-permissions). that is problematic because that permission also grants access to the source code in a project.
we'd like to have more flexibility in that regard: wikis are ideal for drive-by contributors whom we might not want to grant special privileges. similarly, teams should be able to edit each other's wikis, since documentation is often collaborative across teams.
let's see if that's possible at all.
There's an upstream feature request to make [wikis publicly editable](https://gitlab.com/gitlab-org/gitlab/-/issues/27294) and another to [allow visitors to suggest edits](https://gitlab.com/gitlab-org/gitlab/-/issues/42412). the Wireshark team uses a [separate wiki with an MR workflow](https://gitlab.com/wireshark/editor-wiki/) to allow outside contributions, but that feels rather clunky.

## Gitlab should show text files in the browser
https://gitlab.torproject.org/tpo/tpa/gitlab/-/issues/46 (Georg Koppen, 2022-03-24T23:28:17Z)

Right now if I want to look at some .md or .txt file on Gitlab I need to download it and open it with an external application. However, that should not be necessary. The browser should be sufficient for this task.

## Mail archive lint
https://gitlab.torproject.org/tpo/tpa/team/-/issues/12436 (grarpamp, 2020-09-28T16:02:41Z)

Some messages in the gzip pipermail archives (lists.torproject.org) lack the correct metadata and format for what would otherwise allow full use by MUAs.
If the full raw archives exist, it may be easier to see what reimporting with current mailman tools looks like.
From a concatenation of the three main lists: dev, relays, talk (the others were not checked and may suffer as well).
There is...

## Please make .asc files be downloaded instead of displayed
https://gitlab.torproject.org/tpo/tpa/team/-/issues/13122 (Lunar, 2020-09-28T16:02:41Z)

Currently when opening the following link in a browser, the signature will be displayed instead of downloaded: https://www.torproject.org/dist/torbrowser/3.6.5/torbrowser-install-3.6.5_en-US.exe.asc
This can confuse users trying to verify the signature, because they will need the file saved on their disk with most tools. In order to save them a step, and some bewilderment, let's make .asc files be downloaded by browsers.
I believe the Apache configuration snippet to be close to the following:
```
<FilesMatch "\.asc$">
ForceType application/octet-stream
Header set Content-Disposition attachment
</FilesMatch>
```

## monitor our fastly usage for early warning of overage charges
https://gitlab.torproject.org/tpo/tpa/team/-/issues/21303 (Roger Dingledine, 2022-11-29T21:15:29Z)

https://docs.fastly.com/api/stats says that we can fetch our fastly stats in an automated way:
```
curl -H "Fastly-Key: api-key" https://api.fastly.com/stats/usage
```
gives me
```
{"data":{"africa":{"bandwidth":0,"requests":0},"anzac":{"bandwidth":21746725539,"requests":36556},"asia":{"bandwidth":236910848733,"requests":371271},"europe":{"bandwidth":62146174145799,"requests":90885278},"latam":{"bandwidth":14281347382,"requests":25199},"usa":{"bandwidth":12092179356871,"requests":18200808}},"status":"success","msg":null,"meta":{"from":"2016-12-24 19:08:47 UTC","to":"2017-01-24 19:08:47 UTC","by":"day","region":"all"}}
```
where api-key is a secret number that we can get from our fastly account (let me know and I'll tell you our current number).
We should run this fetch periodically, e.g. daily or several times a day, and use it to notice if our numbers are way bigger than we expect them to be.
Specifically, fastly has given us $20k of free money each month, and we're using around $5k-6k of it each month, and I don't know what happens if we suddenly use a lot more than $20k, but it could be ugly.
weasel suggests that if we write a script we can put in cron that writes a file, with the first line being (OK|WARNING|CRITICAL|UNKNOWN) and then more info on the second line, that would be easy to glue into the current monitoring and reporting infrastructure.
It's probably 20 lines of python for the person who knows how to import json and add up the bandwidth numbers and compare them to a set of thresholds.
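A rough sketch of such a check, following weasel's status-line convention (the threshold values below are made-up placeholders, not our real budget numbers):

```python
#!/usr/bin/env python3
"""Sketch of a Fastly usage check: sum the per-region bandwidth from
the /stats/usage response and emit a Nagios-style status. Thresholds
are placeholders."""
import json
import sys
import urllib.request

API_URL = "https://api.fastly.com/stats/usage"
WARNING = 60 * 10**12   # bytes per period, made-up threshold
CRITICAL = 80 * 10**12  # bytes per period, made-up threshold

def total_bandwidth(usage):
    """Sum the per-region bandwidth numbers in a /stats/usage response."""
    return sum(region["bandwidth"] for region in usage["data"].values())

def status(total, warning=WARNING, critical=CRITICAL):
    """Map a bandwidth total to a Nagios-style status word."""
    if total >= critical:
        return "CRITICAL"
    if total >= warning:
        return "WARNING"
    return "OK"

def main(api_key):
    req = urllib.request.Request(API_URL, headers={"Fastly-Key": api_key})
    with urllib.request.urlopen(req) as resp:
        usage = json.load(resp)
    total = total_bandwidth(usage)
    # first line: status word; second line: details
    print(status(total))
    print(f"total bandwidth this period: {total} bytes")

if __name__ == "__main__" and len(sys.argv) > 1:
    main(sys.argv[1])
```

The real job would run from cron and write those two lines to a file, so the existing monitoring and reporting infrastructure can pick them up.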
Alternatively, it's possible that the Internet has this script already written and maintained.

## Calculate estimated and spent time automatically for tickets with task lists
https://gitlab.torproject.org/tpo/tpa/gitlab/-/issues/43 (Georg Koppen, 2022-05-30T19:08:43Z)

We work around the unavailability of the Epics feature by using task lists to denote parent/child relationships. One of the things missing in that model is an update of the estimated/spent time in the "parent" when things change on any issue listed in the task list. I am not sure whether that works for Epics, but for us it would definitely be good to have, because some of us are trying to use parent tickets and task lists to effectively have tickets on different milestones (the ticket itself on milestone A, while the parent ticket with ticket A on its task list is on milestone B), and we want proper time tracking for all of our milestones.
I've not looked closely at how we could solve this issue, but maybe there is a hook/plugin we could write that would help. The number of dependent tasks and their open/closed status are already tracked automatically, which is good and might provide some insight on how to bolt the time tracking onto that.
FWIW: this is not to say that those parent tickets should only reflect the time tracking information of the issues in their task list. It should be possible to add additional time spent, etc. Just that the figures can't be below the sum of the respective fields of the child issues.
Nested lists should be taken into account as well. :)
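A hook or scheduled job could do the aggregation along these lines (a sketch: the task-list and issue-reference patterns are the common GitLab markdown forms, and the rollup rule is the "never below the children's sum" constraint described above):

```python
import re

# matches markdown task-list items like "- [ ] #123 fix thing"
TASK_ITEM = re.compile(r"^\s*[-*]\s+\[[ xX]\]\s+(.*)$")
# matches "#123" or ".../issues/123" references inside an item
ISSUE_REF = re.compile(r"(?:#|issues/)(\d+)")

def child_issues(description):
    """Return the issue ids referenced from task-list items in a description."""
    ids = []
    for line in description.splitlines():
        item = TASK_ITEM.match(line)
        if item:
            ids.extend(int(n) for n in ISSUE_REF.findall(item.group(1)))
    return ids

def rollup(parent_estimate, parent_spent, children):
    """children is a list of (estimate, spent) pairs in seconds.
    The parent's figures may exceed, but never undercut, the sums of
    its children (extra time on the parent itself is allowed)."""
    child_estimate = sum(e for e, _ in children)
    child_spent = sum(s for _, s in children)
    return (max(parent_estimate, child_estimate),
            max(parent_spent, child_spent))
```

Recursing into the children's own task lists would handle the nested-list case.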
@doulget, @gaba, and @sysrqb for visibility, as this came up yesterday during work on label clean-up.