fix typos, stop putting blackbox in pre everywhere

Authored by anarcat
I was doing this to bypass my spellchecker, but that's dumb, added it
to a file-local list instead.
@@ -331,7 +331,7 @@ servers.
 
 [prometheus-alerts]: https://gitlab.torproject.org/tpo/tpa/prometheus-alerts
 
-Note: we currently have a handful of `blackbox-exporter`-related targets for TPA
+Note: we currently have a handful of `blackbox_exporter`-related targets for TPA
 services, namely for the HTTP checks. We intend to move those into puppet
 profiles whenever possible.
@@ -384,7 +384,7 @@ Those rules are declared on the server, in `prometheus::prometheus::server::inte
 
 [`gitlab#20`]: https://gitlab.torproject.org/tpo/tpa/gitlab/-/issues/20
 
-### Adding a `blackbox` target
+### Adding a blackbox target
 
 Most exporters are pretty straightforward: a service binds to a port and exposes
 metrics through HTTP requests on that port, generally on the `/metrics` URL.
@@ -399,16 +399,16 @@ will try to reach. The check will be initiated from the host running the
 blackbox exporter to the target at the moment the Prometheus server is scraping
 the exporter.
 
-The `blackbox_exporter` is rather peculiar and counter-intuitive, see
-the [how to debug the `blackbox_exporter`](#debugging-blackbox_exporter) for
+The blackbox exporter is rather peculiar and counter-intuitive, see
+the [how to debug the blackbox exporter](#debugging-blackbox-exporter) for
 more information.
 
 #### Scrape jobs
 
-In Prometheus's point of view, two informations are needed:
+From Prometheus's point of view, two pieces of information are needed:
 
-* the address and port of the host where Prometheus can reach the blackbox exporter
-* the target (and possibly the port tested) that the exporter will try to reach
+* The address and port of the host where Prometheus can reach the blackbox exporter
+* The target (and possibly the port tested) that the exporter will try to reach
 
 Prometheus transfers the information above to the exporter via two labels:
@@ -424,7 +424,7 @@ Prometheus transfers the information above to the exporter via two labels:
 
 The reshuffling of labels mentioned above is achieved with the `relabel_configs`
 option for the scrape job.
 
-For TPA-managed services, we define this scrape jobs in hiera in
+For TPA-managed services, we define these scrape jobs in Hiera in
 `common/prometheus.yml` under keys named `collect_scrape_jobs`. Jobs in those
 keys expect targets to be exported by other parts of the puppet code.
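As a side note for reviewers, the relabelling described in this hunk usually follows the standard upstream blackbox exporter pattern. A hedged sketch; the job name, module, target, and exporter address below are illustrative placeholders, not our actual Hiera values:

```yaml
scrape_configs:
  - job_name: blackbox_http        # illustrative name
    metrics_path: /probe
    params:
      module: [http_2xx]           # probe module defined on the exporter side
    static_configs:
      - targets:
          - https://www.torproject.org
    relabel_configs:
      # pass the original target as the "target" URL parameter of /probe
      - source_labels: [__address__]
        target_label: __param_target
      # keep the probed target visible in the "instance" label
      - source_labels: [__param_target]
        target_label: instance
      # actually scrape the host running the blackbox exporter
      - target_label: __address__
        replacement: 127.0.0.1:9115
```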
@@ -497,7 +497,7 @@ For non-TPA services, the targets need to be defined in the `prometheus-alerts`
 repository.
 
 The targets defined this way for blackbox exporter look exactly like normal
-prometheus targets, except that they define what the blackbox exporter will try
+Prometheus targets, except that they define what the blackbox exporter will try
 to reach. The targets can be `hostname:port` pairs or URLs, depending on the
 nature of the type of check being defined.
@@ -636,7 +636,7 @@ configuration, or alerting rule:
 
 - `job`: name of the job (e.g. `JobDown`)
 - `instance`: host name and port of affected device, including URL for
-  some `blackbox` probes (e.g. `test-01.torproject.org:9100`,
+  some blackbox probes (e.g. `test-01.torproject.org:9100`,
   `https://www.torproject.org`)
 - `alias`: similar to instance, without the port number
   (e.g. `test-01.torproject.org`, `https://www.torproject.org`)
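For context, a minimal alerting rule that emits the `job` and `instance` labels mentioned in this hunk; the rule name, expression, and duration are assumptions for illustration, not our production rules:

```yaml
groups:
  - name: availability
    rules:
      - alert: JobDown
        # "up" is 0 when a scrape fails; the job and instance labels
        # are inherited from the failing scrape target
        expr: up == 0
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "{{ $labels.job }} on {{ $labels.instance }} is unreachable"
```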
@@ -2050,7 +2050,7 @@ would otherwise be around long enough for Prometheus to scrape their
 metrics. We use it as a workaround to bridge Metrics data with
 Prometheus/Grafana.
 
-## Debugging `blackbox_exporter`
+## Debugging the blackbox exporter
 
 The [upstream documentation][] has some details that can help. We also
 have examples [above][] for how to configure it in our setup.
@@ -2064,7 +2064,7 @@ want to include debugging output. For example, to run an ICMP test on host
 
     curl http://localhost:9115/probe?target=pauli.torproject.org&module=icmp&debug=true
 
 Note that the above trick can be used for _any_ target, not just for ones
-currently configured in the `blackbox_exporter`. So you can also use this to test
+currently configured in the blackbox exporter. So you can also use this to test
 things before creating the final configuration for the target.
 
 [upstream documentation]: https://github.com/prometheus/blackbox_exporter
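One gotcha with the curl command in this hunk: the URL contains `&`, which an unquoted shell command treats as a backgrounding operator, silently dropping the `module` and `debug` parameters. A small sketch of the safe form, using `echo` in place of an actual network call:

```shell
# Build the probe URL, then quote it when passing to curl: unquoted,
# the shell would split the command at each "&" and only the "target"
# parameter would reach the exporter.
target="pauli.torproject.org"
module="icmp"
url="http://localhost:9115/probe?target=${target}&module=${module}&debug=true"
# Double quotes (or single quotes around a literal URL) keep the query
# string intact; this prints the command you would actually run:
echo curl "'${url}'"
```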
@@ -2675,7 +2675,7 @@ dates.
 
 | `Ganeti - cluster` | `check_ganeti_cluster` | [`ganeti-exporter`][] | `warning` | Runs a full verify, costly, was already disabled |
 | `Ganeti - disks` | `check_ganeti_instances` | Idem | `warning` | Was timing out and already disabled |
 | `Ganeti - instances` | `check_ganeti_instances` | Idem | `warning` | Currently noisy: warns about retired hosts waiting for destruction, drop? |
-| `SSL cert - LE` | `dsa-check-cert-expire-dir` | TBD | `warning` | Exhaustively check *all* certs, see [#41731][], possibly with `critical` severity for actual prolonged downtimes |
+| `SSL cert - LE` | `dsa-check-cert-expire-dir` | TBD | `warning` | Exhaustively check *all* certs, see [#41731][], possibly with `critical` severity for actual prolonged down times |
 | `SSL cert - db.torproject.org` | `dsa-check-cert-expire` | TBD | `warning` | Checks local CA for expiry, on disk, `/etc/ssl/certs/thishost.pem` and `db.torproject.org.pem` on each host, see [#41732][] |
 | `puppet - * catalog run(s)` | `check_puppetdb_nodes` | [`puppet-exporter`][] | `warning` | |
 | `system - all services running` | `systemctl is-system-running` | `node_systemd_unit_state` | `warning` | Sanity check, checks for failing timers and services |