prometheus: document how to trace a metric
authored Jun 19, 2025 by anarcat

service/prometheus.md @ a91b252f

@@ -1582,6 +1582,70 @@ things before creating the final configuration for the target.
[upstream documentation]: https://github.com/prometheus/blackbox_exporter
[above]: #adding-alert-rules
## Tracing a metric to its source
If you have a metric (say
`gitlab_workhorse_http_request_duration_seconds_bucket`) and you
don't know where it comes from, try fetching the full metric with its
labels and looking at the `job` label. This can be done in the
Prometheus web interface or with Fabric, for example with:

    fab prometheus.query-to-series --expression gitlab_workhorse_http_request_duration_seconds_bucket
For our sample metric, it shows:
```
anarcat@angela:~/s/t/fabric-tasks> fab prometheus.query-to-series --expression gitlab_workhorse_http_request_duration_seconds_bucket | head
INFO: sending query gitlab_workhorse_http_request_duration_seconds_bucket to https://prometheus.torproject.org/api/v1/query
gitlab_workhorse_http_request_duration_seconds_bucket{alias="gitlab-02.torproject.org",backend_id="rails",code="200",instance="gitlab-02.torproject.org:9229",job="gitlab-workhorse",le="0.005",method="get",route_id="default",team="TPA"} 162
gitlab_workhorse_http_request_duration_seconds_bucket{alias="gitlab-02.torproject.org",backend_id="rails",code="200",instance="gitlab-02.torproject.org:9229",job="gitlab-workhorse",le="0.025",method="get",route_id="default",team="TPA"} 840
```
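If the Fabric tasks are not at hand, the same instant query can be
sent straight to the Prometheus HTTP API. A minimal sketch with `curl`
and `jq`, assuming you can reach the
`https://prometheus.torproject.org` endpoint and satisfy whatever
authentication it requires:

```
# send an instant query for the metric and list the distinct job labels
curl -s 'https://prometheus.torproject.org/api/v1/query' \
  --data-urlencode 'query=gitlab_workhorse_http_request_duration_seconds_bucket' \
  | jq -r '.data.result[].metric.job' | sort -u
```

Listing only the distinct `job` values is usually enough to move on to
the next step.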
The details of those metrics don't matter; what matters is the `job`
label here:

    job="gitlab-workhorse"
This corresponds to a `job_name` field in the Prometheus scrape
configuration. On the `prometheus1` server, for example, we can see
this in `/etc/prometheus/prometheus.yml`:
```yaml
- job_name: gitlab-workhorse
  static_configs:
    - targets:
        - gitlab-02.torproject.org:9229
      labels:
        alias: gitlab-02.torproject.org
        team: TPA
```
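If you don't want to read through the whole file, a quick way to
locate that stanza is to grep for the job name. A sketch, assuming the
scrape jobs all live in that single file (which is likely managed by
Puppet, so any change belongs there, not on the server):

```
# show the scrape job stanza and a few lines of context around it
grep -n -A 8 'job_name: gitlab-workhorse' /etc/prometheus/prometheus.yml
```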
Then you can log into `gitlab-02` and see what listens on port 9229:
```
root@gitlab-02:~# lsof -n -i :9229
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
gitlab-wo 1282 git 3u IPv6 14159 0t0 TCP *:9229 (LISTEN)
gitlab-wo 1282 git 561u IPv6 2450737 0t0 TCP [2620:7:6002:0:266:37ff:feb8:3489]:9229->[2a01:4f8:c2c:1e17::1]:59922 (ESTABLISHED)
```
... which is:
```
root@gitlab-02:~# ps 1282
PID TTY STAT TIME COMMAND
1282 ? Ssl 9:56 /opt/gitlab/embedded/bin/gitlab-workhorse -listenNetwork unix -listenUmask 0 -listenAddr /var/opt/gitlab/gitlab-workhorse/sockets/s
```
So that's the [GitLab Workhorse](https://gitlab.com/gitlab-org/gitlab-workhorse/)
proxy, in this case.
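As a shortcut for the two-step `lsof` and `ps` dance above, `ss` from
iproute2 can show the listening socket and its owning process in one
go (run it as root so process names are visible):

```
# list TCP listening sockets on port 9229 with the owning process
ss -tlnp '( sport = :9229 )'
```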
In other cases, you'll more often find that the metric belongs to the
`node` job, which usually means it comes from the node exporter. But
rather exotic metrics can show up there too: those are typically
written by an external job to `/var/lib/prometheus/node-exporter`,
also known as the "textfile collector". To find what generates such a
file, you need to either watch the file change or grep for the
filename in Puppet.
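A sketch of both approaches, assuming `inotify-tools` is installed on
the host and that you have a local checkout of the Puppet code (the
checkout path below is hypothetical):

```
# watch the textfile collector directory for writes, to catch when
# (and how often) the metric files are refreshed
inotifywait -m -e close_write,moved_to /var/lib/prometheus/node-exporter/

# or search the Puppet code for whatever ships or writes the file
grep -r 'node-exporter' ~/src/tor-puppet/   # hypothetical checkout path
```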
## Advanced metrics ingestion

This section documents more advanced metrics injection topics that we
...