Tor issues
https://gitlab.torproject.org/tpo/core/tor/-/issues
Feed updated: 2024-01-16T20:29:50Z

https://gitlab.torproject.org/tpo/core/tor/-/issues/40736
anti-exit-DDoS: token bucket limit for new streams per circuit at exits
Author: cypherpunks. Updated: 2024-01-16.

Currently a single circuit can apparently affect an entire exit relay by creating new outbound connections at a high rate.
CPU load increases and traffic drops sharply during such events.
Some operators run scripts that adjust their exit policy once they detect such outbound floods, but blocking popular destinations by exit policy is not a solution to this problem, especially since the floods have turned to Cloudflare destinations.
Please implement a token-bucket quota system for new outbound connections per circuit at exits. The destination IP/port should not be relevant.
This needs a mitigation inside tor instead of custom packet filter scripts.
Packet filters do not know whether a new outbound TCP connection at an exit is related to a specific Tor circuit or not.
This was brought up at the last relay operator meetup, and it is becoming an increasing issue at exits:
https://lists.torproject.org/pipermail/tor-relays/2022-November/020885.html
(search for 'cpu' in that text)
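A per-circuit token bucket could look roughly like the sketch below. The names and the burst/rate constants are illustrative assumptions, not existing tor code or tuned values:

```c
#include <stdbool.h>

#define STREAM_BUCKET_BURST 100 /* max new streams in one burst (assumed) */
#define STREAM_BUCKET_RATE   25 /* sustained new streams per second (assumed) */

typedef struct stream_bucket_t {
  int tokens;
} stream_bucket_t;

/* Refill, called once per second for each live circuit. */
static void
stream_bucket_refill(stream_bucket_t *b)
{
  b->tokens += STREAM_BUCKET_RATE;
  if (b->tokens > STREAM_BUCKET_BURST)
    b->tokens = STREAM_BUCKET_BURST;
}

/* Called for every RELAY_BEGIN cell that would open a new outbound
 * connection; the destination IP/port intentionally plays no role. */
static bool
stream_bucket_allow_new_stream(stream_bucket_t *b)
{
  if (b->tokens <= 0)
    return false; /* over quota: refuse the new stream */
  b->tokens--;
  return true;
}
```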
Also related to: #40676.

Assignee: trinity-1686a.

https://gitlab.torproject.org/tpo/core/tor/-/issues/40735
[WARN] Tried connecting to router ... identity keys were not as expected
Author: cypherpunks. Updated: 2023-11-14.

Background: Tor Browser 12.0, Tor 4.7.12, Windows 7, vanilla bridges.
I am repeatedly getting the following log line:
```
[WARN] Tried connecting to router at *address* ID=<none> RSA_ID=*FP1*, but RSA + ed25519 identity keys were not as expected: wanted *FP1* + no ed25519 key but got *FP2* + *edFP*.
```
Ideas of what happened:
* MITM
* The bridge operator reinstalled it between my getting the bridge and now.
What is wrong:
* The bridge should be marked as unreachable: either it is already not being used and connections are doomed to waste resources for nothing, or it should not be used because something is clearly wrong with it.
* There should be a way to distinguish the first idea from the second. My best guess is building a tunneled directory connection to the bridge authority and asking: "Is there a bridge *FP2*, and does it listen on *address*?"

https://gitlab.torproject.org/tpo/core/tor/-/issues/40734
Add lost comments about decay playing a role in guaranteed Stable flag assignment / guaranteed familiarity
Author: Georg Koppen. Updated: 2022-12-21.

In 8bf1a86ae1f3f71fa4b8b13f6d8eef5ad5eff8ca we removed comments related to the potential decay rates used when, e.g., `Stable` is guaranteed. The problem now is that the spec has a different number ("7") than the code ("5"), without any explanation. Let's get those comments back, and then at some point think about how to get the decay concept into our specs as well.

Milestone: Tor: 0.4.8.x-freeze. Assignee: Georg Koppen.

https://gitlab.torproject.org/tpo/core/tor/-/issues/40732
Congestion control should not update unless cwnd is full
Author: Mike Perry. Updated: 2023-01-17.

While staring at conflux algorithms, I found a very interesting congestion control bug: we're updating the congestion window even if it is not full. This means that chatty protocols that don't cause queueing may inflate their congestion window too much and overshoot. Then, when they do have data to send, they cause queue overload.
There are a few possible fixes. Here's a simple one: https://gitlab.torproject.org/mikeperry/tor/-/commit/663bfb5584ef1e8e2d35dd044c2924cc6b298bb5
The benefits from a shadow run with this were considerable, without much impact on performance. Queue use over 1000 disappeared, so we can lower `circ_max_cell_queue_size` down to ~750.
Other options include using `inflight` instead of `cwnd` to estimate BDP and queue use, in which case we could still back off; we just would not increase the cwnd in these cases. I am going to test those.
This may also enable us to make our congestion control parameters more aggressive again. Quite the find.
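For intuition, a minimal sketch of the "only update when the window is full" gate, with illustrative types and names rather than tor's real congestion control state:

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative state only; not tor's real congestion_control_t. */
typedef struct cc_state_t {
  uint64_t cwnd;     /* congestion window, in cells */
  uint64_t inflight; /* cells sent but not yet acked */
} cc_state_t;

/* The measurement only counts if the circuit actually used its
 * whole window during the interval. */
static bool
cwnd_is_full(const cc_state_t *cc)
{
  return cc->inflight >= cc->cwnd;
}

static void
congestion_control_update(cc_state_t *cc)
{
  if (!cwnd_is_full(cc)) {
    /* A chatty circuit that never filled its window would otherwise
     * inflate cwnd and overshoot when it later bursts. */
    return;
  }
  /* ... normal Vegas-style increase/backoff would run here ... */
}
```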
Cc: @dgoulet

Milestone: Tor: 0.4.7.x-post-stable. Assignee: Mike Perry.

https://gitlab.torproject.org/tpo/core/tor/-/issues/40731
relay: Decouple streams blocked on channel
Author: David Goulet <dgoulet@torproject.org>. Updated: 2022-12-20.
The `streams_blocked_on_n_chan` and `streams_blocked_on_p_chan` flags are set if the cell queue of the circuit is above the high watermark (256), and unblocked if the queue goes back below the low watermark (10).
However, conflux and congestion control need more than that to decide whether the streams actually end up blocked.
So, in order to do that, we'll decouple this logic out into the circuit subsystem, so that the decision can be made by different algorithms that can look at the following (see the sketch after the list):
* Conflux state
* Congestion control state
* KIST scheduling state
* High and low watermarks.
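A hedged sketch of what the decoupled decision could look like; the struct, field, and function names are hypothetical, not tor's real API:

```c
#include <stdbool.h>
#include <stddef.h>

typedef struct stream_block_inputs_t {
  bool conflux_can_switch_leg; /* conflux state */
  bool cc_window_has_room;     /* congestion control state */
  bool kist_can_schedule;      /* KIST scheduling state */
  size_t queued_cells;         /* for the watermarks */
} stream_block_inputs_t;

#define QUEUE_HIGH_WATERMARK 256
#define QUEUE_LOW_WATERMARK   10

/* Decide whether streams on this circuit should be blocked. */
static bool
circuit_streams_should_block(const stream_block_inputs_t *in,
                             bool currently_blocked)
{
  /* Watermark hysteresis, as in the current code. */
  if (in->queued_cells >= QUEUE_HIGH_WATERMARK)
    return true;
  if (currently_blocked && in->queued_cells > QUEUE_LOW_WATERMARK)
    return true;
  /* The other subsystems can keep streams blocked even after the
   * queue itself has drained. */
  if (!in->cc_window_has_room || !in->kist_can_schedule)
    return true;
  if (!in->conflux_can_switch_leg)
    return true;
  return false;
}
```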
Milestone: Tor: 0.4.8.x-freeze. Assignee: David Goulet <dgoulet@torproject.org>.

https://gitlab.torproject.org/tpo/core/tor/-/issues/40730
TROVE-2022-002: The SafeSocks option for SOCKS4(a) is inverted, leading to SOCKS4 going through
Author: David Goulet <dgoulet@torproject.org>. Updated: 2023-11-02.

This is a report from HackerOne: https://hackerone.com/bugs?subject=torproject&report_id=1784589
Below is the content of the report, which has detailed information.
We have classified this as `medium`, considering that tor was not defending in depth against dangerous SOCKS requests, and so any user relying on `SafeSocks 1` to make sure a DNS leak does not get linked to their Tor traffic was not actually safe for `SOCKS4(a)`.
Tor Browser uses neither `SafeSocks 1` nor SOCKS4, so at least the vast majority of users are likely not affected.
# H1 Report
## Summary:
This bug appears to be a typo; [src/core/proto/proto_socks.c, process_socks4_request()](https://gitweb.torproject.org/tor.git/tree/src/core/proto/proto_socks.c?h=tor-0.4.7.11#n236) contains the line:
` if (is_socks4a && !addressmap_have_mapping(req->address, 0)) {`
This presumably should be `!is_socks4a`. The following logic appears correct only if that change is made; as it exists now, the check is inverted, and it only warns on and rejects safe SOCKS4a requests.
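A sketch of the fixed check, paraphrasing the surrounding logic of `process_socks4_request()` rather than quoting it; return values and error handling are simplified here:

```c
/* Plain SOCKS4 means the client resolved the hostname itself and sent
 * us only an IP, i.e. the DNS lookup leaked; that is the case to warn
 * about and reject. */
static int
check_socks4_safety(socks_request_t *req, int is_socks4a, int safe_socks)
{
  if (!is_socks4a && !addressmap_have_mapping(req->address, 0)) {
    log_unsafe_socks_warning(4, req->address, req->port, safe_socks);
    if (safe_socks)
      return -1; /* reject the unsafe SOCKS4 request */
  }
  return 0;
}
```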
## Steps To Reproduce:
1. Configure Tor with SafeSocks set to 1 (listening on localhost, with default port 9050)
1. Perform an unsafe SOCKS4 request; e.g., with socat: `echo -en "HEAD / HTTP/1.1\r\nHost: duckduckgo.com\r\n\r\n" | socat -v STDIO SOCKS4:127.0.0.1:duckduckgo.com:80,socksport=9050,shut-none`
Note that the locally-performed DNS request is leaked, and only the server IP is sent to Tor, but Tor does not block the request as it should have. The request reaches the server, and Tor does not log that a DNS leak may have occurred.
1. Perform a safe SOCKS4a request: `echo -en "HEAD / HTTP/1.1\r\nHost: duckduckgo.com\r\n\r\n" | socat -v STDIO SOCKS4a:127.0.0.1:duckduckgo.com:80,socksport=9050,shut-none`
Note that the domain is being sent to Tor, but Tor incorrectly blocks the request, and logs an incorrect "Your application (using socks4 to port 80) is giving Tor only an IP address." error.
The check is inverted; when using SOCKS4/SOCKS4a, SafeSocks 1 does the exact opposite of what it should, ensuring the user is making only unsafe requests. I have not tested SOCKS5, though it presumably works correctly given Tor Browser uses it.
## Additional info:
I tested this on Tor versions 0.4.7.10 and 0.4.5.10; git history suggests this has been broken all the way back to 0.3.5. It seems to have been introduced in a rewrite, merge commit 7556933537b5777a9bef21230bb91a08aa70d60e; see https://gitweb.torproject.org/user/nickm/tor.git/commit/src/or/proto_socks.c?h=socks_trunnel4_squashed_merged&id=9155e08450fe7a609f8223202e8aa7dfbca20a6d
The commands above were run using Bash and socat; this bug was discovered while using a custom SOCKS4a application.
## Impact
Impact is a DNS leak that occurs under a configuration intended to guard against such a possibility. That configuration is not under attacker control, but users who believe they need this protection may be more likely to turn on the broken config option.
Two scenarios are plausible that impact user anonymity by leaking DNS traffic with SafeSocks 1:
1. Users configuring SOCKS4 apps that don't support SOCKS4a will be led to believe their traffic is fully secured, when in fact DNS requests are still being leaked.
1. Users configuring apps that do support SOCKS4a could be misled into selecting SOCKS4 instead, as Tor will error out if they try to use the safe configuration; this could conceivably create a DNS leak where one need not exist.
It should also be noted that the incorrect warning does still get printed even with SafeSocks 0, but it does not cause any otherwise-visible problems.
Note that users will only be impacted if they connect using SOCKS4 and rely on SafeSocks to protect against DNS leaks. SafeSocks 1 is recommended in a number of online guides to securing Tor. The application I was working on when I discovered this also relied on it as a defense-in-depth measure. I nevertheless suspect SOCKS4(a) may be a rare configuration, given this bug seems to have existed for about four years without detection.

Milestone: Tor: 0.4.8.x-freeze. Assignee: David Goulet <dgoulet@torproject.org>.

https://gitlab.torproject.org/tpo/core/tor/-/issues/40729
Error replacing "/var/lib/tor/my-consensus-ns"
Author: weasel (Peter Palfrader). Updated: 2023-01-12.

On 0.4.7.12 I get:
Dec 11 11:55:00.000 [warn] Error replacing "/var/lib/tor/my-consensus-ns": Operation not permitted
Dec 11 11:55:01.000 [warn] Error replacing "/var/lib/tor/my-consensus-microdesc": Operation not permitted
with wider context:
Dec 11 11:52:30.000 [notice] Time to fetch any votes that we're missing.
Dec 11 11:55:00.000 [notice] Time to compute a consensus.
Dec 11 11:55:00.000 [notice] Computed bandwidth weights for Case 3a (E scarce) with v10: G=57835441 M=21454912 E=10895123 D=18039801 T=108225277
Dec 11 11:55:00.000 [notice] Bandwidth-weight Case 3a (E scarce) is verified and valid.
Dec 11 11:55:00.000 [warn] Error replacing "/var/lib/tor/my-consensus-ns": Operation not permitted
Dec 11 11:55:00.000 [notice] Computed bandwidth weights for Case 3a (E scarce) with v10: G=57835441 M=21454912 E=10895123 D=18039801 T=108225277
Dec 11 11:55:00.000 [notice] Bandwidth-weight Case 3a (E scarce) is verified and valid.
Dec 11 11:55:01.000 [warn] Error replacing "/var/lib/tor/my-consensus-microdesc": Operation not permitted
Dec 11 11:55:01.000 [notice] Consensus computed; uploading signature(s)
Dec 11 11:55:01.000 [notice] Signature(s) posted.
This might be the result of sandboxing.
This suggests that tpo/core/tor#40663 has not been fixed.

Milestone: Tor: 0.4.7.x-post-stable. Assignee: David Goulet <dgoulet@torproject.org>.

https://gitlab.torproject.org/tpo/core/tor/-/issues/40727
metrics: Typo in CC label name
Author: David Goulet <dgoulet@torproject.org>. Updated: 2022-12-12.

Unfortunately, we introduced a typo in a label of one of the congestion control metrics in the latest release:
`tor_relay_congestion_control_total{state="cc_circuits",action="circs_creared"}`
It should be `circs_created`.

Milestone: Tor: 0.4.7.x-post-stable. Assignee: David Goulet <dgoulet@torproject.org>.

https://gitlab.torproject.org/tpo/core/tor/-/issues/40726
Missing signature files for tor 0.4.7.12 and 0.4.5.15
Author: Sven Gottwald. Updated: 2022-12-06.

### Summary
Tor 0.4.5.15 and 0.4.7.12 were released today. The corresponding sha256sum.asc files are missing from https://dist.torproject.org/.
### Steps to reproduce:
Try downloading https://dist.torproject.org/tor-0.4.7.12.tar.gz.sha256sum.asc and https://dist.torproject.org/tor-0.4.5.15.tar.gz.sha256sum.asc.
```
sven@docker-test01:~/src/docker-tor$ curl --fail --location --remote-name https://dist.torproject.org/tor-0.4.7.12.tar.gz.sha256sum.asc
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 266 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
curl: (22) The requested URL returned error: 404
sven@docker-test01:~/src/docker-tor$ curl --fail --location --remote-name https://dist.torproject.org/tor-0.4.5.15.tar.gz.sha256sum.asc
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 266 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
curl: (22) The requested URL returned error: 404
```
### What is the current bug behavior?
`404 Not Found`
### What is the expected behavior?
`200 OK`

https://gitlab.torproject.org/tpo/core/tor/-/issues/40725
[warn] Received http status code 404 ("Consensus not signed by sufficient number of requested authorities")
Author: Roger Dingledine. Updated: 2022-12-30.

Tor relay operators and clients report seeing a series of these scary-sounding log messages this evening. An example:
```
Dec 01 15:56:27.720 [warn] Received http status code 404 ("Consensus not signed by sufficient number of requested authorities") from server 131.188.40.189:443 while fetching consensus directory.
```
I believe these messages are harmless so long as they are only transient.
What I believe is happening: some relays still believe Faravahar is a directory authority, so if they get a consensus with five signatures, including one from Faravahar, they accept it and serve it; whereas modern Tor clients no longer believe Faravahar is one of the directory authorities, so from their perspective that consensus has only *four* acceptable signatures, and they decline to download it and later try again somewhere else.
So, as long as the situation is transient, i.e. most of the time most relays are serving a consensus with six or more signatures on it, everything should be fine.
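For intuition, the acceptance rule behaves roughly like a strict-majority check over the authorities the checking party recognizes; a hedged sketch, with illustrative names rather than tor's exact identifiers:

```c
#include <stdbool.h>

static bool
consensus_has_enough_signatures(int n_known_authorities,
                                int n_valid_sigs_from_known)
{
  /* Count only signatures from authorities we recognize. */
  return n_valid_sigs_from_known > n_known_authorities / 2;
}

/* Example, assuming nine authorities before Faravahar's removal:
 * a relay that still counts Faravahar sees 5 > 9/2 and accepts;
 * a client that dropped it counts 4 of 8, and 4 > 8/2 is false,
 * so it rejects and retries elsewhere. */
```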
I'm opening this ticket as a placeholder in case people come to report the bug, or in case the support team gets questions about it.
We should be able to close this ticket ~~on Tuesday when we finish doing the synchronized switch-over at the other directory authorities, after which we should stop generating consensus documents that include signatures from Faravahar~~ once the new Tor version is out with moria1's new keys and it is in a Tor Browser too. [Edit: Sebastian correctly points out that while we're seeing this issue with Faravahar's key currently, we could see it after Tuesday with moria1's new key as well.]

Assignee: Roger Dingledine.

https://gitlab.torproject.org/tpo/core/tor/-/issues/40724
New congestion control metrics
Author: Mike Perry. Updated: 2022-12-05.

@dgoulet and I are puzzling over why the average congestion control window is hovering around 400 on our test relays: https://gitlab.torproject.org/tpo/network-health/analysis/-/issues/44#note_2857941
We're going to add at least 3-4 more metrics to help diagnose this:
* Queue use
* Vegas BDP
* Number of circuits that exit slow start
* Percentage of circuits that exit slow start

Milestone: Tor: 0.4.7.x-post-stable. Assignee: Mike Perry.

https://gitlab.torproject.org/tpo/core/tor/-/issues/40723
conflux: Cell parsing and handling
Author: David Goulet <dgoulet@torproject.org>. Updated: 2023-01-11.

Ticket to implement the prop329 cell parsing and handling (trunnel work in part).

Milestone: Tor: 0.4.8.x-freeze. Assignee: David Goulet <dgoulet@torproject.org>.

https://gitlab.torproject.org/tpo/core/tor/-/issues/40722
rotate moria1's relay identity and v3 identity keys
Author: Roger Dingledine. Updated: 2022-12-07.

I'm going to rotate moria1's keys to a fresh set. I will move it to listen on new ports while I'm at it.
We don't have any specific evidence that the old keys are compromised, but they've been online for almost a decade, and there was also a break-in on the old hardware this month (November 2022), so it is better to rotate them and be safe.
I will leave it on 128.31.0.39 despite the backbone blocking anomalies (https://gitlab.torproject.org/tpo/network-health/analysis/-/issues/10), because I still hold out hope that we will get the internet to drop its firewall rules for that IP address, and that is a better strategy than just lighting IP addresses on fire and moving to the next one in line. (It appears that the routes between the other dir auths and moria are not affected by this backbone filter rule, so it doesn't seem to be harming the consensus process itself.)

Milestone: Tor: 0.4.8.x-freeze. Assignee: Roger Dingledine.

https://gitlab.torproject.org/tpo/core/tor/-/issues/40721
conflux: New protocol version to support
Author: David Goulet <dgoulet@torproject.org>. Updated: 2023-01-11.

This is the ticket for the Conflux (prop329) part of the protocol version advertisement and handling for relays.

Milestone: Tor: 0.4.8.x-freeze. Assignee: David Goulet <dgoulet@torproject.org>.

https://gitlab.torproject.org/tpo/core/tor/-/issues/40720
Missing description of the torrc variables ORPort and Nickname
Author: slrslr. Updated: 2022-12-12.

Hello,
the issue is described [here](https://forum.torproject.net/t/orport-line-has-address-but-no-port/5681).

https://gitlab.torproject.org/tpo/core/tor/-/issues/40719
cpuworker: Off by one when calculating the onionskin processing room
Author: David Goulet <dgoulet@torproject.org>. Updated: 2022-11-23.

In `onion_queue.c`, our function `have_room_for_onionskin()` uses the number of CPUs to calculate the average time we take to process an onionskin across all threads. For example:
```
/* How long would it take to process all the NTor cells in the queue? */
ntor_usec = estimated_usec_for_onionskins(
              ol_entries[ONION_HANDSHAKE_TYPE_NTOR],
              ONION_HANDSHAKE_TYPE_NTOR) / num_cpus;
```
The `num_cpus` value is obtained by calling `get_num_cpus()`, which returns the number of cores the system has. However, our CPU worker thread pool is configured to always have one extra thread (see `cpuworker.c`):
```
const int n_threads = get_num_cpus(get_options()) + 1;
```
And so our calculation is off by one in the number of CPUs, meaning we are **over**estimating our ntor onionskin processing time, because our divisor is one less than the real number of threads being used for onionskins.
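A sketch of the corresponding fix, reusing the two snippets quoted above: divide by the number of threads the cpuworker pool actually spawns (cores + 1), instead of the raw core count:

```c
const int n_threads = get_num_cpus(get_options()) + 1;

/* How long would it take to process all the NTor cells in the queue? */
ntor_usec = estimated_usec_for_onionskins(
              ol_entries[ONION_HANDSHAKE_TYPE_NTOR],
              ONION_HANDSHAKE_TYPE_NTOR) / n_threads;
```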
This can lead to triggering the overload signal way too early on relays.

Milestone: Tor: 0.4.7.x-post-stable. Assignee: David Goulet <dgoulet@torproject.org>.

https://gitlab.torproject.org/tpo/core/tor/-/issues/40718
Allow setting DirPortFrontPage with DirCache 0
Author: cypherpunks. Updated: 2023-09-05.

Relay meetup follow-up:
Currently it is not possible to set `DirCache 0` on a relay that has set `DirPort`.
```
DirPort configured but DirCache disabled. DirPort requires DirCache.
```

https://gitlab.torproject.org/tpo/core/tor/-/issues/40717
Additional MetricsPort stats for various stages of the onion service handshake
Author: Mike Perry. Updated: 2023-12-07.

If we export additional onion service metrics, such as time measurements for the HSDIR, INTRO, and REND stages of circuit setup on both the client and service side, and the number of timeouts/failures there, it would help to uncover the root cause of issues like https://gitlab.torproject.org/tpo/core/tor/-/issues/40570 and related reliability and connectivity issues with onion services.
We can also export congestion control info from https://gitlab.torproject.org/tpo/core/tor/-/issues/40708 to the onion service metrics set, which can help us with tuning congestion control for onion services.
We can then hook up the OnionPerf onion service instances to our Grafana dashboard and gather more detailed stats that way, as a supplement to the metrics that get graphed on the metrics website.

https://gitlab.torproject.org/tpo/core/tor/-/issues/40716
Implement conflux for onion services
Author: Mike Perry. Updated: 2022-11-28.

Conflux is traffic splitting. It will result in increased throughput and reduced latency for onion services after a connection has been established, by routing traffic over multiple paths, or via the lowest-latency path to a service.
This ticket is for the onion service pieces of conflux (https://gitlab.torproject.org/tpo/core/tor/-/issues/40593).
We will not be implementing the onion services pieces of conflux as part of that ticket. It can be done later, if any onion service sponsors care about latency or throughput.
The pieces for onion services are:
- **Negotiation**
- [ ] Protover Advertisement for Onions (24h)
- [ ] Rend circuit linking (40h)
This is specified in https://gitlab.torproject.org/tpo/core/torspec/-/blob/main/proposals/329-traffic-splitting.txt, but we probably want to allow onion services to configure their scheduler by manually choosing either BLEST or LowRTT, since different kinds of onion services may want to optimize for either throughput or latency.
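A hypothetical knob for that per-service choice; prop329 defines the BLEST and LowRTT schedulers, but this enum and helper are illustrative, not a real tor API:

```c
#include <stdbool.h>

typedef enum {
  CONFLUX_SCHED_BLEST,  /* fill both legs; optimize for throughput */
  CONFLUX_SCHED_LOWRTT, /* send on the lowest-RTT leg; optimize latency */
} conflux_sched_t;

static conflux_sched_t
pick_service_scheduler(bool optimize_for_latency)
{
  return optimize_for_latency ? CONFLUX_SCHED_LOWRTT
                              : CONFLUX_SCHED_BLEST;
}
```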
There may be some additional work with respect to making sure linked edge connections work properly, if they are handled differently in the onion service case.
Also, some Shadow validation and performance testing will be needed: maybe 40h or so of dev time (though much longer wall-clock time).

https://gitlab.torproject.org/tpo/core/tor/-/issues/40715
MetricsPort: inbound ORPort connections: relays vs. non-relay connections
Author: cypherpunks. Updated: 2023-09-22.

This was previously submitted on 2022-10-24 (https://gitlab.torproject.org/tpo/core/tor/-/issues/40194#note_2849481),
but that issue was closed with a request for a specific new ticket for each new metric:
From last week's relay meetup we know that tor knows whether an incoming OR connection is from a client or from a relay without looking at the source IP address.
https://pad.riseup.net/p/tor-relay-op-meetup-o22-keep
From the metrics added in !625 (merged) we know that the increased CPU load correlates with an increase in the rate of new inbound OR connections. This rate increases when CPU load increases on exits:
```
rate(tor_relay_connections{type="OR",state="created",direction="received"}[$__rate_interval])
```
Could you please add a label for OR connections coming from clients vs. OR connections coming from other relays?
This would allow us to confirm that exits get more new inbound connections from clients when CPU load increases.
That new label could be `src`:
```
tor_relay_connections_total{type="OR",state="created",direction="received",src="relay"}
tor_relay_connections_total{type="OR",state="created",direction="received",src="non-relay"}
tor_relay_connections{type="OR",state="opened",direction="received",src="relay"}
tor_relay_connections{type="OR",state="opened",direction="received",src="non-relay"}
```
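A sketch of how the `src` label might be derived; the struct, field, and function names here are assumptions, not tor's real connection or metrics API. The grounded part is the meetup observation that tor can tell relays from clients without the source IP, e.g. because relays authenticate during the link handshake while clients connect unauthenticated:

```c
#include <stdbool.h>

typedef struct inbound_or_conn_t {
  bool peer_authenticated; /* relays authenticate; clients do not */
} inbound_or_conn_t;

static const char *
or_conn_src_label(const inbound_or_conn_t *conn)
{
  return conn->peer_authenticated ? "relay" : "non-relay";
}

/* The returned value would then be attached as src=... when
 * incrementing tor_relay_connections_total{type="OR",
 * direction="received",...}. */
```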