# Tor issues
https://gitlab.torproject.org/tpo/core/tor/-/issues (2023-04-12T14:47:42Z)

## Decide whether to disable circuit cannibalization entirely
https://gitlab.torproject.org/tpo/core/tor/-/issues/40768 (gabi-250, updated 2023-04-12T14:47:42Z)

The discussions around #40570 reignited a discussion about circuit cannibalization, and whether the performance improvements it provides (if any) justify the maintenance costs of keeping it around. This ticket is about deciding whether we should disable cannibalization in c-tor.

## Investigate high circuit build error rates in simulation
https://gitlab.torproject.org/tpo/core/tor/-/issues/40767 (gabi-250, updated 2023-04-12T14:46:43Z)

We ran some Shadow simulations to debug/reproduce the issue from #40570, and @jnewsome noticed that the onion service clients have consistently high [circuit build failure rates](https://gitlab.torproject.org/tpo/core/tor/-/issues/40570#note_2883257).
We should figure out what causes these circuit build failures.

## DDoS mitigation: analysis to understand relay-to-relay connections from non-relay IPs
https://gitlab.torproject.org/tpo/core/tor/-/issues/40761 (bnm, updated 2023-04-12T14:46:14Z)
We are working on a tor proposal that should help protect non-guard relays from a large fraction of the DDoS load. In initial tests we have seen a 55% CPU usage decrease when deploying our proposed mitigations, but we want to make sure that we are not introducing an over-blocking problem. We know about a few configurations in which a relay will use a source IP that is not in the consensus to connect to other relays (OutboundBindAddress, OutboundBindAddressOR), but we would like to have some actual data about it.

To measure, understand, and solve that potential problem, and to back up the proposal with some actual data, we would like to measure the following on our tor relays:
Log when our non-guard tor relays get an authenticated relay-to-relay connection to our ORPort from a source IP that is not in the consensus and not in the exit lists:
```
timestamp relay-fingerprint source-IP
```
If the "and not in the exit lists" part is too hard, we can filter those connections out ourselves in post-processing of the logs.
We do not care about client to relay connections and do not want to log them.
Would it be possible to provide a patch or branch
that implements that logging on top of main?
It does not have to be in a release and we will run it
only temporarily.
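A rough sketch of the requested check follows. The helper names here are hypothetical stand-ins (`consensus_contains_addr()` represents whatever real consensus lookup a patch would use), not actual tor functions:

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>
#include <time.h>

/* Hypothetical consensus lookup: in real tor this would consult the
 * current networkstatus.  Here it is a fixed table for illustration. */
static bool
consensus_contains_addr(const char *addr)
{
  static const char *known[] = { "198.51.100.7", "203.0.113.9" };
  for (size_t i = 0; i < sizeof(known) / sizeof(known[0]); i++) {
    if (!strcmp(addr, known[i]))
      return true;
  }
  return false;
}

/* Decide whether an inbound ORPort connection should be logged: only
 * connections authenticated as relay-to-relay whose source IP is absent
 * from the consensus.  Client connections are never logged. */
bool
should_log_relay_conn(bool authenticated_as_relay, const char *src_addr)
{
  return authenticated_as_relay && !consensus_contains_addr(src_addr);
}

/* Emit the "timestamp relay-fingerprint source-IP" line requested above. */
void
log_nonconsensus_relay_conn(const char *my_fingerprint, const char *src_addr)
{
  printf("%ld %s %s\n", (long)time(NULL), my_fingerprint, src_addr);
}
```

Filtering against the exit lists could then happen in post-processing, as suggested above.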
Thank you!

## Conflicting logic about whether bridges need descriptors for fetching dir info from them
https://gitlab.torproject.org/tpo/core/tor/-/issues/40746 (Roger Dingledine, updated 2023-04-12T14:42:46Z)
If you start your Tor with a pile of configured bridges but nothing cached, your Tor will sample the configured bridges to pick its ordered list of primary entry guards, and launch descriptor fetches to each of them.
But if the descriptor hasn't arrived yet, while trying to bootstrap dir info you get these confusing messages in your logs:
```
Jan 31 18:56:44.928 [notice] Ignoring directory request, since no bridge nodes are available yet.
```
Things do bootstrap eventually, but it takes longer than it should, and the pile of scary log messages is scary.
What's going on here?
The way the log message comes about is that directory_get_from_dirserver() calls
```
  const node_t *node = guards_choose_dirguard(dir_purpose, &guard_state);
  if (node && node->ri) {
    [...]
  } else {
    [...]
    log_notice(LD_DIR, "Ignoring directory request, since no bridge "
               "nodes are available yet.");
  }
```
i.e. guards_choose_dirguard had better return a bridge for which we have the descriptor, or we're going to log a complaint and abort the directory fetch attempt.
But in select_primary_guard_for_circuit(), we do
```
  const int need_descriptor = (usage == GUARD_USAGE_TRAFFIC);
  [...]
  SMARTLIST_FOREACH_BEGIN(gs->primary_entry_guards, entry_guard_t *, guard) {
    [...]
    if (guard->is_reachable != GUARD_REACHABLE_NO) {
      if (need_descriptor && !guard_has_descriptor(guard)) {
        log_info(LD_GUARD, "Guard %s does not have a descriptor",
                 entry_guard_describe(guard));
        continue;
      }
```
That is, in select_primary_guard_for_circuit() we require that the bridge have a descriptor only for the GUARD_USAGE_TRAFFIC case, but then in directory_get_from_dirserver() we expect that the bridge will always have a descriptor, even in the GUARD_USAGE_DIRGUARD case.
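The mismatch can be sketched with simplified stand-in types (a toy model of the two call sites, not the real guard code):

```c
#include <stdbool.h>
#include <stddef.h>

typedef enum { GUARD_USAGE_DIRGUARD, GUARD_USAGE_TRAFFIC } guard_usage_t;

typedef struct {
  bool reachable;
  bool has_descriptor;  /* do we have this bridge's descriptor yet? */
} guard_t;

/* Simplified select_primary_guard_for_circuit(): a descriptor is only
 * required for GUARD_USAGE_TRAFFIC, so for GUARD_USAGE_DIRGUARD this
 * can return a guard we have no descriptor for. */
const guard_t *
select_primary_guard(const guard_t *guards, size_t n, guard_usage_t usage)
{
  const int need_descriptor = (usage == GUARD_USAGE_TRAFFIC);
  for (size_t i = 0; i < n; i++) {
    if (!guards[i].reachable)
      continue;
    if (need_descriptor && !guards[i].has_descriptor)
      continue;
    return &guards[i];
  }
  return NULL;
}

/* Simplified directory_get_from_dirserver(): it insists on a descriptor
 * ("node && node->ri"), so a descriptor-less pick from the function
 * above makes it log the notice and abort the fetch. */
bool
can_launch_dir_fetch(const guard_t *node)
{
  return node != NULL && node->has_descriptor;
}
```

With one reachable, descriptor-less bridge, the first function happily returns it for a directory fetch and the second then refuses it, which is exactly the confusing notice above.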
In normal operation this bug isn't a big deal, because it is a race to finish fetching the descriptor before we happen to pick it for asking directory info. But with the #40578 fix, where we defer fetching the descriptor if we won't use the bridge for the GUARD_USAGE_TRAFFIC case, the bug becomes more obvious.
I believe the fix is simply to always set need_descriptor in select_primary_guard_for_circuit() -- meaning when we're going to launch a directory fetch, we always choose among the primary guards that already have descriptors.

## [WARN] Tried connecting to router ... identity keys were not as expected
https://gitlab.torproject.org/tpo/core/tor/-/issues/40735 (cypherpunks, updated 2023-11-14T16:59:05Z)
Background: Tor Browser 12.0, Tor 4.7.12, Windows 7, vanilla bridges.
Repeatedly getting the following log line.
```
[WARN] Tried connecting to router at *address* ID=<none> RSA_ID=*FP1*, but RSA + ed25519 identity keys were not as expected: wanted *FP1* + no ed25519 key but got *FP2* + *edFP*.
```
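The check behind this warning boils down to comparing the identities pinned in the Bridge line against those the remote side presented. A simplified sketch with stand-in types (not the real connection_or.c code):

```c
#include <stdbool.h>
#include <string.h>

/* Outcome of comparing the identity keys we expected (from the Bridge
 * line) against those presented during the link handshake. */
typedef enum { ID_MATCH, ID_MISMATCH } id_check_t;

id_check_t
check_router_identity(const char *wanted_rsa, const char *wanted_ed,
                      const char *got_rsa, const char *got_ed)
{
  if (strcmp(wanted_rsa, got_rsa) != 0)
    return ID_MISMATCH;
  /* An empty wanted_ed models "no ed25519 key pinned": then the ed key
   * is not checked.  In the log above the RSA identities already
   * differ (*FP1* vs *FP2*), so the connection is rejected regardless. */
  if (wanted_ed[0] != '\0' && strcmp(wanted_ed, got_ed) != 0)
    return ID_MISMATCH;
  return ID_MATCH;
}
```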
Ideas of what happened:
* MITM
* The bridge operator reinstalled it between me getting the bridge and now.
What is wrong:
* The bridge should be marked as unreachable: either it is already not being used and connections to it are doomed to waste resources for nothing, or it should not be used because something is clearly wrong with it.
* There should be a way to distinguish the first idea from the second. My best guess is building a tunneled directory connection to the bridge authority and asking: "Is there a bridge *FP2*, and does it listen on *address*?"

## Additional MetricsPort stats for various stages of onion service handshake
https://gitlab.torproject.org/tpo/core/tor/-/issues/40717 (Mike Perry, updated 2023-12-07T14:41:35Z)

If we export additional onion service metrics, such as time measurements on the HSDIR, INTRO, and REND stages of circuit setup for both client and service side, and the number of timeouts/failures there, it would help to uncover the root cause of issues like https://gitlab.torproject.org/tpo/core/tor/-/issues/40570 and related reliability and connectivity issues with onion services.
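As a sketch of what such counters might look like on the MetricsPort, here is a Prometheus-style formatter. The metric names and stage labels are invented for illustration, not the actual metrics schema:

```c
#include <stdio.h>
#include <string.h>

/* Invented stage labels for the three onion-service setup phases
 * discussed above. */
typedef enum { HS_STAGE_HSDIR, HS_STAGE_INTRO, HS_STAGE_REND } hs_stage_t;

static const char *stage_names[] = { "hsdir", "intro", "rend" };

/* Format one pair of Prometheus-style sample lines into buf; the
 * MetricsPort speaks this text exposition format.  Returns the number
 * of characters written. */
int
format_hs_stage_metric(char *buf, size_t buflen, hs_stage_t stage,
                       unsigned long timeouts, double avg_ms)
{
  return snprintf(buf, buflen,
                  "tor_hs_%s_timeouts_total %lu\n"
                  "tor_hs_%s_latency_ms %.1f\n",
                  stage_names[stage], timeouts,
                  stage_names[stage], avg_ms);
}
```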
We can also export the congestion control info from https://gitlab.torproject.org/tpo/core/tor/-/issues/40708 to the onion service metrics set, which can help us with tuning congestion control for onion services.
We can then hook up the OnionPerf onion service instances to our Grafana dashboard and gather more detailed stats that way, as a supplement to the metrics that get graphed on the metrics website.

## Single Onion Service Rends become 7 hop after retry
https://gitlab.torproject.org/tpo/core/tor/-/issues/40702 (Mike Perry, updated 2023-02-09T16:22:00Z)
In `retry_service_rendezvous_point()`, if a rend connect fails for a non-anonymous rend, we promote it to a 7-hop slow rend for some reason.
This will impact non-anonymous onions who want performance, especially during the DoS.
David notes that this decision to fall back to full anonymous mode in the event of timeout or failure was explicitly written just in case a non-anonymous onion service was also behind a restrictive firewall, and that firewall was the thing that happened to cause a timeout. There is also a comment that explains this, believe it or not. Back then, decision making in C-Tor was a bit more...special.
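For context, a toy model of that retry decision (simplified; `keep_one_hop` is a hypothetical opt-out, not an existing option):

```c
#include <stdbool.h>

/* How many hops the service builds toward the rendezvous point.  A
 * single onion service normally uses 1; the client side contributes
 * 3 plus the rendezvous point itself, so falling back to 3 here
 * produces the "7 hop" rend circuit described above. */
int
rend_hops_after_failure(bool single_onion, bool keep_one_hop)
{
  if (!single_onion)
    return 3;                  /* anonymous services always use 3 hops */
  /* Current behavior: retry anonymously, in case a restrictive
   * firewall caused the failure.  Hypothetical alternative: let
   * performance-focused operators opt out of that fallback. */
  return keep_one_hop ? 1 : 3;
}
```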
I bet if we get funders who actually care about single onion performance, they would prefer that their single onions not randomly double in latency on a timeout or failure, just to support the case where some single onion out there might be behind a firewall that they don't know about. Such a funder might suggest that we provide some other option for people behind firewalls to use, instead of this madness.
But I look forward to more research.

## [CircuitPadding] circpad_add_matching_machines() should be called when a circuit has opened
https://gitlab.torproject.org/tpo/core/tor/-/issues/40422 (Jaym, updated 2023-06-09T13:26:45Z)
### Summary
The circuit padding framework supports negotiating padding upon various events. Among them, CIRCPAD_CIRC_OPENED states that a given padding machine should be applied to a circuit when a circuit has opened.
However, no code seems to trigger this mechanism. When a circuit has been built, the function circpad_machine_event_circ_built() is called and checks whether machines should be removed from or added to the circuit. At this stage of the circuit building process, however, the circuit has been built but is not yet marked as open.
### Bug
If some machine uses `client_machine->conditions.apply_state_mask = CIRCPAD_CIRC_OPENED;`, the machine will only be applied when an event other than circuit building/opening triggers circpad_add_matching_machines() (e.g., an AP connection links a stream, or the circuit purpose changes from general to something else).
### What is the expected behavior?
When circuituse.c calls circuit_has_opened(), it should also call into the circpad module; e.g., a new function circpad_machine_event_circ_opened() that checks whether machines should be added to the circuit.
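A sketch of that new hook, with toy stand-ins for the framework's real state in circuitpadding.c (the mask values and struct are illustrative only):

```c
#include <stdbool.h>

/* Toy stand-ins for the lifecycle events a machine's apply_state_mask
 * can match. */
#define CIRCPAD_CIRC_BUILT   (1u << 0)
#define CIRCPAD_CIRC_OPENED  (1u << 1)

typedef struct {
  unsigned apply_state_mask;
  bool attached;
} circpad_machine_t;

/* Hypothetical circpad_machine_event_circ_opened(): called from
 * circuit_has_opened() in circuituse.c, it gives machines whose
 * conditions include CIRCPAD_CIRC_OPENED a chance to attach,
 * mirroring what circpad_machine_event_circ_built() does for
 * CIRCPAD_CIRC_BUILT. */
void
circpad_machine_event_circ_opened(circpad_machine_t *machines, int n)
{
  for (int i = 0; i < n; i++) {
    if (machines[i].apply_state_mask & CIRCPAD_CIRC_OPENED)
      machines[i].attached = true;  /* i.e. circpad_add_matching_machines() */
  }
}
```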
### Environment
Running a version forked from 0.4.5.7
### Relevant logs and/or screenshots
The following logs show a call to circpad_machine_event_circ_built() while the circuit is still marked as building, together with some custom logs:
```
Jun 30 11:23:50.000 [info] circuit_finish_handshake(): Finished building circuit hop:
Jun 30 11:23:50.000 [info] internal (high-uptime) circ (length 3, last hop test000a): $22BA781A60C0CBB7FFAEA8858128427F67F60038(open) $7684DE04DCBB44538554E2CD1D14CDF836D5AF4D(open) $C7ADB1DBCE99F0B2ED2812B1953E4986EE9846DB(open)
Jun 30 11:23:50.000 [debug] dispatch_send_msg_unchecked(): Queued: ocirc_cevent (<gid=7 evtype=2 reason=0 onehop=0>) from or, on ocirc.
Jun 30 11:23:50.000 [debug] dispatcher_run_msg_cbs(): Delivering: ocirc_cevent (<gid=7 evtype=2 reason=0 onehop=0>) from or, on ocirc:
Jun 30 11:23:50.000 [debug] dispatcher_run_msg_cbs(): Delivering to btrack.
Jun 30 11:23:50.000 [debug] btc_cevent_rcvr(): CIRC gid=7 evtype=2 reason=0 onehop=0
Jun 30 11:23:50.000 [debug] circuit_build_times_add_time(): Adding circuit build time 43
Jun 30 11:23:50.000 [debug] circpad_machine_conditions_apply(): Checking circuit purpose, 5
Jun 30 11:23:50.000 [debug] circpad_machine_conditions_apply(): Checking condition state mask 21 vs condition: 2
Jun 30 11:23:50.000 [debug] circpad_machine_conditions_apply(): Checking circuit purpose, 5
Jun 30 11:23:50.000 [debug] circpad_machine_conditions_apply(): Checking circuit purpose, 5
Jun 30 11:23:50.000 [debug] circpad_machine_event_circ_built(): Circpad module event circ built -- circ state: 0
Jun 30 11:23:50.000 [debug] circpad_machine_conditions_apply(): Checking circuit purpose, 5
Jun 30 11:23:50.000 [debug] circpad_machine_conditions_apply(): Checking condition state mask 21 vs condition: 2
Jun 30 11:23:50.000 [debug] circpad_machine_conditions_apply(): Checking circuit purpose, 5
Jun 30 11:23:50.000 [debug] circpad_machine_conditions_apply(): Checking circuit purpose, 5
Jun 30 11:23:50.000 [debug] invoke_plugin_operation_or_default(): Plugin found for caller calling a plugin in the circpad module when a circuit has built
Jun 30 11:23:50.000 [info] circpad_dropmark_activate_when_built(): Looks like the client_dropmark_def machine does not exist over this circuit
Jun 30 11:23:50.000 [debug] plugin_run(): Plugin execution returned -2147483648
Jun 30 11:23:50.000 [debug] plugin_run(): vm error message: (null)
Jun 30 11:23:50.000 [info] entry_guards_note_guard_success(): Recorded success for primary confirmed guard test002r ($22BA781A60C0CBB7FFAEA8858128427F67F60038)
Jun 30 11:23:50.000 [debug] dispatch_send_msg_unchecked(): Queued: ocirc_state (<gid=7 state=4 onehop=0>) from or, on ocirc.
Jun 30 11:23:50.000 [debug] dispatcher_run_msg_cbs(): Delivering: ocirc_state (<gid=7 state=4 onehop=0>) from or, on ocirc:
Jun 30 11:23:50.000 [debug] dispatcher_run_msg_cbs(): Delivering to btrack.
Jun 30 11:23:50.000 [debug] btc_state_rcvr(): CIRC gid=7 state=4 onehop=0
Jun 30 11:23:50.000 [info] circuit_build_no_more_hops(): circuit built!
Jun 30 11:23:50.000 [info] pathbias_count_build_success(): Got success count 3.000000/3.000000 for guard test002r ($22BA781A60C0CBB7FFAEA8858128427F67F60038)
Jun 30 11:23:50.000 [debug] circuit_has_opened(): calling circuit_has_opened()
```
### Possible fixes
Add a new function circpad_machine_event_circ_opened(), called from circuituse.c when the circuit has opened.

Milestone: Tor 0.4.8.x-freeze. Assignee: Mike Perry.

## Circuit Padding attack scenario through circpad_global_max_padding_pct
https://gitlab.torproject.org/tpo/core/tor/-/issues/40277 (Ghost User, updated 2023-06-09T13:26:55Z)
Hello *,
I am from [RadicallyOpenSecurity](https://www.radicallyopensecurity.com/) and currently performing a short review of the [Padding Machines for Tor](https://nlnet.nl/project/TorPaddingMachines/) NGI Zero PET project.
During discussion with the padding machine author Tobias Pulls (@pulls), I noticed a general attack scenario that could be of relevance for the padding framework implementation.
The general idea of the attack is that a malicious Tor client can, under some circumstances, repeatedly force the circuit padding logic of middle relays to reply with more padding bytes than non-padding bytes. After some time of sustained attacks, the target middle relay will develop a statistical circuit padding overhead percentage which is over the network-wide `circpad_global_max_padding_pct` limit (see [section 3.5](https://gitweb.torproject.org/tor.git/tree/doc/HACKING/CircuitPaddingDevelopment.md?id=33380f6b2770a3c4d9e2a9cc8a4b18c71a40571b#n614) of the documentation), at which point it will stop sending padding on any of its circuits.
Once the attack stops, the ratio between sent padding and sent non-padding of the relay goes back to normal over time while the relay serves non-padded cell data to legitimate clients, so it reverts automatically to a safe state again where it has working padding machines.
However, there are several notable details:
* To our knowledge, the currently rolled-out padding machines cannot be used to trigger this problem. This is likely only a problem for future high-volume machines that defend against website fingerprinting attacks.
* Due to the event-based nature of client padding machines, the suppression of padding generation at the middle relay means that the `CIRCPAD_EVENT_PADDING_RECV` transition on the client is never taken. This results in the client not sending any padding, or performing only a very limited subset of the regular padding behavior (depending on the machine definition). This amplifies the practical impact of the missing padding from the relay.
* Since the relay behavior change affects all of its circuits at once and may happen several times over a short time-span, the resulting traffic anomalies may be a fingerprinting opportunity for adversaries that are performing traffic analysis on the network.
* According to @pulls, the consequences of this attack on the relay server would only be logged at `info` level and may go unnoticed by relay operators.
* There is a threshold mechanism at the individual circuit machine level through the `relay->max_padding_percent` setting. At first glance, this appears to mitigate the described attack scenario. However, note that this limit is not applied to the first `relay->allowed_padding_count` cells, so an attacker can circumvent this limitation by repeatedly triggering new padding machines. Since it appears to be beneficial for padding machines to have this initial level of freedom over padding for their effectiveness against fingerprinting, the attack is unlikely to be mitigated with restrictive machine limits without also impacting the usefulness of the machines themselves.
* The attack can also be performed in the other direction by a malicious middle relay against clients. In this case, it would disable/degrade padding on all of the client circuits. However, we regard this variant as less impactful than client->relay attacks, in part because this attack is harder to scale.
Some initial recommendations:
* Higher severity logging when exceeding `circpad_global_max_padding_pct`.
* The consensus-provided global limits could be enforced *per circuit* instead of truly globally. The root of the problem is shared state across circuits. This would still make the global padding limits useful, since in the future there might be multiple machines active throughout the lifetime of a circuit.
* Based on the information from @pulls , the `CircuitPaddingDisabled` (see issue [28693](https://gitlab.torproject.org/legacy/trac/-/issues/28693)) switch might be sufficient as an emergency fail-safe in case of network-wide padding problems.
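The per-circuit variant of the limit can be sketched as follows (toy counters; the real accounting lives in circuitpadding.c, and `allowed_count` mirrors the `relay->allowed_padding_count` exemption described above):

```c
#include <stdbool.h>
#include <stdint.h>

/* Padding counters kept per circuit instead of globally: one abusive
 * client can then no longer push a *shared* ratio over the limit and
 * silence padding on every other circuit. */
typedef struct {
  uint64_t padding_sent;
  uint64_t nonpadding_sent;
} circ_padding_counts_t;

/* Per-circuit version of the circpad_global_max_padding_pct check:
 * allow padding while inside the exempt window, then only while the
 * circuit's padding stays at or under max_pct percent of all cells. */
bool
padding_allowed(const circ_padding_counts_t *c,
                uint64_t allowed_count, unsigned max_pct)
{
  uint64_t total = c->padding_sent + c->nonpadding_sent;
  if (c->padding_sent < allowed_count)
    return true;                      /* still inside the exempt window */
  return c->padding_sent * 100 <= (uint64_t)max_pct * total;
}
```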
I would like to thank @pulls for walking me through the various aspects of related network behavior and providing other helpful suggestions.

Assignee: Mike Perry.

## Padding cells sent with 0ms delay cause circuit failures
https://gitlab.torproject.org/tpo/core/tor/-/issues/31653 (pulls, updated 2023-06-09T13:27:09Z)
Below appears to be a bug that results in a closed circuit due to a relay sending a padding cell (RELAY_COMMAND_DROP) to the client.
At the links below you will find code for two circuit padding machines:
1. circpad_machine_client_close_circuit_minimal
2. circpad_machine_relay_close_circuit_minimal
Machine 1 runs on a client on CIRCUIT_PURPOSE_C_GENERAL circuits with 3 hops as soon as CIRCPAD_CIRC_OPENED fires: the only thing it does is set a timer 1000s in the future and wait for the timer to expire. The purpose of the machine is to negotiate padding with the relay to activate Machine 2 on the middle relay.
Machine 2 runs at the middle relay and repeatedly sends a padding cell to the client 1 usec after the event CIRCPAD_EVENT_NONPADDING_SENT. In other words, for every non-padding cell we directly add a padding cell. At the client, this causes `relay_decrypt_cell(): Incoming cell at client not recognized. Closing.` for all circuits triggering Machine 2 at the relay. Closing a circuit by injecting padding cells seems unintended.
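One plausible mitigation, hinted at by the delay workaround described further down, is to clamp relay-side padding delays to a minimum. A sketch (the constant is illustrative, not a framework value):

```c
#include <stdint.h>

/* Hypothetical minimum delay before a relay-side machine fires padding
 * after CIRCPAD_EVENT_NONPADDING_SENT.  With a 0/1 usec delay, the
 * DROP cell can race the circuit's crypto state at the client (the
 * "Incoming cell at client not recognized" close seen in the logs);
 * clamping avoids sending in the same processing step. */
#define MIN_PADDING_DELAY_USEC 1000

uint32_t
clamp_padding_delay(uint32_t sampled_delay_usec)
{
  return sampled_delay_usec < MIN_PADDING_DELAY_USEC
           ? MIN_PADDING_DELAY_USEC
           : sampled_delay_usec;
}
```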
Here is part of a log from a client on info level showing how the machine registers, negotiates with the relay (starting Machine 2 at the relay), eventually fails to decrypt, and the circuit closes.
```
Sep 05 10:32:19.000 [info] circpad_setup_machine_on_circ(): Registering machine client_close_circuit_minimal to origin circ 3 (5)
Sep 05 10:32:19.000 [info] circpad_node_supports_padding(): Checking padding: supported
Sep 05 10:32:19.000 [info] circpad_negotiate_padding(): Negotiating padding on circuit 3 (5), command 2
Sep 05 10:32:19.000 [info] link_apconn_to_circ(): Looks like completed circuit to [scrubbed] does allow optimistic data for connection to [scrubbed]
Sep 05 10:32:19.000 [info] connection_ap_handshake_send_begin(): Sending relay cell 0 on circ 3296464733 to begin stream 20575.
Sep 05 10:32:19.000 [info] connection_ap_handshake_send_begin(): Address/port sent, ap socket 13, n_circ_id 3296464733
Sep 05 10:32:19.000 [info] rep_hist_note_used_port(): New port prediction added. Will continue predictive circ building for 2355 more seconds.
Sep 05 10:32:19.000 [info] connection_edge_process_inbuf(): data from edge while in 'waiting for circuit' state. Leaving it on buffer.
Sep 05 10:32:19.000 [info] exit circ (length 3): $[manually-scrubbed](open) $[manually-scrubbed](open) $[manually-scrubbed](open)
Sep 05 10:32:19.000 [info] pathbias_count_use_attempt(): Used circuit 3 is already in path state use attempted. Circuit is a General-purpose client currently open.
Sep 05 10:32:19.000 [info] link_apconn_to_circ(): Looks like completed circuit to [scrubbed] does allow optimistic data for connection to [scrubbed]
Sep 05 10:32:19.000 [info] connection_ap_handshake_send_begin(): Sending relay cell 0 on circ 3296464733 to begin stream 20576.
Sep 05 10:32:19.000 [info] connection_ap_handshake_send_begin(): Address/port sent, ap socket 14, n_circ_id 3296464733
Sep 05 10:32:19.000 [info] circpad_deliver_recognized_relay_cell_events(): Got padding cell on origin circuit 3.
Sep 05 10:32:20.000 [info] relay_decrypt_cell(): Incoming cell at client not recognized. Closing.
Sep 05 10:32:20.000 [info] circuit_receive_relay_cell(): relay crypt failed. Dropping connection.
Sep 05 10:32:20.000 [info] command_process_relay_cell(): circuit_receive_relay_cell (backward) failed. Closing.
Sep 05 10:32:20.000 [info] circpad_circuit_machineinfo_free_idx(): Freeing padding info idx 0 on circuit 3 (23)
Sep 05 10:32:20.000 [info] circpad_node_supports_padding(): Checking padding: supported
Sep 05 10:32:20.000 [info] circpad_negotiate_padding(): Negotiating padding on circuit 3 (23), command 1
Sep 05 10:32:20.000 [info] pathbias_send_usable_probe(): Sending pathbias testing cell to 0.233.9.53:25 on stream 20577 for circ 3.
Sep 05 10:32:20.000 [info] relay_decrypt_cell(): Incoming cell at client not recognized. Closing.
Sep 05 10:32:20.000 [info] circuit_receive_relay_cell(): relay crypt failed. Dropping connection.
Sep 05 10:32:20.000 [info] command_process_relay_cell(): circuit_receive_relay_cell (backward) failed. Closing.
Sep 05 10:32:20.000 [info] pathbias_send_usable_probe(): Got pathbias probe request for circuit 3 with outstanding probe
Sep 05 10:32:20.000 [info] pathbias_check_close(): Circuit 3 closed without successful use for reason 2. Circuit purpose 23 currently 1,open. Len 3.
Sep 05 10:32:20.000 [info] circuit_mark_for_close_(): Circuit 3296464733 (id: 3) marked for close at src/core/or/command.c:582 (orig reason: 2, new reason: 0)
Sep 05 10:32:20.000 [info] connection_edge_destroy(): CircID 0: At an edge. Marking connection for close.
Sep 05 10:32:20.000 [info] connection_edge_destroy(): CircID 0: At an edge. Marking connection for close.
Sep 05 10:32:20.000 [info] circuit_free_(): Circuit 0 (id: 3) has been freed.
```
If we delay sending the padding from the relay, I cannot trigger the bug (see the commented-out fix in the Machine 2 function). With the code below, the bug triggers on every circuit that has the machine negotiated.
Code: https://github.com/pylls/tor/tree/circuit-padding-relay-padding-bug (https://github.com/pylls/tor/blob/circuit-padding-relay-padding-bug/src/core/or/circuitpadding_machines.c#L460, as well as circuitpadding_machines.h and the registration in circpad_machines_init() of circuitpadding.c).

Assignee: Mike Perry.

## Ensure circuit padding RTT estimate handles var cells/wide creates
https://gitlab.torproject.org/tpo/core/tor/-/issues/29084 (George Kadianakis, updated 2023-06-15T10:31:27Z)
The use_rtt_estimate field in the circuit padding machines lets machines offset the inter-packet delays by a middle-node estimated RTT value of packets that go from the middle to the exit/website.
We abort this measurement if we get two or more cells back-to-back in either direction, as this indicates that the half-duplex request/response circuit setup and RELAY_BEGIN sequence has finished.
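The measurement and its abort rule can be sketched like this (hypothetical helper names; a simplified model of the circpad RTT logic, not the real code):

```c
#include <stdbool.h>
#include <stdint.h>

typedef struct {
  uint64_t last_sent_at;   /* usec timestamp of last cell toward the exit */
  bool measuring;          /* still collecting RTT samples? */
  uint64_t rtt_estimate;   /* latest middle-to-exit RTT sample, usec */
} rtt_state_t;

/* Record a cell forwarded toward the exit.  Two outbound cells
 * back-to-back mean the half-duplex request/response phase is over,
 * so stop estimating (the abort rule described above). */
void
note_cell_sent_toward_exit(rtt_state_t *s, uint64_t now_usec)
{
  if (s->last_sent_at != 0)
    s->measuring = false;          /* back-to-back outbound: abort */
  else
    s->last_sent_at = now_usec;
}

/* Record a cell coming back from the exit.  Back-to-back inbound
 * cells likewise abort; otherwise a full round trip just completed,
 * so take an RTT sample and re-arm. */
void
note_cell_received_from_exit(rtt_state_t *s, uint64_t now_usec)
{
  if (s->last_sent_at == 0) {
    s->measuring = false;          /* back-to-back inbound: abort */
    return;
  }
  if (s->measuring)
    s->rtt_estimate = now_usec - s->last_sent_at;
  s->last_sent_at = 0;
}
```

In this model, a multi-cell EXTEND2 handshake would trip the back-to-back abort, which is why the cell grouping (e.g. by RELAY_EARLY) matters below.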
However, if we switch to a multi-cell circuit handshake, then we will need to take that into account when measuring RTT.
If RELAY_EARLY is used only for the first cell of a multi-cell EXTEND2 payload, then we can just count the time between RELAY_EARLYs. But the proposal currently says MAY, not MUST, for this behavior.

Assignee: Mike Perry.