Some relays have been getting this warning for the past few days. It's unclear what's causing it.
From briefly reading the code, it looks like someone is trying to establish a rendezvous point (RP) on the relay even though they have already asked the relay to act as an RP on the same circuit. I'm not sure whether such a client can actually succeed in establishing multiple RPs on one circuit. That is, it's still unclear whether this is a Tor client bug or perhaps a DoS attacker.
We should probably look into this message a bit more.
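For anyone skimming, here is a minimal stand-alone sketch (simplified names and types, not the actual rendmid.c code) of the relay-side behavior behind this warning: the first ESTABLISH_RENDEZVOUS cell moves the circuit into the pending-rendezvous purpose, so a second cell arriving on the same circuit trips the non-OR-purpose check and is refused.

#include <stdio.h>

enum circuit_purpose {
  PURPOSE_OR,                 /* plain relayed circuit, no rend role yet */
  PURPOSE_REND_POINT_WAITING  /* "Acting as rendevous (pending)" */
};

struct circuit { enum circuit_purpose purpose; unsigned cookie_set; };

/* Returns 0 on success, -1 if the request is refused. */
static int handle_establish_rendezvous(struct circuit *circ)
{
  if (circ->purpose != PURPOSE_OR) {
    printf("[warn] Tried to establish rendezvous on non-OR circuit\n");
    return -1;                       /* relay then closes the circuit */
  }
  circ->cookie_set = 1;
  circ->purpose = PURPOSE_REND_POINT_WAITING;
  return 0;
}

int main(void)
{
  struct circuit c = { PURPOSE_OR, 0 };
  handle_establish_rendezvous(&c);   /* first cell: succeeds */
  handle_establish_rendezvous(&c);   /* second cell on same circuit: warns */
  return 0;
}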
I am experiencing the same thing. It now happens every couple of minutes. The first occurrences were on Apr 20, and were really infrequent (once a day or so). My relay is on v0.2.6.7 and was started on Apr 19, three hours before this message first appeared.
This happens every minute these days on moria5. Here's a hint:
May 21 12:06:06.383 [info] rend_mid_establish_rendezvous(): Received an ESTABLISH_RENDEZVOUS request on circuit 1773468333
May 21 12:06:06.383 [info] rend_mid_establish_rendezvous(): Established rendezvous point on circuit 1773468333 for cookie F24FC724
May 21 12:06:06.486 [info] rend_mid_establish_rendezvous(): Received an ESTABLISH_RENDEZVOUS request on circuit 1773468333
May 21 12:06:06.486 [warn] Tried to establish rendezvous on non-OR circuit with purpose Acting as rendevous (pending)
Unclear yet if it's a bug with our Tor client, or somebody else's intentionally misbehaving Tor client. Sure seems to happen a lot.
I just got this as well. I am running a relay named VickMackey. I've only been up about a week, and my platform is Tor 0.2.6.9 on Linux. The log entry is as follows:
Jun 21 08:29:11.000 [warn] Tried to establish rendezvous on non-OR circuit with purpose Acting as rendevous (pending)
Looking at the code, clients send ESTABLISH_RENDEZVOUS cells in rend_client_send_establish_rendezvous(), reached through rend_client_rendcirc_has_opened(), which is called only in circuit_has_opened(). Hence, at first glance there doesn't seem to be any relaunch mechanism that would establish two rendezvous points on the same circuit.
Next step is to check if this can happen through cannibalization or something. Otherwise, it does seem like a misbehaving Tor client.
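To make that call chain concrete, here is a rough stand-alone model (made-up simplified types, not the real Tor code): circuit_has_opened() dispatches on the circuit purpose and, for a rendezvous-establishment circuit, ends up sending exactly one ESTABLISH_RENDEZVOUS cell. Anything that made circuit_has_opened() run twice for the same circuit would therefore put two cells on that circuit.

#include <stdio.h>

enum purpose { C_GENERAL, C_ESTABLISH_REND };
struct circuit { enum purpose purpose; int cells_sent; };

static void rend_client_send_establish_rendezvous(struct circuit *circ)
{
  circ->cells_sent++;                 /* one ESTABLISH_RENDEZVOUS per call */
  printf("sent ESTABLISH_RENDEZVOUS (total %d on this circuit)\n",
         circ->cells_sent);
}

static void rend_client_rendcirc_has_opened(struct circuit *circ)
{
  rend_client_send_establish_rendezvous(circ);
}

static void circuit_has_opened(struct circuit *circ)
{
  switch (circ->purpose) {
    case C_ESTABLISH_REND:
      rend_client_rendcirc_has_opened(circ);  /* the only sender of the cell */
      break;
    default:
      break;
  }
}

int main(void)
{
  struct circuit c = { C_ESTABLISH_REND, 0 };
  circuit_has_opened(&c);   /* expected: exactly one cell on this circuit */
  return 0;
}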
I don't see anything in the cannibalization process that could trigger a client to choose a circuit with the purpose CIRCUIT_PURPOSE_C_ESTABLISH_REND. circuit_find_to_cannibalize() only returns general-purpose circuits.
I dug a bit here and what I think can possibly happen (the code is quite difficult to follow, so somebody else should confirm) is this: in circuit_get_open_circ_or_launch(), towards the end, when we open a new circuit with circuit_launch_by_extend_info(), if the circuit returned by that function happens to already be open (CIRCUIT_STATE_OPEN), we call circuit_has_opened(). But in circuit_send_next_onion_skin(), when the state of the circuit's cpath reaches CPATH_STATE_OPEN, we also call circuit_has_opened().
So what could be happening (NOT 100% sure) is: we ask for a new circuit, it opens super fast, circuit_send_next_onion_skin() sets the circuit state to open and calls circuit_has_opened(), and then circuit_has_opened() is called again in circuit_get_open_circ_or_launch() because the circuit we got back there is already open. That would explain why this error does not occur for every ESTABLISH_RENDEZVOUS cell in the network but only for some of them: it's a clear race condition, if I'm right.
Someone with better knowledge of circuit building should look at that in more depth though because I'm a bit unsure of the fix (if any...).
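Here is a toy model of that suspected double call, under the assumptions above (function names borrowed from the discussion, everything else simplified and made up): if the launched circuit is already open by the time circuit_launch_by_extend_info() returns, circuit_has_opened() runs once from the onion-skin path and once more from circuit_get_open_circ_or_launch().

#include <stdio.h>

enum state { STATE_BUILDING, STATE_OPEN };
struct circuit { enum state state; int has_opened_calls; };

static void circuit_has_opened(struct circuit *c) { c->has_opened_calls++; }

static void circuit_send_next_onion_skin(struct circuit *c)
{
  c->state = STATE_OPEN;        /* last hop done: mark the circuit open... */
  circuit_has_opened(c);        /* ...and notify (first call) */
}

static struct circuit *circuit_launch_by_extend_info(struct circuit *c)
{
  circuit_send_next_onion_skin(c);   /* imagine the build completes at once */
  return c;
}

static void circuit_get_open_circ_or_launch(struct circuit *storage)
{
  struct circuit *circ = circuit_launch_by_extend_info(storage);
  if (circ->state == STATE_OPEN)
    circuit_has_opened(circ);        /* second call on the same circuit */
}

int main(void)
{
  struct circuit c = { STATE_BUILDING, 0 };
  circuit_get_open_circ_or_launch(&c);
  printf("circuit_has_opened() ran %d times\n", c.has_opened_calls); /* 2 */
  return 0;
}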
Does anyone want to try to "confirm" my comment:8? :)
If it sounds plausible, the fix is not complicated, but someone needs to follow the code and confirm that this is actually possible. Deferring to 030 because I don't think this is critical for 029 at all. Feel free to roll back if you think otherwise.
Trac: Milestone: Tor: 0.2.9.x-final to Tor: 0.3.0.x-final
Recently got this message 3 times on 0.2.9.6-rc (on 16 December 2016 and 18 December 2016), so it's still happening.
Branch bug15618_030-testing in dgoulet's repo includes a patch that will help us confirm or rule out the theory described at comment:8. I have compiled that branch and hardcoded the relay running it as RP, but I can't properly test it at this time because of #21056 (moved), so we'll have to wait a little more; I'll report back with results.
The problem is the message:
[warn] BLAM. Circuit has_opened() called 3 times.
is logged on the client instance when trying to open more than one concurrent connection to the same onion service with the same (hardcoded) RP but different circuits, because requests are sent with SOCKS username/password isolation. Nothing unusual on the relay instance - no warnings at all.
With the log at debug level on the client instance, I was able to see that the hardcoded RP is hop number 4 in the majority of circuits, which means cannibalization, because the RP counted from the client side should be hop number 3, right?
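For reference, the instrumentation behind that "BLAM" message is essentially a per-circuit call counter; a hedged sketch (the field name and wording are guesses, not the actual bug15618_030-testing patch):

#include <stdio.h>

struct circuit { int has_opened_count; };

static void circuit_has_opened(struct circuit *circ)
{
  if (++circ->has_opened_count > 1)
    printf("[warn] BLAM. Circuit has_opened() called %d times.\n",
           circ->has_opened_count);
  /* ...normal purpose dispatch would follow here... */
}

int main(void)
{
  struct circuit c = { 0 };
  circuit_has_opened(&c);   /* silent */
  circuit_has_opened(&c);   /* logs the BLAM warning */
  circuit_has_opened(&c);   /* "called 3 times", as seen above */
  return 0;
}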
Update: I guess we can rule out the theory at comment:8. Running the testing branch, I hit the initial warning today:
Jan 05 06:21:49.000 [warn] Tried to establish rendezvous on non-OR circuit with purpose Acting as rendevous (pending)
...without the circuit has_opened() warning. This means the warning appears for some other reason. We will see in #21084 (moved) whether has_opened() being called more than 2 times is a problem or not, but I think it's unrelated to this ticket.
My feeling is that the probability of this being an (intentionally) broken client is relatively low: I am getting it on a very low consensus weight relay far more often than a few (intentionally) broken clients would be expected to pick this relay for a circuit, yet too rarely to indicate a DoS attack of some sort.
Getting it as well on a regular basis on my exit running latest master. Keeping this ticket open. I have a feeling it is related to introduction point circuits, but it's just a feeling.
Saw this today as well under 0.3.0.8 on Debian Jessie:
Jul 01 06:41:33.000 [notice] Tor 0.3.0.8 (git-72ffaeea27041e95) opening new log file.
Jul 01 07:49:53.000 [warn] Tried to establish rendezvous on non-OR circuit with purpose Acting as rendevous (pending)
Jul 01 08:21:28.000 [warn] Tried to establish rendezvous on non-OR circuit with purpose Acting as rendevous (pending)
Jul 01 10:47:06.000 [warn] Tried to establish rendezvous on non-OR circuit with purpose Acting as rendevous (pending)
Jul 01 11:48:21.000 [warn] Tried to establish rendezvous on non-OR circuit with purpose Acting as rendevous (pending)
Jul 01 12:05:19.000 [warn] Tried to establish rendezvous on non-OR circuit with purpose Acting as rendevous (pending)
Jul 01 12:14:40.000 [warn] Tried to establish rendezvous on non-OR circuit with purpose Acting as rendevous (pending)
Jul 01 12:22:32.000 [warn] Tried to establish rendezvous on non-OR circuit with purpose Acting as rendevous (pending)
Jul 01 12:23:20.000 [warn] Tried to establish rendezvous on non-OR circuit with purpose Acting as rendevous (pending)
Jul 01 12:31:30.000 [notice] Heartbeat: Tor's uptime is 20 days 0:00 hours, with 3811 circuits open. I've sent 62231.98 GB and received 61576.32 GB.
Jul 01 12:31:30.000 [notice] Circuit handshake stats since last time: 12900/12900 TAP, 111930/111930 NTor.
Jul 01 12:31:30.000 [notice] Since startup, we have initiated 0 v1 connections, 0 v2 connections, 0 v3 connections, and 35407 v4 connections; and received 2434 v1 connections, 1737 v2 connections, 10107 v3 connections, and 977409 v4 connections.
Jul 01 12:58:13.000 [warn] Tried to establish rendezvous on non-OR circuit with purpose Acting as rendevous (pending)
Jul 01 13:14:03.000 [warn] Tried to establish rendezvous on non-OR circuit with purpose Acting as rendevous (pending)
Jul 01 16:47:35.000 [warn] Tried to establish rendezvous on non-OR circuit with purpose Acting as rendevous (pending)
Jul 01 17:00:43.000 [warn] Tried to establish rendezvous on non-OR circuit with purpose Acting as rendevous (pending)
Jul 01 17:54:01.000 [warn] Tried to establish rendezvous on non-OR circuit with purpose Acting as rendevous (pending)
Jul 01 18:31:30.000 [notice] Heartbeat: Tor's uptime is 20 days 6:00 hours, with 3854 circuits open. I've sent 62438.97 GB and received 61780.52 GB.
Jul 01 18:31:30.000 [notice] Circuit handshake stats since last time: 7931/7931 TAP, 136428/136428 NTor.
Jul 01 18:31:30.000 [notice] Since startup, we have initiated 0 v1 connections, 0 v2 connections, 0 v3 connections, and 35782 v4 connections; and received 2450 v1 connections, 1756 v2 connections, 10159 v3 connections, and 993920 v4 connections.
Jul 01 20:25:13.000 [warn] Tried to establish rendezvous on non-OR circuit with purpose Acting as rendevous (pending)
Jul 01 20:59:04.000 [warn] Tried to establish rendezvous on non-OR circuit with purpose Acting as rendevous (pending)
Jul 01 22:26:56.000 [warn] Tried to establish rendezvous on non-OR circuit with purpose Acting as rendevous (pending)
Jul 01 23:02:34.000 [warn] Tried to establish rendezvous on non-OR circuit with purpose Acting as rendevous (pending)
Jul 02 00:21:16.000 [warn] Tried to establish rendezvous on non-OR circuit with purpose Acting as rendevous (pending)
Jul 02 00:31:30.000 [notice] Heartbeat: Tor's uptime is 20 days 12:00 hours, with 7998 circuits open. I've sent 62679.06 GB and received 62018.22 GB.
Jul 02 00:31:30.000 [notice] Circuit handshake stats since last time: 8533/8533 TAP, 168507/168507 NTor.
Jul 02 00:31:30.000 [notice] Since startup, we have initiated 0 v1 connections, 0 v2 connections, 0 v3 connections, and 36075 v4 connections; and received 2470 v1 connections, 1769 v2 connections, 10252 v3 connections, and 1011297 v4 connections.
Jul 02 02:33:25.000 [warn] Tried to establish rendezvous on non-OR circuit with purpose Acting as rendevous (pending)
Jul 02 04:25:30.000 [warn] Tried to establish rendezvous on non-OR circuit with purpose Acting as rendevous (pending)
Jul 02 05:22:09.000 [warn] Tried to establish rendezvous on non-OR circuit with purpose Acting as rendevous (pending)
Jul 02 06:30:03.000 [notice] Received reload signal (hup). Reloading config and resetting internal state.
My relay running 0.3.1.2-alpha-dev (git-1763aa058b08a1c5) reported this on Jul 01 (yesterday) and Jul 02 (today) more than usual: it normally logs this warning a few times per month, but yesterday and today it logged it 24 times, with a few hours between each warning.
Maybe this occurs when the relay in question is being used as an introduction point (or maybe an HSDir) that someone tries to attack or de-anonymize using some weird technique? I am keeping an eye on this, and I can't find a logical explanation for why we sometimes see it and sometimes not... not being able to reproduce sucks.
I've made this reproducible, but with a change to the tor code... I wanted to test my theory that at least two ESTABLISH_RENDEZVOUS cells are sent to the RP on the same circuit. I modified Tor to pin an RP I control and then made it call rend_client_rendcirc_has_opened() twice in circuit_has_opened().
And here you go, you can see this log being triggered on my relay (the RP):
Jul 02 18:31:47.000 [warn] Tried to establish rendezvous on non-OR circuit with purpose Acting as rendevous (pending)
This means that either the tor client can call circuit_has_opened() twice, which means it can send two cells on the same circuit (#21084 (moved)), or some other client implementation out there is doing the wrong thing and triggering this warning everywhere.
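Roughly what that reproduction change amounts to, as a stand-alone sketch (not the actual patch; the RP pinning is omitted and the types are simplified): the rendezvous branch of circuit_has_opened() simply invokes the handler twice, so two ESTABLISH_RENDEZVOUS cells go out on one circuit and the RP logs the warning.

#include <stdio.h>

struct circuit { int cells_sent; };

/* Stand-in for the real handler: each call sends one ESTABLISH_RENDEZVOUS. */
static void rend_client_rendcirc_has_opened(struct circuit *circ)
{
  circ->cells_sent++;
  printf("sent ESTABLISH_RENDEZVOUS (%d on this circuit)\n", circ->cells_sent);
}

/* Test-only shape of the rendezvous branch of circuit_has_opened(): the
 * handler is deliberately invoked twice, so the pinned RP sees two
 * ESTABLISH_RENDEZVOUS cells on one circuit and logs the warning. */
static void circuit_has_opened_rend_branch(struct circuit *circ)
{
  rend_client_rendcirc_has_opened(circ);
  rend_client_rendcirc_has_opened(circ);  /* duplicate on purpose */
}

int main(void)
{
  struct circuit c = { 0 };
  circuit_has_opened_rend_branch(&c);
  return 0;
}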
We should probably downgrade that log_warn to a PROTOCOL warning because anyone out there can send two cells on the same circuit just "for fun"...
Quick summary after a discussion on IRC with asn. Basically, we agree that any custom client can trigger this warning, so changing it to a protocol warning seems wise. However, there is a possibility that the tor client itself sends the cell twice, via two calls to circuit_has_opened() on the RP circuit. So trying to fix/confirm #21084 (moved) is the thing to do.
In the meantime, we'll keep that log at warning level, so if we ever fix #21084 (moved) and see that the warning has disappeared for a while, we'll know that we might have fixed something. In time, we'll change that log warn to a proto warn.
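A sketch of what the eventual downgrade would look like, assuming the usual idiom for protocol warnings in the Tor code base (LOG_PROTOCOL_WARN messages are only escalated to warn severity on relays that set ProtocolWarnings 1); this is a paraphrase, not a reviewed patch:

  /* In rend_mid_establish_rendezvous(), replace the plain log_warn() with a
   * protocol warning so hostile or broken clients can't spam relay logs: */
  log_fn(LOG_PROTOCOL_WARN, LD_PROTOCOL,
         "Tried to establish rendezvous on non-OR circuit with purpose %s",
         circuit_purpose_to_string(circ->base_.purpose));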
All this makes sense to me, except I cannot help asking myself the following questions:
Of all the relays I run, I see this warning on just one of them, but I see it heavily there; the relay in question is an Exit relay with very low consensus weight (200 - 350). For example, in 11 days and 18 hours of Tor uptime, the warning was logged exactly 112 times.
A custom client triggering this simply should not pick this particular relay so many times in such short periods, especially given its low consensus weight, and especially since it is an Exit relay and so shouldn't even be used in the RP position anyway. If a custom client is over-picking this relay for a position (RP) that it shouldn't hold given its flags, and does not respect the load balancing logic of picking based on consensus weights, we want to know why.
A regular client affected by #21084 (moved) would just be running stock Tor, so it is at least questionable whether it would choose a low consensus weight relay with the Exit flag for the RP position so many times in such a short period of time.
Good thinking about the downgrade from warn to proto warn - I strongly support leaving this at warning level at least until we fix/confirm #21084 (moved).
As suggested by arma on IRC, I am running ProtocolWarnings 1 on the relay where I experience this. Unfortunately, it shows nothing beyond what we already have.
Each time this warning is printed (11 times in this particular log file), it is only surrounded by:
Oct 09 14:55:58.000 [warn] Attempt by [scrubbed] to open a stream on first hop of circuit. Closing.
Oct 09 14:56:03.000 [warn] Attempt by [scrubbed] to open a stream on first hop of circuit. Closing.
Oct 09 14:56:08.000 [warn] Attempt by [scrubbed] to open a stream on first hop of circuit. Closing.
Oct 09 14:56:11.000 [warn] Tried to establish rendezvous on non-OR circuit with purpose Acting as rendevous (pending)
Oct 09 14:56:13.000 [warn] Attempt by [scrubbed] to open a stream on first hop of circuit. Closing.
Oct 09 14:56:18.000 [warn] Attempt by [scrubbed] to open a stream on first hop of circuit. Closing.
Oct 09 14:56:23.000 [warn] Attempt by [scrubbed] to open a stream on first hop of circuit. Closing.
The surrounding messages are kind of irrelevant because this is by far the most common protocol warning I get. There are 14569 warnings about opening a stream on first hop of circuit in about ~36 hours.
I checked the relay for maintenance just now and, surprise: there were over 100 "[warn] Tried to establish rendezvous on non-OR circuit with purpose Acting as rendevous (pending)" entries in the log file, and they continued to appear, so I switched to log level debug and sent a HUP signal to the Tor daemon to see in depth what is going on.
We can be sure it's what dgoulet indicated at comment:27 (two ESTABLISH_RENDEZVOUS cells on the same circuit). I am providing some debug logs, filtered only for the circuit ID we are interested in since the original file is very large:
Oct 31 00:02:47.000 [debug] command_process_create_cell: Got a CREATE cell for circ_id 3320782221 on channel 496023 (0x805b1cd00)
Oct 31 00:02:47.000 [debug] circuit_get_by_circid_channel_impl: circuit_get_by_circid_channel_impl() found nothing for circ_id 3320782221, channel ID 496023 (0x805b1cd00)
Oct 31 00:02:47.000 [debug] circuitmux_attach_circuit: Attaching circuit 3320782221 on channel 496023 to cmux 0x80830a9a0
Oct 31 00:02:47.000 [debug] circuit_get_by_circid_channel_impl: circuit_get_by_circid_channel_impl() returning circuit 0x8052ce4c0 for circ_id 3320782221, channel ID 496023 (0x805b1cd00)
Oct 31 00:02:47.000 [info] rend_mid_establish_rendezvous: Received an ESTABLISH_RENDEZVOUS request on circuit 3320782221
Oct 31 00:02:47.000 [info] rend_mid_establish_rendezvous: Established rendezvous point on circuit 3320782221 for cookie D21DCC69
Oct 31 00:02:49.000 [debug] circuit_get_by_circid_channel_impl: circuit_get_by_circid_channel_impl() returning circuit 0x8052ce4c0 for circ_id 3320782221, channel ID 496023 (0x805b1cd00)
Oct 31 00:02:49.000 [info] rend_mid_establish_rendezvous: Received an ESTABLISH_RENDEZVOUS request on circuit 3320782221
Oct 31 00:02:49.000 [warn] Tried to establish rendezvous on non-OR circuit with purpose Acting as rendevous (pending)
Oct 31 00:02:49.000 [debug] circuit_get_by_circid_channel_impl: circuit_get_by_circid_channel_impl() returning circuit 0x8052ce4c0 for circ_id 3320782221, channel ID 496023 (0x805b1cd00)
Oct 31 00:02:49.000 [info] circuit_mark_for_close_: Circuit 0 (id: 0) marked for close at src/or/rendmid.c:272 (orig reason: 1, new reason: 1)
Oct 31 00:02:49.000 [debug] channel_send_destroy: Sending destroy (circID 3320782221) on channel 0x805b1cd00 (global ID 496023)
At 02:47 we successfully established a rendezvous with cookie D21DCC69 on this circuit, and at 02:49 another ESTABLISH_RENDEZVOUS was sent on the same circuit, which caused the circuit to be marked for close (src/or/rendmid.c:272).
I still wonder why this sometimes happens heavily, like today (over 300 warnings so far), and sometimes not. The relay has low consensus weight and the Exit flag, so its probability of being chosen as RP should be 0 if we consider its flags, and 0.00something if we only consider its consensus weight.
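For a rough sense of scale (illustrative numbers, not measurements): with a consensus weight of around 300 out of a network total on the order of 60,000,000, a purely weight-proportional selection would pick this relay about 300 / 60,000,000 ≈ 0.0005% of the time per RP choice, and a client following the flag rules should not pick an Exit-flagged relay as RP at all, so hundreds of rendezvous attempts per day landing here by chance looks implausible.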
Hey s7r! You seem to have DEBUG logs there; do you think you can attach them (or send them to me privately if sensitive)? I would like to see a bit more of the internals of the tor service side for this.
The fact that we only see this "sometimes", and that we see it in bursts, tells me that it is likely someone doing this either on purpose or a VERY buggy implementation. The tor client can't open 300 circuits to a service in a relatively short amount of time and hit the bug on all of them... that seems VERY unlikely; we would see it more often.
> Hey s7r! You seem to have DEBUG logs there; do you think you can attach them (or send them to me privately if sensitive)? I would like to see a bit more of the internals of the tor service side for this.
Yes I do :) I've been hunting this for some time now. I was lucky to catch it live, so I immediately switched to debug log level. I got about 2 GB of debug logs (uncompressed).
I will send them to you via a secure channel: first because they might be sensitive for whoever was using this relay, and second because I feel something very strange is going on and I don't want to tip anyone off about which relays are keeping an eye on this.
> The fact that we only see this "sometimes", and that we see it in bursts, tells me that it is likely someone doing this either on purpose or a VERY buggy implementation. The tor client can't open 300 circuits to a service in a relatively short amount of time and hit the bug on all of them... that seems VERY unlikely; we would see it more often.
> How are those 300 warnings spread out over time?
Indeed I would say VERY VERY unlikely. It's maybe more than that, in less than 24 hours.
For example, only in the last 47 minutes a wc -l on the log file indicates 53 such warnings.
Running Tor version 0.3.1.9 (git-df96a13e9155c7bf), I'm seeing thousands of these in my exit relay's logs. They start almost immediately after the relay is turned on, and continue at this pace as long as it's running:
Jan 02 13:52:36.000 [warn] Tried to establish rendezvous on non-OR circuit with purpose Acting as rendevous (pending)
Jan 02 13:53:10.000 [warn] Tried to establish rendezvous on non-OR circuit with purpose Acting as rendevous (pending)
Jan 02 13:53:34.000 [notice] Self-testing indicates your DirPort is reachable from the outside. Excellent. Publishing server descriptor.
Jan 02 13:53:40.000 [notice] Performing bandwidth self-test...done.
Jan 02 13:54:51.000 [warn] Tried to establish rendezvous on non-OR circuit with purpose Acting as rendevous (pending)
I'm seeing the same behaviour on Debian Jessie with 0.3.2.8-rc-1~d80.jessie+1:
Jan 02 23:40:21.000 [notice] Bootstrapped 0%: Starting
Jan 02 23:40:32.000 [notice] Starting with guard context "default"
Jan 02 23:40:32.000 [notice] Bootstrapped 80%: Connecting to the Tor network
Jan 02 23:40:32.000 [notice] Signaled readiness to systemd
Jan 02 23:40:32.000 [notice] Opening Control listener on /var/run/tor/control
Jan 02 23:40:32.000 [notice] Self-testing indicates your ORPort is reachable from the outside. Excellent.
Jan 02 23:40:33.000 [notice] Bootstrapped 85%: Finishing handshake with first hop
Jan 02 23:40:33.000 [notice] Bootstrapped 90%: Establishing a Tor circuit
Jan 02 23:40:34.000 [notice] Tor has successfully opened a circuit. Looks like client functionality is working.
Jan 02 23:40:34.000 [notice] Bootstrapped 100%: Done
Jan 02 23:40:36.000 [notice] Self-testing indicates your DirPort is reachable from the outside. Excellent. Publishing server descriptor.
Jan 02 23:40:48.000 [warn] Tried to establish rendezvous on non-OR circuit with purpose Acting as rendevous (pending)
cat notices.log | grep "Tried to establish rendezvous on non-OR circuit with purpose Acting as rendevous" | wc -l
688
Tried to establish rendezvous on non-OR circuit with purpose Acting as rendevous (pending)
appears roughly every two minutes in the logs. Restarts of Tor do not prevent the occurrence.
/var/log/tor/log (excerpt around restart of Tor)
Jan 06 16:01:24.000 [warn] Tried to establish rendezvous on non-OR circuit with purpose Acting as rendevous (pending)
Jan 06 16:01:26.000 [warn] Tried to establish rendezvous on non-OR circuit with purpose Acting as rendevous (pending)
Jan 06 16:03:14.000 [warn] Tried to establish rendezvous on non-OR circuit with purpose Acting as rendevous (pending)
Jan 06 16:03:46.000 [warn] Tried to establish rendezvous on non-OR circuit with purpose Acting as rendevous (pending)
Jan 06 16:06:25.000 [warn] Tried to establish rendezvous on non-OR circuit with purpose Acting as rendevous (pending)
Jan 06 16:07:51.000 [warn] Tried to establish rendezvous on non-OR circuit with purpose Acting as rendevous (pending)
Jan 06 16:08:09.000 [notice] Interrupt: we have stopped accepting new connections, and will shut down in 30 seconds. Interrupt again to exit now.
Jan 06 16:08:39.000 [notice] Clean shutdown finished. Exiting.
Jan 06 16:08:49.000 [notice] Tor 0.3.1.9 (git-df96a13e9155c7bf) opening log file.
Jan 06 16:08:49.617 [notice] Tor 0.3.1.9 (git-df96a13e9155c7bf) running on Linux with Libevent 2.0.21-stable, OpenSSL 1.0.2g, Zlib 1.2.8, Liblzma 5.1.0alpha, and Libzstd N/A.
Jan 06 16:08:49.618 [notice] Tor can't help you if you use it wrong! Learn how to be safe at https://www.torproject.org/download/download#warning
Jan 06 16:08:49.619 [notice] Read configuration file "/usr/share/tor/tor-service-defaults-torrc".
Jan 06 16:08:49.619 [notice] Read configuration file "/etc/tor/torrc".
Jan 06 16:08:49.630 [notice] Based on detected system memory, MaxMemInQueues is set to 369 MB. You can override this by setting MaxMemInQueues by hand.
Jan 06 16:08:49.633 [warn] Tor is running as an exit relay. If you did not want this behavior, please set the ExitRelay option to 0. If you do want to run an exit Relay, please set the ExitRelay option to 1 to disable this warning, and for forward compatibility.
Jan 06 16:08:49.637 [notice] Opening Socks listener on 127.0.0.1:9050
Jan 06 16:08:49.637 [notice] Opening Control listener on 127.0.0.1:9051
Jan 06 16:08:49.637 [notice] Opening OR listener on 0.0.0.0:9001
Jan 06 16:08:49.637 [notice] Opening Directory listener on 0.0.0.0:9030
Jan 06 16:09:25.000 [notice] Parsing GEOIP IPv4 file /usr/share/tor/geoip.
Jan 06 16:09:25.000 [notice] Parsing GEOIP IPv6 file /usr/share/tor/geoip6.
Jan 06 16:09:26.000 [notice] Configured to measure statistics. Look for the *-stats files that will first be written to the data directory in 24 hours from now.
Jan 06 16:09:32.000 [notice] Your Tor server's identity key fingerprint is 'drubnDe DE25A1868DFF9DCF4CF0A3B4B0FB7CD099B6840B'
Jan 06 16:09:32.000 [notice] Bootstrapped 0%: Starting
Jan 06 16:11:37.000 [notice] Starting with guard context "default"
Jan 06 16:11:37.000 [notice] Bootstrapped 80%: Connecting to the Tor network
Jan 06 16:11:38.000 [notice] Signaled readiness to systemd
Jan 06 16:11:38.000 [notice] Opening Control listener on /var/run/tor/control
Jan 06 16:11:39.000 [notice] Bootstrapped 85%: Finishing handshake with first hop
Jan 06 16:11:39.000 [notice] Self-testing indicates your ORPort is reachable from the outside. Excellent.
Jan 06 16:11:39.000 [notice] Bootstrapped 90%: Establishing a Tor circuit
Jan 06 16:11:41.000 [notice] Tor has successfully opened a circuit. Looks like client functionality is working.
Jan 06 16:11:41.000 [notice] Bootstrapped 100%: Done
Jan 06 16:12:41.000 [notice] Self-testing indicates your DirPort is reachable from the outside. Excellent. Publishing server descriptor.
Jan 06 16:12:42.000 [notice] Performing bandwidth self-test...done.
Jan 06 16:15:27.000 [warn] Tried to establish rendezvous on non-OR circuit with purpose Acting as rendevous (pending)
Jan 06 16:16:44.000 [warn] Tried to establish rendezvous on non-OR circuit with purpose Acting as rendevous (pending)
Jan 06 16:17:32.000 [warn] Tried to establish rendezvous on non-OR circuit with purpose Acting as rendevous (pending)
Jan 06 16:19:36.000 [warn] Tried to establish rendezvous on non-OR circuit with purpose Acting as rendevous (pending)
Jan 06 16:20:12.000 [warn] Tried to establish rendezvous on non-OR circuit with purpose Acting as rendevous (pending)
Jan 06 16:22:34.000 [warn] Tried to establish rendezvous on non-OR circuit with purpose Acting as rendevous (pending)
Jan 06 16:22:55.000 [warn] Tried to establish rendezvous on non-OR circuit with purpose Acting as rendevous (pending)
@dgoulet @asn are we still hunting this? Or should we make this a protocol warning?
The Exit relay where I have received so many of these warnings was taken offline for a few weeks because of an office move. I started it again a little over 24 hours ago, and after it got measured and left the Unmeasured state it generated about 190 of these warnings in less than 8 hours. Same story: just a relay (no client, no SocksPort, no onion service hosted).
Jun 23 14:35:07.000 [warn] Tried to establish rendezvous on non-OR circuit with purpose Acting as rendezvous (pending)
Jun 23 14:36:45.000 [warn] Tried to establish rendezvous on non-OR circuit with purpose Acting as rendezvous (pending)
Jun 23 14:39:28.000 [warn] Tried to establish rendezvous on non-OR circuit with purpose Acting as rendezvous (pending)
Jun 23 14:45:32.000 [warn] Tried to establish rendezvous on non-OR circuit with purpose Acting as rendezvous (pending)
Jun 23 14:48:14.000 [warn] Tried to establish rendezvous on non-OR circuit with purpose Acting as rendezvous (pending)
Jun 23 14:51:14.000 [warn] Tried to establish rendezvous on non-OR circuit with purpose Acting as rendezvous (pending)
Jun 23 14:57:34.000 [warn] Tried to establish rendezvous on non-OR circuit with purpose Acting as rendezvous (pending)
It is running something recent of course: 0.3.5.0-alpha-dev (git-946428d27382fdd0+c46a0a06f).
I was able to catch one debug log that contains at least one line of the warning, but I am not sure it will be any different from the ones I sent you last time. I have checked:
Jun 23 13:24:09.000 [debug] connection_edge_process_relay_cell(): Now seen 825381 relay cells here (command 33, stream 0).
Jun 23 13:24:09.000 [info] rend_mid_establish_rendezvous(): Received an ESTABLISH_RENDEZVOUS request on circuit 2529036982
Jun 23 13:24:09.000 [warn] Tried to establish rendezvous on non-OR circuit with purpose Acting as rendezvous (pending)
Jun 23 13:24:09.000 [debug] circuit_get_by_circid_channel_impl(): circuit_get_by_circid_channel_impl() returning circuit 0x5618a4022540 for circ_id 2529036982, channel ID 7143 (0x5618a3108570)
Jun 23 13:24:09.000 [info] circuit_mark_for_close_(): Circuit 0 (id: 0) marked for close at ../src/or/rendmid.c:286 (orig reason: 1, new reason: 1)
Jun 23 13:24:09.000 [debug] connection_or_process_cells_from_inbuf(): 103: starting, inbuf_datalen 0 (0 pending in tls object).
Jun 23 13:24:09.000 [debug] circuit_get_by_circid_channel_impl(): circuit_get_by_circid_channel_impl() returning circuit 0x5618a4022540 for circ_id 2529036982, channel ID 7143 (0x5618a3108570)
Jun 23 13:24:09.000 [debug] circuitmux_append_destroy_cell(): Cmux at 0x5618a307d130 queued a destroy for circ 2529036982, cmux counter is now 1, global counter is now 1
Jun 23 13:24:09.000 [debug] circuitmux_append_destroy_cell(): Primed a buffer.
Email me directly for the complete file if you need it.
I don't see what could be so special about this relay or why it is logging this so much. It's not even an ultra fast one with huge consensus weight.
I'm not certain, but I suspect the recently added DoS mitigation support (#24902 (moved)) addresses the problem on the client side:
DoS mitigation since startup: . . . 1733 single hop clients refused.
If the above is true, the exit message can probably be lowered from [warn] to [info]. If it bothers you enough, tweaking the source for this and rebuilding would solve it sooner for your relays.
I don't understand how this could be related. I am obviously running a Tor version that includes the DoS protection.
Jun 24 09:50:55.000 [notice] DoS mitigation since startup: 0 circuits killed with too many cells. 0 circuits rejected, 0 marked addresses. 0 connections closed. 0 single hop clients refused.
Do we treat single hop clients differently if the relay is also an Exit? The thing is, I don't see these warnings on my middle relays and bridges, while I run the same Tor version everywhere (alpha-dev).
Why would Exit relays treat rend circuits any differently? And why would clients pick Exit relays for the RP position?