Tor issues (https://gitlab.torproject.org/tpo/core/tor/-/issues), updated 2022-09-01

https://gitlab.torproject.org/tpo/core/tor/-/issues/30526
Tests should not load system geoip files
Nick Mathewson, updated 2022-09-01

See legacy/trac#29702 for background. In that ticket we made sure that we stopped looking at torrc files, but we still sometimes look at geoip files. They are less risky, though, so this is a lower priority.

https://gitlab.torproject.org/tpo/core/tor/-/issues/30482
unexpected warning: Invalid signature for service descriptor signing key: expired
toralf, updated 2022-09-01

I do wonder about
```
# tail -n 2 /tmp/notice2.log
May 12 10:42:13.000 [notice] DoS mitigation since startup: 10 circuits killed with too many cells. 13604 circuits rejected, 12 marked addresses. 106 connections closed. 1917 single hop clients refused.
May 12 14:30:03.000 [warn] Invalid signature for service descriptor signing key: expired
```
because it looks OK:
```
# tor --key-expiration sign -f /etc/tor/torrc2
May 12 16:27:26.845 [notice] Tor 0.4.0.5 running on Linux with Libevent 2.1.8-stable, OpenSSL LibreSSL 2.8.3, Zlib 1.2.11, Liblzma 5.2.4, and Libzstd N/A.
May 12 16:27:26.845 [notice] Tor can't help you if you use it wrong! Learn how to be safe at https://www.torproject.org/download/download#warning
May 12 16:27:26.845 [notice] Read configuration file "/etc/tor/torrc2".
May 12 16:27:26.849 [notice] Included configuration file or directory at recursion level 1: "/etc/tor/torrc.d/00_common".
May 12 16:27:26.849 [notice] Based on detected system memory, MaxMemInQueues is set to 8192 MB. You can override this by setting MaxMemInQueues by hand.
May 12 16:27:26.858 [notice] We were built to run on a 64-bit CPU, with OpenSSL 1.0.1 or later, but with a version of OpenSSL that apparently lacks accelerated support for the NIST P-224 and P-256 groups. Building openssl with such support (using the enable-ec_nistp_64_gcc_128 option when configuring it) would make ECDH much faster.
May 12 16:27:26.973 [notice] Your Tor server's identity key fingerprint is 'zwiebeltoralf2 509EAB4C5D10C9A9A24B4EA0CE402C047A2D64E6'
May 12 16:27:26.973 [notice] The signing certificate stored in /var/lib/tor/data2/keys/ed25519_signing_cert is valid until 2019-08-10 04:00:00.
signing-cert-expiry: 2019-08-10 04:00:00
```

https://gitlab.torproject.org/tpo/core/tor/-/issues/29930
Warning: can't unlink unverified-consensus on Windows
teor, updated 2021-09-16

In legacy/trac#28614, Vort says:
BTW, clean start of 0.4.0.3-alpha brings another problem:
`Mar 28 08:26:00.000 [warn] Failed to unlink C:\Users\Vort\AppData\Roaming\tor\unverified-microdesc-consensus: Permission denied`
Does this issue happen with an empty data directory?
Or does it only happen when unverified-microdesc-consensus exists?
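For reference, the usual cause of this error on Windows (an assumption here, not a confirmed diagnosis for this ticket) is that some other handle to the file is still open without FILE_SHARE_DELETE, so the delete fails with a sharing violation that the POSIX layer reports as "Permission denied". A minimal sketch that reproduces the failure mode:
```
/* Sketch: on Windows, deleting a file fails while another handle to it
 * is open without FILE_SHARE_DELETE. Illustration only, not tor code. */
#include <windows.h>
#include <stdio.h>

int
main(void)
{
  HANDLE h = CreateFileA("unverified-microdesc-consensus",
                         GENERIC_READ,
                         FILE_SHARE_READ, /* note: no FILE_SHARE_DELETE */
                         NULL, OPEN_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
  if (h == INVALID_HANDLE_VALUE)
    return 1;
  if (!DeleteFileA("unverified-microdesc-consensus"))
    printf("DeleteFile failed: %lu\n", GetLastError()); /* sharing violation */
  CloseHandle(h);
  return 0;
}
```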
I'm assigning this ticket to ahf, because he has a working Windows box with Tor.

https://gitlab.torproject.org/tpo/core/tor/-/issues/29927
Tor protocol errors causing silent dropped cells
Mike Perry, updated 2022-09-01

While testing vanguards, I've got some mystery cases client-side where circuits are getting closed with END_CIRC_REASON_TORPROTOCOL, but Tor is not emitting any log lines that correspond to this, even at debug level.
This is happening for circuits with purpose CIRCUIT_PURPOSE_C_REND_READY_INTRO_ACKED. Additionally, all circuits seem able to fail during construction with END_CIRC_REASON_TORPROTOCOL, with no Tor log messages even at debug log level. Possibly more ntor handshake failures, similar to legacy/trac#29700?
Finally, CIRCUIT_PURPOSE_C_INTRODUCE_ACKED circuits are getting closed with an END_CIRC_REASON_FINISHED after receiving an invalid cell, seemingly after they are done being used.
See also https://github.com/mikeperry-tor/vanguards/issues/37
The vanguards addon now outputs this bug number at INFO log level when this happens.

https://gitlab.torproject.org/tpo/core/tor/-/issues/29830
Use UndefinedBehaviorSanitizer when the UBSan configure checks pass, rather than the ASan configure checks
teor, updated 2022-09-01

configure: stop using UBSan when the compiler only supports ASan
When activating the undefined behaviour sanitiser (UBSan), configure.ac checked the address sanitiser (ASan) variables, instead of the UBSan variables.
Fixes bug (this one); bugfix on 0.2.9.1-alpha.

https://gitlab.torproject.org/tpo/core/tor/-/issues/29777
Rate-limit "Problem bootstrapping" warnings to one every 5 seconds
teor, updated 2022-02-07

Let's put a rate-limit on warnings like this:
```
Jan 29 11:36:27.000 [warn] Problem bootstrapping. Stuck at 0% (starting): Starting. (Network is unreachable; NOROUTE; count 11; recommendation warn; host 322C6E3A973BC10FC36DE3037AD27BC89F14723B at 212.83.154.33:8443)
```
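A sketch of one way to do this with tor's existing ratelim helpers from src/lib/log (the wrapper function and its call site below are hypothetical, not existing tor code):
```
/* Sketch: emit the "Problem bootstrapping" warning at most once every
 * 5 seconds, using tor's ratelim_t / log_fn_ratelim helpers. */
static ratelim_t bootstrap_warn_ratelim = RATELIM_INIT(5);

static void
warn_about_bootstrap_problem(const char *status, const char *reason)
{
  log_fn_ratelim(&bootstrap_warn_ratelim, LOG_WARN, LD_GENERAL,
                 "Problem bootstrapping. Stuck at %s: %s.",
                 status, reason);
}
```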
https://gitlab.torproject.org/tpo/core/tor/-/issues/29744
Streams sometimes stall for up to 1 hour without making any progress
Karsten Loesing, updated 2022-09-01

We're measuring Tor performance using our OnionPerf tool by regularly downloading 5 MiB files over Tor. Some of these measurements run longer than 1 hour, after which a timeout in OnionPerf aborts them, or run for up to 30 minutes until they complete. (For comparison, 99% of successful runs complete within roughly two minutes.)
I noticed one particular source of slowness which I think explains the application timeouts after 1 hour and some of the 1% slowest successful runs: streams stall for seconds or minutes (and would stall for hours if we let them) without making any progress, and then suddenly make progress until they complete or stall again.
I'm attaching four graphs showing this problem. All these graphs show download progress over time with time on x and progress on y. Each gray bar is one measurement. The black line starts at the bottom of its gray bar and goes up to the top of that bar as more data is received. The number on the right is the stream ID.
The first two graphs show application timeouts, the last two show the slowest 1% of successful runs. First and third show downloads from a public server, second and fourth from an onion server.
Note that not all runs have this problem of stalling as described above. Some of the more obvious cases are:
- Page 3, stream ID 436971: that stream basically does nothing for over half an hour and then completes within seconds.
- Page 3, stream ID 436986: same as before, just with a shorter stalling period.
Other cases have different issues. For example, stream ID 34117 on page 3 is rather slow for most of the time and then suddenly gets faster at the end. However, it does not stall.
I do have tor logs and tor controller event logs for these cases. Here's a log containing many relevant STREAM and STREAM_BW events: https://people.torproject.org/~karsten/volatile/streams-2019-02-18.log.xz (61.1K)
These measurements have been made using tor versions 0.2.9.11-dev and 0.3.0.7-dev.
I can provide more data. But rather than uploading everything, please let me know what data would be most useful, and I'll provide just that.

https://gitlab.torproject.org/tpo/core/tor/-/issues/29698
Edge case that causes improper circuit prioritization for one scheduling run
pastly, updated 2021-02-11

= The Problem, Shortly
A circuit that goes from very busy for a long time, to 100% idle for a long time, and then needs to send traffic again will be incorrectly deprioritized the first time it gets scheduled.
= The Problem, Illustrated
Consider a circuit that is very very busy for a significant length of time (minutes). There's constant traffic flowing in one (or both, but let's just say one) direction on this circuit, leading to it earning for itself a high `cell_count` EWMA value (thus a low priority for scheduling). _Assume it is the only circuit on its channel_.
Now assume it suddenly stops sending traffic but stays open. It stays this way for a significant length of time (many tens of seconds), such that _its `cell_count` EWMA value should be essentially zero, but it hasn't actually been updated yet_, since this value isn't updated until a cell has been transmitted (see `circuitmux_notify_xmit_cells`).
At this point in time the relay is still servicing some number of low-traffic circuits _on other channels_. Maybe it has always been handling these circuits. Whatever. It doesn't matter. What matters is at this point in time there's lots of low-traffic circuits needing scheduling. Because they are low-traffic, these circuits have `cell_count` EWMA values that are relatively low (thus a high priority for scheduling).
Now what happens when that original high-traffic circuit stops being totally idle? What happens when it wants to send another 1000, 100, or even just 1 cell?
It gets put into the KIST `channels_pending` smartlist like any other circuit. In fact, there are a bunch of low-bandwidth circuits in there with it. Observe what happens when KIST starts scheduling its collection of pending channels:
KIST loops over and over until its list of pending channels is empty. Each time it gets the channel with the current best-priority circuit, schedules one cell, _updates the appropriate `cell_count`_, and puts the channel back in the pending list if necessary.
**All those low-traffic circuits will be serviced first because they have low `cell_count` values (high priority) as compared to the outdated `cell_count` value for the original high-traffic circuit.**
When the circuit finally gets to send its first cell after its long period of inactivity, its `cell_count` EWMA value is corrected to be near zero. That's fine. _But it should have been updated before scheduling decisions were made so that it was the first one to be scheduled_.
= A solution
Add a `touch` function in the circuitmux channel interface that tells the circuitmux and whatever its policy is to update its circuit priorities if desired.
Before entering the main scheduling loop, call this `touch` function on all the pending channels. In the case of the EWMA policy, the `touch` function would ultimately drill down to something like
```
static void
ewma_touch(circuitmux_policy_data_t *pol_data)
{
  ewma_policy_data_t *pol = NULL;
  unsigned int tick;
  double fractional_tick;
  tor_assert(pol_data);
  pol = TO_EWMA_POL_DATA(pol_data);
  /* Rescale the EWMAs if needed */
  tick = cell_ewma_get_current_tick_and_fraction(&fractional_tick);
  if (tick != pol->active_circuit_pqueue_last_recalibrated) {
    scale_active_circuits(pol, tick);
  }
}
```
(Which you might observe is essentially the first part of `ewma_notify_xmit_cells(...)`.)
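And a sketch of the call site before the main scheduling loop (`circuitmux_touch` is the function this ticket proposes, and the wrapper below is hypothetical; the smartlist macros and `chan->cmux` are existing tor names):
```
/* Sketch: before KIST enters its main scheduling loop, refresh the
 * EWMA-based priorities of every pending channel so stale cell_count
 * values can't deprioritize a long-idle circuit. */
static void
touch_pending_channels(smartlist_t *channels_pending)
{
  SMARTLIST_FOREACH_BEGIN(channels_pending, channel_t *, chan) {
    circuitmux_touch(chan->cmux); /* drills down to ewma_touch() */
  } SMARTLIST_FOREACH_END(chan);
}
```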
https://gitlab.torproject.org/tpo/core/tor/-/issues/29528
UndefinedBehaviorSanitizer errors should fail the unit tests
teor, updated 2022-09-01

In legacy/trac#29527, clang's UndefinedBehaviorSanitizer logs a failure, but the unit test succeeds.
We should fail the unit tests on sanitizer errors. (There might be an environment variable to fail on errors?)

https://gitlab.torproject.org/tpo/core/tor/-/issues/29520
Makefile: let target "micro-revision.i" be a prerequisite of the default target
toralf, updated 2022-09-01

Realized today that a "make fuzzers" after a "make distclean" won't work, due to
```
src/lib/version/git_revision.c:15:10: fatal error: micro-revision.i: No such file or directory
```
(tested at tor-0.4.0.1-alpha-98-g6c173d00f)

https://gitlab.torproject.org/tpo/core/tor/-/issues/28509
Limit relay bandwidth self-tests based on RelayBandwidthRate, not BandwidthRate
teor, updated 2022-09-01

Discovered while working on legacy/trac#22453.
This is a bug on 0.2.0.1-alpha, where RelayBandwidthRate was introduced.
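A sketch of the intended behaviour (the helper is hypothetical, not tor's actual self-test code; the option fields are from or_options_t):
```
/* Sketch: cap the relay bandwidth self-test by whichever configured
 * rate is lower, so RelayBandwidthRate is honoured too. */
static uint64_t
self_test_bandwidth_limit(const or_options_t *options)
{
  uint64_t limit = options->BandwidthRate;
  if (options->RelayBandwidthRate > 0 &&
      options->RelayBandwidthRate < limit)
    limit = options->RelayBandwidthRate;
  return limit;
}
```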
https://gitlab.torproject.org/tpo/core/tor/-/issues/28097
Get the actual Windows version from Kernel32.dll
teor, updated 2022-02-07

Windows 8.1 and later pretend to be Windows 8 (legacy/trac#28096).
If we want to display the real Windows version, we can use GetFileVersionInfo() to check the version of Kernel32.dll:
https://docs.microsoft.com/en-au/windows/desktop/SysInfo/getting-the-system-version
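A minimal, untested sketch of that approach using the standard Win32 version-info API (link with version.lib; this is illustration, not tor code):
```
/* Sketch: read the product version of Kernel32.dll, which tracks the
 * real OS version even when GetVersionEx() lies about it. */
#include <windows.h>
#include <stdlib.h>

static int
get_kernel32_version(DWORD *major, DWORD *minor, DWORD *build)
{
  DWORD handle = 0, size = GetFileVersionInfoSizeA("kernel32.dll", &handle);
  VS_FIXEDFILEINFO *info = NULL;
  UINT info_len = 0;
  void *buf;
  int rv = -1;
  if (size == 0 || !(buf = malloc(size)))
    return -1;
  if (GetFileVersionInfoA("kernel32.dll", 0, size, buf) &&
      VerQueryValueA(buf, "\\", (LPVOID *)&info, &info_len) &&
      info_len >= sizeof(*info)) {
    *major = HIWORD(info->dwProductVersionMS);
    *minor = LOWORD(info->dwProductVersionMS);
    *build = HIWORD(info->dwProductVersionLS);
    rv = 0;
  }
  free(buf);
  return rv;
}
```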
https://gitlab.torproject.org/tpo/core/tor/-/issues/27331
Non-fatal assertion ent->cdm_diff_status != CDM_DIFF_PRESENT failed in cdm_diff_ht_check_and_note_pending at src/or/consdiffmgr.c:272
Trac, updated 2022-09-01

tor in relay mode
os: FreeBSD 11.2
```
Bug: Non-fatal assertion ent->cdm_diff_status != CDM_DIFF_PRESENT failed in cdm_diff_ht_check_and_note_pending at src/or/consdiffmgr.c:272. Stack trace: (on Tor 0.3.4.7-rc 8465a8d84647c349)
Bug: 0x11ae5f8 <log_backtrace+0x48> at /usr/local/bin/tor (on Tor 0.3.4.7-rc 8465a8d84647c349)
Bug: 0x11c914a <tor_bug_occurred_+0x10a> at /usr/local/bin/tor (on Tor 0.3.4.7-rc 8465a8d84647c349)
Bug: 0x114d019 <consdiffmgr_rescan+0x8a9> at /usr/local/bin/tor (on Tor 0.3.4.7-rc 8465a8d84647c349)
Bug: 0x801b4eda0 <event_base_assert_ok_nolock_+0x9d0> at /usr/local/lib/libevent-2.1.so.6 (on Tor 0.3.4.7-rc 8465a8d84647c349)
Bug: 0x801b4af4e <event_base_loop+0x50e> at /usr/local/lib/libevent-2.1.so.6 (on Tor 0.3.4.7-rc 8465a8d84647c349)
Bug: 0x1079ed4 <do_main_loop+0x5f4> at /usr/local/bin/tor (on Tor 0.3.4.7-rc 8465a8d84647c349)
Bug: 0x107ba7d <tor_run_main+0xbd> at /usr/local/bin/tor (on Tor 0.3.4.7-rc 8465a8d84647c349)
Bug: 0x1076d5c <tor_main+0x4c> at /usr/local/bin/tor (on Tor 0.3.4.7-rc 8465a8d84647c349)
Bug: 0x1076bf9 <main+0x19> at /usr/local/bin/tor (on Tor 0.3.4.7-rc 8465a8d84647c349)
Bug: 0x1076af0 <_start+0x180> at /usr/local/bin/tor (on Tor 0.3.4.7-rc 8465a8d84647c349)
```
**Trac**:
**Username**: a_p

https://gitlab.torproject.org/tpo/core/tor/-/issues/27049
"No circuits are opened" messages with onion services
Mike Perry, updated 2022-10-17

If a Tor instance is only doing onion service activity, and circuits time out, then the Tor client thinks that circuits aren't opened and gives them a full minute to complete. It also complains about this in the logs at notice level, like this:
```
Aug 06 02:55:30.000 [notice] No circuits are opened. Relaxed timeout for circuit 21570 (a Hidden service: Pre-built vanguard circuit 4-hop circuit in state doing handshakes with channel state open) to 60000ms. However, it appears the circuit has timed out anyway.
```
The fix is simple: circuit_any_opened_circuits() in circuitlist.c only counts circuits as opened if they use exactly DEFAULT_ROUTE_LEN hops. We just need to count circuits with >= DEFAULT_ROUTE_LEN hops instead, as in the sketch below.
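A sketch of the shape of that fix (macro and field names here are recalled from circuitlist.c and may not be exact; treat this as illustration, not a patch):
```
/* Sketch of the fix: count any opened origin circuit whose desired path
 * length is at least DEFAULT_ROUTE_LEN, instead of exactly equal, so
 * longer vanguard/onion-service circuits count as "opened" too. */
int
circuit_any_opened_circuits(void)
{
  SMARTLIST_FOREACH_BEGIN(circuit_get_global_list(), const circuit_t *, circ) {
    if (CIRCUIT_IS_ORIGIN(circ) && circ->state == CIRCUIT_STATE_OPEN &&
        CONST_TO_ORIGIN_CIRCUIT(circ)->build_state->desired_path_len
          >= DEFAULT_ROUTE_LEN)  /* was: == DEFAULT_ROUTE_LEN */
      return 1;
  } SMARTLIST_FOREACH_END(circ);
  return 0;
}
```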
https://gitlab.torproject.org/tpo/core/tor/-/issues/26993
Tor silently ignores hidden directory when it isn't writable
yurivict271, updated 2022-09-01

I added HiddenServiceDir and HiddenServicePort to torrc.
When the empty HS directory has permissions 0600, tor starts without creating the HS and without complaining. It should either fail to start, or print a warning that the HS directory isn't writable.
When the permissions are 0300, though, tor complains that 'Directory x cannot be read'.
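A startup check along these lines might look like the following (plain-POSIX sketch, not tor's actual config validation):
```
/* Sketch: refuse to start (or at least warn) when the configured
 * HiddenServiceDir isn't writable and traversable by the tor user. */
#include <unistd.h>
#include <stdio.h>

static int
check_hs_dir_writable(const char *path)
{
  if (access(path, W_OK | X_OK) != 0) {
    fprintf(stderr, "HiddenServiceDir %s is not writable\n", path);
    return -1;
  }
  return 0;
}
```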
tor-0.3.3.7 on FreeBSD-12

https://gitlab.torproject.org/tpo/core/tor/-/issues/26434
Bug: Emptied a dirserv buffer, but it's still spooling!
Roger Dingledine, updated 2022-06-22

On moria1, running valgrind so things are slower than usual, I got these two Bug: lines (client IP address replaced with "[ip]"):
```
Jun 20 18:11:24.982 [info] connection_handle_write_impl(): tls error ([ip]:65063). breaking.
Jun 20 18:11:24.982 [info] connection_or_notify_error(): called for [ip]:65063
Jun 20 18:11:24.982 [info] connection_close_immediate(): fd 571, type OR, state open, 1024 bytes on outbuf.
Jun 20 18:11:25.000 [warn] connection_dir_finished_flushing(): Bug: Emptied a dirserv buffer, but it's still spooling! (on Tor 0.3.5.0-alpha-dev e4e949e901400c85)
Jun 20 18:11:25.000 [warn] connection_mark_for_close_internal_(): Bug: Duplicate call to connection_mark_for_close at src/or/directory.c:5229 (first at src/or/main.c:1218) (on Tor 0.3.5.0-alpha-dev e4e949e901400c85)
Jun 20 18:11:25.000 [info] connection_edge_reached_eof(): conn (fd -1) reached eof. Closing.
Jun 20 18:11:25.001 [info] connection_free_minimal(): Freeing linked Exit connection [open] with 0 bytes on inbuf, 0 on outbuf.
```
I'm putting this in the 0.3.5 milestone since that's the version I was running, but I bet it's a bug in earlier releases too.

https://gitlab.torproject.org/tpo/core/tor/-/issues/26294
attacker can force intro point rotation by ddos
Roger Dingledine, updated 2023-05-31

Currently, an onion service's intro points each expire (intentionally rotate) after receiving rand(16384, 16384*2) intro requests.
Imagine an attacker who generates many introduction attempts. Since each intro attempt can take its own path to the target intro point, the bottleneck will be the introduction circuit itself. Let's say that intro circuit can sustain 500KBytes/s of traffic. That's 1000 intro requests per second coming in -- so after 24ish seconds (rand(16,32)), that intro point will expire: the onion service will pick a new one and start publishing new onion descriptors.
If the intro circuit can handle 1MByte/s, then rotation will happen after 12ish seconds.
With three intro circuits, each receiving intro requests at a different rate, we could end up changing our descriptor even more often than this. There are at least four impacts from this attack:
(1) Onion services spend energy and bandwidth generating new intro circuits, and publishing new descriptors to list them.
(2) Clients might get the last onion descriptor, not the next one, and so they'll attempt to introduce to a circuit that's no longer listening.
(3) The intro points themselves get a surprise burst of 16k-32k incoming circuits, probably plus a lot more after that because the attacker wouldn't know when to stop. Not only that, but for v2 onion services these circuits use the slower TAP as the circuit handshake at the intro point.
(4) The HSDirs get a new descriptor every few seconds, which aside from the bandwidth and circuit load, tells them that the onion service is under attack like this.
Intro points that can handle several megabytes of traffic per second will keep up and push the intro requests back to the onion service, thus hastening the rotation. Intro points that *can't* handle that traffic will become congested and no fun to use for others during the period of the attack.
The reason we rotate after 16k-32k requests is because the intro point keeps a replay cache, to avoid ever responding to a given intro request more than once.
One direction would be to work on bumping up the size of the replay cache, or designing a different data structure like a bloom filter so we can scale the replay cache better. I think we could succeed there. The benefits would be to (1) and (2) and (4) above, i.e. onion services won't spend so much time making new descriptors, and clients will be more likely to use an onion descriptor that's still accurate. The drawback would be to (3), where the hotspots last longer, that is, the poor intro point feels the damage for a longer period of time.
Taking a step back, I think there are two directions we can go here. Option one, we can try to scale to handle the load. We would focus on load balancing better, like reacting to an attack by choosing super fast intro points, and either choosing fast middle nodes too, or some fancier approach like having multiple circuits to your intro point. Option two, we recognize that this volume of introduction requests represents a problem in itself, and try to introduce defenses at the intro point level. Here we focus on proof of work schemes or other ways to slow down the flow, or we improve the protocol to pass along hints about how to sort the intro requests by priority.

https://gitlab.torproject.org/tpo/core/tor/-/issues/26076
test_keygen.sh and test_key_expiration.sh fail on Windows
Trac, updated 2022-09-01

```
FAIL: src/test/test_keygen.sh
=============================
May 11 01:50:30.175 [warn] Path for GeoIPFile (<default>) is relative and will resolve to C:\projects\appveyor\i686-w64-mingw32\<default>. Is this what you wanted?
May 11 01:50:30.175 [warn] Path for GeoIPv6File (<default>) is relative and will resolve to C:\projects\appveyor\i686-w64-mingw32\<default>. Is this what you wanted?
Tor didn't declare that there would be no encryption
FAIL src/test/test_keygen.sh (exit status: 5)
SKIP: src/test/fuzz_static_testcases.sh
```
Edit: when we fix this bug, we should revert legacy/trac#26830 and legacy/trac#26853, which skip these tests on Windows.
**Trac**:
**Username**: saper

https://gitlab.torproject.org/tpo/core/tor/-/issues/25729
UTF8 encoded TORRC does NOT parse non-Latin paths
Trac, updated 2022-09-01

Unpack [this Tor archive](https://linx.li/selif/tor-utf8-fails.7z) to C:\
It will create the following hierarchy:
_C:\Проверка\Tor_ (for executables, libraries and torrc)
_C:\Проверка\Tor\Data_ (for data and geoip)
The configuration file, torrc, is UTF-8 encoded.
It has this line: _DataDirectory C:\Проверка\Tor\Data_
If I run tor.exe -f torrc, the output is as follows:
[warn] Error creating directory C:\Проверка\Tor\Data: No such file or directory
[warn] Failed to parse/validate config: Couldn't create private data directory "C:\Проверка\Tor\Data"
Now let’s replace UTF8 encoded torrc with [ANSI encoded torrc](https://linx.li/selif/torrc-ansi.7z) and Tor works as expected.
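The usual fix for this class of bug (an assumption about the cause, not a patch) is to treat config paths as UTF-8 and convert them to UTF-16 before touching the filesystem, instead of passing the raw bytes to the ANSI API:
```
/* Sketch: convert a UTF-8 path to UTF-16 and use the wide-char API.
 * Assumption about the underlying cause; not tor's actual code. */
#include <windows.h>

static BOOL
create_dir_utf8(const char *utf8_path)
{
  wchar_t wpath[MAX_PATH];
  if (!MultiByteToWideChar(CP_UTF8, 0, utf8_path, -1, wpath, MAX_PATH))
    return FALSE;
  return CreateDirectoryW(wpath, NULL); /* CreateDirectoryA would misread UTF-8 */
}
```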
**Trac**:
**Username**: Fleming

https://gitlab.torproject.org/tpo/core/tor/-/issues/25461
main event-loop spins consuming 100% of a CPU core running scheduler_set_channel_state
Dhalgren, updated 2022-06-24

Lately have observed my exit hitting 100% CPU on the main event-loop thread, sometimes continuously, sometimes cyclically. Captured a full-debug log of a recent cyclical event where CPU started at 30% and rose to 100%, for about one cycle. Chopped the 1 GB log into eight slices and took a simple function call-count histogram. What's notable is not an increase of calls during saturation, but a reduction of several that seem to relate to connection close events (conn_close_if_marked, flush_chunk). The left column is for the first slice, where CPU was 30%; the right column is for the fourth slice, where CPU was 100%. Functions with fewer than 1000 calls are not included below, but complete histograms are attached. Wrote about this on tor-relays:
https://lists.torproject.org/pipermail/tor-relays/2018-March/014730.html
This might be an attack of some kind, or perhaps a misbehavior related to the KIST scheduler.
```
append_cell_to_circuit_queue 6787 append_cell_to_circuit_queue 7280
channel_flush_from_first_active_circuit 6781 channel_flush_from_first_active_circuit 7190
channel_process_cell 11904 channel_process_cell 11813
channel_write_packed_cell 120301 channel_write_packed_cell 126330
channel_write_to_kernel 8588 channel_write_to_kernel 10048
circuit_consider_stop_edge_reading 146965 circuit_consider_stop_edge_reading 152665
circuit_get_by_circid_channel_impl 14128 circuit_get_by_circid_channel_impl 13468
circuit_receive_relay_cell 11483 circuit_receive_relay_cell 11341
circuit_resume_edge_reading 1203 circuit_resume_edge_reading 1231
conn_close_if_marked 39033 conn_close_if_marked 779
conn_read_callback 14743 conn_read_callback 15645
conn_write_callback 4531 conn_write_callback 4447
connection_add_impl 1023 connection_add_impl 739
connection_bucket_refill_helper 14787 connection_bucket_refill_helper 15842
connection_buf_read_from_socket 16196 connection_buf_read_from_socket 17152
connection_connect 1016 connection_connect 732
connection_connect_sockaddr 1016 connection_connect_sockaddr 732
connection_edge_package_raw_inbuf 237303 connection_edge_package_raw_inbuf 255347
connection_edge_process_relay_cell 22219 connection_edge_process_relay_cell 22332
connection_exit_begin_conn 3165 connection_exit_begin_conn 2315
connection_exit_connect 1050 connection_exit_connect 772
connection_handle_write_impl 9240 connection_handle_write_impl 10539
connection_or_process_cells_from_inbuf 20042 connection_or_process_cells_from_inbuf 20448
flush_chunk 38192 flush_chunk 12
flush_chunk_tls 22283 flush_chunk_tls 24061
free_outbuf_info_by_ent 8588 free_outbuf_info_by_ent 10047
outbuf_table_add 8588 outbuf_table_add 10014
read_to_chunk 6856 read_to_chunk 7254
relay_lookup_conn 8459 relay_lookup_conn 8525
relay_send_command_from_edge_ 119963 relay_send_command_from_edge_ 128738
rep_hist_note_exit_bytes 13913 rep_hist_note_exit_bytes 14534
scheduler_set_channel_state 126896 scheduler_set_channel_state 133353
update_socket_info 6719 update_socket_info 7160
update_socket_written 120297 update_socket_written 126327
```