Trac issues
https://gitlab.torproject.org/legacy/trac/-/issues

https://gitlab.torproject.org/legacy/trac/-/issues/33029
dir-auth: Dir auths should resume sending 503's but never to relays or other dir auths
Updated: 2020-11-13T13:39:33Z
Author: David Goulet <dgoulet@torproject.org>

This is a child ticket from #33018.
The idea here is that even if we hit the global write limit (bw), we should not return a 503 code but rather answer another directory authority.
Dirauths _must_ be able to talk to each other at all times, regardless of the bandwidth state.
Setting the 043 milestone because this should be considered a bug, and it could even be considered for backport since dirauths are suffering from this at the moment.
Milestone: Tor: 0.4.2.x-final
Assignee: David Goulet <dgoulet@torproject.org>

https://gitlab.torproject.org/legacy/trac/-/issues/34400
control: HSFETCH command fails to validate v2 addresses
Updated: 2020-06-13T15:53:46Z
Author: David Goulet <dgoulet@torproject.org>

In `handle_control_hsfetch()`:
```
} else if (strcmpstart(arg1, v2_str) == 0 &&
           rend_valid_descriptor_id(arg1 + v2_str_len) &&
           base32_decode(digest, sizeof(digest), arg1 + v2_str_len,
                         REND_DESC_ID_V2_LEN_BASE32) ==
             REND_DESC_ID_V2_LEN_BASE32) {
```
The above snippet is how we validate the v2 address for the `HSFETCH` command. The `base32_decode()` function returns the number of bytes _decoded_, so on success it should return `sizeof(digest)`, not the total length of the base32 address (20 vs 32).
Issue was introduced in commit `a517daa56f5848d25ba79617a1a7b82ed2b0a7c0` meaning since 0.4.1.1-alpha (ticket #28913).
I noticed this because I recently updated the bad-HSDir scanner's Tor to use our latest version, and it broke the scanner.
Milestone: Tor: 0.4.5.x-final

https://gitlab.torproject.org/legacy/trac/-/issues/32672
Reject 0.2.9 and 0.4.0 in dirserv_rejects_tor_version()
Updated: 2020-06-13T15:53:43Z
Author: teor

We should modify dirserv_rejects_tor_version() to keep it up to date with our supported versions:
* After 1 January 2020, we should reject all versions less than 0.3.5.
* After 2 February 2020, we should reject the 0.4.0 series, but allow the 0.3.5 series
https://trac.torproject.org/projects/tor/wiki/org/teams/NetworkTeam/CoreTorReleases#Current
After these dates, we should get arma to test this change, then merge it.
After we merge, we should create a ticket in 0.4.4 to reject 0.4.1.
Milestone: Tor: 0.4.2.x-final
Assignee: Neel Chauhan <neel@neelc.org>

https://gitlab.torproject.org/legacy/trac/-/issues/34251
Fix edge case handling in Rust protover is supported
Updated: 2020-06-13T15:53:39Z
Author: teor

Tor's Rust FFI for protocol_list_supports_protocol_or_later() returns true for the empty protocol list.
In C, the function returns false, but this behaviour is undocumented.
This bug doesn't affect protocol_list_supports_protocol() in Rust, because the Rust error checks are done in a different order.
I'll add a quick fix to #33222, but someone else will need to do the backport. We might want to do the Rust error checks in the same order, too.
Milestone: Tor: 0.4.5.x-final

https://gitlab.torproject.org/legacy/trac/-/issues/34248
Declare HSIntro=5 in Tor's rust protover implementation
Updated: 2020-06-13T15:53:38Z
Author: teor

My protover tests for #33222 fail in Rust, because Tor's rust protover doesn't declare HSIntro=5.
I'll do a quick fix in #33222, but I want to leave the backport to David.
Milestone: Tor: 0.4.5.x-final

https://gitlab.torproject.org/legacy/trac/-/issues/34204
Downgrade Travis stem version to a commit where tests pass
Updated: 2020-06-13T15:53:32Z
Author: Nick Mathewson

Due to https://github.com/torproject/stem/issues/63, our CI is failing. Let's downgrade to a working version of Stem unless it gets fixed right away.
I have a test PR at https://github.com/torproject/tor/pull/1889; if CI passes, I'll make PRs for the other branches and put this in needs_review.
Milestone: Tor: 0.4.4.x-final
Assignee: Nick Mathewson

https://gitlab.torproject.org/legacy/trac/-/issues/34130
Tor won't start with seccomp sandbox when compiled with --enable-nss
Updated: 2020-06-13T15:53:27Z
Author: Jigsaw52

After compiling tor with the --enable-nss flag, starting tor with "Sandbox 1" in torrc results in the following error:
```
May 06 21:47:46.198 [notice] Tor 0.4.4.0-alpha-dev (git-42dfcd0ae3f7a872) running on Linux with Libevent 2.1.8-stable, NSS 3.35, Zlib 1.2.11, Liblzma 5.2.2, and Libzstd 1.3.3.
May 06 21:47:46.198 [notice] Tor can't help you if you use it wrong! Learn how to be safe at https://www.torproject.org/download/download#warning
May 06 21:47:46.198 [notice] This version is not a stable Tor release. Expect more bugs than usual.
May 06 21:47:46.198 [notice] Read configuration file "/home/daniel/Desktop/torrc_sandbox".
May 06 21:47:46.200 [notice] Opening Socks listener on 127.0.0.1:9050
May 06 21:47:46.200 [notice] Opened Socks listener on 127.0.0.1:9050
May 06 21:47:46.000 [notice] Parsing GEOIP IPv4 file /usr/local/share/tor/geoip.
May 06 21:47:46.000 [notice] Parsing GEOIP IPv6 file /usr/local/share/tor/geoip6.
May 06 21:47:46.000 [warn] TLS error PR_NO_ACCESS_RIGHTS_ERROR while constructing a client TLS context: Access Denied
May 06 21:47:46.000 [err] Error creating TLS context for Tor client.
May 06 21:47:46.000 [err] Error initializing keys; exiting
```
Milestone: Tor: 0.4.3.x-final

https://gitlab.torproject.org/legacy/trac/-/issues/33880
Confusing "Your relay has a very large number of connections to other relays" relay message
Updated: 2020-06-13T15:53:02Z
Author: Roger Dingledine

A new relay operator reports this complaint in their logs, showing up hourly:
```
Your relay has a very large number of connections to other relays. Is your
outbound address the same as your relay address? Found 13 connections to 8
relays. Found 13 current canonical connections, in 0 of which we were a
non-canonical peer. 5 relays had more than 1 connection, 0 had more than 2, and
0 had more than 4 connections.
```
I checked, and their outbound address was the same as their relay address.
Upon investigation, it looks like the redundant connections are to directory authorities.
My theory is that the change from #17592 (which went into 0.3.1.1-alpha, commit d5a151a0) is responsible: while before that canonical relay-to-relay connections would expire after either side first reached its randomized "15 to 22.5 minute" timeout, now they expire after either side reaches its "45 to 75 minute" timeout. And since directory authorities test reachability every 1280 seconds (around 21.3 minutes), that means it is expected that most relays will have duplicate canonical connections with directory authorities.
Possible fixes:
(A) Change the notice-level log to make it clearer that it's not scary, or at least it's not actionable. Maybe that means making it info-level so nobody will see it. Probably not the best option, assuming there *are* cases where we do want relay operators to hear it.
(B) In channel_check_for_duplicates(), change `MIN_RELAY_CONNECTIONS_TO_WARN 5` to a high enough number that even if we have 2 canonical conns per authority, plus a bit more, the log message still doesn't trigger.
(C) In channel_check_for_duplicates(), skip over connections to directory authorities in some way, since we know they will be special.
(D) Make connections to or from directory authorities expire quicker, on the theory that they don't really need the same level of padding protection as other connections.
(E) Your idea here?
I'd be fine with any of B,C,D. Whichever one can be done with an easy, short, and non-invasive patch is my favorite. Maybe that's "B, raise it to 30"? That would make the message trigger when we have connections to more than 30 relays and also we have more than 45 connections open. Or we could pick the more conservative "raise it to 40", on the theory that small numbers are more likely to have edge cases and less indicative of major network problems anyway.
And while we're at it, it might be smart to say in the log message what action we want the relay operator to take, e.g. "Please report:".
Milestone: Tor: unspecified
Assignee: Nick Mathewson

https://gitlab.torproject.org/legacy/trac/-/issues/33837
Tor.framework Unknown type name 'dispatch_queue_t'
Updated: 2020-06-13T15:52:56Z
Author: teor

From https://trac.torproject.org/projects/tor/ticket/33522#comment:2
>
> Replying to [tla](#33522):
> > It has a Travis-CI configuration, which I just updated to work on the latest macOS/Xcode image:
> >
> > https://github.com/iCepa/Tor.framework/blob/master/.travis.yml
> >
> >
> > Currently, we have issues in getting past Tor 0.4.0.6 on iOS. When I try to use a newer core, I get this error message:
> >
> > > Unknown type name 'dispatch_queue_t'
> >
> > in CFStream of Apple's CoreFoundation framework.
> >
> >
> > But "dispatch_queue_t" is actually a valid symbol from Apple's Foundation libraries.
> >
> > So somehow, it gets cancelled out through something which changed in Tor recently.
>
> This looks like a bug in tor's code, or perhaps in the Tor.framework embedding scripts.
>
> We'd be happy to help you diagnose this issue.
>
> Can you tell us the first release that has this issue? Is it 0.4.1, 0.4.2, or 0.4.3 ?
> Have you done a git bisect, to track down the commit that introduced the issue?
>
> Let's open a separate ticket, so we can fix this bug in tor's code.
> Or help you find a workaround when you embed tor.
I can't see dispatch_queue_t in Tor's code.
Perhaps we're defining some preprocessor symbols, or including a header that conflicts with dispatch_queue_t's header.
We don't know which release this bug was introduced in, but Tor 0.4.0.6 does not have this error.
Milestone: Tor: 0.4.3.x-final

https://gitlab.torproject.org/legacy/trac/-/issues/33788
Check the return value of tor_inet_ntop() and tor_inet_ntoa()
Updated: 2020-06-13T15:52:44Z
Author: teor

The following functions don't check the return value of tor_inet_ntop() or tor_inet_ntoa():
IPv6, could be serious:
* evdns_callback(), multiple times
IPv4 only, unlikely to be a serious bug:
* tor_dup_ip()
* fmt_addr32()
* evdns_wildcard_check_callback()
These functions should log a bug warning using BUG() or tor_assert_nonfatal(), and return an error (or, for the formatting functions, a sensible placeholder string).
We will also need to make their callers check for the error.
Milestone: Tor: 0.4.4.x-final

https://gitlab.torproject.org/legacy/trac/-/issues/33768
Make tor_inet_pton() handle bad addresses consistently on Windows
Updated: 2020-06-13T15:52:40Z
Author: teor

tor_inet_pton() handles bad addresses differently on Windows and Linux/macOS.
For example, the address: "2000::1a00::1000:fc098" (two "::") fails this test on Windows, but succeeds on Linux and macOS:
https://github.com/torproject/tor/pull/1831/commits/05f4f93722d46c0e8f1d09b4dea4bf5d1743d93f#diff-048243cd6d9ed36dda0944181d8ec8abR1729
Let's fix this bug and backport it.
In general, we should make all the functions in this file behave identically:
* zero any out parameters at the start of the function
* zero any out parameters on failure
Milestone: Tor: 0.4.4.x-final
Assignee: David Goulet <dgoulet@torproject.org>

https://gitlab.torproject.org/legacy/trac/-/issues/33757
Fix log message when multiple tors try the same data directory
Updated: 2020-06-13T15:52:37Z
Author: teor

When a user launches two tor processes in the same data directory, they get these confusing logs:
```
Mar 29 17:45:08.000 [warn] It looks like another Tor process is running with the same data directory. Waiting 5 seconds to see if it goes away.
Mar 29 17:45:13.000 [err] No, it's still there. Exiting.
Mar 29 17:45:13.000 [err] set_options(): Bug: Acting on config options left us in a broken state. Dying. (on Tor 0.4.2.7 )
Mar 29 17:45:13.000 [err] Reading config failed--see warnings above.
Mar 29 19:55:41.000 [notice] Catching signal TERM, exiting cleanly.
```
Full logs:
https://lists.torproject.org/pipermail/tor-relays/2020-March/018306.html
I don't think the bug log should be there.
Milestone: Tor: 0.4.4.x-final
Assignee: George Kadianakis

https://gitlab.torproject.org/legacy/trac/-/issues/33686
Tor 0.4.2.7 won't get past bootstrap phase with Sandbox 1 enabled
Updated: 2020-06-13T15:52:29Z
Author: cypherpunks

I get this output from the terminal only when Sandbox 1 is enabled; it did not happen on earlier versions:
Mar 21 21:07:21.000 [notice] Bootstrapped 0% (starting): Starting
Mar 21 21:10:10.000 [notice] Starting with guard context "default"
Mar 21 21:10:10.000 [warn] tor_bug_occurred_(): Bug: src/lib/evloop/workqueue.c:353: workerthread_new: This line should not have been reached. (on Tor 0.4.2.7 )
Mar 21 21:10:10.000 [warn] Bug: Tor 0.4.2.7: Line unexpectedly reached at workerthread_new at src/lib/evloop/workqueue.c:353. Stack trace: (on Tor 0.4.2.7 )
Mar 21 21:10:10.000 [warn] Bug: tor(log_backtrace_impl+0x47) [0x600404] (on Tor 0.4.2.7 )
Mar 21 21:10:10.000 [warn] Bug: tor(tor_bug_occurred_+0xa9) [0x5fcea2] (on Tor 0.4.2.7 )
Mar 21 21:10:10.000 [warn] Bug: tor(threadpool_new+0x12b) [0x5ae6b0] (on Tor 0.4.2.7 )
Mar 21 21:10:10.000 [warn] Bug: tor(cpu_init+0x59) [0x4be66e] (on Tor 0.4.2.7 )
Mar 21 21:10:10.000 [warn] Bug: tor(run_tor_main_loop+0x9f) [0x4b3064] (on Tor 0.4.2.7 )
Mar 21 21:10:10.000 [warn] Bug: tor(tor_run_main+0xb85) [0x4b3f16] (on Tor 0.4.2.7 )
Mar 21 21:10:10.000 [warn] Bug: tor(tor_main+0x23) [0x4b1f98] (on Tor 0.4.2.7 )
Mar 21 21:10:10.000 [warn] Bug: tor(main+0x13) [0x4b1c24] (on Tor 0.4.2.7 )
Mar 21 21:10:10.000 [warn] Bug: /lib/arm-linux-gnueabihf/libc.so.6(__libc_start_main+0x97) [0xb694b524] (on Tor 0.4.2.7 )
Mar 21 21:10:10.000 [err] Can't launch worker thread.
Mar 21 21:10:10.000 [warn] tor_bug_occurred_(): Bug: src/lib/evloop/workqueue.c:519: threadpool_start_threads: This line should not have been reached. (on Tor 0.4.2.7 )
Mar 21 21:10:10.000 [warn] Bug: Tor 0.4.2.7: Line unexpectedly reached at threadpool_start_threads at src/lib/evloop/workqueue.c:519. Stack trace: (on Tor 0.4.2.7 )
Mar 21 21:10:10.000 [warn] Bug: tor(log_backtrace_impl+0x47) [0x600404] (on Tor 0.4.2.7 )
Mar 21 21:10:10.000 [warn] Bug: tor(tor_bug_occurred_+0xa9) [0x5fcea2] (on Tor 0.4.2.7 )
Mar 21 21:10:10.000 [warn] Bug: tor(threadpool_new+0x15b) [0x5ae6e0] (on Tor 0.4.2.7 )
Mar 21 21:10:10.000 [warn] Bug: tor(cpu_init+0x59) [0x4be66e] (on Tor 0.4.2.7 )
Mar 21 21:10:10.000 [warn] Bug: tor(run_tor_main_loop+0x9f) [0x4b3064] (on Tor 0.4.2.7 )
Mar 21 21:10:10.000 [warn] Bug: tor(tor_run_main+0xb85) [0x4b3f16] (on Tor 0.4.2.7 )
Mar 21 21:10:10.000 [warn] Bug: tor(tor_main+0x23) [0x4b1f98] (on Tor 0.4.2.7 )
Mar 21 21:10:10.000 [warn] Bug: tor(main+0x13) [0x4b1c24] (on Tor 0.4.2.7 )
Mar 21 21:10:10.000 [warn] Bug: /lib/arm-linux-gnueabihf/libc.so.6(__libc_start_main+0x97) [0xb694b524] (on Tor 0.4.2.7 )
Mar 21 21:10:10.000 [warn] tor_bug_occurred_(): Bug: src/lib/evloop/workqueue.c:563: threadpool_new: This line should not have been reached. (on Tor 0.4.2.7 )
Mar 21 21:10:10.000 [warn] Bug: Tor 0.4.2.7: Line unexpectedly reached at threadpool_new at src/lib/evloop/workqueue.c:563. Stack trace: (on Tor 0.4.2.7 )
Mar 21 21:10:10.000 [warn] Bug: tor(log_backtrace_impl+0x47) [0x600404] (on Tor 0.4.2.7 )
Mar 21 21:10:10.000 [warn] Bug: tor(tor_bug_occurred_+0xa9) [0x5fcea2] (on Tor 0.4.2.7 )
Mar 21 21:10:10.000 [warn] Bug: tor(threadpool_new+0x183) [0x5ae708] (on Tor 0.4.2.7 )
Mar 21 21:10:10.000 [warn] Bug: tor(cpu_init+0x59) [0x4be66e] (on Tor 0.4.2.7 )
Mar 21 21:10:10.000 [warn] Bug: tor(run_tor_main_loop+0x9f) [0x4b3064] (on Tor 0.4.2.7 )
Mar 21 21:10:10.000 [warn] Bug: tor(tor_run_main+0xb85) [0x4b3f16] (on Tor 0.4.2.7 )
Mar 21 21:10:10.000 [warn] Bug: tor(tor_main+0x23) [0x4b1f98] (on Tor 0.4.2.7 )
Mar 21 21:10:10.000 [warn] Bug: tor(main+0x13) [0x4b1c24] (on Tor 0.4.2.7 )
Mar 21 21:10:10.000 [warn] Bug: /lib/arm-linux-gnueabihf/libc.so.6(__libc_start_main+0x97) [0xb694b524] (on Tor 0.4.2.7 )
============================================================ T= 1584825010
Tor 0.4.2.7 died: Caught signal 11
tor(+0x19631a)[0x60031a]
tor(threadpool_register_reply_event+0x19)[0x5ae80e]
tor(threadpool_register_reply_event+0x19)[0x5ae80e]
tor(cpu_init+0x61)[0x4be676]
tor(run_tor_main_loop+0x9f)[0x4b3064]
tor(tor_run_main+0xb85)[0x4b3f16]
tor(tor_main+0x23)[0x4b1f98]
tor(main+0x13)[0x4b1c24]
/lib/arm-linux-gnueabihf/libc.so.6(__libc_start_main+0x97)[0xb694b524]
[1]+  Illegal instruction        tor -f torrc
Milestone: Tor: 0.4.4.x-final

https://gitlab.torproject.org/legacy/trac/-/issues/33673
Use the right DLLs and pkg-config path on Appveyor
Updated: 2020-06-13T15:52:26Z
Author: teor

We want to future-proof our Appveyor CI against dll and pkg-config issues.
Split off from #33643, which is the urgent CI fix.
Milestone: Tor: 0.3.5.x-final
Assignee: teor

https://gitlab.torproject.org/legacy/trac/-/issues/32449
Appveyor: OpenSSL version mismatch in unit tests, 2019 edition
Updated: 2020-06-13T15:52:21Z
Author: teor

```
crypto/openssl_version: [forking]
FAIL ../src/test/test_crypto.c:237: OpenSSL library version 1.1.1d did not begin with header version 1.1.1c.
[openssl_version FAILED]
```
https://ci.appveyor.com/project/torproject/tor/builds/28757237/job/ul9m0fnglaqtw3oy?fullLog=true#L3511
Milestone: Tor: 0.3.5.x-final
Assignee: teor

https://gitlab.torproject.org/legacy/trac/-/issues/33643
Appveyor: OpenSSL version mismatch in unit tests, 2020 edition
Updated: 2020-06-13T15:52:21Z
Author: teor

It's happened again:
```
OpenSSL library version 1.1.1d did not begin with header version 1.1.1e.
```
https://ci.appveyor.com/project/torproject/tor/builds/31549942/job/v0i9svtg78gqln1v#L6380
Just like #32449, #28574, #28399 and #25942.
We think our tests are fragile, because they are not copying the necessary libraries into `${env:build}/src/test` from `C:/mingw32/lib`:
```
ssl
crypto
lzma
event
zstd
```
We already copy zlib and ssp at https://github.com/ahf/tor/blob/master/.appveyor.yml#L98-L99 .
These libraries might have different dll prefixes or suffixes, for example, OpenSSL is:
```
/mingw32/bin/libcrypto-1_1.dll
/mingw32/bin/libssl-1_1.dll
```
https://packages.msys2.org/package/mingw-w64-i686-openssl
We might also want to copy these libraries into the same directory as the tor executable, `${env:build}/src/app`, before we run the tests that launch tor.
Milestone: Tor: 0.3.5.x-final
Assignee: Nick Mathewson

https://gitlab.torproject.org/legacy/trac/-/issues/33623
sendme: Change default emit cell version from 0 to 1
Updated: 2020-06-13T15:52:14Z
Author: David Goulet <dgoulet@torproject.org>

In #32774, the consensus is telling every relay to emit SENDME v1.
We should change the default hardcoded value from 0 to 1 as well.
Milestone: Tor: 0.4.2.x-final
Assignee: David Goulet <dgoulet@torproject.org>

https://gitlab.torproject.org/legacy/trac/-/issues/33619
Resolve TROVE-2020-004
Updated: 2020-06-13T15:52:14Z
Author: Nick Mathewson

This is a remotely triggerable memory leak on relays and clients, found by Tobias Pulls.
The issue is that when circpad_setup_machine_on_circ() is reached with an inconsistent internal configuration, it fails to free an object that is replaced. It logs a bug warning, but that isn't enough.
Tobias Pulls found that this code was actually reachable, and that it results in a memory leak.
Milestone: Tor: 0.4.1.x-final

https://gitlab.torproject.org/legacy/trac/-/issues/33491
tor_bug_occurred_: Bug: src/core/or/dos.c:697: dos_new_client_conn: Non-fatal assertion !(entry == NULL) failed. (Future instances of this warning will be silenced.) (on Tor 0.4.2.6 )
Updated: 2020-06-13T15:51:55Z
Author: Trac

Hi there,
I'm receiving the report below. I searched for 'src/core/or/dos.c:697' with no hits, so I'm opening a ticket; please let me know if further info is needed to troubleshoot.
FreeBSD <<hostname>> 12.1-RELEASE-p2 FreeBSD 12.1-RELEASE-p2 GENERIC amd64
```
Mar 1 13:53:33 <<hostname>> Tor[86742]: tor_bug_occurred_: Bug: src/core/or/dos.c:697: dos_new_client_conn: Non-fatal assertion !(entry == NULL) failed. (Future instances of this warning will be silenced.) (on Tor 0.4.2.6 )
Mar 1 13:53:33 <<hostname>> Tor[86742]: Bug: Tor 0.4.2.6: Non-fatal assertion !(entry == NULL) failed in dos_new_client_conn at src/core/or/dos.c:697. Stack trace: (on Tor 0.4.2.6 )
Mar 1 13:53:33 <<hostname>> Tor[86742]: Bug: 0x12f4acc <log_backtrace_impl+0x5c> at /usr/local/bin/tor (on Tor 0.4.2.6 )
Mar 1 13:53:33 <<hostname>> Tor[86742]: Bug: 0x12f0b76 <tor_bug_occurred_+0x1d6> at /usr/local/bin/tor (on Tor 0.4.2.6 )
Mar 1 13:53:33 <<hostname>> Tor[86742]: Bug: 0x118d9a5 <channel_do_open_actions+0xf5> at /usr/local/bin/tor (on Tor 0.4.2.6 )
Mar 1 13:53:33 <<hostname>> Tor[86742]: Bug: 0x118d88e <channel_change_state_open+0x2e> at /usr/local/bin/tor (on Tor 0.4.2.6 )
Mar 1 13:53:33 <<hostname>> Tor[86742]: Bug: 0x1191a57 <channel_tls_handle_state_change_on_orconn+0x67> at /usr/local/bin/tor (on Tor 0.4.2.6 )
Mar 1 13:53:33 <<hostname>> Tor[86742]: Bug: 0x1144946 <connection_or_set_state_open+0x26> at /usr/local/bin/tor (on Tor 0.4.2.6 )
Mar 1 13:53:33 <<hostname>> Tor[86742]: Bug: 0x11923ee <channel_tls_handle_cell+0x88e> at /usr/local/bin/tor (on Tor 0.4.2.6 )
Mar 1 13:53:33 <<hostname>> Tor[86742]: Bug: 0x1140d52 <connection_or_process_inbuf+0x152> at /usr/local/bin/tor (on Tor 0.4.2.6 )
Mar 1 13:53:33 <<hostname>> Tor[86742]: Bug: 0x114ddad <connection_handle_read+0x8fd> at /usr/local/bin/tor (on Tor 0.4.2.6 )
Mar 1 13:53:33 <<hostname>> Tor[86742]: Bug: 0x119e3ee <connection_add_impl+0x23e> at /usr/local/bin/tor (on Tor 0.4.2.6 )
Mar 1 13:53:33 <<hostname>> Tor[86742]: Bug: 0x8013d72b3 <event_base_assert_ok_nolock_+0xc23> at /usr/local/lib/libevent-2.1.so.7 (on Tor 0.4.2.6 )
Mar 1 13:53:33 <<hostname>> Tor[86742]: Bug: 0x8013d318f <event_base_loop+0x53f> at /usr/local/lib/libevent-2.1.so.7 (on Tor 0.4.2.6 )
Mar 1 13:53:33 <<hostname>> Tor[86742]: Bug: 0x11a0881 <do_main_loop+0xf1> at /usr/local/bin/tor (on Tor 0.4.2.6 )
Mar 1 13:53:33 <<hostname>> Tor[86742]: Bug: 0x113de68 <tor_run_main+0x128> at /usr/local/bin/tor (on Tor 0.4.2.6 )
Mar 1 13:53:33 <<hostname>> Tor[86742]: Bug: 0x113c656 <tor_main+0x66> at /usr/local/bin/tor (on Tor 0.4.2.6 )
Mar 1 13:53:33 <<hostname>> Tor[86742]: Bug: 0x113c309 <main+0x19> at /usr/local/bin/tor (on Tor 0.4.2.6 )
Mar 1 13:54:37 <<hostname>> Tor[86742]: tor_bug_occurred_: Bug: src/core/or/dos.c:697: dos_new_client_conn: Non-fatal assertion !(entry == NULL) failed. (Future instances of this warning will be silenced.) (on Tor 0.4.2.6 )
Mar 1 13:54:37 <<hostname>> Tor[86742]: Bug: Tor 0.4.2.6: Non-fatal assertion !(entry == NULL) failed in dos_new_client_conn at src/core/or/dos.c:697. Stack trace: (on Tor 0.4.2.6 )
Mar 1 13:54:37 <<hostname>> Tor[86742]: Bug: 0x12f4acc <log_backtrace_impl+0x5c> at /usr/local/bin/tor (on Tor 0.4.2.6 )
Mar 1 13:54:37 <<hostname>> Tor[86742]: Bug: 0x12f0b76 <tor_bug_occurred_+0x1d6> at /usr/local/bin/tor (on Tor 0.4.2.6 )
Mar 1 13:54:37 <<hostname>> Tor[86742]: Bug: 0x118d9a5 <channel_do_open_actions+0xf5> at /usr/local/bin/tor (on Tor 0.4.2.6 )
Mar 1 13:54:37 <<hostname>> Tor[86742]: Bug: 0x118d88e <channel_change_state_open+0x2e> at /usr/local/bin/tor (on Tor 0.4.2.6 )
Mar 1 13:54:37 <<hostname>> Tor[86742]: Bug: 0x1191a57 <channel_tls_handle_state_change_on_orconn+0x67> at /usr/local/bin/tor (on Tor 0.4.2.6 )
Mar 1 13:54:37 <<hostname>> Tor[86742]: Bug: 0x1144946 <connection_or_set_state_open+0x26> at /usr/local/bin/tor (on Tor 0.4.2.6 )
Mar 1 13:54:37 <<hostname>> Tor[86742]: Bug: 0x11923ee <channel_tls_handle_cell+0x88e> at /usr/local/bin/tor (on Tor 0.4.2.6 )
Mar 1 13:54:37 <<hostname>> Tor[86742]: Bug: 0x1140d52 <connection_or_process_inbuf+0x152> at /usr/local/bin/tor (on Tor 0.4.2.6 )
Mar 1 13:54:37 <<hostname>> Tor[86742]: Bug: 0x114ddad <connection_handle_read+0x8fd> at /usr/local/bin/tor (on Tor 0.4.2.6 )
Mar 1 13:54:37 <<hostname>> Tor[86742]: Bug: 0x119e3ee <connection_add_impl+0x23e> at /usr/local/bin/tor (on Tor 0.4.2.6 )
Mar 1 13:54:37 <<hostname>> Tor[86742]: Bug: 0x8013d72b3 <event_base_assert_ok_nolock_+0xc23> at /usr/local/lib/libevent-2.1.so.7 (on Tor 0.4.2.6 )
Mar 1 13:54:37 <<hostname>> Tor[86742]: Bug: 0x8013d318f <event_base_loop+0x53f> at /usr/local/lib/libevent-2.1.so.7 (on Tor 0.4.2.6 )
Mar 1 13:54:37 <<hostname>> Tor[86742]: Bug: 0x11a0881 <do_main_loop+0xf1> at /usr/local/bin/tor (on Tor 0.4.2.6 )
Mar 1 13:54:37 <<hostname>> Tor[86742]: Bug: 0x113de68 <tor_run_main+0x128> at /usr/local/bin/tor (on Tor 0.4.2.6 )
Mar 1 13:54:37 <<hostname>> Tor[86742]: Bug: 0x113c656 <tor_main+0x66> at /usr/local/bin/tor (on Tor 0.4.2.6 )
Mar 1 13:54:37 <<hostname>> Tor[86742]: Bug: 0x113c309 <main+0x19> at /usr/local/bin/tor (on Tor 0.4.2.6 )
```
**Trac**:
**Username**: sjcjonker
Milestone: Tor: 0.3.5.x-final
Assignee: David Goulet <dgoulet@torproject.org>

https://gitlab.torproject.org/legacy/trac/-/issues/33469
INTERNAL ERROR: Raw assertion failed at src/lib/malloc/map_anon.c:239: lock_result == 0
Updated: 2020-06-13T15:51:53Z
Author: Trac

I tried updating to the latest stable version, and I have this error after a couple of minutes:
```
Feb 27 14:38:04.987 [notice] Tor 0.4.2.6 (git-971a6beff5a53434) running on Windows Server 2003 with Libevent 2.1.8-stable, OpenSSL 1.1.1d, Zlib 1.2.11, Liblzma N/A, and Libzstd N/A.
Feb 27 14:38:05.003 [notice] Tor can't help you if you use it wrong! Learn how to be safe at https://www.torproject.org/download/download#warning
Feb 27 14:38:05.018 [notice] Read configuration file "U:\2\Server\TOR\tor.ini".
Feb 27 14:38:05.018 [notice] Based on detected system memory, MaxMemInQueues is set to 2048 MB. You can override this by setting MaxMemInQueues by hand.
Feb 27 14:38:05.034 [warn] You specified a public address '0.0.0.0:8080' for SocksPort. Other people on the Internet might find your computer and use it as an open proxy. Please don't allow this unless you have a good reason.
Feb 27 14:38:05.034 [notice] Opening Socks listener on 0.0.0.0:8080
Feb 27 14:38:05.034 [notice] Opened Socks listener on 0.0.0.0:8080
Feb 27 14:38:05.034 [notice] Opening Control listener on 127.0.0.1:9051
Feb 27 14:38:05.049 [notice] Opened Control listener on 127.0.0.1:9051
Feb 27 14:38:05.049 [notice] Opening OR listener on 0.0.0.0:9001
Feb 27 14:38:05.049 [notice] Opened OR listener on 0.0.0.0:9001
Feb 27 14:38:05.049 [notice] Opening Directory listener on 0.0.0.0:9030
Feb 27 14:38:05.049 [notice] Opened Directory listener on 0.0.0.0:9030
============================================================ T= 1582807176
INTERNAL ERROR: Raw assertion failed in Tor 0.4.2.6 (git-971a6beff5a53434) at src/lib/malloc/map_anon.c:239: lock_result == 0
```
**Trac**:
**Username**: m95d
Milestone: Tor: unspecified