# Tor issues
https://gitlab.torproject.org/tpo/core/tor/-/issues
Last updated: 2023-05-02

## meek is broken in Tor Browser 6.0a3 (#18517)
https://gitlab.torproject.org/tpo/core/tor/-/issues/18517
Reported by: Georg Koppen | Last updated: 2023-05-02

meek does not work any longer in Tor Browser 6.0a3. It seems this is caused by an underlying bug in tor. After some amount of testing and bisecting, commit 23b088907fd23da417f5caf2b7b5f664f317ef4a is the first that introduces the new behavior. Trying to start meek with it results in
```
Mar 10 13:50:53.000 [notice] Ignoring directory request, since no bridge nodes are available yet.
Mar 10 13:50:54.000 [notice] Delaying directory fetches: No running bridges
```
and nothing thereafter: the startup is stalled.

Milestone: Tor: 0.2.8.x-final | Assignee: teor

## Relays and bridges are not counting directory requests coming in via IPv6 (#18460)
https://gitlab.torproject.org/tpo/core/tor/-/issues/18460
Reported by: Karsten Loesing | Last updated: 2023-04-01

While testing my legacy/trac#8786 branch I found that relays and bridges are currently not counting directory requests coming in via IPv6 at all. The reason is the `if` in the following code snippet from `directory_handle_command_get()`:
```
struct in_addr in;
tor_addr_t addr;
if (tor_inet_aton((TO_CONN(conn))->address, &in)) {
  tor_addr_from_ipv4h(&addr, ntohl(in.s_addr));
  geoip_note_client_seen(GEOIP_CLIENT_NETWORKSTATUS,
                         &addr, NULL,
                         time(NULL));
```
`tor_inet_aton` expects an IPv4 address in dotted-quad notation and returns 0 if it's given an IPv6 address.
When digging deeper into Git history, I found that I had changed that code to `&TO_CONN(conn)->addr` 4 years ago and then again to the code above in 4741aa4 because "Roger notes that address and addr are two different things."
I _think_ this was a mistake and that we can fix this by just reverting 4741aa4. I'll post a branch in a minute that I tested using Chutney's "bridges+ipv6" network (together with teor's legacy/trac#17153 fix).
Please correct me if we should really use `address` here instead of `addr`. In that case we'll probably want to check whether `address` contains an IPv6 address string and handle that separately.

Milestone: Tor: 0.2.8.x-final

## Wait for busy authorities / fallback dir servers (#17864)
https://gitlab.torproject.org/tpo/core/tor/-/issues/17864
Reported by: teor | Last updated: 2022-10-11

In 6c443e987d, we made the following change when selecting directory mirrors from the consensus:
> Tweak the 9969 fix a little
>
> If we have busy nodes and excluded nodes, then don't retry with the
> excluded ones enabled. Instead, wait for the busy ones to be nonbusy.
We should do the same thing when selecting hard-coded authorities / fallback dir servers.
I have a patch for this.

Milestone: Tor: 0.2.8.x-final

## circuit_handle_first_hop doesn't respect ExtendAllowPrivateAddresses (#17674)
https://gitlab.torproject.org/tpo/core/tor/-/issues/17674
Reported by: teor | Last updated: 2022-10-11

circuit_extend checks ExtendAllowPrivateAddresses, but by then it's too late: we've already connected in circuit_handle_first_hop.
This seems to be a DoS risk.
onionskin_answer handles local connections as a special case using channel_is_local, so we might actually be making some local connections that serve a useful purpose. (What is that purpose?)
Do we really need to allow connections to our own address from ourselves?
It might be a good idea to refuse to build circuits to ourselves in circuit_handle_first_hop if ExtendAllowPrivateAddresses is 0, and then see what falls over. Unfortunately, this can't be tested using chutney.

Milestone: Tor: 0.2.8.x-final

## Add IPv4 Fallback Directory List to tor, active by default (#15775)
https://gitlab.torproject.org/tpo/core/tor/-/issues/15775
Reported by: teor | Last updated: 2022-10-11

weasel writes on tor-dev:
> Tor has included a feature to fetch the initial consensus from nodes
> other than the authorities for a while now. We just haven't shipped a
> list of alternate locations for clients to go to yet.
>
> One reason why we might want to ship tor with a list of additional places
> where clients can find the consensus is that it makes authority
> reachability and BW less important.
>
> At the last Tor dev meeting we came up with a list of arbitrary
> requirements that nodes should meet to be included in this list.
> We want them to have been around and using their current key, address,
> and port for a while now (120 days), and have been running, a guard, and
> a v2 directory mirror for most of that time.
>
> I have written a script to come up with a list of nodes that match our
> criteria. It's currently at
> https://www.palfrader.org/volatile/fallback-dir/get-fallback-dir-candidates
> It currently produces
> https://www.palfrader.org/volatile/2015-04-17-VjBkc8DWV8c/list

See https://lists.torproject.org/pipermail/tor-dev/2015-April/008674.html

This file currently has 329 entries, and takes up approximately 32kB.
If we hard-coded it in the binary like the authorities, it would increase the binary size by approximately 2% on my platform.
Edit: nickm favours putting it in `torrc.defaults`
Edit 2: weasel notes `torrc.defaults` is for package maintainers. Instead, put it in a list of strings in the code, much like the authorities.
Do we expect this to land by 0.2.7?
Edit: Yes
Do we want to work on a signed file first (legacy/trac#15774)?
(A signed file needs a well-defined threat model and signature verification has to work without access to the authorities or fallback directories.)
Edit: No clear threat model, defer.

Milestone: Tor: 0.2.8.x-final | Assignee: teor

## What's the average number of hsdir fetches before we get the hsdesc? (#13208)
https://gitlab.torproject.org/tpo/core/tor/-/issues/13208
Reported by: Roger Dingledine | Last updated: 2022-04-18

Hidden services publish their hsdesc to six hsdir relays, once an hour.
Then relays come and go, changing the set of six that clients will compute when deciding which relay to fetch from.
Also, both hidden services and clients only fetch a new consensus every 2-4 hours, so they will be perennially a few hours behind.
This could pretty easily result in a situation where (due to different knowledge on the hidden service's part) the hidden service doesn't publish to all six that it's "supposed" to, and (due to different knowledge on the client's part) the client doesn't pick from the same six that the hidden service published to, and (due to churn in the relays) the six that the hidden service published to might not remain the right six from a global perspective.
Realistically, do these skews matter?
We could imagine doing an experiment where we follow the client algorithm and find out the average number of fetches we do before we get an answer (or give up).

Milestone: Tor: 0.2.8.x-final

## BUG: connection_ap_mark_as_pending_circuit() (#17659)
https://gitlab.torproject.org/tpo/core/tor/-/issues/17659
Reported by: cypherpunks | Last updated: 2021-10-13
```
Nov 23 [...] [warn] connection_ap_mark_as_pending_circuit(): Bug: What?? pending_entry_connections already contains 0x81b488f8! (on Tor 0.2.8.0-alpha-dev 35bfd782eae29646)
Nov 23 [...] [warn] connection_ap_attach_pending(): Bug: 0x81e10248 is no longer in circuit_wait. Why is it on pending_entry_connections? (on Tor 0.2.8.0-alpha-dev 35bfd782eae29646)
Nov 23 [...] [warn] connection_ap_attach_pending(): Bug: 0x81ab0338 is no longer in circuit_wait. Why is it on pending_entry_connections? (on Tor 0.2.8.0-alpha-dev 35bfd782eae29646)
Nov 23 [...] [warn] connection_ap_attach_pending(): Bug: 0x81a007d8 is no longer in circuit_wait. Why is it on pending_entry_connections? (on Tor 0.2.8.0-alpha-dev 35bfd782eae29646)
Nov 23 [...] [warn] connection_ap_attach_pending(): Bug: 0x81b45800 is no longer in circuit_wait. Why is it on pending_entry_connections? (on Tor 0.2.8.0-alpha-dev 35bfd782eae29646)
Nov 23 [...] [warn] connection_ap_attach_pending(): Bug: 0x81e10170 is no longer in circuit_wait. Why is it on pending_entry_connections? (on Tor 0.2.8.0-alpha-dev 35bfd782eae29646)
Nov 23 [...] [warn] connection_ap_attach_pending(): Bug: 0x81b279e8 is no longer in circuit_wait. Why is it on pending_entry_connections? (on Tor 0.2.8.0-alpha-dev 35bfd782eae29646)
Nov 23 [...] [warn] connection_ap_attach_pending(): Bug: 0x81d66cb8 is no longer in circuit_wait. Why is it on pending_entry_connections? (on Tor 0.2.8.0-alpha-dev 35bfd782eae29646)
Nov 23 [...] [warn] connection_ap_attach_pending(): Bug: 0x81e11558 is no longer in circuit_wait. Why is it on pending_entry_connections? (on Tor 0.2.8.0-alpha-dev 35bfd782eae29646)
Nov 23 [...] [warn] connection_ap_attach_pending(): Bug: 0x816458a8 is no longer in circuit_wait. Why is it on pending_entry_connections? (on Tor 0.2.8.0-alpha-dev 35bfd782eae29646)
Nov 23 [...] [warn] connection_ap_attach_pending(): Bug: 0x815c11a0 is no longer in circuit_wait. Why is it on pending_entry_connections? (on Tor 0.2.8.0-alpha-dev 35bfd782eae29646)
Nov 23 [...] [warn] connection_ap_attach_pending(): Bug: 0x81d4ddb8 is no longer in circuit_wait. Why is it on pending_entry_connections? (on Tor 0.2.8.0-alpha-dev 35bfd782eae29646)
Nov 23 [...] [warn] connection_ap_attach_pending(): Bug: 0x81563fa0 is no longer in circuit_wait. Why is it on pending_entry_connections? (on Tor 0.2.8.0-alpha-dev 35bfd782eae29646)
Nov 23 [...] [warn] connection_ap_attach_pending(): Bug: 0x81b45f80 is no longer in circuit_wait. Why is it on pending_entry_connections? (on Tor 0.2.8.0-alpha-dev 35bfd782eae29646)
Nov 23 [...] [warn] connection_ap_attach_pending(): Bug: 0x8129f718 is no longer in circuit_wait. Why is it on pending_entry_connections? (on Tor 0.2.8.0-alpha-dev 35bfd782eae29646)
Nov 23 [...] [warn] connection_ap_attach_pending(): Bug: 0x814c7350 is no longer in circuit_wait. Why is it on pending_entry_connections? (on Tor 0.2.8.0-alpha-dev 35bfd782eae29646)
Nov 23 [...] [warn] connection_ap_attach_pending(): Bug: 0x8158c3b0 is no longer in circuit_wait. Why is it on pending_entry_connections? (on Tor 0.2.8.0-alpha-dev 35bfd782eae29646)
Nov 23 [...] [warn] connection_ap_attach_pending(): Bug: 0x819f4a48 is no longer in circuit_wait. Why is it on pending_entry_connections? (on Tor 0.2.8.0-alpha-dev 35bfd782eae29646)
```

Milestone: Tor: 0.2.8.x-final

## for_discovery option in add_an_entry_guard() is confusingly named (#9971)
https://gitlab.torproject.org/tpo/core/tor/-/issues/9971
Reported by: Roger Dingledine | Last updated: 2021-09-16

In legacy/trac#9946 I added a new argument "for_discovery" to add_an_entry_guard(). Nick prefers "provisional" or "probationary".
In parallel, I think we should probably rename the made_contact field in entry_guard_t to capture *why* we're remembering that we've made contact, rather than simply that we have.
And lastly, we should do something about the godawful number of int arguments that add_an_entry_guard() now takes.

Milestone: Tor: 0.2.8.x-final

## /src/util.c refactoring (#15585)
https://gitlab.torproject.org/tpo/core/tor/-/issues/15585
Reported by: Trac | Last updated: 2021-09-16

I've tried to refactor the /src/util.c code, as it seems to lack a unified code style; I tried to make it more readable and better looking, for example by adding braces to every if block.
The branch name is "fix_util" and I attach th...I've tried to refactor the /util.c code as it seems to lack the unified codestyle and I tried to make it more readable and better looking, like adding the braces to every if block and so on.
The branch name is "fix_util" and I attach the patch generated via `git format-patch master --stdout > fix_util.patch`.
The original repository is https://gitweb.torproject.org/tor.git
**Trac**:
**Username**: arcadiaq

Milestone: Tor: 0.2.8.x-final

## Refactor clock skew warning code to avoid duplication (#17739)
https://gitlab.torproject.org/tpo/core/tor/-/issues/17739
Reported by: teor | Last updated: 2021-09-16

The following functions contain very similar clock skew code:
* connection_dir_client_reached_eof
* channel_tls_process_netinfo_cell
* or_state_load
We should unify this code to reduce redundancy and increase consistency.
Milestone: Tor: 0.2.8.x-final

## Refactor accept/reject * redundancy checks out of policies_parse_exit_policy_internal (#17608)
https://gitlab.torproject.org/tpo/core/tor/-/issues/17608
Reported by: teor | Last updated: 2021-09-16

policies_parse_exit_policy_internal would be a lot easier to read if the code that implements `found_final_effective_entry` was in its own function.

Milestone: Tor: 0.2.8.x-final

## Decouple connection_ap_handshake_attach_circuit from nearly everything (#17590)
https://gitlab.torproject.org/tpo/core/tor/-/issues/17590
Reported by: Nick Mathewson | Last updated: 2021-09-16

Long ago we used to call connection_ap_handshake_attach_circuit() only in a few places, since connection_ap_attach_pending() attaches all the pending connections, and does so regularly. But this turned out to have a performance problem: it would introduce a delay to launching or connecting a stream.
We couldn't just call connection_ap_attach_pending() every time we make a new connection, since it walks the whole connection list. So we started calling connection_ap_attach_pending all over, instead! But that's kind of ugly and messes up our callgraph.
But we have an opportunity to make Tor simpler!
* We can make connection_ap_attach_pending() linear in the number of pending entry connections, rather than in the number of total connections.
* If we do that, we can call it from run_main_loop_once() (or somewhere similar) whenever there is a pending entry connection.
* And if we do that, we can just put connections on a pending-list, rather than calling connection_ap_attach_pending() on them directly.
This will simplify tor, simplify our callgraph, and -- with the help of legacy/trac#17589 -- break the blob into multiple smaller strongly connected components.

Milestone: Tor: 0.2.8.x-final

## Decouple connection_dir_request_failed() from directory_initiate_command_rend() (#17589)
https://gitlab.torproject.org/tpo/core/tor/-/issues/17589
Reported by: Nick Mathewson | Last updated: 2021-09-16

Instead of calling connection_dir_request_failed() directly, directory_initiate_command_rend() should mark the failed connection and have it cleaned up later. This would prevent recursive invocation of directory_initiate_command_rend(), and remove 12 functions from the Blob.
I'd do this right now, but I want to test that the code actually does the right thing.

Milestone: Tor: 0.2.8.x-final

## tor and capabilities (#8195)
https://gitlab.torproject.org/tpo/core/tor/-/issues/8195
Reported by: weasel (Peter Palfrader) | Last updated: 2021-08-23

We should figure out what it takes to keep the CAP_NET_BIND_SERVICE capability when changing the user away from root, so that we can re-open low listening ports later again.

Milestone: Tor: 0.2.8.x-final

## policies_parse_exit_policy_internal should block all IPv4 and IPv6 local addresses (#17027)
https://gitlab.torproject.org/tpo/core/tor/-/issues/17027
Reported by: teor | Last updated: 2021-08-23

Currently it just handles a single IPv4 address, allowing IPv6 exits to be connected to on their IPv6 address, or multihomed IPv4 exits to be connected to on their other IPv4 addresses.
This is a potential security issue, as it allows connections to local ports on an exit.

Milestone: Tor: 0.2.8.x-final

## Avoid overflow in tor_timegm when time_t is 32 bit (#18479)
https://gitlab.torproject.org/tpo/core/tor/-/issues/18479
Reported by: teor | Last updated: 2021-08-23

tor_timegm overflows for dates in and after 2038 when time_t is 32-bit. Instead, we should warn and return an error.
This is a bugfix on 3c4b4c8ca in tor-0.0.2pre14.
Milestone: Tor: 0.2.8.x-final | Assignee: George Kadianakis

## exit policy wrongly displayed in globe, atlas etc. (#18214)
https://gitlab.torproject.org/tpo/core/tor/-/issues/18214
Reported by: toralf | Last updated: 2021-08-23

I do have in torrc:
```
...
# restrict the reduced exit policy here further due to too many abuse tickets from AS (mostly port scans)
#
ExitPolicy reject *:20-21
ExitPolicy reject *:22
ExitPolicy reject *:23
ExitPolicy reject *:80
ExitPolicy reject *:554
ExitPolicy reject *:8000
ExitPolicy reject *:8080
# this is the copy of the reduced exit policy from: https://trac.torproject.org/projects/tor/wiki/doc/ReducedExitPolicy
#
ExitPolicy accept *:20-21 # FTP
ExitPolicy accept *:22 # SSH
ExitPolicy accept *:23 # Telnet
ExitPolicy accept *:43 # WHOIS
...
```
but https://globe.torproject.org/#/relay/F1BE15429B3CE696D6807F4D4A58B1BFEC45C822 shows :
```
...
reject 173.193.197.194:*
reject *:80
accept *:43
...
```
This is a remote problem; using stem I do get all rejected ports from my tor process:
```
ms-magpie ~ # python3 /home/tfoerste/test.py | tr ',' '\n' | grep 'reject \*' | xargs
reject *:20-21 reject *:22 reject *:23 reject *:80 reject *:554 reject *:8000 reject *:8080 reject *:*
```

Milestone: Tor: 0.2.8.x-final

## Refresh Exit policy when interface addresses change (#18208)
https://gitlab.torproject.org/tpo/core/tor/-/issues/18208
Reported by: teor | Last updated: 2021-08-23

Since 0.2.7.3, we've incorporated Exit relays' interface addresses in reject lines in their Exit policies.
But we haven't been refreshing those exit policies when interface addresses change.
Milestone: Tor: 0.2.8.x-final | Assignee: teor

## Potential heap corruption in smartlist_add(), smartlist_insert() (#18162)
https://gitlab.torproject.org/tpo/core/tor/-/issues/18162
Reported by: George Kadianakis | Last updated: 2021-08-23

Here follows a vulnerability report by Guido Vranken, submitted through the Tor bug bounty program.
The attack requires a minimum of 16 GB of available memory on the victim's host, so it is quite hard to exploit.
## Walkthrough of the vulnerability
`smartlist_add` and `smartlist_insert` both invoke `smartlist_ensure_capacity` before adding an element to the list, in order to ensure that sufficient memory is available, to `exit()` if not enough memory is available, and to detect requests for an invalid size:
```
static INLINE void
smartlist_ensure_capacity(smartlist_t *sl, int size)
{
#if SIZEOF_SIZE_T > SIZEOF_INT
#define MAX_CAPACITY (INT_MAX)
#else
#define MAX_CAPACITY (int)((SIZE_MAX / (sizeof(void*))))
#define ASSERT_CAPACITY
#endif
  if (size > sl->capacity) {
    int higher = sl->capacity;
    if (PREDICT_UNLIKELY(size > MAX_CAPACITY/2)) {
#ifdef ASSERT_CAPACITY
      /* We don't include this assertion when MAX_CAPACITY == INT_MAX,
       * since int size; (size <= INT_MAX) makes analysis tools think we're
       * doing something stupid. */
      tor_assert(size <= MAX_CAPACITY);
#endif
      higher = MAX_CAPACITY;
    } else {
      while (size > higher)
        higher *= 2;
    }
    sl->capacity = higher;
    sl->list = tor_reallocarray(sl->list, sizeof(void*),
                                ((size_t)sl->capacity));
  }
#undef ASSERT_CAPACITY
#undef MAX_CAPACITY
}
```
On a typical 64-bit system, `SIZEOF_INT` is 4 and `SIZEOF_SIZE_T` is 8. Consequently, `MAX_CAPACITY` is `INT_MAX`, which is 0x7FFFFFFF as can be seen in torint.h:
```
#ifndef INT_MAX
#if (SIZEOF_INT == 4)
#define INT_MAX 0x7fffffffL
#elif (SIZEOF_INT == 8)
#define INT_MAX 0x7fffffffffffffffL
#else
#error "Can't define INT_MAX"
#endif
#endif
```
So `MAX_CAPACITY` is 0x7FFFFFFF. Now assume that that many (0x7FFFFFFF) items have already been added to a smartlist via smartlist_add(sl, value).
smartlist_add() is:
```
void
smartlist_add(smartlist_t *sl, void *element)
{
  smartlist_ensure_capacity(sl, sl->num_used+1);
  sl->list[sl->num_used++] = element;
}
```
If `sl->num_used` is 0x7FFFFFFF prior to invoking `smartlist_add`, then the next `smartlist_add` is effectively:
```
void
smartlist_add(smartlist_t *sl, void *element)
{
  smartlist_ensure_capacity(sl, -2147483648);
  sl->list[2147483647] = element;
  sl->num_used = -2147483648;
}
}
```
This is the case since we are dealing with a signed 32-bit integer, and 2147483647 + 1 wraps around to -2147483648.
All of the code in `smartlist_ensure_capacity` is wrapped inside the following `if` block:
```
if (size > sl->capacity) {
}
```
The expression -2147483648 > 2147483647 equals false, thus the code inside the block is not executed.
What actually causes the segmentation fault is that a negative 32-bit integer is used to compute the location of an array index in a 64-bit memory layout, i.e., the next call to smartlist_add is effectively:
```
void
smartlist_add(smartlist_t *sl, void *element)
{
  smartlist_ensure_capacity(sl, -2147483647); /* effectively do-nothing code, as explained above */
  sl->list[-2147483648] = element;
  sl->num_used = -2147483647;
}
```
## Discussion
The requirement for 16 gigabytes of memory is considerable.
Triggering the vulnerability obviously also requires some code path which will invoke `smartlist_add` or `smartlist_insert` upon the same smartlist at the attacker's behest. Moreover, such a code path may have the side effect that it requires a separate allocation for each object that is added to the list; `smartlist_add` takes a pointer argument after all -- usually, but not always, this pointer refers to freshly allocated memory. Exceptions to this rule are static strings and pointers to a place in a large string or buffer that was already extant.
Once a vulnerable code path has been discovered, then it ultimately boils down to how much memory a user's machine is able to allocate in order to corrupt the heap.
Despite these constraints, smartlists form a considerable portion of the infrastructure of your code (I count some 380+ occurrences of `smartlist_add`/`smartlist_insert` in the .c files using grep, excluding the test/ directory), and as such it's probably wise to revise the checks in `smartlist_ensure_capacity`.

Milestone: Tor: 0.2.8.x-final | Assignee: Nick Mathewson

## Avoid a local port-stealing attack on Windows (#18123)
https://gitlab.torproject.org/tpo/core/tor/-/issues/18123
Reported by: teor | Last updated: 2021-08-23

On Windows, Tor is vulnerable to a port-stealing attack described on this StackOverflow post under the "Windows" heading:
https://stackoverflow.com/questions/14388706/socket-options-so-reuseaddr-and-so-reuseport-how-do-they-differ-do-they-mean-t
In short, another app can set SO_REUSEADDR, then bind to a port that Tor is already bound to, stealing all future connections to Tor.
Therefore, I think we should set SO_EXCLUSIVEADDRUSE on all listener sockets on Windows, which prevents this attack.
We could do this near make_socket_reusable in connection_listener_new; make_socket_reusable already has a comment about this issue.

Milestone: Tor: 0.2.8.x-final