Tor issues (https://gitlab.torproject.org/tpo/core/tor/-/issues)

https://gitlab.torproject.org/tpo/core/tor/-/issues/7148
Even better parameter voting protocol (Nick Mathewson, 2022-05-02)

Our current parameter voting protocol is backwards in how many voters a parameter needs before we can accept it. Right now we accept a parameter into the consensus if it is voted on by a majority of all authorities, or by at least 3 authorities. But that fails when most authorities are abstaining: 3 rogue authorities could force the value of an unset parameter to whatever they want.
A stopgap solution (for which roger is writing a ticket) is for all authorities to vote on all parameters, and to have most/all authorities begin voting on any new parameter before we release software that looks for it.
But surely we can do better than that.
We need to write a little proposal for this before the little-proposal deadline if we want to implement it in 0.2.4.

https://gitlab.torproject.org/tpo/core/tor/-/issues/19162
Make it even harder to become HSDir (George Kadianakis, 2023-03-13)

In legacy/trac#8243 we started requiring the `Stable` flag for becoming an HSDir, but this is still not hard enough for motivated adversaries. Hence we need to make it even harder for a relay to become an HSDir, so that only relays that have been around for a long time get the flag. After prop224 gets deployed, there will be less incentive for adversaries to become HSDirs, since they won't be able to harvest onion addresses.
Until then, our current plan is to increase the bandwidth and uptime required to become an HSDir to something almost unreasonable: for example, requiring an uptime of over 6 months, or maybe requiring that the relay is in the top 1/4 of uptimes on the network.

Milestone: Tor: unspecified. Assignee: Roger Dingledine.

https://gitlab.torproject.org/tpo/core/tor/-/issues/16894
Check all logging output is appropriately escaped / escaped_safe_str_client (teor, 2022-02-07)

Security bugs like legacy/trac#16891 show up every so often, where sensitive input is logged rather than being obscured. Similarly, client input is sometimes logged unsanitised (I fixed one of these in the directory request logging code about 9-12 months ago).
It would be great if someone could review all the strings that are logged by Tor, and categorise them into:
* static or calculated internally: trusted, log as-is
* externally provided: unsanitised, use escaped()
* sensitive client information: use escaped_safe_str_client()
Do we want this in 0.2.7, or should we leave it until 0.2.8?

https://gitlab.torproject.org/tpo/core/tor/-/issues/20212
Tor can be forced to open too many circuits by embedding .onion resources (GA, 2023-09-13)

A malicious web page or an exit node* can force Tor to open too many new circuits by embedding resources from multiple .onion domains.

I could observe up to 50 new circuits per second, and a total of a few hundred circuits in less than half a minute.
The embedded HS domains don't need to exist; Tor will still open a new internal circuit for each .onion domain to download the descriptors.
I guess forcing clients to make too many circuits may enable certain attacks, even though the circuits are internal.
Maybe Tor (or Tor Browser) could cap the number of new circuits opened within a time window. I can't think of a realistic use case for loading resources from tens of different hidden services.
*: only when the connection is unencrypted HTTP

https://gitlab.torproject.org/tpo/core/tor/-/issues/23113
Manage DNS state better when "All nameservers have failed" (teor, 2022-09-01)

We should downgrade this warning when it only happens for a short period of time (or for a small number of requests), or when it happens in response to a malformed request.

This warning is causing operators to make sub-optimal DNS server choices: for example, avoiding a local cache in favour of remote resolvers.
Sometimes changing the local resolver makes a difference:
https://trac.torproject.org/projects/tor/ticket/1936#comment:12
Sometimes it happens in response to malformed requests:
https://trac.torproject.org/projects/tor/ticket/11600#comment:6
Sometimes it's harmless:
https://trac.torproject.org/projects/tor/ticket/11600#comment:7
Because it's followed by:
```
[notice] eventdns: Nameserver <ISP-resolver2>:53 is back up
```

https://gitlab.torproject.org/tpo/core/tor/-/issues/33375
Stop advertising an IPv6 exit policy when DNS is broken for IPv6 (teor, 2022-02-28)

When `dns_seems_to_be_broken_for_ipv6()` is true, exits should stop advertising an IPv6 exit policy.
Here's a rough design:
* when `dns_seems_to_be_broken_for_ipv6()` is first set to 1, mark the relay descriptor dirty
* when rebuilding the descriptor, check `dns_seems_to_be_broken_for_ipv6()` before including an IPv6 exit policy
* reset `dns_seems_to_be_broken_for_ipv6()` periodically, maybe every 1-3 days?

https://gitlab.torproject.org/tpo/core/tor/-/issues/29927
Tor protocol errors causing silent dropped cells (Mike Perry, 2022-09-01)

While testing vanguards, I've seen some mystery cases client-side where circuits are getting closed with END_CIRC_REASON_TORPROTOCOL, but Tor is not emitting any log lines that correspond to this, even at debug level.
This is happening for circuits with purpose CIRCUIT_PURPOSE_C_REND_READY_INTRO_ACKED. Additionally, all circuits seem able to fail during construction with END_CIRC_REASON_TORPROTOCOL, with no Tor log messages even at debug log level. Possibly more ntor handshake failures, similar to legacy/trac#29700?
Finally, CIRCUIT_PURPOSE_C_INTRODUCE_ACKED circuits are getting closed with an END_CIRC_REASON_FINISHED after receiving an invalid cell, seemingly after they are done being used.
See also https://github.com/mikeperry-tor/vanguards/issues/37
The vanguards addon now outputs this bug number at INFO log level when this happens.

https://gitlab.torproject.org/tpo/core/tor/-/issues/31183
Situational symlink attacks on ControlPortWriteToFile etc. (George Kadianakis, 2022-09-01)

Here is a bug report from paldium on HackerOne. It's basically a very situational and restricted local privilege escalation against certain setups and threat models:
```
# Summary
It is possible to change permissions of files through tor or
gain write access to newly created ones. The target file can be
chosen by a local attacker if an adjusted configuration is used.
# How to reproduce
- Given is a FreeBSD or Mac OS X system.
- Tor is configured with a torrc file containing this line:
ControlPortWriteToFile /tmp/control.txt
- Optionally (race condition) this line is included as well:
ControlPortFileGroupReadable 1
- /tmp is a directory with sticky bit, i.e. trwxrwxrwx
## or
- Given is a Unix-like (Linux) system.
- Tor is configured with a torrc file containing this line:
ControlPortWriteToFile /tmp/proof/control.txt
- Optionally (race condition) this line is included as well:
ControlPortFileGroupReadable 1
- /tmp/proof is writable by the local attacker
When tor starts, it will eventually write all available control
ports into the file configured by "ControlPortWriteToFile". This file is not
written directly into, but a temporary file is used (the extension
".tmp" is added to the file name). If this file already exists, it is
simply truncated.
See src/lib/fs/files.c start_writing_to_file for details, especially
line 321 and following.
The problem is that an attacker can simply create the temporary file
before tor gets the chance to. On Mac this attack works by default
against /tmp, but Linux has a protection against symlink attacks on
directories with sticky bit like /tmp or /var/tmp. Therefore it takes an
unusual configuration on Linux or a different (regular) directory.
Let the attacker create a file which tor, even with dropped privileges,
can write to:
`attacker$ install -m 666 /dev/null /tmp/control.txt.tmp`
The tor process will use this file without adjusting the permissions,
because O_CREAT was not actually performed. Instead, O_TRUNC simply
truncated the file to remove existing content.
Afterwards tor will write content to this file, which the attacker could
simply overwrite at this point. But there is no real need to, because
the target file will still keep the permissions of this temporary file
after rename.
See src/lib/fs/files.c replace_file for details. Basically it's a
simple rename() call which therefore removes the target inode and
renames the temporary file to the target one (in this setup).
At this point, the first attack is finished. The file /tmp/control.txt
is under control of the attacker. If "ControlPortFileGroupReadable 1"
is not given, this exploit code works on Linux systems as well (you
can skip the proof/ part on Mac due to lack of /tmp protection):
    In torrc: ControlPortWriteToFile /tmp/proof/control.txt

    attacker$ install -Dm 666 /dev/null /tmp/proof/control.txt.tmp
    root$ tor -f torrc
    attacker$ ls -l /tmp/proof/control.txt
    -rw-rw-rw- 1 attacker attacker 0 Jun 24 22:59 /tmp/proof/control.txt
## Second attack
The second attack uses a race condition. Because /tmp/control.txt is
under control of the attacker, the file can be deleted and replaced with
a symbolic link to a target which we want to get group read permissions
for. This is a possible scenario if "ControlPortFileGroupReadable 1" is
given.
In this case, chmod() is called in the tor process. See
src/feature/control/control.c control_ports_write_to_file,
especially line 149 (the chmod call).
The system call chmod() uses a file path as an argument. There is no
guarantee that the file path still refers to the file which has been
created by the tor process. Furthermore, chmod() follows symbolic
links, therefore the referenced target file is adjusted by chmod.
This is obviously a race condition but can be used to gain read access
to files which -- even by configuration -- should be private to the
tor user.
The attacker still needs group-read permissions, otherwise the newly
revealed files still cannot be accessed.
## Example
`tor just renamed controlled /tmp/control.txt.tmp to /tmp/control.txt`
`attacker$ rm /tmp/control.txt`
`attacker$ ln -s /var/lib/tor /tmp/control.txt`
`tor calls chmod on /tmp/control.txt and therefore on /var/lib/tor`
The attacker, if part of tor's group, can access /var/lib/tor now as well
(only read, but hierarchy is known through other tor installations)
# Solution
Use fchmod() on a file which has been opened with open() and O_NOFOLLOW
to prevent changing any files which have been reached through symbolic
links. Preventing symbolic links this way is already done in
src/lib/fs/dir.c check_private_dir, line 85. This could be implemented
as a tor_chmod() to fix the other chmod() calls in the code as well.
Use mkstemp() to atomically create a temporary file which has the
guaranteed permission 600 and is owned by the tor user.
## Impact
A local attacker can modify the content of files which are considered trusted by dependent tools, e.g. the control port file.
Also a local attacker can extend privileges of files which are supposed to be private to the tor user and not readable by the group.
Timeline:
2019-06-25 10:24:59 +0000: @paldium (comment)
To clarify one aspect about attack 1: of course it is not a tor issue if someone creates a world-writable directory and lets tor create files in there. Anyone could simply overwrite the resulting file, and tor will never be able to prevent that. Therefore, the rename() issue is a non-issue on Linux. But as a user I would expect temporary files within a sticky-bit directory like /tmp to be properly handled.
But attack 2 (race condition) should even be prevented in a world writable directory (or writable by a possibly malicious user). Therefore I consider attack 2 on Linux to be unlikely but plausible.
I would like to add a patch to this, but I am not sure how you want to handle file permissions.
The nicest approach would be to supply the permissions to the function which creates the file, but that would be a no-op on Windows.
Otherwise tor_chmod could open() the file and at least prevent symbolic link attacks. Yet, files could still be modified which are not the ones we created.
As this is a design decision, I haven't started writing a patch because I do not know which one you would prefer.
```
Here is some further information:
```
ControlPortWriteToFile is one example. The same attack scenarios apply to:
- ExtORPortCookieAuthFileGroupReadable 1
- CookieAuthFileGroupReadable 1
- DataDirectoryGroupReadable 1
- CacheDirectoryGroupReadable 1
- KeyDirectoryGroupReadable 1
```
Here is some of my analysis for attack scenarios:
```
Hello @paldium,
thanks for submitting this report.
Here are the best attacks we could find given the bugs you gave us:
Attack 1) The most realistic attack scenario I could think of is a system where the attacker is a local user who cannot establish outgoing connections, but is able to overwrite the ControlPortWriteToFile file and replace its contents with an attacker-controlled IP:PORT, so that a controller program connects to the evil IP:PORT, thereby deanonymizing the user. This seems to be a very artificial scenario, which assumes a particular threat model, a tor with specific configuration parameters, and a very specific system.
Attack 2) The most realistic attack scenario here would be the attacker using this "read anything controlled by Tor" race condition to learn the private keys of an onion service on the same system which they cannot otherwise read... I'm actually not sure if that would work, but I think it's possible. This also assumes a particular threat model, a multi-user system, and specific configuration parameters.
@paldium, would it be possible to outline the various solutions we have for fixing this issue?

I don't think specifying the permissions in the torrc is a nice thing from a UX perspective. Perhaps not following symlinks is a start but not the whole fix? Maybe we should just abort if there is a dangerous configuration? I wonder how prevalent this is.
```
and here are suggestions on patching:
```
Hi @asn,
I agree here: the attack is impossible against default setups and takes quite specific steps to be exploitable.
To fix this vulnerability at its root, I recommend adjusting the function `start_writing_to_file` in `src/lib/fs/files.c`. The system call `mkstemp` (included in POSIX) guarantees a unique file name for the (optional) temporary file. This way, an attacker cannot prepare a file before tor tries to create it. In case of conflict, mkstemp iterates through a huge pool of possible names and, if all fail, returns -1.
It must be checked if mkstemp is a viable option on all target systems, especially Windows. But it's POSIX, so it should work.
The attached patch performs these changes, but breaks a test which would be redundant then (it fails because it tries to create a temporary file beforehand, which `mkstemp` successfully prevents).
### Next improvement to consider (for attack 2):
As far as I understand the code, possible modes for files are:
- 0600 (default)
- 0640 (if the configuration requests group-readable files, non-Windows systems only)
- 0400 (tor-gencert, which is not expected to run on Windows according to manual page)
If the special case 0400 is handled in tor-gencert directly, the functions in `src/lib/fs/files.c` can be further reduced in their feature set: Just add a "group readable" attribute to these functions and remove the explicit `mode` (if present):
- start_writing_to_stdio_file
- start_writing_to_file
- write_str_to_file
- write_bytes_to_file
All these functions would use 0600 by default and only support 0640 if the boolean "group readable" flag is set to true -- and that will only happen on non-Windows systems.
With these changes in place, the remaining `chmod` calls (except for the control socket) can be removed and that will also fix the second attack (and gives full atomic control to the newly introduced `fchmod` call):
### Before:

    if (write_str_to_file(options->ControlPortWriteToFile, joined, 0) < 0) {
      log_warn(LD_CONTROL, "Writing %s failed: %s",
               options->ControlPortWriteToFile, strerror(errno));
    }
    #ifndef _WIN32
    if (options->ControlPortFileGroupReadable) {
      if (chmod(options->ControlPortWriteToFile, 0640)) {
        log_warn(LD_FS, "Unable to make %s group-readable.",
                 options->ControlPortWriteToFile);
      }
    }
    #endif /* !defined(_WIN32) */

### After:

    if (write_str_to_file(options->ControlPortWriteToFile,
                          options->ControlPortFileGroupReadable,
                          joined, 0) < 0) {
      log_warn(LD_CONTROL, "Writing %s failed: %s",
               options->ControlPortWriteToFile, strerror(errno));
    }
```

https://gitlab.torproject.org/tpo/core/tor/-/issues/33156
DoS subsystem should compare IPv6 /64 (teor, 2022-10-11)

s7r writes:
> Our internal DoS defense subsystem should also treat prefixes instead of
> addresses, because right now with a client with a /64 public IPv6 prefix
> assigned to it I could hammer via IPv6 guards without triggering the DoS
> defense.
https://lists.torproject.org/pipermail/tor-dev/2020-February/014144.html
We could make this change by:
* only putting the first /64 of each IPv6 address in the filter list, and
* only checking the first /64 of each new IPv6 connection

https://gitlab.torproject.org/tpo/core/tor/-/issues/31022
Tor's windows "--service install" should warn if it installs on a global writeable path (George Kadianakis, 2023-09-13)

Seems like there is a platform-specific (Windows), configuration-specific (requires a multi-user setup and a specific install procedure) local root exploit on Windows, if "--service install" is used on the wrong directory level.
In the future we should warn if "--service install" is used insecurely, and we should provide installer wizards to do this right.
IMO this is a very unlikely issue, so I assigned it to 042, but feel free to move it if you disagree.
Report inlined:
```
Title: When tor.exe is running as a Windows service, it may be subject to privilege escalation
Scope: None
Weakness: Privilege Escalation
Severity: Low
Link: https://hackerone.com/reports/602533
Date: 2019-06-06 18:17:39 +0000
By: @xiaoyinl
Details:
According to https://2019.www.torproject.org/docs/faq#NTService, you can run Tor as a Windows service. To install Tor as a service, you run `tor --service install`. However, the installed Tor service uses the same tor.exe image path as the service path. The Tor service runs under `NT authority\local service` account, so if an admin unzips tor.exe into a folder that is writable by non-admin users (e.g. C:\tor), then a malicious standard user can gain LocalService privilege by planting a malicious DLL into the folder where tor.exe is located.
To make things worse, it's common that admins unzip tor.exe into a directory writable by non-admin users, because if it's unzipped into one of the admins' user directories (like Downloads, Documents, etc.), then the service won't even run, since the LocalService account has no access to admins' directories. Actually, the OP of https://trac.torproject.org/projects/tor/ticket/29345 "fixed" his problem by unzipping tor into C:\\:
> In fact, if you extract tor files in a Tor folder located in C:\ you probably won't have this problem of permissions
This unfortunately made him vulnerable to privilege escalation.
**Reproduce**:
1. download Tor from https://www.torproject.org/dist/torbrowser/8.5.1/tor-win32-0.3.5.8.zip
2. unzip it into C:\\tor-win32-0.3.5.8.
3. Open an admin command prompt, run C:\\tor-win32-0.3.5.8\\Tor\\tor.exe --service install
4. Log in a standard Windows user, create a malicious iphlpapi.dll, and copy this file into C:\\tor-win32-0.3.5.8\\Tor\\
5. Restart your system. The malicious iphlpapi.dll should run.
**Fix**:
To fix this bug, when installed as a service, copy Tor's executable folder into a protected directory, like C:\\Program Files, or C:\\Windows. Then use the protected tor.exe as the service path.
## Impact
A malicious Windows local standard user can gain LocalService privilege. He can then deanonymize Tor traffic, and can interfere with other Windows services running under the LocalService account.
2019-06-07 10:04:29 +0000: @xiaoyinl (comment)
This report is about local privilege escalation. There is no social engineering involved. The attacker is a **local** non-administrator user, so the attacker can copy the malicious dll file to `C:\tor-win32-0.3.5.8\Tor\` himself. Then the attacker can have access to LocalService data files and Registry hives.
```

https://gitlab.torproject.org/tpo/core/tor/-/issues/40849
configure hardening: _FORTIFY_SOURCE=3 support (cypherpunks, 2023-08-30)

Please add _FORTIFY_SOURCE=3 support to the configure script.
Compared to _FORTIFY_SOURCE=2, _FORTIFY_SOURCE=3 should cover more cases and improve security hardening:
https://developers.redhat.com/articles/2022/09/17/gccs-new-fortification-level
https://developers.redhat.com/articles/2023/02/06/how-improve-application-security-using-fortifysource3
Currently it uses _FORTIFY_SOURCE=2 only.
https://gitlab.torproject.org/tpo/core/tor/-/blob/aeb2e24a75bd5cbe7fab9f49cda01ac111c55433/configure.ac#L1327
Debian 12 Bookworm's glibc and GCC versions should bring support for _FORTIFY_SOURCE=3.

https://gitlab.torproject.org/tpo/core/tor/-/issues/40880
TROVE 2023 004 - Implement the fix and release C-tor (David Goulet <dgoulet@torproject.org>, 2023-11-04)

This ticket tracks TROVE-2023-004, which is a high severity issue.
Confidential discussions going on here: https://gitlab.torproject.org/tpo/core/tor/-/issues/40874

Milestone: Tor: 0.4.7.x-post-stable. Assignee: David Goulet <dgoulet@torproject.org>.