The Tor Project issues
https://gitlab.torproject.org/groups/tpo/-/issues
2020-09-22T15:11:40Z

https://gitlab.torproject.org/tpo/core/tor/-/issues/2178
We launch dummy descriptor fetches more often than needed (Nick Mathewson, 2020-09-22T15:11:40Z)

Right now, we have code in update_router_descriptor_downloads() to launch a fetch for authority.z if we want to learn our IP from a directory fetch. We do this if:
* We're a server
* We don't have the Address option set
* At least 20 minutes have passed since we last launched a router descriptor download
* At least 20 minutes have passed since we last launched a
Per discussion in bug legacy/trac#652, we could be even more quiet about launching these fetches. We could also require that:
* At least 20 minutes have passed since we last launched *any* appropriate directory op.
* At least 20 minutes have passed since we got a new incoming connection on what we think our IP is.
* At least 20 minutes have passed since we got confirmation of our current IP in a NETINFO cell
We could also make the "20 minutes" value configurable by a networkstatus parameter.
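The combined gating logic could be sketched roughly as follows (all function and state names here are illustrative; tor's real check lives in C in update_router_descriptor_downloads()):

```python
import time

# Default interval; the idea above is to let a consensus parameter override it.
DEFAULT_INTERVAL = 20 * 60  # seconds

def should_launch_dummy_fetch(state, now=None, interval=DEFAULT_INTERVAL):
    """Return True only if *every* quieting condition has been idle long enough.

    `state` maps event names to the Unix time each event last happened;
    a missing entry means "never".  All names here are made up.
    """
    now = time.time() if now is None else now
    events = (
        "last_descriptor_download",         # launched a router descriptor download
        "last_dir_op",                      # launched any appropriate directory op
        "last_inbound_conn_on_guessed_ip",  # incoming connection on what we think our IP is
        "last_netinfo_confirmation",        # our IP confirmed in a NETINFO cell
    )
    return all(now - state.get(event, 0) >= interval for event in events)
```

Any single recent event suppresses the fetch, which is exactly the "be more quiet" behavior the list above asks for.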
This is a minor issue, since the current behavior is inelegant, but not actually hurting anything.

Milestone: Tor: 0.4.5.x-freeze. Assigned: David Goulet <dgoulet@torproject.org>.

https://gitlab.torproject.org/tpo/core/tor/-/issues/2667
Exits should block reentry into the tor network (Mike Perry, 2023-08-23T19:53:08Z)

With proposal 110, we blocked the ability of Tor clients to use the Tor protocol for an unbounded amplification attack to destroy the Tor network. However, we still have not completely prevented this attack. It is still possible to tunnel tor over tor by using exits to connect back to other tor nodes. This property can still be used to execute the unbounded amplification attack on the Tor network, or just on the tor directory authorities.
One fix for this would be to add code to exit nodes to implicitly add all of the IP + ORport combinations of all other relays to their exit policy reject lines, or otherwise block this connection at some other level.

Assigned: David Goulet <dgoulet@torproject.org>.

https://gitlab.torproject.org/tpo/tpa/team/-/issues/9721
blog aggregator for Tor project members & friends? (Erinn Clark, 2020-09-30T19:52:53Z)

Today I was reading through http://planet.debian.org and thinking how I wished there were a Tor equivalent. There are some problems with this, the first one being that Tor developers are not frequent bloggers, and, to the extent that they are, pretty much all of the relevant stuff ends up on our official blog. There is also the issue that if this is an "official" project by us, it may be subject to some kind of speech-policing because of funders. (Maybe this is not an issue? I think a well-curated blog is unlikely to trigger problems, but would like advice here.)
So, all that said, we know researchers, academic and otherwise, who write interesting blog entries, in addition to a wider community of privacy & security advocates. I think this would be a fun way to get people drawn into the community as well as giving us a more or less central area to point people to if they want to keep in touch with what's going on from people we trust, rather than having to rely on often-crappy news articles.
Thoughts?
And BTW I offer to be involved in setup & maintenance of such a service. Assigning to ponies because I effectively want a pony here.

Assigned: Alexander Færøy <ahf@torproject.org>.

https://gitlab.torproject.org/tpo/core/tor/-/issues/11101
Bridges should report implementation versions of their pluggable transports (Roger Dingledine, 2024-03-05T15:17:58Z)

Our bridges now run a variety of pluggable transports. What if there's a bug in, say, the Scramblesuit implementation (like it appears there is)? If we fix the bug, how do bridgedb or the Tor clients know whether the Scramblesuit bridge they just learned about is one of the new (updated) ones or one of the old (buggy) ones?
One option would be for Tor to include a version for each supported PT in its bridge (or extrainfo) descriptor, so if we turn out to not want to use certain versions for certain situations, we can do it.
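For illustration, suppose the extra-info descriptor grew a `transport-version` line per transport (a hypothetical keyword, not anything currently specified); a consumer such as bridgedb could then read it like this:

```python
def parse_transport_versions(extrainfo_text):
    """Collect {transport name: version} from hypothetical
    "transport-version <name> <version>" descriptor lines.

    This line format is a sketch of the proposal above, not an
    existing part of the descriptor spec.
    """
    versions = {}
    for line in extrainfo_text.splitlines():
        parts = line.split()
        if len(parts) == 3 and parts[0] == "transport-version":
            versions[parts[1]] = parts[2]
    return versions
```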
Are there better options than this one?

Milestone: Tor: 0.4.9.x-freeze. Assigned: David Goulet <dgoulet@torproject.org>.

https://gitlab.torproject.org/tpo/core/tor/-/issues/12401
Document EntryGuardPathBias in doc/state-contents.txt (George Kadianakis, 2021-09-16T14:35:28Z)

We should document the newly added `EntryGuardPathBias` and `EntryGuardPathUseBias` to `doc/state-contents.txt`.

Assigned: Nick Mathewson.

https://gitlab.torproject.org/tpo/core/tor/-/issues/13753
Validate is_canonical more thoroughly (Nick Mathewson, 2021-08-23T15:18:34Z)
We use is_canonical to tell whether we should extend a circuit over a channel... but we should also double-check it as we are extending that circuit, to make sure we didn't mess up.
Also, we should audit the code that sets is_canonical.
* [x] Do we always look at is_canonical when picking a channel?
* [x] Do we always look at is_canonical when extending?
* [x] Is is_canonical set correctly?

Assigned: Nick Mathewson.

https://gitlab.torproject.org/tpo/core/tor/-/issues/17197
Use CRLF for all text files written on Windows, accept either CRLF or LF on all platforms (teor, 2020-07-09T16:34:11Z)
In legacy/trac#17501, stem becomes confused because some text files written on Windows use CRLF, and others use LF.
We could use CRLF for all text files written on Windows, and accept either CRLF or LF on all platforms.
Here is a list of files from DataDirectory with their line endings on Windows:
```
CRLF cached-certs
CRLF cached-consensus
LF cached-descriptors
LF cached-descriptors.new
CRLF cached-microdesc-consensus
LF cached-microdescs
LF cached-microdescs.new
CRLF state
```
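The "accept either, write platform-native" rule is simple to sketch (an illustration only, not tor's actual file I/O code):

```python
import os

def read_text_file(path):
    """Accept CRLF or LF line endings on any platform."""
    with open(path, "rb") as f:
        return f.read().replace(b"\r\n", b"\n").decode("utf-8")

def write_text_file(path, text):
    """Write CRLF on Windows and LF everywhere else."""
    eol = b"\r\n" if os.name == "nt" else b"\n"
    with open(path, "wb") as f:
        f.write(text.encode("utf-8").replace(b"\n", eol))
```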
We might want to review all files written by tor, including those only written by hidden services and any other components.

Milestone: Tor: unspecified. Assigned: Alexander Færøy <ahf@torproject.org>.

https://gitlab.torproject.org/tpo/core/tor/-/issues/19560
running tor trying to access its ed25519_signing_secret_key, log message too loud (weasel (Peter Palfrader), 2022-07-08T15:07:58Z)
I keep my key files away from the running tor instance.
For some reason, tor seems to want to re-open them regularly:
```
Jul 04 08:17:09.000 [warn] Could not open "/var/lib/tor/keys/ed25519_signing_secret_key": Permission denied
```
It probably shouldn't want that.

Assigned: Nick Mathewson.

https://gitlab.torproject.org/tpo/network-health/metrics/collector/-/issues/20983
Stop sanitizing contact information from bridge descriptors (cypherpunks, 2023-05-15T14:02:55Z)
context:
https://lists.torproject.org/pipermail/tor-dev/2016-December/011756.html
Why does CollecTor remove ContactInfo from bridge descriptors?
Publishing the ContactInfo should not (directly) reveal the bridge location?
Use-case for that data: bridge group detection.
If plain publishing is not acceptable, how about generating a random string replacement for a given ContactInfo string?
https://lists.torproject.org/pipermail/tor-dev/2016-December/011761.html
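One hypothetical way to build such a replacement is a keyed HMAC (the key handling, digest, and 16-hex-digit id length below are all assumptions, not anything CollecTor actually does):

```python
import hashlib
import hmac

def pseudonymize_contact(contact_info, secret):
    """Map a ContactInfo string to a stable, random-looking id.

    The mapping stays static for as long as `secret` is unchanged,
    which satisfies a "static for at least 24 hours" requirement;
    rotating the secret later would unlink ids across rotation periods.
    """
    digest = hmac.new(secret, contact_info.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]
```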
That mapping contactInfo -> random id should remain static for at least 24 hours.

Milestone: Metrics OKR Q1 - Q2 2022.

https://gitlab.torproject.org/tpo/core/tor/-/issues/21044
ORPort self reachability test happens also when it shouldn't (s7r, 2020-08-06T14:38:06Z)
I think we did not cover all cases when the self reachability test before publishing descriptors was introduced.
I am running a bridge with `PublishServerDescriptor 0` and `ORPort 127.0.0.1:443` because I want to do some responsible testing without hammering the public Guards used by other clients. Since the bridge is configured with `PublishServerDescriptor 0`, I don't need a descriptor; I don't intend to make the bridge (or relay) public.
When a bridge is run in the conditions described above, the log is spammed (exactly one log message every 20 minutes) with:
```
[warn] Your server (PUBLIC_IP:443) has not managed to confirm that its ORPort is reachable. Relays do not publish descriptors until their ORPort and DirPort are reachable. Please check your firewalls, ports, address, /etc/hosts file, etc.
```
and
```
[warn] The IPv4 ORPort address 127.0.0.1 does not match the descriptor address PUBLIC_IP. If you have a static public IPv4 address, use 'Address <IPv4>' and 'OutboundBindAddress <IPv4>'. If you are behind a NAT, use two ORPort lines: 'ORPort <PublicPort> NoListen' and 'ORPort <InternalPort> NoAdvertise'.
```
What it did wrong:
- It guessed the public IP address and ran the self test on that address, even though it's not the address explicitly configured at `ORPort`. `Address` is not set in this setup.
- Based on the second log message, I think it even overwrote the address configured at `ORPort` with the guessed public IP address when it built the descriptor.
- It retries indefinitely, once every 20 minutes, and logs a message that the descriptor cannot be published (and my intention, based on the options configured, is exactly not to publish one even if the tests were successful).
What Tor should do:
- Skip guessing `Address` (the public IP address) when `ORPort` / `DirPort` is explicitly configured as a loopback or NAT address. This has a logical follow-up (which I think we already do, but want to make sure):
- Skip the self tests when the `ORPort` / `DirPort` address is explicitly configured as a loopback or NAT address (the simplest thing would be to treat these cases as if `AssumeReachable 1` were set). Such addresses cannot be tested from the public internet anyway.
- With `PublishServerDescriptor 0`, maybe don't build a descriptor at all, or at least skip the self tests in this case too; it does not make sense to test something we never want to publish. Or make only one test attempt and log a message stating success or failure.
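The proposed bypass conditions can be sketched as a single predicate (a hypothetical helper; real tor has its own address-classification code in C):

```python
import ipaddress

def should_skip_self_test(orport_address, publish_descriptor):
    """True when the ORPort/DirPort reachability self test is pointless.

    `orport_address` is the explicitly configured ORPort address;
    `publish_descriptor` is False when PublishServerDescriptor is 0.
    """
    addr = ipaddress.ip_address(orport_address)
    # Loopback and private (NAT) addresses cannot be reached from the
    # public internet, so the test can never succeed.
    if addr.is_loopback or addr.is_private:
        return True
    # No point testing a descriptor we never intend to publish.
    if not publish_descriptor:
        return True
    return False
```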
legacy/trac#19919 is kind of related: it handles, as it should, the cases where `ORPort` is explicitly configured as a public address. This ticket covers an extension for cases where `ORPort` is a loopback or NAT address.

Milestone: Tor: 0.4.5.x-freeze. Assigned: Nick Mathewson.

https://gitlab.torproject.org/tpo/network-health/metrics/collector/-/issues/21219
Remove old descriptor files from out/ after archiving (Tom Ritter <tom@ritter.vg>, 2020-11-27T16:26:36Z)
Unless I'm mistaken (or misconfigured) -- which is entirely possible -- collector will accumulate uncompressed data in out/ indefinitely, long after it's been archived in archive/ and will no longer be modified.
This takes up a lot of disk space and it'd be nice to
a) get confirmation I can remove data from out/ that is older than N months (2? 3?)
b) have it deleted automagically (or at least with a config setting)

Assigned: Karsten Loesing.

https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/21314
snowflake-client needs to stop using my network when I'm not giving it requests (Roger Dingledine, 2020-11-23T18:41:46Z)

I started my Tor Browser, and told it to use snowflake, and it did. Then I changed my mind and told it to stop using snowflake. Now, apparently there's a bug in Tor where Tor is supposed to kill snowflake-client when there are no more bridge lines in my torrc that want to use it. But ignoring that Tor bug, snowflake-client should also be defensive for me. Right now it is touching the broker every 10 seconds, looking for a snowflake, even though it is getting no requests. That can't be good for scalability or for the broker or for the users.

Milestone: Snowflake in Tor Browser 10.5. Assigned: Cecylia Bocovich.

https://gitlab.torproject.org/tpo/applications/tor-browser/-/issues/21740
Make sure Mozilla's own emoji font on Windows/Linux does not interfere with our font fingerprinting defense (Georg Koppen, 2022-08-04T10:08:12Z)

Mozilla ships its own emoji font to Windows and Linux users (https://bugzilla.mozilla.org/show_bug.cgi?id=1231701).
We should make sure that does not interfere with our font fingerprinting defense.

Milestone: Tor Browser: 11.0 Issues with previous release. Assigned: Pier Angelo Vendrame.

https://gitlab.torproject.org/tpo/network-health/metrics/onionoo/-/issues/21933
Fix deserialization of UTF-8 characters in details statuses and documents (Karsten Loesing, 2022-11-10T11:53:33Z)
While looking into the encoding issue of different Onionoo instances producing different contact string encodings (legacy/trac#15813), I tracked down a somewhat related issue with UTF-8 characters all being converted to `?`.
The issue is related to how we avoid storing UTF-8 characters in details statuses and details documents, escaping those characters instead. We do this correctly for the serialization part but incorrectly for the deserialization part.
We have two choices here. We could either give up on the escaping part and just store UTF-8 characters directly. Or we could fix the deserialization part. I have a fix for the latter and ran out of time for the former, but maybe the former would be the better fix. I'm including my fix here anyway:
```
diff --git a/src/main/java/org/torproject/onionoo/docs/DocumentStore.java b/src/main/java/org/torproject/onionoo/docs/DocumentStore.java
index 39d6271..b6b1c4c 100644
--- a/src/main/java/org/torproject/onionoo/docs/DocumentStore.java
+++ b/src/main/java/org/torproject/onionoo/docs/DocumentStore.java
@@ -496,22 +496,25 @@ public class DocumentStore {
if (!parse) {
return this.retrieveUnparsedDocumentFile(documentType,
documentString);
- } else if (documentType.equals(DetailsDocument.class)
- || documentType.equals(BandwidthDocument.class)
+ } else if (documentType.equals(BandwidthDocument.class)
|| documentType.equals(WeightsDocument.class)
|| documentType.equals(ClientsDocument.class)
|| documentType.equals(UptimeDocument.class)) {
return this.retrieveParsedDocumentFile(documentType,
documentString);
+ } else if (documentType.equals(DetailsStatus.class)
+ || documentType.equals(DetailsDocument.class)) {
+ if (documentType.equals(DetailsStatus.class)) {
+ documentString = "{" + documentString + "}";
+ }
+ documentString = StringUtils.replace(documentString, "\\u", "\\\\u");
+ return this.retrieveParsedDocumentFile(documentType, documentString);
} else if (documentType.equals(BandwidthStatus.class)
|| documentType.equals(WeightsStatus.class)
|| documentType.equals(ClientsStatus.class)
|| documentType.equals(UptimeStatus.class)
|| documentType.equals(UpdateStatus.class)) {
return this.retrieveParsedStatusFile(documentType, documentString);
- } else if (documentType.equals(DetailsStatus.class)) {
- return this.retrieveParsedDocumentFile(documentType, "{"
- + documentString + "}");
} else {
log.error("Parsing is not supported for type "
+ documentType.getName() + ".");
```
The main difference here is the `StringUtils.replace()` part. Without this line (so, current master) we would pass a string containing `\uxxxx` to `Gson.fromJson()` which would unescape it and turn it into the corresponding UTF-8 character. So far so good. But when we would later write this status or document back to disk, `DocumentStore#writeToFile` will write these bytes to disk as `US-ASCII`, and that will replace all UTF-8 characters with `?`.
The patch fixes this by replacing `\uxxxx` in the file content with `\\uxxxx` which `Gson.fromJson()` will not consider an escaped UTF-8 character. We do have code in place that reverses this double-escaping, see `DetailsStatus#unescapeJson`. So, the patch fixes the problem by keeping things escaped until they are used.
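The mechanism is easy to demonstrate outside Java; here Python's json module stands in for Gson:

```python
import json

# File content as stored on disk: the UTF-8 character is escaped as \uxxxx.
raw = '{"contact": "caf\\u00e9 operator"}'

# Parsing directly unescapes the sequence into a real non-ASCII character,
# which a later US-ASCII write would mangle into "?".
assert json.loads(raw)["contact"] == "caf\u00e9 operator"

# Doubling the backslash first keeps the sequence as a literal "\u00e9",
# so the round trip through an ASCII writer is lossless.
protected = raw.replace("\\u", "\\\\u")
assert json.loads(protected)["contact"] == "caf\\u00e9 operator"
```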
Again, I think the cleaner fix would be to give up on escaping UTF-8 characters and just switch to UTF-8. The part that might make this a little harder is that we'll have to make sure that this works correctly for existing files. And if it does not, we'll need to special-case those somehow. But maybe the patch above helps to come up with this cleaner fix.

Assigned: Hiro.

https://gitlab.torproject.org/tpo/web/blog/-/issues/22397
Add a (single) onion service for the new tor blog (teor, 2021-11-16T14:10:59Z)
I hope that onion service compatibility (mainly URL rewrites) was one of the requirements for the new blog.When we asked for this for the old blog, it wasn't technically feasible (or it was a legacy system, so we decided not to do it).
I hope that onion service compatibility (mainly URL rewrites) was one of the requirements for the new blog.

Milestone: Launch support's Forum and Blog migration. Assigned: Jérôme Charaoui <lavamind@torproject.org>.

https://gitlab.torproject.org/tpo/core/tor/-/issues/23126
HSDirs should publish some count about new-style onion addresses (v3 metrics) (Roger Dingledine, 2021-12-13T20:37:24Z)
Right now we have an ongoing estimate of the total number of onion addresses published to the HSDirs:
https://metrics.torproject.org/hidserv-dir-onions-seen.html
How many of those are 224-style onion addresses, and how many of them are legacy-style onion addresses?
I see a `rep_hist_stored_maybe_new_hs()` for the v2-style descriptors, and I think I see a
```
/* XXX: Update HS statistics. We should have specific stats for v3. */
```
for the v3-style descriptors.
So I think that means that the graph is only showing v2-style onions, and we have no infrastructure for noticing trends with v3 style onions.
I also suspect that noticing trends is harder with v3-style onions, since each descriptor the hsdir sees is standalone, and it's not possible (without knowing the onion address) to link two descriptors to the same address.
So our only chance at estimating the total number of v3 onion addresses is to know the publishing habits of v3 onion services (how many descriptors per time period), publish the total number of descriptors we see, and let folks do some math afterwards to estimate the number of running services? In any case we can see whether the number goes up or down over time.
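The "some math afterwards" could be as simple as the sketch below; every parameter is a made-up placeholder, and the actual publishing rate and HSDir spread would have to come from measurement and the spec:

```python
def estimate_running_v3_services(descriptors_seen, hours,
                                 publishes_per_hour=0.5,
                                 hsdirs_per_service=8):
    """Back-of-the-envelope count of running v3 onion services.

    Each service is assumed to upload to `hsdirs_per_service` HSDirs
    roughly `publishes_per_hour` times per hour; both defaults are
    purely illustrative.
    """
    descriptors_per_service = publishes_per_hour * hours * hsdirs_per_service
    return descriptors_seen / descriptors_per_service
```

Even with wrong constants, the relative trend (up or down over time) survives, which matches the fallback goal stated above.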
Or maybe there is some even better design? :)
The reason I bring it up now is (a) if we want to get any code into relays, we need to do it sufficiently before we need it, so it can get rolled out, and (b) I see discussions about bugs with v3-style onion services publishing every 2 minutes, and while we're fixing those we should keep in mind how handy it would be to be able to predict how many descriptors a new onion service will publish per time period on average.

Milestone: Tor: 0.4.5.x-stable. Assigned: George Kadianakis.

https://gitlab.torproject.org/tpo/applications/tor-browser-build/-/issues/23631
Improve sudo need (Tom Ritter <tom@ritter.vg>, 2021-03-01T16:46:05Z)
Right now the Tor Browser build takes a long time, and sudo is needed periodically throughout it. This means you have to either run it as root, babysit it, or set your user account up with passwordless sudo. All of those kinda stink.
It'd be cool if we could improve that a bit. Ideas:
- Write a setuid program that execs the necessary commands but provides input and directory filtering (directory path either compiled in or read from a root-owned file I guess)
- Same idea but instead of setuid, it's set up to be run with passwordless sudo
- Somehow request sudo access in the beginning and retain it through the whole script (without running everything as root)

Milestone: Tor Browser: 10.5. Assigned: boklm.

https://gitlab.torproject.org/tpo/network-health/metrics/relay-search/-/issues/24045
Measure and map overloaded or over-weighted relays (teor, 2023-01-12T13:00:08Z)
From legacy/trac#21394, it looks like some exits are allocated too much consensus weight, and then they fail.
Can we calculate and map or graph the bandwidth to consensus weight ratios of relays?
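A ratio computation over relay data might look like this (the field names are assumptions loosely modeled on Onionoo's details documents, not a fixed schema):

```python
def weight_ratios(relays):
    """Each relay's share of consensus weight divided by its share of
    observed bandwidth; values well above 1.0 suggest over-weighting.

    `relays` is a list of dicts with hypothetical `fingerprint`,
    `consensus_weight`, and `observed_bandwidth` fields.
    """
    total_weight = sum(r["consensus_weight"] for r in relays)
    total_bw = sum(r["observed_bandwidth"] for r in relays)
    return {
        r["fingerprint"]:
            (r["consensus_weight"] / total_weight)
            / (r["observed_bandwidth"] / total_bw)
        for r in relays
    }
```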
This would help us find out if the changes we make are helping to allocate bandwidth more evenly.

Assigned: Hiro.

https://gitlab.torproject.org/tpo/applications/tor-browser/-/issues/24686
In Tor Browser context, should network.http.tailing.enabled be set to false? (cypherpunks, 2022-10-14T19:38:30Z)
Here's what `network.http.tailing.enabled` does: https://www.janbambas.cz/firefox-57-delays-requests-tracking-domains/ It depends on Disconnect's tracking list.
In Tor Browser context I'm not sure whether this would be beneficial.

Milestone: Sponsor 131 - Phase 3 - Major ESR 102 Migration. Assigned: richard.

https://gitlab.torproject.org/tpo/applications/tor-browser-spec/-/issues/24945
Tor Browser design doc says it whitelists flash and gnash as plugins (Roger Dingledine, 2024-02-13T20:04:29Z)

The Tor Browser design doc says "we also patch the Firefox source code to prevent the load of any plugins except for Flash and Gnash. Even for Flash and Gnash, we also patch Firefox to prevent loading them into the address space until they are explicitly enabled."
If this is so, we should probably change Tor Browser to just prevent all plugins, including Flash and Gnash.
And if it is no longer so, we should fix the wrong statement in the design doc.
Noticed in legacy/trac#10885.