Trac issues
https://gitlab.torproject.org/legacy/trac/-/issues
Exported: 2020-07-06T14:05:06Z

consider trocla for secrets management in puppet
https://gitlab.torproject.org/legacy/trac/-/issues/30009
Updated 2020-07-06T14:05:06Z (anarcat)

Secrets generated by Puppet currently use a homegrown HKDF function. The ad-hoc standard for this in the Puppet community I usually work with is [trocla](https://github.com/duritong/trocla), which is [well integrated with Puppet](https://github.com/duritong/puppet-trocla).
Trocla generates, on the fly, a strong random password for each key you ask it for. It also supports various hashing mechanisms (bcrypt, pgsql, x509, etc.) so that the Puppet client never actually sees the cleartext. It seems like a better approach than sending the cleartext as we currently do.
So I'd like to start using this for new code and possibly convert existing code to this, if that's acceptable.

Add disk space monitoring for snowflake infrastructure
https://gitlab.torproject.org/legacy/trac/-/issues/29863
Updated 2020-07-06T14:05:05Z (Cecylia Bocovich)

We've run out of disk space at both the snowflake bridge (#26661, #28390) and the broker (#29861), which has caused snowflake to stop working. We've set up rotating and compressed logs, but it would be nice to have some disk space monitoring to alert us if/when this happens again.
Also, as discussed on IRC, we should eventually move the broker to a TPA machine.

setup a grafana server somewhere
https://gitlab.torproject.org/legacy/trac/-/issues/29684
Updated 2020-07-06T14:05:05Z (anarcat)

Prometheus on its own is nice, but the graphs are not that great. We should set up Grafana on top of that instead.
Grafana is a pain in the bottom to install in Debian: there are upstream packages, but they are [a mess](https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=835210#42) so my approach has been to use Docker so far.
I guess we can use the test server for this for now.
Note there is a Puppet module for Grafana, which supports deploying both with the upstream Debian package and Docker: https://forge.puppet.com/puppet/grafana

Prop 313: 3. Write a Script that Counts IPv6 Relays in the Consensus
https://gitlab.torproject.org/legacy/trac/-/issues/33262
Updated 2020-07-02T19:47:20Z (teor)

We want to write a script that generates statistics for relays that:
1. have an IPv6 ORPort,
2. support IPv6 clients,
3. support IPv6 reachability checks, and
4. support IPv6 reachability checks, and IPv6 clients.
The first two statistics have no dependencies. The last two statistics depend on the "Relay=3" subprotocol in #33226.
The script should calculate:
* the number of relays, and
* the consensus weight fraction of relays.
In order to provide easy access to these statistics, we propose
that the script should:
* download a consensus (or read an existing consensus), and
* calculate and report these statistics.
We could write this script using Python 3 and Stem:
https://stem.torproject.org
The following consensus weight fractions should divide by the total
consensus weight:
* have an IPv6 ORPort (all relays have an IPv4 ORPort), and
* support IPv6 reachability checks (all relays support IPv4 reachability).
The following consensus weight fractions should divide by the
"usable Guard" consensus weight:
* support IPv6 clients, and
* support IPv6 reachability checks and IPv6 clients.
"Usable Guards" have the Guard flag, but do not have the Exit flag. If the
Guard also has the BadExit flag, the Exit flag should be ignored.
The script should check that Wgd is 0. If it is not, the script
should log a warning about the accuracy of the "Usable Guard" statistics.
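The counting rules above could be sketched in Python. This is only a sketch over hypothetical relay records, not real Stem objects; an actual script would derive these fields from a consensus fetched with Stem's `stem.descriptor.remote` module, and the Wgd check is omitted here.

```python
from collections import namedtuple

# Hypothetical relay record; a real script would derive these fields from a
# consensus fetched with Stem (stem.descriptor.remote.get_consensus()).
Relay = namedtuple("Relay", "has_ipv6_orport ipv6_reachable ipv6_clients weight flags")

def usable_guard(r):
    # "Usable Guards" have the Guard flag but not the Exit flag; the Exit
    # flag is ignored when the relay also has the BadExit flag.
    return "Guard" in r.flags and ("Exit" not in r.flags or "BadExit" in r.flags)

def fractions(relays):
    # Assumes both totals are nonzero (there is at least one relay and one
    # usable Guard in the consensus).
    total = sum(r.weight for r in relays)
    guard_total = sum(r.weight for r in relays if usable_guard(r))
    return {
        # divided by the total consensus weight:
        "ipv6_orport": sum(r.weight for r in relays if r.has_ipv6_orport) / total,
        "ipv6_reachable": sum(r.weight for r in relays if r.ipv6_reachable) / total,
        # divided by the "usable Guard" consensus weight:
        "ipv6_clients": sum(r.weight for r in relays
                            if usable_guard(r) and r.ipv6_clients) / guard_total,
    }
```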
See proposal 313, section 3:
https://gitweb.torproject.org/torspec.git/tree/proposals/313-relay-ipv6-stats.txt#n82
Milestone: Tor: unspecified

Prop 311: 5. Declare Support for Subprotocol Version "Relay=3"
https://gitlab.torproject.org/legacy/trac/-/issues/33226
Updated 2020-07-02T19:46:44Z (teor)

This ticket depends on relay IPv6 extends in #33220.
We reserve Tor subprotocol "Relay=3" for tor versions where:
* relays may perform IPv6 extends, and
* bridges might not perform IPv6 extends,
as described in this proposal.
See proposal 311, section 5:
https://gitweb.torproject.org/torspec.git/tree/proposals/311-relay-ipv6-reachability.txt#n601
Milestone: Tor: 0.4.4.x-final

Use STUN to determine NAT behaviour of peers
https://gitlab.torproject.org/legacy/trac/-/issues/34129
Updated 2020-06-30T16:07:44Z (Cecylia Bocovich)

In investigating high proxy failure rates at clients (#33666) and the logistics of running our own STUN server (#25591), I came across [RFC5780](https://tools.ietf.org/html/rfc5780), which outlines steps to identify NATs with "endpoint independent mapping and filtering".
[Section 4.3](https://tools.ietf.org/html/rfc5780#section-4.3) outlines how a client can use a STUN server with an alternate IP address (returned in the first STUN binding request response) to determine how restrictive their NAT is.
This would be useful to match up clients with snowflake proxies that have compatible NATs. We still have the following questions:
- ~~are there public STUN servers that support this feature?~~
Yes there are several candidates.
- ~~does the pion/stun library we use support this feature for STUN clients?~~
Not yet but we can implement the feature.
- If we're able to implement our own STUN server behind a domain-fronted connection (#25591), how can we implement this functionality?
I see at least one open source STUN server implementation that claims to support this (written in C): https://github.com/coturn/coturn

Could not connect to the bridge.
https://gitlab.torproject.org/legacy/trac/-/issues/33364
Updated 2020-06-30T16:04:35Z (cypherpunks)

Console error message:
Firefox can’t establish a connection to the server at wss://snowflake.freehaven.net/.
Relevant code at: snowflake.js:867:9
A ping from the command prompt to this subdomain succeeds. It's just Firefox that can't connect.

Remove local LAN address ICE candidates
https://gitlab.torproject.org/legacy/trac/-/issues/19026
Updated 2020-06-30T15:54:29Z (David Fifield)

ICE candidates can contain local LAN addresses as well as external addresses. For example, here's a redacted transcript from the Snowflake JS proxy:
```
a=candidate:4077567720 1 udp 2122260223 192.168.1.5 51282 typ host generation 0
a=candidate:8564102000 1 udp 1686052607 X.X.X.X 51282 typ srflx raddr 192.168.1.5 rport 51282 generation 0
a=candidate:3179889176 1 tcp 1518280447 192.168.1.5 52256 typ host tcptype passive generation 0
```
If it's possible, we should filter them out to prevent revealing more information than necessary. Serene and I guessed that they are only there for the case when both peers are in the same local network, but we're not sure about that.

replace "Tor VM hosts" spreadsheet with Grafana dashboard
https://gitlab.torproject.org/legacy/trac/-/issues/29816
Updated 2020-06-25T21:16:43Z (anarcat)

Our KVM allocation strategy is currently managed through a Google spreadsheet. This is suboptimal for a few reasons:
1. it is hard to keep up to date - for example, moly is not listed in there even though it's in LDAP as a "KVM host"
2. it's not real time data - for example, even if a host is allocated one vCPU, it might be totally idle most of the time and doing mostly network or disk, while another one might hit the CPU hard. actual load is what matters
3. it's hosted by Google - that has a few problems, the most important of which is that some TPA members do not actually *want* to use Google services and might be reluctant to update it, worsening problem 1
I propose we shift this to a Grafana dashboard. I already have a prototype in the form of the [Node exporter server metrics Grafana Dashboard](https://grafana.com/dashboards/405), which shows multiple hosts' basic stats in parallel. I set the default of the dashboard in Grafana to show the 6 KVM hosts:
<https://grafana.torproject.org/d/ER3U2cqmk/node-exporter-server-metrics?orgId=1&from=now-12h&to=now&var-node=kvm4.torproject.org:9100&var-node=kvm5.torproject.org:9100&var-node=macrum.torproject.org:9100&var-node=moly.torproject.org:9100&var-node=textile.torproject.org:9100&var-node=unifolium.torproject.org:9100>
That looks like this:
![Grafana dashboard snapshot](https://paste.anarc.at/snaps/snap-2019.04.17-16.48.43.png)
.. but it's not ideal:
* it's showing irrelevant stats for this purpose like context switches or detailed disk or memory stats
* it's missing critical information like the number of KVM guests hosted on the machine, how many CPUs and disk space is allocated and so on
This is the information we should be showing:
* disk capacity vs allocation
* disk utilization
* CPU count vs allocation
* actual CPU utilization
* load?
* memory capacity vs allocation
* actual memory usage
Some of that information currently lives *only* in the spreadsheet. For example, disk allocations are only available there, as the KVM guests run on QCOW (Qemu Copy On Write) filesystems that only take space when actually used by the guest. This has the advantage of allowing us to over-provision, but means we must keep that metadata somewhere else.
So for now it's in the spreadsheet, but we could find a way to move it somewhere Prometheus can scrape. One trick that Prometheus has is that it can expose metrics stored as text files in `/var/lib/prometheus/node-exporter/*.prom`. This is how the smartctl and APT metrics get shipped for example: a cron job (well, a systemd timer) regularly writes that file, atomically. So one option could be to move this information to (say) LDAP or Puppet/Hiera and write that information into that file using a cronjob (LDAP) or Puppet (Hiera).
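The textfile route just needs a writer that emits Prometheus text format and renames atomically. A sketch with made-up metric names; a real exporter would pull the allocation numbers from LDAP or Hiera:

```python
import os
import tempfile

# node-exporter's textfile collector directory (path per the setup above)
METRICS_DIR = "/var/lib/prometheus/node-exporter"

def write_prom_file(path, metrics):
    """Atomically write gauges given as {name: (help_text, value)}."""
    lines = []
    for name, (help_text, value) in sorted(metrics.items()):
        lines.append("# HELP {} {}".format(name, help_text))
        lines.append("# TYPE {} gauge".format(name))
        lines.append("{} {}".format(name, value))
    data = "\n".join(lines) + "\n"
    # Write to a temp file in the same directory, then rename: rename(2) is
    # atomic, so node-exporter never scrapes a half-written file.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path), suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as f:
            f.write(data)
        os.replace(tmp, path)
    except BaseException:
        os.unlink(tmp)
        raise
```

A cron job or Puppet-managed timer would call this with the allocation data, e.g. `write_prom_file(os.path.join(METRICS_DIR, "kvm_allocations.prom"), ...)`.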
Then we'd build a custom Grafana dashboard and get rid of the other spreadsheet.
A stop-gap measure might be to simplify the spreadsheet and move it to a plain text markdown file. We would lose the automatic calculations the spreadsheet provides, in exchange for easier updating and transparency.

clarify what happens to email when we retire a user
https://gitlab.torproject.org/legacy/trac/-/issues/32558
Updated 2020-06-25T20:36:20Z (anarcat)

As part of improving the offboarding process (#32519), we should especially look at how email works.
Right now, when we [retire a user](https://help.torproject.org/tsa/howto/retire-a-user/), their account is first "locked", which means their access to various services is disabled. But their email still works for 186 days (~6 months). After that date, in theory, their email aliases start completely dropping email (needs to be confirmed).
It's unclear if that's the right policy to follow. Some people feel that an email alias should stay around forever, as it is an inalienable human right.
Others feel that certain administrative roles should be forwarded when a person leaves. Say "Alice" (a fictitious name) was doing fundraising and was using `alice@torproject.org` for that work. When they leave, should we forward `alice@` to `fundraising@torproject.org`?
But then what if Alice was also using their work email for private correspondence? Maybe the fundraising team shouldn't be able to see *those* communications.
One proposal could be that the default policy is this:
1. email @torproject.org is "function" email and is destined only for torproject.org related work
2. when a person leaves their position, that email gets deactivated after a 6-month delay
3. in extreme cases, some forward may be *temporarily* enabled to reset accesses or re-establish contacts with a provider or third-party
It is also possible that there could be *two* policies, one for TPI employees and one for other TPO people.

Set up LDAP authn for nc.tpn
https://gitlab.torproject.org/legacy/trac/-/issues/32332
Updated 2020-06-25T20:36:20Z (Linus Nordberg)

All LDAP users should have a NC account.
Can this be done using the "LDAP User and group backend" application?

Consider ignoring secure cookies for .onion addresses
https://gitlab.torproject.org/legacy/trac/-/issues/21537
Updated 2020-06-24T12:23:18Z (micah)

One of the main problem points with adding onion services to existing web services has been interaction with secure cookies. It's hard to set up onion services because you need to enable secure cookies some of the time (over the regular network + TLS) and disable them at other times (over the .onion network, without TLS). Right now you have to make a trade-off: work well with .onions, or work well with everyone else. This is an unfortunate trade-off.
Secure cookies are considered a best practice that every web developer is told to follow, but it's a best practice that doesn't work if you want to run an onion site. Running an onion site should not force you to violate established web application development best practices.
The idea of "secure cookies" is that they prevent you from leaking your cookie information over an insecure connection. There are a lot of ways that can happen:
. you don't have HSTS set up
. you're running an application server that sets the cookie before it redirects to HTTPS
. your server is not set up to redirect everything to HTTPS
Using "secure cookies" allows the application (regardless of how it is run, or what intermediaries are in between) to make sure that the browser doesn't screw this up. It tells the browser to never submit the cookie over plaintext. Many frameworks set this by default (such as Rails). In some stacks, such as Java/Tomcat, the cookie is set before the part of the stack that redirects to HTTPS.
The "secure cookies" spec is just a "suggestion" to the browser, so TBB is free to ignore them, and I think that maybe it should do so for .onion sites.
As an example, if a user goes to https://example.com, the first response sends back a cookie with nothing but a session id. If you then log in, you now have a session id that is privileged and associated with your account. Suppose you then close that tab, but later realize you needed to do something else, so you open a new tab and go to http://example.com (NB: no https). If the site did not mark the original cookies as 'secure', the browser will submit the previously saved cookie in that initial request, sending it over the cleartext channel before the webserver can redirect to the secured site. With the secure cookie flag set, the browser will not send the cookie until the TLS connection is up. This doesn't matter if you are going over onion services, because the connection is already wrapped in encryption; it also doesn't matter if the site has HSTS, because the second visit will go to https by default in that scenario.
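For reference, the Secure (and HttpOnly) attributes are just flags on the Set-Cookie header; a quick illustration with Python's http.cookies module, using a made-up session value:

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["sessionid"] = "abc123"           # hypothetical session id
cookie["sessionid"]["secure"] = True     # never send over plaintext HTTP
cookie["sessionid"]["httponly"] = True   # not readable from page JavaScript
header = cookie.output()
print(header)  # a Set-Cookie header carrying the Secure and HttpOnly flags
```

A Tor Browser exception for .onion would mean treating such a cookie as sendable over the onion transport even though the scheme is plain http.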
So what are the options?
. Ignore secure cookie flags for .onions
. Ignore TLS verification for .onions
Either one would improve things for onions and non-onions alike; unfortunately, the second one would not be appreciated by sites that have actually paid for a valid .onion cert.
Pretty much every Rails application suffers with TBB because of this problem, and I'm pretty sure other frameworks also suffer from it. Fixing this would fix a large number of Tor problems related to it.
I'm unsure of the broader implications of this, which is why I wanted to open this for discussion.

Set browser.privatebrowsing.forceMediaMemoryCache=true
https://gitlab.torproject.org/legacy/trac/-/issues/33856
Updated 2020-06-24T11:43:52Z (richard)

New pref added to disable disk caching of video in private browsing mode.
Relevant ticket:
- https://bugzilla.mozilla.org/show_bug.cgi?id=1532486

Default value of media.cache_size (0) causes some media to load extremely slowly or become unplayable
https://gitlab.torproject.org/legacy/trac/-/issues/29120
Updated 2020-06-24T11:43:52Z (Trac)

It seems this value was defaulted to 0 after this ticket: https://trac.torproject.org/projects/tor/ticket/10237
The issue this causes can be seen on this page:
https://livestreamfails.com/post/28515
If you open this link in a default configured Tor Browser (I'm using v8.0.4), and open the network inspector (CTRL+SHIFT+E), you will see the media file is constantly being re-requested, downloading the file a few hundred kilobytes at a time using range requests.
This causes the video to load extremely slowly. If the HTTP server does not support range requests, the video will seemingly become unplayable in the native player.
This kind of behavior could also be seen as abusive or mistaken as DoS related traffic by website operators, since it is not typically how browsers download media.
Setting media.cache_size to the default Firefox value of 512000 fixes this. It may also work with a lower cache size, but I've only tried the default Firefox value. I suppose it may also bring back the original issue of the media cache being stored on disk; that ticket is from 5 years ago though, so I'm not sure if that situation has changed.
This bug only seems to affect certain media files; I'm not exactly sure what the other factors are in triggering the behavior.
This bug also exists in the latest stable version of Firefox if you set the media.cache_size to 0, I suppose it's rarely encountered though since the default is much higher.
**Trac**:
**Username**: QZw2aBQoPyuEVXYVlBps

Include bridge configuration into about:preferences
https://gitlab.torproject.org/legacy/trac/-/issues/31286
Updated 2020-06-24T11:26:19Z (Georg Koppen)

Torbutton as a standalone extension is going away (#10760) and while doing so we restructure our toolbar, making it more usable by exposing New Identity directly on it (#27511). However, we need to find a new home for the bridge configuration as well if we want to remove the onion button from the toolbar. The current plan is to move it to `about:preferences` as a general setting. This ticket tracks that work.

Implement GetTor for mobile users
https://gitlab.torproject.org/legacy/trac/-/issues/34423
Updated 2020-06-21T18:06:15Z (Cecylia Bocovich)

What happens if Tor Browser downloads through traditional app stores are blocked in certain regions?
Right now GetTor only distributes for Windows, OS X, and Linux. If we upload .apks to our link download providers, is this a usable way to install Tor Browser on Android?

Stop logging all successful database queries in GetTor
https://gitlab.torproject.org/legacy/trac/-/issues/34350
Updated 2020-06-21T18:06:14Z (Cecylia Bocovich)

This is another log message that isn't helpful and fills up our logs. Here's a patch: https://gitlab.torproject.org/tpo/anti-censorship/gettor-project/gettor/-/merge_requests/12

Ensure GetTor's email unit tests are properly formatted
https://gitlab.torproject.org/legacy/trac/-/issues/34300
Updated 2020-06-21T18:06:14Z (Cecylia Bocovich)

From #34286:
[comment:3 phw]:
> Looks good to me!
>
> On a slightly related note: I believe that an email's body is supposed to be separated by two (rather than one) newlines from its header. GetTor's unit tests are using only one (and mix \n with \r\n). Python's email module is also confused by this and thinks that the body is part of the `To` field:
>
> {{{
> In [1]: from email import message_from_string
> In [3]: m=message_from_string("From: MAILER-DAEMON@mx1.riseup.net\nSubject: Undelivered Mail Returned to Sender\r\nTo: gettor@torproject.org\n osx en\n")
> In [6]: m.items()
> Out[6]:
> [('From', 'MAILER-DAEMON@mx1.riseup.net'),
> ('Subject', 'Undelivered Mail Returned to Sender'),
> ('To', 'gettor@torproject.org\n osx en')]
> }}}
>
> This seems like something we should fix.

gettor appears to be in an email loop war with a .sk address
https://gitlab.torproject.org/legacy/trac/-/issues/34286
Updated 2020-06-21T18:06:13Z (Roger Dingledine)

I happened to be looking at eugeni's mail.log for other debugging, and saw that approximately 25% of the lines in mail.log contain the string gettor.
(Yesterday, eugeni's postfix had 460k lines in it, and 101k of them said "gettor" in t...I happened to be looking at eugeni's mail.log for other debugging, and saw that approximately 25% of the lines in mail.log contain the string gettor.
(Yesterday, eugeni's postfix had 460k lines in it, and 101k of them said "gettor" in them. Today in the first hour or so, it's 7k out of 25k.)
Does gettor get into fights with external addresses, where it replies to the bounce, gets another bounce and replies to that, etc?
There are probably smart guidelines for avoiding mail loop wars, like not answering names that start with mailer-daemon, checking for the presence of an X-Something-Something header, or rate limiting responses to a given address.
And this is a great case where unifying how bridgedb handles its email answers, and how gettor does it, will save a lot of headache.

GetTor should set In-Reply-To when responding to email
https://gitlab.torproject.org/legacy/trac/-/issues/34253
Updated 2020-06-21T18:06:12Z (Philipp Winter)

GetTor currently doesn't set the `In-Reply-To` header when responding to an email. That breaks threading in the user's mailbox and it also makes it slightly more difficult to test our autoresponder.
It's not high priority but let's add the `In-Reply-To` header at some point.
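Setting the header is cheap with Python's email library. A sketch of a threading-aware reply builder; the function name and addresses are made up and this is not GetTor's actual composer:

```python
from email.message import EmailMessage

def make_reply(incoming, body):
    """Build a reply that threads correctly in the requester's mailbox."""
    reply = EmailMessage()
    reply["From"] = "gettor@torproject.org"
    reply["To"] = incoming["Reply-To"] or incoming["From"]
    subject = incoming["Subject"] or ""
    reply["Subject"] = subject if subject.lower().startswith("re:") else "Re: " + subject
    orig_id = incoming["Message-ID"]
    if orig_id:
        # In-Reply-To points at the parent; References chains the whole thread.
        reply["In-Reply-To"] = orig_id
        refs = incoming["References"]
        reply["References"] = (refs + " " + orig_id) if refs else orig_id
    reply.set_content(body)
    return reply
```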