Trac issues
https://gitlab.torproject.org/legacy/trac/-/issues
2020-06-13T15:03:26Z

https://gitlab.torproject.org/legacy/trac/-/issues/20148
Add AES256 support to crypto_cipher_t
2020-06-13T15:03:26Z, Nick Mathewson
Proposal 224 specifies the use of AES256 for encrypting hidden service descriptors. We need an AES256 backend.
Milestone: Tor: 0.2.9.x-final
Assignee: Nick Mathewson

https://gitlab.torproject.org/legacy/trac/-/issues/20070
Make address choice failure log message more informative
2020-06-13T15:01:07Z, teor
Log a more informative message when we fail to find an address for a fallback or authority because it has a hard-coded IPv6 address but its descriptor does not have an IPv6 address.
Milestone: Tor: 0.3.0.x-final
Assignee: teor

https://gitlab.torproject.org/legacy/trac/-/issues/20012
Stop upgrading client to intro connections to ntor
2020-06-13T15:00:49Z, teor
Split off from #19163, placed in the same milestone.
Clients inadvertently upgrade to ntor when the hidden service descriptor does not have a TAP onion key. This is a client discriminator that can be used by hidden services to discover which consensus a client has.
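The fix can be sketched in miniature (illustrative Python; the real logic lives in Tor's C circuit-building code, and the function and field names here are invented for the example): the handshake choice should come from the client's own capabilities, and the client should fail rather than silently upgrade when the descriptor omits a key.

```python
def choose_intro_handshake(desc, client_uses_ntor):
    """Pick the handshake for a client-to-intro-point circuit.

    Upgrading to ntor just because the descriptor lacks a TAP onion
    key lets a hidden service fingerprint the client's consensus, so
    the choice must not depend on that omission.
    """
    if client_uses_ntor and desc.get("ntor_onion_key"):
        return "ntor"
    if desc.get("tap_onion_key"):
        return "tap"
    # Neither key is usable: fail instead of guessing, so the service
    # learns nothing about which consensus the client has.
    raise ValueError("descriptor has no usable onion key")
```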
This bug was inadvertently introduced along with ntor in 0.2.4.8-alpha.
Milestone: Tor: 0.2.9.x-final
Assignee: teor

https://gitlab.torproject.org/legacy/trac/-/issues/20010
modifications of relay(s) on fallback whitelist
2020-06-13T16:05:42Z, Trac
```
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
Hello teor,
I currently have two relays in the fallback whitelist (niij01 230A8B2A8BA861210D9B4BA97745AEC217A94207 and niij02 0B85617241252517E8ECF2CFC7F4C1A32DCD153F). I have some updates/questions:
1. I have added IPv6 addresses for these two relays, is there anything I need to do to update them on the fallback list?
2. I have another fast, stable relay in the family of these 2 relays, that can be added (niij03 A9406A006D6E7B5DA30F2C6D4E42A338B5E340B2) to the fallback.
3. Do the IP addresses for these relays need to remain the same indefinitely? I was thinking of changing hosts (fingerprint would remain the same), but if changing the IP addresses causes issues, I will keep them on their current host.
Thanks, niij
-----BEGIN PGP SIGNATURE-----
wsFcBAEBCAAQBQJXwzUpCRAdSJS4jbcqPQAAsTAQAKs7K1exZHkf8Jyj/sLDBjo+
ZuBTulOQi+PxCstUNZgbOE3xN+LyerrBDBqFLy0znrwj1VK5TgKJi6+EawaJQFWh
qS/Mly8VujsighUdx94vrfxU2AKnvBIQ4oU72+tzXsp7Hsdscr3sG5DOMWTdWNKi
DvK/ccaeCsCkuAsU7UAJ55DtOhtHiJ9fHGMtJYipTXKB/gLUeo8rz5BUyJTGOCOJ
fTWqp1rw+Xbgvo+jPLl8YTsgijA+BMxurCgYng+90VH4P6weZGQFWIn7CQ55ANmO
kRfcw/sSRKXJTYAw6jCNe8eUC8eq1EhfGpbSZoa7KaV7l8UtpEsx7/splUnDtWj6
6KQF9tk+k3YR/2D1oeYfDcyDSJAMXIRH/NLRg7H06vuuoZEQm/Q5lSoZ8whGZbAN
HnKxb66ZNc/RMQ0DgLl1Gs42OMQCLcBsP0I6PFx429TgxnGfnceWpJgEqN0Q9kGy
rJ2J4jBy9kW70Sh813focmVlK3TkkejUcLYoWFz57siqipGY3nsBgtLETHpULEtl
SAhQCs6XjJ9LlRLmXplSj8ftmdTiwvyLKOukbxkrqdEiyDAxS0C9zdSCCfujrqR/
WEyEzbc9hom/Xms2FwCcZ5dFCDbf3CiD722bPbavhGH/6TgAyDzAlqOa2PA1heEr
BDFkOQzVrIyIbnzuoL7S
=0wQz
-----END PGP SIGNATURE-----
```
**Trac**:
**Username**: niij
Milestone: Tor: 0.2.9.x-final
Assignee: teor

https://gitlab.torproject.org/legacy/trac/-/issues/19760
Update longclaw's hard-coded IPv6 address
2020-06-13T14:59:45Z, teor
In 0.2.8.1-alpha, we added longclaw's IPv6 address as
`ipv6=[2620:13:4000:8000:60:f3ff:fea1:7cff]:443`
https://gitweb.torproject.org/tor.git/tree/src/or/config.c#n931
Now its descriptor says:
`[2620:13:4000:8000:a800:ff:fef5:2213]:443`
https://atlas.torproject.org/#details/74A910646BCEEFBCD2E874FC1DC997430F968145
Should we update this before the 0.2.8 release?
Milestone: Tor: 0.3.2.x-final
Assignee: teor

https://gitlab.torproject.org/legacy/trac/-/issues/19480
Avoid errors during fallback selection when there are no fallbacks
2020-06-13T16:05:41Z, teor
This issue is fixed as part of #19071; I just needed a bug number.
Milestone: Tor: 0.2.8.x-final
Assignee: teor

https://gitlab.torproject.org/legacy/trac/-/issues/19161
test for libscrypt_scrypt() fails
2020-06-13T14:57:51Z, Isis Lovecruft
On a Debian jessie system, possibly due to libscrypt.so thinking that the "log" function is undefined. (Running `nm -D $(locate libscrypt.so.0)` confirms this.)
There's a config.log which shows this happening attached. (See line 4311.)
Nick thinks we need to add `-lm` to the compiler flags for the configure test.
Milestone: Tor: 0.2.8.x-final
Assignee: Nick Mathewson

https://gitlab.torproject.org/legacy/trac/-/issues/18977
correct_tm() doesn't set r->tm_wday, but format_rfc1123_time() uses it
2020-06-13T14:57:03Z, teor
I think this is causing some unit tests to fail on Windows:
https://jenkins.torproject.org/job/tor-ci-mingwcross-0.2.8-test/ARCHITECTURE=i386,SUITE=jessie/9/console
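For context, here is an illustrative Python sketch (not the C code in question) of the safe order of operations: derive the weekday from the normalized calendar date rather than trusting a partially filled time structure.

```python
import calendar
from email.utils import formatdate

def format_rfc1123(year, month, day, hour, minute, second):
    # RFC 1123 dates begin with the weekday name. timegm() ignores
    # the tm_wday/tm_yday slots, so a struct with a stale weekday
    # (the correct_tm() situation) still yields the right epoch
    # time; formatdate() then recomputes the weekday from that.
    ts = calendar.timegm((year, month, day, hour, minute, second, 0, 1, -1))
    return formatdate(ts, usegmt=True)
```

For example, format_rfc1123(2016, 5, 13, 12, 0, 0) yields "Fri, 13 May 2016 12:00:00 GMT".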
Milestone: Tor: 0.2.7.x-final
Assignee: Nick Mathewson

https://gitlab.torproject.org/legacy/trac/-/issues/18456
Exits on 0.2.7 publicise all their IP addresses in their descriptor
2020-06-13T14:59:09Z, teor
Roger and I just spoke about the feature in 0.2.7 where Exits ban all their local / configured IP addresses in their descriptor.
If processes on an Exit trust connections from the local machine, this prevents Exits being attacked by making a connection to their IP addresses.
But it also means that all exit addresses appear in the consensus.
Roger thinks this will surprise some Exit operators. It also makes Exit IP addresses easier to censor.
That said, if we silently block connections to these IP addresses, then clients can scan Exits and see which addresses are refused even though they are not banned in the Exit policy.
We should contact relay operators with multiple IP addresses, and see if they appreciate this feature, or if they are surprised by it.
Milestone: Tor: 0.2.9.x-final

https://gitlab.torproject.org/legacy/trac/-/issues/13112
Some things are probably broken when we advertise multiple ORPorts and only some are reachable
2020-06-13T14:38:30Z, Andrea Shepard
Observations on reachability testing made while fixing #12160:
- We only have a 1-bit notion of reachability; if we get an incoming non-local connection, we assume reachability in onionskin_answer() and call router_orport_found_reachable() to publish a descriptor.
- We should have a reachability bit per *advertised* ORPort to determine its inclusion in the published descriptor, and publish if and only if we have one or more reachable ORPorts.
- To implement this, we need a way to link incoming testing circuits to a particular advertised ORPort; we don't know this from the port the underlying channel was listening on because reverse proxies might make this not one-to-one in general.
- Arma suggests in IRC that netinfo cells know the IP the connection was attempted on, and if they were extended with a port number they might provide a sufficient mechanism.
Milestone: Tor: unspecified

https://gitlab.torproject.org/legacy/trac/-/issues/4631
Idea to make consensus voting more resistant
2020-06-13T15:51:31Z, Sebastian Hahn
This is an idea for improving the current situation, where sometimes a directory authority is slow to get its vote out to the other dirauths, and so the dirauths don't all have the same sets of votes. To simplify, I'm illustrating with an example of three dirauths:
At :50, all dirauths make their vote and start uploading. Auth1 and auth2 get their vote to all auths, but auth3 doesn't: it cannot publish a vote to auth1 at all, and it takes more than 2.5 minutes to publish its vote to auth2. At :52:30, all auths try fetching the votes they're missing from the other auths, so auth1 asks auth2 for auth3's vote, and auth2 asks auth1 for auth3's vote. Auth3 asks nobody, and nobody asks auth3. At this point, neither auth1 nor auth2 has auth3's vote. Auth3 now (at, for example, :53:30) succeeds in publishing to auth2, so auth1 has votes from auth1 and auth2, while auth2 and auth3 have votes from auth1, auth2, and auth3. At :55 the auths try to make a consensus, but auth1 will end up with a different consensus than auth2 and auth3.
My idea to make this less of a problem: we accept a vote that gets pushed to us only for two minutes, and anything we get later than that is considered "too late" and dropped. At :52:30, we still go ahead and try to fetch all votes from the other authorities, and if they have a vote we will accept it. We repeat that fetching of all votes that we don't have at :53:00, :53:30, :54:00, and :54:30. That way, a delayed publication of the original vote will not cause this kind of split, where the dirauths have different opinions on who has voted; only the dirauth that took more than two minutes to publish its vote to any of the other dirauths will be affected. There's still a race condition here: a dirauth might (within two minutes) publish to only one other dirauth, and that dirauth might then become so slow that it cannot get the vote to any of the other dirauths. But since it was fast enough to get the vote the first time, hopefully that's rather rare.
Does this all sound viable? Am I overlooking something?
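The proposed acceptance window and fetch schedule can be sketched like this (illustrative Python; times are seconds past the hour):

```python
VOTING_STARTS = 50 * 60              # votes are made and pushed at :50
PUSH_DEADLINE = VOTING_STARTS + 120  # pushed votes accepted only until :52

# Retry fetching missing votes at :52:30, :53:00, :53:30, :54:00, :54:30.
FETCH_TIMES = [VOTING_STARTS + 150 + 30 * i for i in range(5)]

def accept_pushed_vote(now):
    """A pushed vote is accepted only within two minutes of voting;
    anything arriving later must be picked up by our own fetches."""
    return VOTING_STARTS <= now < PUSH_DEADLINE
```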
Update: This bug was introduced in Tor 0.2.0.5-alpha, with the v3 authority voting code.
Milestone: Tor: 0.4.4.x-final
Assignee: teor

https://gitlab.torproject.org/legacy/trac/-/issues/34224
Update analysis results file version to 2.0
2020-06-13T18:04:38Z, Karsten Loesing
We added a new field to the analysis results file format in #26673, but we did not yet increment the format version number. The current version field is a floating point number, which doesn't work well for versioning. We should use a version string here. Given that this change alone is going to be backward-incompatible, we'll have to call the new version 2.0. I'm preparing a patch.

https://gitlab.torproject.org/legacy/trac/-/issues/34154
Extend BlockedBridges table
2020-06-13T18:30:04Z, Philipp Winter (phw@torproject.org)
BridgeDB has a (currently unused) table in its SQLite database that captures where a bridge is blocked. We are going to use this table as part of our work on #32740. It currently has the following fields:
* ID (primary key)
* hex_key (fingerprint)
* blocking_country (country code)
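A sketch of the current table in SQLite (illustrative Python; the column types and sample row are assumptions, since only the field names are documented here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE blocked_bridges (
        id               INTEGER PRIMARY KEY,
        hex_key          TEXT NOT NULL,  -- bridge fingerprint
        blocking_country TEXT NOT NULL   -- two-letter country code
    )
""")
conn.execute(
    "INSERT INTO blocked_bridges (hex_key, blocking_country) VALUES (?, ?)",
    ("0000000000000000000000000000000000000000", "zz"),
)
rows = conn.execute(
    "SELECT hex_key, blocking_country FROM blocked_bridges"
).fetchall()
```

A row in this shape can only say "this fingerprint is blocked in this country", which is the ambiguity this ticket wants to remove.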
A fingerprint can relate to a bridge's OR port or any of its pluggable transports, but these endpoints can be blocked independently. To remove this ambiguity, we should add additional fields for a bridge's IP address, port, and perhaps for an autonomous system, because blocking isn't always uniform across a country.

https://gitlab.torproject.org/legacy/trac/-/issues/34109
Download and parse OnionPerf analysis .json files instead of .tpf files
2020-06-13T18:09:38Z, Karsten Loesing
With #34070 and #34072 being merged and deployed, we can now change metrics-web to download and parse OnionPerf analysis .json files instead of .tpf files.
Assignee: Karsten Loesing

https://gitlab.torproject.org/legacy/trac/-/issues/34024
Reduce timeout and stallout values
2020-06-13T18:04:30Z, Karsten Loesing
On #33974 we discussed a suggestion to reduce timeouts for our three downloads as follows:
- 50 KiB download with 15 seconds timeout rather than 295 seconds,
- 1 MiB download with 60 seconds timeout rather than 1795 seconds, and
- 5 MiB download with 120 seconds timeout rather than 3595 seconds.
Similarly, stallouts would be dropped entirely:
- 50 KiB download with 0 seconds stallout rather than 300 seconds,
- 1 MiB download with 0 seconds stallout rather than 1800 seconds, and
- 5 MiB download with 0 seconds stallout rather than 3600 seconds.
After discussing this with irl, we concluded that we might want to pick values somewhere in the middle. The smaller values above are being used by TGen for generating load for Shadow simulations; in that case it makes sense to use timeouts similar to how users would behave. But in the measurements we're doing with OnionPerf we can easily record more data even after a human user would have given up, and later filter out measurements taking longer than whatever timeouts we want to use.
In particular, it would be important for us to use timeouts that are higher than the timeouts used internally by the Tor client, so that we can observe what happens in those cases, even if a human user would long have given up.
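The record-then-filter approach can be sketched as follows (illustrative Python; the measurement record format is invented for the example):

```python
# Per-size timeouts suggested in the ticket, in seconds.
TIMEOUTS = {"50kib": 15, "1mib": 60, "5mib": 120}

def filter_by_timeout(measurements, timeouts):
    """Drop measurements that took longer than the per-size timeout.

    Because the full transfer is recorded even after a human user
    would have given up, any timeout policy can be applied after the
    fact without re-running the measurements.
    """
    return [m for m in measurements if m["elapsed"] <= timeouts[m["size"]]]

measurements = [
    {"size": "50kib", "elapsed": 4.2},
    {"size": "50kib", "elapsed": 80.0},  # slower than the 15 s policy allows
    {"size": "5mib", "elapsed": 110.0},
]
kept = filter_by_timeout(measurements, TIMEOUTS)
```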
How about we use timeouts and stallouts close to 5 minutes, so that we avoid overlapping measurements? Like 270 seconds for all three download sizes? What would we use as the stallout value here? 0?
Assignee: Karsten Loesing

https://gitlab.torproject.org/legacy/trac/-/issues/33716
Create user accounts for Metrics Team and add SSH keys
2020-06-13T17:48:44Z, irl
Currently machines are only accessible by the user that created them, unless more keys are added manually. This will also help us keep SSH keys in sync if they are changed.
Assignee: irl

https://gitlab.torproject.org/legacy/trac/-/issues/33675
Search microdescriptor files for relay ed25519 keys
2020-06-13T13:31:47Z, teor
Your code in #33428 needs to pass your local "make test-network-all" before you start this ticket.
We need to enable searching for ed25519 keys in relay microdescriptor files.
There are instructions and a draft search pattern here:
https://github.com/torproject/chutney/blob/master/lib/chutney/TorNet.py#L1325
Please open a new pull request for this ticket. Your branch should be based on the final version of #33428.
Before you push new changes to your pull request, your chutney code should pass:
* "make test-network-all" on tor master
* "make test-network-all" on tor maint-0.3.5
You can build a tor branch using these commands:
```
cd tor
git checkout <branch>
make
```
Where <branch> is master or maint-0.3.5.

https://gitlab.torproject.org/legacy/trac/-/issues/33587
puppet certificate revocation anomaly
2020-06-13T17:01:14Z, anarcat
today i revoked cupani's cert by mistake:
```
anarcat@curie:tsa-misc(master)$ ./retire -v -H cupani.torproject.org retire-all -p unifolium.torproject.org
checking for ganeti master on node unifolium.torproject.org
omeiense.torproject.org
polyanthum.torproject.org
instance cupani.torproject.org not running, no shutdown required
undefining instance cupani.torproject.org on host unifolium.torproject.org
error: failed to get domain 'cupani.torproject.org'
error: Domain not found: no domain with matching name 'cupani.torproject.org'
instance cupani.torproject.org not found on unifolium.torproject.org assuming retired: error: failed to get domain 'cupani.torproject.org'
error: Domain not found: no domain with matching name 'cupani.torproject.org'
scheduling cupani.torproject.org disk deletion on host unifolium.torproject.org
checking for path "/srv/vmstore/cupani.torproject.org/" on unifolium.torproject.org
scheduling rm -rf "/srv/vmstore/cupani.torproject.org/" to run on unifolium.torproject.org in 7 days
warning: commands will be executed using /bin/sh
job 4 at Tue Mar 17 17:45:00 2020
scheduling cupani.torproject.org backup disks removal on host bungei.torproject.org
checking for path "/srv/backups/bacula/cupani.torproject.org/" on bungei.torproject.org
scheduling rm -rf "/srv/backups/bacula/cupani.torproject.org/" to run on bungei.torproject.org in 30 days
warning: commands will be executed using /bin/sh
job 22 at Thu Apr 9 17:45:00 2020
Notice: Revoked certificate with serial 30
Notice: Removing file Puppet::SSL::Certificate cupani.torproject.org at '/var/lib/puppet/ssl/ca/signed/cupani.torproject.org.pem'
cupani.torproject.org
Submitted 'deactivate node' for cupani.torproject.org with UUID 7b5e6d74-cb31-4929-9082-4a2bcda08b88
```
i was following the migration procedure as part of #33446 and got over enthusiastic about the process. the cert shouldn't have been revoked, of course, as the machine is still up.
but when i tried to see the effect of this, it seemed the certificate still worked! cupani can do puppet runs without problems, even though the on-disk certificate is gone:
```
root@pauli:~# ls -al /var/lib/puppet/ssl/ca/signed/cupani.torproject.org.pem
ls: cannot access '/var/lib/puppet/ssl/ca/signed/cupani.torproject.org.pem': No such file or directory
```
so it seems our certificate revocation routine:
```
con.run('puppet node clean %s' % instance)
con.run('puppet node deactivate %s' % instance)
```
... does not work.
Assignee: anarcat

https://gitlab.torproject.org/legacy/trac/-/issues/33435
Document BASETORRC environment variable
2020-06-13T18:04:27Z, Ana Custura
OP can configure the Tor client through the BASETORRC environment variable. This should be added to the documentation with examples.
Assignee: Philipp Winter (phw@torproject.org)

https://gitlab.torproject.org/legacy/trac/-/issues/33353
Split chutney's diagnostics into a new script
2020-06-13T13:31:32Z, teor
Chutney's failure diagnostics are currently in the Travis CI config file.
But we want to use them in tor's CI, and maybe chutney users want to use them as well.
Assignee: teor