Trac issues: https://gitlab.torproject.org/legacy/trac/-/issues

https://gitlab.torproject.org/legacy/trac/-/issues/27155
Include BGP prefix information in details documents (nusenu, 2024-01-18)

Use case:
* find relays in the same prefix (for example, if a specific prefix has been hijacked)
* group relays by prefix
* it is a requirement for routing-security-related metrics (ROA, prefix length)
The RIPEstat API can be used as a source, and results can be cached if a previous lookup fell within the same /24 (IPv4) or /48 (IPv6), since those are the longest generally announced prefix lengths.
https://stat.ripe.net/docs/data_api#NetworkInfo
example:
https://stat.ripe.net/data/network-info/data.json?resource=140.78.90.50
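The /24 (IPv4) and /48 (IPv6) cache granularity described above can be sketched as a cache-key helper (a minimal illustration using the standard library; `cache_key` is a hypothetical name, not Onionoo code):

```python
import ipaddress

def cache_key(ip: str) -> str:
    """Map an IP address to the coarsest prefix that is safe to cache:
    /24 for IPv4 and /48 for IPv6, the longest prefix lengths that are
    generally announced in the global routing table."""
    addr = ipaddress.ip_address(ip)
    plen = 24 if addr.version == 4 else 48
    # strict=False lets us pass a host address rather than a network base
    return str(ipaddress.ip_network(f"{ip}/{plen}", strict=False))
```

Two lookups whose addresses map to the same key can then share one cached RIPEstat result, e.g. `cache_key("140.78.90.50")` gives `"140.78.90.0/24"`.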
related: #26585

https://gitlab.torproject.org/legacy/trac/-/issues/21378
Archive bwauth bandwidth files (Tom Ritter <tom@ritter.vg>, 2024-01-18)

The raw bwauth votes (sample: https://bwauth.ritter.vg/bwauth/bwscan.V3BandwidthsFile) contain information such as last measured time, circuit failures, and (eventually) scanner information. This can be used for debugging purposes.
Blocked by #21377; possible next steps in [#comment:14 comment 14].

https://gitlab.torproject.org/legacy/trac/-/issues/26585
Improve AS number and name coverage (switch MaxMind to RIPE Stat) (nusenu, 2024-01-18)

Onionoo currently uses MaxMind for IP to as_number and as_name resolution.
This is fast, as it is a local DB lookup, but it is less up to date and has less coverage than RIPEstat (https://stat.ripe.net/).
This is a problem for tools that depend on Onionoo's as_name and as_number fields, like Relay Search, OrNetStats, and OrNetRadar
(and maybe there are others that are affected too?).
Currently it may take weeks or months before new ASes get added to MaxMind, so this information
is also missing when people look up relays on Relay Search.
As of today onionoo is missing AS level data for about 100 relays,
but this value depends on how far we are away from the last maxmind update.
How about using the RIPEstat API as a data source plus a local cache?
To minimize the number of required online queries against the RIPEstat API, we can do the following
to create the IP to AS map initially (pseudocode):
    if ip_prefix in cache:
        use cached entry
    else:
        perform an online lookup (query the RIPEstat API)
        add new prefix entry to cache
    expire cache entries after 15 days?
(It makes sense to log how many entries changed after 15 days so we know whether this value is too large or too small.)
This will significantly reduce the amount of required online API calls.
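A runnable sketch of the cache described by the pseudocode above (`ripestat_lookup` is a hypothetical stand-in for the actual HTTP query against the RIPEstat API):

```python
import time

CACHE_TTL = 15 * 24 * 3600  # expire cache entries after 15 days

cache = {}  # ip_prefix -> (result, fetched_at)

def lookup(ip_prefix, ripestat_lookup):
    """Return AS data for ip_prefix, querying RIPEstat only on a
    cache miss or after the entry has expired."""
    now = time.time()
    entry = cache.get(ip_prefix)
    if entry is not None and now - entry[1] < CACHE_TTL:
        return entry[0]                  # use cached entry
    result = ripestat_lookup(ip_prefix)  # perform an online lookup
    cache[ip_prefix] = (result, now)     # add new prefix entry to cache
    return result
```

Logging cache-miss counts here would also give the data needed to tune the 15-day TTL mentioned above.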
To give you an idea of the scale (based on Onionoo data from a random day in May 2018):
total relay records: 8116
unique IPv4 addresses: 7794
unique IPv4 BGP prefixes: 3884
Each day about 50 new relays appear;
let's assume the worst case (every new relay is not in an already-cached prefix).
The estimated daily number of queries would then be around
4000/15 + 50 = ~320 requests/day = 1 request every ~4 minutes,
which appears acceptable.
IP to prefixes (this can return multiple matches), as_number and as_name lookup:
https://stat.ripe.net/data/related-prefixes/data.json?resource=103.114.160.21
IP to prefix and as number (no as_name) lookup:
https://stat.ripe.net/data/network-info/data.json?resource=103.114.160.21
ASN to as_name lookup:
https://stat.ripe.net/data/as-overview/data.json?resource=AS40676
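The three endpoints listed above share one URL shape, so building the query URLs can be factored out (a sketch; only the URL construction is shown, the HTTP fetch itself is left out):

```python
from urllib.parse import urlencode

BASE = "https://stat.ripe.net/data"

def ripestat_url(endpoint: str, resource: str) -> str:
    """Build a RIPEstat Data API URL for one of the endpoints above:
    related-prefixes / network-info take an IP, as-overview takes an ASN."""
    return f"{BASE}/{endpoint}/data.json?{urlencode({'resource': resource})}"
```

A full as_number + as_name resolution would chain two calls: `network-info` for the IP to get the ASN, then `as-overview` on that ASN to get the holder name.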
documentation:
https://stat.ripe.net/docs/data_api

https://gitlab.torproject.org/legacy/trac/-/issues/26030
Delete "Tor Messenger downloads and updates" section (cypherpunks, 2023-06-29)

https://metrics.torproject.org/webstats-tm.html

https://gitlab.torproject.org/legacy/trac/-/issues/31435
Emulate different Fast/Guard cutoffs in historical consensuses (irl, 2022-03-04)

There are many things that we can tune in producing votes and consensuses that will affect the ways that clients use the network and might result in better load balancing.
We need tools for simulating what happens when we make those changes, using data (either historical or live) for the public Tor network.
We can consider the MVP for this complete once we have a tool that allows us to take server descriptors and simulate votes and consensus generation using alternate Fast/Guard cutoffs.
Extensions to this would be allowing alternative consensus methods, or other tunables.
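A toy model of applying an alternate Fast cutoff to a set of relay bandwidths could look like this (a sketch only: the real dir-spec rule also requires the relay to be active, and the 12.5% / 100 KB/s values here are illustrative defaults, not the authoritative implementation):

```python
def fast_relays(bandwidths, percentile=12.5, floor=100_000):
    """Toy Fast-flag assignment: a relay is Fast if its bandwidth is at
    least the cutoff, where the cutoff is the smaller of the given
    percentile of all bandwidths and `floor` bytes/s."""
    ordered = sorted(bandwidths.values())
    idx = min(int(len(ordered) * percentile / 100), len(ordered) - 1)
    threshold = min(ordered[idx], floor)
    return {fp for fp, bw in bandwidths.items() if bw >= threshold}
```

Re-running this over historical server descriptors with different `percentile` values is the kind of simulation the MVP describes.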
By reducing the cost of performing these simulations we can allow faster iteration on ideas that will hopefully allow for a better user experience.

https://gitlab.torproject.org/legacy/trac/-/issues/33076
Graph consensus and vote information from Rob's experiments (Mike Perry, 2022-03-04)

This is a ticket for the work to graph the historical OnionPerf data from Rob's relay flooding experiment.
Some discussion threads:
https://lists.torproject.org/pipermail/tor-scaling/2019-December/000077.html
https://lists.torproject.org/pipermail/tor-scaling/2020-January/000081.html
Basically, we want to have a standard way to graph results from key metrics from before, during, and after the experiment.
In this case, we want CDF-TTFB, CDF-DL from onionperf results.
We also want CDF-Relay-Stream-Capacity and CDF-Relay-Utilization for the consensus, as well as from the votes, to see if the votes from TorFlow drastically differ from sbws during the experiment.
https://trac.torproject.org/projects/tor/wiki/org/roadmaps/CoreTor/PerformanceMetrics
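All of the CDF-* graphs above reduce to the same computation over a list of per-measurement values (TTFB, download time, relay utilization, etc.); a minimal empirical-CDF sketch:

```python
def cdf(samples):
    """Empirical CDF points (x, P[X <= x]) for a list of measurements,
    e.g. time-to-first-byte values from OnionPerf."""
    xs = sorted(samples)
    n = len(xs)
    return [(x, (i + 1) / n) for i, x in enumerate(xs)]
```

Plotting the before/during/after experiment phases is then just three such point lists on one set of axes.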
**Update from June 10, 2020: We finished the CDF-TTFB and CDF-DL portions by adding these graphs to OnionPerf's visualize mode. The remaining parts are the CDF-Relay-* graphs that are based on consensuses and votes. Keep this in mind when reading comments up to June 10, 2020.**

https://gitlab.torproject.org/legacy/trac/-/issues/31193
Upgrade to Debian buster libraries (Karsten Loesing, 2021-08-23)

This is the parent ticket for upgrading the various metrics code bases to Debian buster libraries.

https://gitlab.torproject.org/legacy/trac/-/issues/31172
Tests fail on Debian buster on jenkins (irl, 2021-08-23)

We will want to move metrics services to the new Debian stable, but currently metrics-lib does not pass tests on the new release.
https://jenkins.torproject.org/job/metrics-lib-master/ARCHITECTURE=amd64,SUITE=buster/1/console

https://gitlab.torproject.org/legacy/trac/-/issues/22983
Add a Descriptor subinterface and implementation for Tor web server logs (iwakeh, 2021-08-23)

The webstats log files are the only files available on CollecTor (in the future) that are not yet covered by metrics-lib.
Should there be a 'LogDescriptor' interface and implementation in metrics-lib? Are there any reasons why not?
The name `LogDescriptor` is just a working name; better naming suggestions welcome.
The interface should extend `Descriptor` and provide additional methods for retrieving the measuring host, the served host, and the date of the log.
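The measuring host, served host, and date could plausibly be recovered from the log file name itself; a sketch assuming a `<virtual-host>_<physical-host>_access.log_<yyyymmdd>` naming layout (this layout is an assumption for illustration, and the real CollecTor naming may differ):

```python
import re
from datetime import date

# Assumed file-name layout: <virtual-host>_<physical-host>_access.log_<yyyymmdd>
LOG_NAME = re.compile(
    r"^(?P<virtual>[^_]+)_(?P<physical>[^_]+)_access\.log_(?P<ymd>\d{8})$")

def parse_log_name(name):
    """Extract (served host, measuring host, log date) from a file name."""
    m = LOG_NAME.match(name)
    if m is None:
        raise ValueError(f"unrecognized log file name: {name}")
    ymd = m["ymd"]
    return m["virtual"], m["physical"], date(int(ymd[:4]), int(ymd[4:6]), int(ymd[6:]))
```

A `LogDescriptor` implementation would expose these three values through the accessor methods described above.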
What else?
Milestone: metrics-lib 2.2.0

https://gitlab.torproject.org/legacy/trac/-/issues/20412
Skip bad archived descriptors rather than aborting the entire import (Karsten Loesing, 2021-08-23)

The Onionoo mirror broke in late September for some reason I don't know, and the host didn't come back afterwards. We only noticed two weeks later and had to reimport September and October data. However, the September archives contain a bad descriptor that breaks the import. Here's the bad descriptor (`3/8/384f93dbac20fdf293a731b391b3fc0757d9f78a` in `server-descriptors-2016-09.tar.xz`):
```
@type server-descriptor 1.0
router Pegasus70 78.142.19.172 443 0 80
platform Tor 0.2.5.12 on Linux
protocols Link 1 2 Circuit 1
published 2016-09-15 08:31:03
fingerprint E9C5 8383 DB9A E52A DAF3 5F91 88B2 741A 05F5 A02F
uptime 1016140
bandwidth 8746942 8955284 7957930
extra-info-digest C07612E283D3157219DA1DDDF3AE125268206412
onion-key
-----BEGIN RSA PUBLIC KEY-----
MIGJAoGBAPNQHBnVRiPl7H5cRC/GOMKIeGRSAuM/3Jzuxrg0idlL1YPoQtKAfaqI
LY9cGSEk88FGcOkgZdDiwSL9LAtBF1hpYB2ajGjNhTQkae00DC1NlWGzi89wkA/R
4qxSCm4mjoY7EEmfOLI/X/Rp9FE8rL7X39XK6q+nv5uyHI+T/7GHAgMBAAE=
-----END RSA PUBLIC KEY-----
signing-key
-----BEGIN RSA PUBLIC KEY-----
MIGJAoGBAMojXIJSwwavFOc6afWILpgIc4sAXd8KSsOh966rLjuGZyUsN3+gqta9
2QLV4HOBy9L24NRE3iXmySlfTiT2pxiSXo+h/B18Gw2clSewHx7xC1QnkT69xxL2
6AvOu5NDbu5SxHOtOi95FEnuE6VmKZhBawHD3KG6j6euZINilojrAgMBAAE=
-----END RSA PUBLIC KEY-----
family $9D14BAC27FFE7170601FC0EC792A927E1FC11A1D
hidden-service-dir
ntor-onion-key vk7nDH5FVjFWxflyapUT+9+em+CGO/aaYjaO6LGJ3B0=
reject 0.0.0.0/8:*
reject 169.254.0.0/16:*
reject 127.0.0.0/8:*
reject 192.168.0.0/16:*
reject 10.0.0.0/8:*
reject 172.16.0.0/12:*
reject 78.142.19.172:*
accept *:20-21
accept *:43
accept *:53
accept *:79-81
accept *:88
accept *:110
accept *:143
accept *:194
accept *:220
accept *:389
accept *:443
accept *:464
accept *:531
accept *:543-544
accept *:554
accept *:563
accept *:636
accept *:706
accept *:749
accept *:873
accept *:902-904
accept *:981
accept *:1194
accept *:1220
accept *:1293
accept *:1500
accept *:1533
accept *:1677
accept *:1723
accept *:1755
accept *:1863
accept *:2082
accept *:2083
accept *:2086-2087
accept *:2095-2096
accept *:2102-2104
accept *:3128
accept *:3389
accept *:3690
accept *:4321
accept *:4643
accept *:5050
accept *:5190
accept *:5222-5223
accept *:5228
accept *:5900
accept *:6697
accept *:8008
accept *:8074
accept *:8080
accept *:8082
accept *:8087-8088
@uploaded-at 2016-09-15 08:32:03
@source "79.134.255.35"
router Beluga 79.134.255.35 1979 0 0
identity-ed25519
-----BEGIN ED25519 CERT-----
AQQABkDiAfjGVdzeISYHVC86lkA1GRbNmgn80ndEWHoNfqq3apelAQAgBACmRaXq
1UdBqrNx7dYOhs62167xULhT4QoThd/IgiZw18mYn19eCtf0qfGiDmYv1v4d1INq
drh+i4yS1XGw8oypyLU27mt9BI5ezMXHMeKkEvRdgNwg5K2Levzw7PhK6go=
-----END ED25519 CERT-----
master-key-ed25519 pkWl6tVHQaqzce3WDobOtteu8VC4U+EKE4XfyIImcNc
platform Tor 0.2.8.7 on Linux
protocols Link 1 2 Circuit 1
published 2016-09-15 08:31:58
fingerprint EC69 7C3D 5819 B16B B899 D29A 18B9 E7B6 095D FAEC
uptime 129602
bandwidth 2097152 3145728 1286199
extra-info-digest 387513B8F45EFB6711F40FA6869146DE62B058D5 l+9BGbujAcdSANxivZN210RJSHsHSQCQMPqOYg4VNSA
onion-key
-----BEGIN RSA PUBLIC KEY-----
MIGJAoGBALKRXi8ClqAACiYtBCF+Ot4154CxhykXufXQFGEYR2KkyEI4wPp2E/hV
izLQjrjmIq+akyFUGNE/u/OY5seeUlcFtFnBHfsotrtBkL8yqMqmyheL5OG1CWX3
ROKd6UtzMP1ebIcalS0hdc7nlpOlzxd91IJjlE5eI/jKJyTKl4C1AgMBAAE=
-----END RSA PUBLIC KEY-----
signing-key
-----BEGIN RSA PUBLIC KEY-----
MIGJAoGBALL8VZK/RpoV8XkaSFTjFfchYDeTrzToWgiE8fFm68Pato+iQ5xArjkW
gKaj4DAqrUMZgR2rz0joiD9U6lEssEFhXM0laqlLpcuhoBB+6BbiLOFbcm6MHPcr
eVuNRjcwRIr71SCASCih52LHuyOCDwqMnNpdIyCLse+AAtdqmJ3TAgMBAAE=
-----END RSA PUBLIC KEY-----
onion-key-crosscert
-----BEGIN CROSSCERT-----
FZg86wGaQqd4v8nBC0GQm1bHo0ooZFh79XnZPVYzw/gBUIguy4vbumIb07sXd7+1
C7cNBlfpy/1UQH0t4l9Crqj6LqL9DNJr8xOKx0hbgw5dxDHJNu16+qMTsdo30RVQ
O1XBS12Wh6Or/suW8D+wRQ8TT804c0wdc5TMTJIe9Wo=
-----END CROSSCERT-----
ntor-onion-key-crosscert 0
-----BEGIN ED25519 CERT-----
AQoABj/5AaZFperVR0Gqs3Ht1g6GzrbXrvFQuFPhChOF38iCJnDXAJ4T5u2HXRQF
1B8/CNz0+VNfa1C+kP/CoAT+qxud/wJCpoUo3au/YzSZSrjTtstKTs8lv7chn+QT
JiqklO+iegM=
-----END ED25519 CERT-----
hidden-service-dir
contact BTC 1EfzsAj6rLnvYMuAZeTLBmhZ3gHcjxfUkp
ntor-onion-key V8SPDfH90+Fa8Q21xYzG5qqovaOsKqP2aZKdHBWAWBI=
reject *:*
tunnelled-dir-server
router-sig-ed25519 4CUXRGMVxOXkR+qgv/W+Jsz7WgGWVloER9qZgfqbNNEA3UyAo2W7odoAcBpwrrPFarAYoza1T7I2WmsigneFDw
router-signature
-----BEGIN SIGNATURE-----
aH+lYDSsISil+cUXxbXzv2H/M5rDguHOMKbMSZMBRSTIUAV6zrxaSEJgQweOJyr4
CiqAJjxp4sTcnUvsaqFZwrO8xsv+LZHabIbNu+DnuPtS66Xngh3q/wEJbx/VLWfq
yAexVH/GnxAfwplvB99GIiHqE20r+nLAdrgGnZ4x/9M=
-----END SIGNATURE-----
```
The issue is that this file contains one partial descriptor and another full descriptor without a @type annotation. Onionoo doesn't know how to process it. However, when we attempt to process this descriptor, we abort the entire import process. Even worse, when aborting the import process we're letting the descriptor reader thread continue reading descriptors until its queue runs full and it waits for us to accept more parsed descriptors. The result is that the Onionoo process, which ran in `--single-run` mode, did not finish within a few days until I killed it. Oh wow.
Quick fix: Skip the bad descriptor, that is, `continue;` rather than `break;`.
Longer-term fix: Come up with better rules how to handle bad input data, which is somewhat related to #19834.
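The quick fix above amounts to catching the parse failure per descriptor and continuing; a sketch of that import loop (`parse` is a hypothetical stand-in for the real descriptor parser):

```python
import logging

def import_descriptors(raw_descriptors, parse):
    """Import descriptors one by one, skipping unparseable ones
    instead of aborting the whole run (continue rather than break)."""
    imported, skipped = [], 0
    for raw in raw_descriptors:
        try:
            imported.append(parse(raw))
        except ValueError as e:
            skipped += 1
            logging.warning("Skipping bad descriptor: %s", e)
            continue  # rather than break: keep importing the rest
    return imported, skipped
```

Counting and logging the skips keeps the failure visible to the operator while the bulk import still finishes.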
Follow-up question: What went wrong that CollecTor produced this file?

https://gitlab.torproject.org/legacy/trac/-/issues/19834
Rethink how we handle issues while sanitizing bridge descriptors (Karsten Loesing, 2021-08-23)

The bridge descriptor sanitizer parses tarballs containing non-sanitized bridge descriptors, modifies their content by removing bridge IP addresses and other sensitive parts, and writes sanitized versions of those bridge descriptors to disk.
The sanitizer needs to recognize the lines contained in bridge descriptors to distinguish between lines that must be changed and others that can be kept unchanged, and it needs to be able to understand the exact format of certain lines in order to sanitize their contents.
This process can go wrong in various ways, and we need to decide how to handle those situations. Possible situations are:
1. A tarball is malformed or can otherwise not be opened.
2. A tarball contains one or more files that cannot be opened.
3. A tarball file contains an unknown descriptor type.
4. An internal problem prohibits sanitizing descriptor parts (e.g., missing secret for sanitizing IP address).
5. A descriptor is missing parts that are required for properly sanitizing its contents.
6. A descriptor contains an unrecognized line.
7. A descriptor line doesn't follow the expected format, contains fewer or more arguments, etc.
Possible ways of handling such situations are:
A. Skip a line we don't understand and keep the rest of the descriptor.
B. Skip a descriptor.
C. Skip the file contained in the tarball and continue with the next.
D. Abort processing the tarball.
E. Skip the entire tarball, including discarding any descriptors processed before running into the problem, and attempt to process the tarball again in the next execution.
F. Abstain from processing a given descriptor type until a problem has been resolved.
G. Discard any descriptors processed in a tarball until running into the problem, abort the current execution, and refuse starting the next execution until the problem has been resolved.
H. (in addition to A-G). Inform the operator by logging the problem.
I. (in addition to A-G). Warn the operator and ask them to resolve the problem.
Looking at this list, I think that my preferred ways of handling problems would be something like:
- B+H in situations 5, 6, and 7;
- E+I in situations 1, 2, and 3; and
- G+I in situation 4.
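These preferences could be encoded as a simple lookup table, which would also make the policy easy to change per CollecTor instance (a hypothetical sketch; the situation numbers and action/notification letters refer to the lists above):

```python
# situation number -> (handling action, operator notification)
POLICY = {
    1: ("E", "I"), 2: ("E", "I"), 3: ("E", "I"),  # skip tarball, warn operator
    4: ("G", "I"),                                # abort, refuse restart, warn
    5: ("B", "H"), 6: ("B", "H"), 7: ("B", "H"),  # skip descriptor, log
}

def handle(situation):
    """Look up how a given problem situation should be handled."""
    return POLICY[situation]
```

An operator with different preferences would only need to swap out the table.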
That's not exactly what we're currently doing. And I'm not even sure if somebody else operating a CollecTor instance with the bridgedescs module would have the same preferences.
Let's discuss!

https://gitlab.torproject.org/legacy/trac/-/issues/30636
Something funky is going on in Iran: number of relay users flies off to 1M+ (cypherpunks, 2021-07-08)

![userstats-relay-country-ir-2019-04-01-2019-09-03-off.png,600px](uploads/userstats-relay-country-ir-2019-04-01-2019-09-03-off.png,600px) [link](https://metrics.torproject.org/userstats-relay-country.html?start=2019-04-01&end=2019-09-03&country=ir)
![userstats-bridge-country-ir-2019-04-01-2019-09-03.png,600px](uploads/userstats-bridge-country-ir-2019-04-01-2019-09-03.png,600px) [link](https://metrics.torproject.org/userstats-bridge-country.html?start=2019-04-01&end=2019-09-03&country=ir)
![userstats-bridge-combined-ir-2019-04-01-2019-09-03.png,600px](uploads/userstats-bridge-combined-ir-2019-04-01-2019-09-03.png,600px) [link](https://metrics.torproject.org/userstats-bridge-combined.html?start=2019-04-01&end=2019-09-03&country=ir)

https://gitlab.torproject.org/legacy/trac/-/issues/33176
Check whether all of the growth stats we want are collected and accurate (Georg Koppen, 2021-07-05)

We should start writing down which growth stats we are interested in and check whether we have them and, if so, whether they are accurate. For context: this is actually for network health work preparing some anomaly analysis we want to do.

https://gitlab.torproject.org/legacy/trac/-/issues/21014
Turkey blocking of direct connections, 2016-12-12 (Nima Fatemi, 2021-03-27)

Turkey Blocks article: https://turkeyblocks.org/2016/12/18/tor-blocked-in-turkey-vpn-ban/
After getting some reports on twitter about Tor being blocked in Turkey and some chat on IRC, <bypassemall> aka <trdpi> aka <kzdpi> ran some tests and found some interesting information about how Turkey is blocking vanilla Tor connections. I paste their findings here:
```
16:48 < trdpi> 10 connections died in state handshaking (TLS) with SSL state SSLv2/v3 read server hello A in HANDSHAKE
16:48 < trdpi> after less than 10 seconds
...
16:55 < trdpi> this isp injects rst it seems
16:56 < trdpi> to both side, as i got 2 rst one legit and 2 not
16:57 < mrphs> oh apparently today is an special day in turkey
...
17:00 < trdpi> telneting to or port, no rsts. it triggered by something more than ip:port connection
17:01 < trdpi> yay, window trick for split req works for tr
17:02 < trdpi> magic tool allows to bypass vanilla tor censorship
17:04 < trdpi> so it's about ciphersuits or something
17:07 < trdpi> it's like kz, but obfs4 works
17:07 < trdpi> and kz do not rsts
17:07 < trdpi> it controlls connection
17:07 < trdpi> and tr like do not controlls and to inject fraud only
```

https://gitlab.torproject.org/legacy/trac/-/issues/26081
Unusual increase in unique .onion v2 services (Trac, 2021-03-27)

Find out what caused the increase from 70 000 unique .onion v2 services to 120 000 in just a few days.
**Trac**:
**Username**: computerfreak

https://gitlab.torproject.org/legacy/trac/-/issues/21637
Include both declared and reachable IPv6 OR addresses (teor, 2021-03-03)

When a relay declares an IPv6 OR address, it puts it in its descriptor, and it gets placed in the microdescriptor automatically.
But the IPv6 addresses in the full consensus are different: authorities on IPv6 only vote for an IPv6 address if they believe it is reachable.
(There are no IPv6 addresses in the microdesc consensus, see #20916.)
This makes a difference on Atlas tickets like #10401.
(It doesn't make a difference to client behaviour yet, because microdescs are the default. We'll fix that in #20916.)
Milestone: Onionoo-1.7.0

https://gitlab.torproject.org/legacy/trac/-/issues/33663
Check checktest.py related errors shown by our network-health scanners (Georg Koppen, 2020-11-13)

I often see something like
```
2020-03-17 21:03:56,973 modules.checktest [ERROR] Check thinks <https://metrics.torproject.org/rs.html#details/296B2178FD742AB35AB20C9ADF04D5DFD3D407EB> isn't Tor. Desc addr is 206.55.74.0 and check addr is 206.55.74.0.
```
We should figure out a) what's up with that and b) whether we actually still need that test to be running.

https://gitlab.torproject.org/legacy/trac/-/issues/31521
Investigate 10-second delay in TTFB (Karsten Loesing, 2020-10-20)

While looking into OnionPerf data I noticed a 10-second delay in time to first byte. I'll attach an ECDF shortly.
I started hunting down this issue and found that many of these cases (though not all of them) had their stream detached from a circuit and re-attached to another circuit following a 10-second timeout of some sort. The following example shows relevant controller events:
```
2019-05-05 09:55:00 1557046500.54 650 STREAM 45043 NEW 0 137.50.19.2:80 SOURCE_ADDR=127.0.0.1:36454 PURPOSE=USER
2019-05-05 09:55:00 1557046500.54 650 STREAM 45043 SENTCONNECT 29430 137.50.19.2:80
2019-05-05 09:55:00 1557046500.69 650 STREAM_BW 45043 13 2 2019-05-05T08:55:00.682587
^^ <- 10 second delay here
2019-05-05 09:55:10 1557046510.69 650 STREAM 45043 DETACHED 29430 137.50.19.2:80 REASON=TIMEOUT
2019-05-05 09:55:10 1557046510.69 650 STREAM 45043 SENTCONNECT 29411 137.50.19.2:80
2019-05-05 09:55:11 1557046511.12 650 STREAM 45043 REMAP 29411 137.50.19.2:80 SOURCE=EXIT
2019-05-05 09:55:11 1557046511.12 650 STREAM 45043 SUCCEEDED 29411 137.50.19.2:80
2019-05-05 09:55:11 1557046511.68 650 STREAM_BW 45043 55 10 2019-05-05T08:55:11.682353
2019-05-05 09:55:12 1557046512.68 650 STREAM_BW 45043 0 637971 2019-05-05T08:55:12.681636
2019-05-05 09:55:13 1557046513.21 650 STREAM_BW 45043 0 410673 2019-05-05T08:55:13.211188
2019-05-05 09:55:13 1557046513.21 650 STREAM 45043 CLOSED 29411 137.50.19.2:80 REASON=DONE
```
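Finding these cases in a controller-event log boils down to matching SENTCONNECT/DETACHED pairs per stream id; a sketch over timestamped event lines in the format quoted above (the line layout is taken from this log excerpt and may differ for other OnionPerf outputs):

```python
import re

# "<date> <time> <unix-ts> 650 STREAM <stream-id> <event-kind> ..."
EVENT = re.compile(r"^\S+ \S+ (?P<ts>[\d.]+) 650 STREAM (?P<sid>\d+) "
                   r"(?P<kind>\w+) ")

def detach_delays(lines):
    """Seconds between the first SENTCONNECT and a DETACHED with
    REASON=TIMEOUT, keyed by stream id."""
    sent_at, delays = {}, {}
    for line in lines:
        m = EVENT.match(line)
        if not m:
            continue  # skip STREAM_BW and other non-matching lines
        ts, sid, kind = float(m["ts"]), m["sid"], m["kind"]
        if kind == "SENTCONNECT" and sid not in sent_at:
            sent_at[sid] = ts
        elif kind == "DETACHED" and "REASON=TIMEOUT" in line and sid in sent_at:
            delays[sid] = ts - sent_at[sid]
    return delays
```

Running this over a full OnionPerf log would quantify how often the ~10-second detach-and-retry pattern occurs.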
1% of measurements seems a lot to me, and I could imagine that these cases are particularly annoying to users. Maybe this timeout could be shorter or made more dynamic like other timeouts.
If the timeout cannot be changed, it would be nice to tell the user that their stream has just been attached to another circuit and that that's why they had to wait for the past 10 seconds.

https://gitlab.torproject.org/legacy/trac/-/issues/30499
In Tor Metrics / Relay Search, users are able to enter the digital fingerprint of a bridge to run a successful search and display the data about that bridge, but the Relay Search page states, "If you are searching for a bridge, you will need to search by the hashed fingerprint" (Trac, 2020-06-16)

At https://metrics.torproject.org/rs.html, the page contains the caveat, "If you are searching for a bridge, you will need to search by the hashed fingerprint. This prevents leaking the fingerprint of the bridge when searching."
However, when users enter the //digital fingerprint// (not the //hashed fingerprint//) of the bridge in the Relay Search query bar, the search will successfully display data about the bridge.
If Relay Search leaks bridge fingerprints when users use digital fingerprints (not hashed fingerprints) to run successful searches, we need to reconfigure Relay Search so that it will be restricted to using only hashed fingerprints to search for bridge data.
Furthermore, the hashed fingerprint of a bridge must be made visible to the user by appearing in the //torrc// file. Currently the //hashed fingerprint// is not visible and does not appear in the torrc file when using Tor Browser 8.0.8 on macOS Yosemite 10.10.5; only the //digital fingerprint// is visible and appears in the torrc file.
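For background, the hashed fingerprint that Onionoo indexes bridges by is the SHA-1 digest of the binary (not hex-string) fingerprint; a sketch of deriving it from the fingerprint shown in torrc:

```python
import binascii
import hashlib

def hashed_fingerprint(fingerprint: str) -> str:
    """SHA-1 over the 20 raw bytes of the fingerprint, uppercase hex:
    the form in which bridges appear in Onionoo / Relay Search."""
    raw = binascii.unhexlify(fingerprint.replace(" ", ""))
    return hashlib.sha1(raw).hexdigest().upper()
```

Relay Search could compute this client-side before querying, so that users who only have the digital fingerprint would not leak it to the backend.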
**Trac**:
**Username**: monmire

https://gitlab.torproject.org/legacy/trac/-/issues/28328
Include "total consensus" split by relay type in vote totals graph (starlight, 2020-06-16)

Totals of consensus weights shift erratically due to some aspect of vote median behavior in the consensus. E.g., (Exit, Exit+Guard) moved 12.5% in 12 hours on 09-Jul-18 12:00 to 23:59 UTC while votes were steady, and twenty percent in 56 hours with votes shifting. This behavior results in significant adjustment to the selection probability of relays with unchanged consensus weights. Please add to
https://metrics.torproject.org/totalcw.html
Suggest a separate weighted line for each effective class of relay, i.e.: (Exit, Exit+Guard), (Guard), (unflagged, Guard).