Trac issues
https://gitlab.torproject.org/legacy/trac/-/issues
2024-01-18T14:51:39Z

#27155: Include BGP prefix information in details documents
https://gitlab.torproject.org/legacy/trac/-/issues/27155
nusenu, 2024-01-18T14:51:39Z

Use case:
* find relays in the same prefix (for example if a specific prefix has been hijacked)
* group relays by prefix
* is a requirement for routing security related metrics (ROA, prefix length)
The RIPEstat API can be used as a source, and results can be cached if a previous lookup fell within the same /24 (IPv4) or /48 (IPv6), since those are the longest prefix lengths.
https://stat.ripe.net/docs/data_api#NetworkInfo
example:
https://stat.ripe.net/data/network-info/data.json?resource=140.78.90.50
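A rough sketch of such a lookup with prefix-granular caching (a minimal sketch only: it assumes the response carries the `prefix` and `asns` fields shown in the example above; function names and the in-memory cache are illustrative):

```
import ipaddress
import json
import urllib.request

NETWORK_INFO = "https://stat.ripe.net/data/network-info/data.json?resource={}"
_cache = {}  # keyed by the /24 (IPv4) or /48 (IPv6) containing the queried address

def _cache_key(addr):
    ip = ipaddress.ip_address(addr)
    bits = 24 if ip.version == 4 else 48
    return ipaddress.ip_network("{}/{}".format(addr, bits), strict=False)

def announced_prefix(addr):
    """Return the BGP prefix (and origin ASNs) RIPEstat reports for addr."""
    key = _cache_key(addr)
    if key not in _cache:
        with urllib.request.urlopen(NETWORK_INFO.format(addr), timeout=30) as resp:
            data = json.load(resp)["data"]
        _cache[key] = {"prefix": data.get("prefix"), "asns": data.get("asns", [])}
    return _cache[key]

# Example from above: announced_prefix("140.78.90.50")
```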
related: #26585

#26585: improve AS number and name coverage (switch maxmind to RIPE Stat)
https://gitlab.torproject.org/legacy/trac/-/issues/26585
nusenu, 2024-01-18T14:51:34Z
Onionoo currently uses maxmind for IP to as_number and as_name resolution.
This is fast as it is a local DB lookup but it is less up-to-date and has less coverage than RIPEstat https://stat.ripe.net/
This is a problem for tools that depend on onionoo's as_name and as_number fields like Relay Search, OrNetStats and OrNetRadar
(Maybe there are also others that are affected?)
Currently it might take weeks or months before new ASes get added to maxmind, so this information
is also missing when people look up relays on Relay Search.
As of today onionoo is missing AS level data for about 100 relays,
but this value depends on how far we are away from the last maxmind update.
How about we use the RIPEstat API as a data source plus a local cache?
To minimize the number of online queries against the RIPEstat API, we can do the following
to create the IP-to-AS map initially (pseudocode):
```
if ip_prefix in cache
    use cached entry
else
    perform an online lookup (query RIPEstat API)
    add new prefix entry to cache
```
Expire cache entries after 15 days? (It makes sense to log how many entries changed
after 15 days so we know whether this value is too large or too small.)
This will significantly reduce the number of required online API calls.
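A minimal runnable version of the pseudocode above (a sketch, not existing Onionoo code: the 15-day TTL, the helper names, and the use of the network-info and as-overview calls listed further below are assumptions):

```
import ipaddress
import json
import time
import urllib.request

NETWORK_INFO = "https://stat.ripe.net/data/network-info/data.json?resource={}"
AS_OVERVIEW = "https://stat.ripe.net/data/as-overview/data.json?resource=AS{}"
TTL = 15 * 86400  # expire cache entries after 15 days; log churn to tune this

cache = {}  # ip_network -> {"asn": ..., "as_name": ..., "fetched": ...}

def _ripestat(url):
    with urllib.request.urlopen(url, timeout=30) as resp:
        return json.load(resp)["data"]

def _cached(ip):
    for prefix, entry in cache.items():
        if ip in prefix and time.time() - entry["fetched"] < TTL:
            return entry
    return None

def as_for_ip(addr):
    """If the address falls in a cached prefix, use the cached entry, else query RIPEstat."""
    ip = ipaddress.ip_address(addr)
    entry = _cached(ip)
    if entry is None:  # cache miss: one network-info and one as-overview query
        info = _ripestat(NETWORK_INFO.format(addr))
        asn = info["asns"][0] if info.get("asns") else None
        name = _ripestat(AS_OVERVIEW.format(asn))["holder"] if asn else None
        entry = {"asn": asn, "as_name": name, "fetched": time.time()}
        cache[ipaddress.ip_network(info["prefix"])] = entry
    return entry["asn"], entry["as_name"]
```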
To give you an idea of the scale (based on onionoo data from a random day in May 2018):
* total relay records: 8116
* unique IPv4 addresses: 7794
* unique IPv4 BGP prefixes: 3884
Each day about 50 new relays appear; let's assume the worst case (every new relay is in a prefix that is not yet cached).
The estimated daily number of queries would then be around
4000/15 + 50 = ~320 requests/day, i.e. roughly one request every 4.5 minutes,
which appears acceptable.
IP to prefixes (this can return multiple matches), as_number and as_name lookup:
https://stat.ripe.net/data/related-prefixes/data.json?resource=103.114.160.21
IP to prefix and as number (no as_name) lookup:
https://stat.ripe.net/data/network-info/data.json?resource=103.114.160.21
ASN to as_name lookup:
https://stat.ripe.net/data/as-overview/data.json?resource=AS40676
documentation:
https://stat.ripe.net/docs/data_api

#33350: Is sbws weighting some relays too high?
https://gitlab.torproject.org/legacy/trac/-/issues/33350
teor, 2022-02-07T19:22:53Z
Before we deploy sbws to the rest of the bandwidth authorities, we should check if it is weighting some relays (or some ASes) much higher than torflow.
We should also check for bugs in sbws that weight existing large ASes too high:
https://metrics.torproject.org/rs.html#aggregate/as

Milestone: sbws: 1.1.x-final

#33176: Check whether all of our growth stats we want are collected and accurate
https://gitlab.torproject.org/legacy/trac/-/issues/33176
Georg Koppen, 2021-07-05T16:09:12Z

We should start writing down which growth stats we are interested in and check whether we have them and, if so, whether they are accurate. For context: that's actually for network health work preparing some anomaly analysis we want to do.

#32864: Reproduce Arthur's exit failures and then contact or badexit the relays
https://gitlab.torproject.org/legacy/trac/-/issues/32864
Roger Dingledine, 2020-11-16T20:44:00Z

https://arthuredelstein.net/exits/
lists a pile of exit relays, including some very fast exit relays, that are failing all of their dns queries. That is, they claim to be exits but Tor clients probably rarely use them, yet clients still *try* to use them, contributing to the long tail of low-probability high-impact misery of being a Tor client.
We should verify that we agree with his scripts, and also make sure we are comfortable running the checks on our own.
Then we should contact the affected relays, and either get them to fix their dns, or figure out what the bug is, or failing all of that, set the badexit flag for them to save clients the trouble of trying them and failing.
Then once we've done a round of that, we should come up with a process by which we repeat it regularly.

Assignee: Georg Koppen

#33663: Check checktest.py related errors shown by our network-health scanners
https://gitlab.torproject.org/legacy/trac/-/issues/33663
Georg Koppen, 2020-11-13T13:45:10Z

I often see something like
```
2020-03-17 21:03:56,973 modules.checktest [ERROR] Check thinks <https://metrics.torproject.org/rs.html#details/296B2178FD742AB35AB20C9ADF04D5DFD3D407EB> isn't Tor. Desc addr is 206.55.74.0 and check addr is 206.55.74.0.
```
We should figure out (a) what's up with that and (b) whether we actually still need that test to be running.

#33018: Dir auths using an unsustainable 400+ mbit/s, need to diagnose and fix
https://gitlab.torproject.org/legacy/trac/-/issues/33018
Roger Dingledine, 2020-11-13T13:39:33Z
We've been having problems establishing a consensus lately. We realized that maatuska was rate limiting to only 10MBytes/s, and asked Linus to bump it up, so he did.
Then today we realized that moria1 was unable to serve dirport answers because it was maxed out at its BandwidthRate of 30MBytes. I raised that to 50MBytes and it stayed maxed out. I have put it back down to 30MBytes so my host doesn't get too upset.
This is not a sustainable situation. We need to figure out what is asking the dir auths for so many bytes, and get it to stop or slow down.
This is a ticket to collect info and to brainstorm ideas.

Milestone: Tor: 0.4.3.x-final
Assignee: David Goulet <dgoulet@torproject.org>

#33457: Twitter shows "Something went wrong." with a "Try again" button
https://gitlab.torproject.org/legacy/trac/-/issues/33457
Georg Koppen, 2020-06-16T01:11:23Z
For a while now it often happens that Twitter shows "Something went wrong" + an additional "Try again" button. That button does not really work as it seems Tor gets blocked by Twitter now.
If one requests a new circuit long enough, one usually gets through the block.

#19119: Repurpose block-malicious-sites-checkbox on TLS error page in Tor Browser
https://gitlab.torproject.org/legacy/trac/-/issues/19119
Georg Koppen, 2020-06-15T23:35:18Z

Right now the checkbox on the neterror page sends a report about a TLS error to Mozilla (containing host, port, timestamp, useragent, update channel, buildid, certificate chain and version of that feature). We might want to repurpose that checkbox as, first, I see no reason why Mozilla should gather data related to a Tor Browser user. Second, this message is highly confusing in our context. Say, an exit node is MITMing a user. Why should the user report that to Mozilla in order to identify and block malicious sites? What is Mozilla supposed to do with that information?
We could think about having our own infrastructure for this that might help detect bad relays.

#32545: Perform measurements to concretely understand snowflake throughput and network health
https://gitlab.torproject.org/legacy/trac/-/issues/32545
Cecylia Bocovich, 2020-06-13T18:21:16Z
We know that there are several proxies that don't seem to work once connected (#31960) and that connections are very slow on Windows and possibly all platforms (#31971).
It would help to be able to quantify this and actively monitor it. There are two things we want to measure: the number of snowflakes that work at all, and the throughput of a sample of snowflakes.
Perhaps something like onionperf can help us out here, but we'll have to see whether onionperf works well with snowflake when we get bad proxies or disconnects.

Assignee: Cecylia Bocovich

#33178: Figure out specific baselines we are interested in from a network health perspective
https://gitlab.torproject.org/legacy/trac/-/issues/33178
Georg Koppen, 2020-06-13T18:10:34Z
In #33176 we checked what metrics/growth stats we currently have, which ones we need, and whether all of them are collected properly.
In this ticket we should figure out specific baselines for our favorite stats. meejah came up with some things that were worth collecting/investigating:
* expected failure rate for circuits
* what % of exits are not expected to establish circuits
There might be more. This is likely a parent ticket and we should file child ones for more specific items.

#33175: Build a roadmap/brainstorm all the future things we might automate measuring
https://gitlab.torproject.org/legacy/trac/-/issues/33175
Georg Koppen, 2020-06-13T18:10:33Z

We should have some document, probably in our network health repository, showing our plans to automate measurements for relay health.

#26124: Bring back Tor Weather
https://gitlab.torproject.org/legacy/trac/-/issues/26124
nusenu, 2020-06-13T18:10:20Z
TL;DR: I believe Tor Weather is the most efficient way to achieve and maintain a healthy Tor network in the long run.
This is an item on the metrics team road map ("Q4 2018 or later") but maybe the new relay advocate (Colin) can help with this?
Tor Weather was discontinued on 2016-04-04;
see Karsten's email for the reasoning behind it:
https://lists.torproject.org/pipermail/tor-relays/2016-April/009009.html
but as he says "Tor Weather is still a good idea, it just needs somebody to implement it."
What Tor Weather looked like:
https://web.archive.org/web/20141004055709/https://weather.torproject.org/subscribe/
**Motivation**
If a relay disappears today, it is unlikely that anyone will notice or even send an email to the operator unless it is a big one.
Relay operators and the entire tor network would benefit from a Tor Weather service because it notifies relay operators when the state of their relays changed (and more). This will increase the likelihood that relay operators notice problems and actually mitigate them; otherwise there is no "user feedback", since tor can cope with disappearing relays quite well.
It also
* shows the relay operator that someone actually cares if their relays go down or become outdated or have another problem
* gives the operator relay best-practices information.
**Expected Effects**
If enough operators subscribe to such a service:
* relays might become more long lived / the churn rate might decrease
* the fraction of relays running outdated tor versions might decrease
* the fraction of exits with broken DNS might decrease
It also has the benefit of being able to contact relay operators
* completely automatically
* even if they choose to not set a public ContactInfo string in their torrc files.
**Ideas for Notification Types**
(sorted by importance)
Support subscribing via single relay FP or MyFamily groups (should not need any subscription change if a relay gets added to the family).
[ ] Email me when my node is down (see the polling sketch after this list)
How long before we send a notification? ________
[ ] email me when my relay is affected by a security vulnerability
[ ] email me when my relay runs an end-of-life version of tor
[ ] email me when my relay runs an outdated tor version (note: this should depend on the related onionoo bugs to avoid emailing alpha relay people)
[ ] email me when my exit relay fails to resolve hostnames (DNS failure)
[ ] email me when my relay loses the [ ] stable, [ ] guard, [ ] exit flag
[ ] email me when my MyFamily configuration is broken (meaning: non-mutual config detected or relay with same contactInfo but no MyFamily)
[ ] email me when you detect issues with my relay
[ ] email me with suggestions for configuration improvements for my relay (only once per improvement)
[ ] email me when my relay is on the top [ ] 20 [ ] 50 [ ] 100 relays list
[ ] email me with monthly/quarterly status information, including my position in the overall relay list (sorted by consensus weight), how much traffic my relay did during the last month, and what fraction of the month's consensuses listed my relay as running
[ ] aggregate emails for all my relays into a single digest email
[ ] email me about new relay requirements
[ ] email me about tor relay operator events
* Write a specification describing the meaning of each checkbox
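For the first checkbox above ("Email me when my node is down"), a minimal polling approach against Onionoo could look like the sketch below; the one-hour threshold, the polling interval, and the notify() hook are placeholders, not a design decision:

```
import json
import time
import urllib.request

ONIONOO = "https://onionoo.torproject.org/details?lookup={}&fields=running"
DOWN_AFTER = 3600  # "How long before we send a notification?" -- placeholder: one hour

def is_running(fingerprint):
    with urllib.request.urlopen(ONIONOO.format(fingerprint), timeout=60) as resp:
        relays = json.load(resp)["relays"]
    return bool(relays) and relays[0].get("running", False)

def watch(fingerprint, notify):
    down_since, notified = None, False
    while True:
        if is_running(fingerprint):
            down_since, notified = None, False
        else:
            down_since = down_since or time.time()
            if not notified and time.time() - down_since >= DOWN_AFTER:
                notify("relay {} has been down for over an hour".format(fingerprint))
                notified = True
        time.sleep(600)  # Onionoo data is updated roughly hourly; poll gently
```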
**Security and Privacy Implications**
The service stores email addresses of potential tor relay operators; they should be kept private and safeguarded, but a passive observer can collect them by watching outbound email traffic if no TLS is used. Suggestion: use a dedicated email address for this service.
**Additional Ideas**
* easy: integration into tor: show the URL pointing to the new Tor Weather service like the current link to the lifecycle blogpost when tor starts and detects to be a new relay
* Provide an uptimerobot-style status page for relay operators using onionoo data

#23509: Implement family-level pages showing aggregated graphs
https://gitlab.torproject.org/legacy/trac/-/issues/23509
cypherpunks, 2020-06-13T18:07:30Z
Currently Atlas is about single relays; many operators run more than a single relay and would like to see the aggregated data of all their relays on a single page, with graphs showing all relays in a stacked way.
This allows them to see how they are doing across all their relays.
example graphs:
https://nos-oignons.net/Services/index.en.html
To avoid discussing how to identify MyFamilies, let's just use the onionoo lookup:
https://atlas.torproject.org/#search/family:<fingerprint of an arbitrary relay>
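A sketch of the aggregation such a page would need on the backend (assuming Onionoo's `family` parameter and the field names below; generating the actual stacked graphs is left out):

```
import json
import urllib.request

ONIONOO = ("https://onionoo.torproject.org/details?family={}"
           "&fields=nickname,fingerprint,advertised_bandwidth,consensus_weight_fraction")

def family_summary(member_fingerprint):
    """Aggregate basic stats over all relays Onionoo places in the same family."""
    with urllib.request.urlopen(ONIONOO.format(member_fingerprint), timeout=60) as resp:
        relays = json.load(resp)["relays"]
    return {
        "relays": len(relays),
        "advertised_bandwidth": sum(r.get("advertised_bandwidth", 0) for r in relays),
        "consensus_weight_fraction": sum(r.get("consensus_weight_fraction", 0.0) or 0.0
                                         for r in relays),
    }
```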
This would also create an incentive for properly configuring MyFamily, since an incorrectly set MyFamily would result in incomplete aggregated pages.
Relay operators find this useful (counted ~11):
https://lists.torproject.org/pipermail/tor-relays/2017-September/012942.html
https://twitter.com/nusenu_/status/907366138149044224

#27235: add route_origin_rpki_validity field
https://gitlab.torproject.org/legacy/trac/-/issues/27235
nusenu, 2020-06-13T18:02:48Z

motivation:
* bring routing security awareness and indicators to relay operators
* increase routing security by encouraging relay operators to ask their ISPs for properly configured prefixes
context:
https://medium.com/@nusenu/how-vulnerable-is-the-tor-network-to-bgp-hijacking-attacks-56d3b2ebfd92
this field should contain the following information:
* RPKI ROA validity state for IPv4 and IPv6 (enum: NotFound, Invalid, Valid)
* invalid reason for IPv4 and IPv6 (enum: 'as', 'length')
validator software by RIPE (alternatively you can use RIPEstat, but running it yourself is likely a lot faster)
https://www.ripe.net/manage-ips-and-asns/resource-management/certification/tools-and-resources
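A sketch of how the field could be populated from RIPEstat's rpki-validation data call (the data call name and the exact status strings are assumptions to be checked against the RIPEstat data API documentation):

```
import json
import urllib.parse
import urllib.request

RPKI = "https://stat.ripe.net/data/rpki-validation/data.json?resource=AS{}&prefix={}"

def route_origin_rpki_validity(origin_asn, prefix):
    """Map a RIPEstat RPKI validation result onto the proposed field values."""
    url = RPKI.format(origin_asn, urllib.parse.quote(prefix, safe=""))
    with urllib.request.urlopen(url, timeout=30) as resp:
        status = json.load(resp)["data"]["status"]
    mapping = {
        "valid": ("Valid", None),        # covering ROA found and matching
        "unknown": ("NotFound", None),   # no covering ROA
        "invalid_asn": ("Invalid", "as"),
        "invalid_length": ("Invalid", "length"),
    }
    return mapping.get(status, ("NotFound", None))

# e.g. route_origin_rpki_validity(1205, "140.78.0.0/16")
```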
depends on: #27155

#33010: Monitor cloudflare captcha rate: do a periodic onionperf-like query to a cloudflare-hosted static site
https://gitlab.torproject.org/legacy/trac/-/issues/33010
Roger Dingledine, 2020-06-13T17:56:14Z
We should track the rate that cloudflare gives captchas to Tor users over time.
My suggested way of doing that tracking is to sign up a very simple static webpage to be fronted by cloudflare, and then fetch it via Tor over time, and record and graph the rates of getting a captcha vs getting the real page.
The reason for the "simple static page" is to make it really easy to distinguish whether we're getting hit with a captcha. The "distinguishing one dynamic web page from another" challenge makes exitmap tricky in the general case, but we can remove that variable here.
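A minimal probe along those lines (it assumes a local tor SocksPort on 9050 and the `requests` library with SOCKS support; the target URL, the sentinel string, and the captcha heuristic are placeholders, and it does not follow alt-svc):

```
import requests  # pip install requests[socks]

TARGET = "https://static-test-page.example/"      # placeholder for the cloudflare-fronted page
SENTINEL = "hello from the tor captcha test page"  # known string on the real page
PROXIES = {"http": "socks5h://127.0.0.1:9050", "https": "socks5h://127.0.0.1:9050"}

def probe():
    """One fetch through tor; classify the answer as real page, captcha/block, or other."""
    r = requests.get(TARGET, proxies=PROXIES, timeout=60)
    if SENTINEL in r.text:
        return "real-page"
    if r.status_code in (403, 503) or "captcha" in r.text.lower():
        return "captcha-or-block"
    return "other"

# Run periodically (e.g. from cron), log the result with a timestamp, and graph the rates.
```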
One catch is that Cloudflare currently gives alt-svc headers in response to fetches from Tor addresses. So that means we need a web client that can follow alt-svc headers -- maybe we need a full Selenium-like client?
Once we get the infrastructure set up, we would be smart to run a second one which is just wget or curl or lynx or something, i.e. which doesn't behave like Tor Browser, in order to be able to track the difference between how Cloudflare responds to Tor Browser vs other browsers.
I imagine that Cloudflare should be internally tracking how they're handling Tor requests, but having a public tracker (a) gives the data to everybody, and (b) helps Cloudflare have a second opinion in case their internal data diverges from the public version.
The Berkeley ICSI group did research that included this sort of check:
https://www.freehaven.net/anonbib/#differential-ndss2016
https://www.freehaven.net/anonbib/#exit-blocking2017
but what I have in mind here is essentially a simpler subset of this research, skipping the complicated part of "how do you tell what kind of response you got" and with an emphasis on automation and consistency.
There are two interesting metrics to track over time: one is the fraction of exit relays that are getting hit with captchas, and the other is the chance that a Tor client, choosing an exit relay in the normal weighted fashion, will get hit by a captcha.
Then there are other interesting patterns to look for, e.g. "are certain IP addresses punished consistently and others never punished, or is whether you get a captcha much more probabilistic and transient?" And does that pattern change over time?

#29343: Run arthur's DNS timeout scanner, archive it in CollecTor, and add it to Onionoo
https://gitlab.torproject.org/legacy/trac/-/issues/29343
Karsten Loesing, 2020-06-13T17:56:11Z

We put the following item on the last roadmap we made in Mexico City:
Run arthur's DNS timeout scanner, archive it in CollecTor, and add it to Onionoo.
However, our plans have changed and we dropped it from the new roadmap we made in Brussels. Creating this ticket to remember the idea even though we're currently not working on it.

#33754: I discovered a Tor node using TCP port 9999 (service: "distinct" / "abyss"). Is this normal?
https://gitlab.torproject.org/legacy/trac/-/issues/33754
Trac, 2020-06-13T17:54:39Z

I recently ran Netstat, a program within Network Utility (I did this while using Tor Browser). When the option "Display the state of all current socket connections" was selected, two Tor IP addresses appeared in the "Foreign address" column. One IP address is the Tor entry node I am currently using, and the other IP address is an IP address using TCP port 9999 -- the "distinct" service, also known as "abyss". What I'm wondering is, is it normal for a Tor node to use TCP port 9999 ("distinct") while using Tor? Is this a sign of malicious activity?
I did a terminal command to see if I could find out more information about the IP address using port 9999, and it said it is using tor.real.
In addition, the mysterious Tor IP address appears to be using the following ports on my computer: 22, 80, 110, 143, 443, 993, 995, 9998 and 9999.
The reason I know the mysterious IP address is a Tor IP address is because Terminal told me it is using tor.real, and Tor Exonerator confirmed that it is a Tor IP address.
**Trac**:
**Username**: Tor235

#33758: Fix exitmap related bad relay tests
https://gitlab.torproject.org/legacy/trac/-/issues/33758
Georg Koppen, 2020-06-13T17:54:16Z

This ticket is a placeholder for going over other exitmap related tests (for `checktest.py` see #33663) and documenting them while we are at it.

Assignee: Georg Koppen

#33696: Integrate badexiting into the badconf-entry.py script
https://gitlab.torproject.org/legacy/trac/-/issues/33696
Georg Koppen, 2020-06-13T17:54:15Z

We'll probably use the Badexit flag more from now on (see: #32864), so it might make sense to add a respective option to our `badconf-entry.py` script.

Assignee: Georg Koppen