The Tor Project issues
https://gitlab.torproject.org/groups/tpo/-/issues
2020-06-27T14:24:13Z

Issue #19260: unit tests do not compile against DescripTor version > 1.0.0
https://gitlab.torproject.org/tpo/network-health/metrics/onionoo/-/issues/19260
2020-06-27T14:24:13Z, reported by iwakeh

The test classes `DummyBridgeStatus` and `DummyStatusEntry` only comply with DescripTor release 1.0.0 and fail to implement some methods from 1.1.0 up.

Issue #19259: uncaught NFE and other bugs in weight status document processing
https://gitlab.torproject.org/tpo/network-health/metrics/onionoo/-/issues/19259
2020-06-27T14:24:13Z, reported by iwakeh

While waiting for OOM in legacy/trac#19249 I encountered this:
```
Exception in thread "main" java.lang.NumberFormatException: For input string: "0,000086161974"
at sun.misc.FloatingDecimal.readJavaFormatString(FloatingDecimal.java:1250)
at java.lang.Double.parseDouble(Double.java:540)
at org.torproject.onionoo.docs.WeightsStatus.setFromDocumentString(WeightsStatus.java:71)
at org.torproject.onionoo.docs.DocumentStore.retrieveParsedStatusFile(DocumentStore.java:500)
at org.torproject.onionoo.docs.DocumentStore.retrieveDocumentFile(DocumentStore.java:484)
at org.torproject.onionoo.docs.DocumentStore.retrieve(DocumentStore.java:363)
at org.torproject.onionoo.updater.WeightsStatusUpdater.updateWeightsHistory(WeightsStatusUpdater.java:64)
at org.torproject.onionoo.updater.WeightsStatusUpdater.processRelayNetworkConsensus(WeightsStatusUpdater.java:54)
at org.torproject.onionoo.updater.WeightsStatusUpdater.processDescriptor(WeightsStatusUpdater.java:39)
at org.torproject.onionoo.updater.DescriptorSource.readArchivedDescriptors(DescriptorSource.java:197)
at org.torproject.onionoo.updater.DescriptorSource.readDescriptors(DescriptorSource.java:83)
at org.torproject.onionoo.cron.Main.updateStatuses(Main.java:188)
at org.torproject.onionoo.cron.Main.run(Main.java:122)
at org.torproject.onionoo.cron.Main.runOrScheduleExecutions(Main.java:96)
at org.torproject.onionoo.cron.Main.main(Main.java:32)
```
The code is [here](https://gitweb.torproject.org/onionoo.git/tree/src/main/java/org/torproject/onionoo/docs/WeightsStatus.java#n70):
```
double[] weights = new double[] { -1.0,
Double.parseDouble(parts[5]),
Double.parseDouble(parts[6]),
Double.parseDouble(parts[7]),
Double.parseDouble(parts[8]), -1.0, -1.0 };
```
Might be due to the locale of the server, but should be caught and logged.
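For illustration: `Double.parseDouble()` is locale-independent and only accepts `.` as decimal separator, so the comma in "0,000086161974" most likely came from a locale-dependent formatter when the status file was written. A minimal sketch reproducing the mismatch (the value is the one from the exception; everything else is illustrative):

```java
import java.util.Locale;

public class LocaleParseDemo {
  public static void main(String[] args) {
    double w = 0.000086161974;
    // A locale with ',' as decimal separator produces the comma form
    // seen in the exception message; Locale.ROOT always uses '.'.
    String localized = String.format(Locale.GERMANY, "%.12f", w);
    String portable = String.format(Locale.ROOT, "%.12f", w);
    System.out.println(localized); // 0,000086161974
    System.out.println(portable);  // 0.000086161974

    // Double.parseDouble() always expects '.', so parsing the
    // localized form fails exactly as in the stack trace:
    try {
      Double.parseDouble(localized);
    } catch (NumberFormatException e) {
      System.out.println("caught: " + e);
    }
  }
}
```

Writing status files with an explicit locale (e.g. `Locale.ROOT`) would avoid the problem regardless of the server's default locale.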
Maybe use this ticket to scan the code for similar things?
Milestone: Onionoo 3.1-1.0.0. Assignee: iwakeh

Issue #19253: replace submodule with released dependency of metrics-lib
https://gitlab.torproject.org/tpo/network-health/metrics/onionoo/-/issues/19253
2020-06-27T14:24:13Z, reported by iwakeh

I just noticed that Onionoo is not yet using the released descriptor.jar and still has the git submodule.

Issue #19249: Onionoo server runs out of memory when importing a full month of data
https://gitlab.torproject.org/tpo/network-health/metrics/onionoo/-/issues/19249
2020-06-27T14:24:13Z, reported by Karsten Loesing

I had to re-import all of May on the Onionoo mirror because it was offline for more than three days. Now it's running out of memory in the shut-down process. Logs and exception below:
```
2016-06-01 09:30:33,944 INFO o.t.o.cron.Main:92 Going to run one-time updater ...
2016-06-01 09:30:34,002 INFO o.t.o.cron.Main:130 Initializing.
2016-06-01 09:30:34,005 INFO o.t.o.cron.Main:133 Acquired lock
2016-06-01 09:30:34,005 DEBUG o.t.o.cron.Main:152 Started update ...
2016-06-01 09:30:34,007 INFO o.t.o.cron.Main:155 Initialized descriptor source
2016-06-01 09:30:34,012 INFO o.t.o.cron.Main:159 Initialized document store
2016-06-01 09:30:34,029 INFO o.t.o.cron.Main:163 Initialized status update runner
2016-06-01 09:30:34,040 INFO o.t.o.cron.Main:168 Initialized document writer runner
2016-06-01 09:30:34,041 INFO o.t.o.cron.Main:176 Downloading descriptors.
2016-06-01 09:30:34,041 INFO o.t.o.u.DescriptorSource:64 Loading: RELAY_CONSENSUSES
2016-06-01 09:33:02,861 INFO o.t.o.u.DescriptorSource:64 Loading: RELAY_SERVER_DESCRIPTORS
2016-06-01 09:35:39,639 INFO o.t.o.u.DescriptorSource:64 Loading: RELAY_EXTRA_INFOS
2016-06-01 09:38:10,562 INFO o.t.o.u.DescriptorSource:64 Loading: EXIT_LISTS
2016-06-01 09:38:51,159 INFO o.t.o.u.DescriptorSource:64 Loading: BRIDGE_STATUSES
2016-06-01 09:40:29,716 INFO o.t.o.u.DescriptorSource:64 Loading: BRIDGE_SERVER_DESCRIPTORS
2016-06-01 09:41:58,737 INFO o.t.o.u.DescriptorSource:64 Loading: BRIDGE_EXTRA_INFOS
2016-06-01 09:43:44,958 INFO o.t.o.cron.Main:184 Reading descriptors.
2016-06-01 09:43:44,959 INFO o.t.o.u.DescriptorSource:153 Reading archived descriptors...
2016-06-02 02:51:09,249 INFO o.t.o.u.DescriptorSource:200 Read archived descriptors
2016-06-02 02:51:09,249 DEBUG o.t.o.u.DescriptorSource:84 Reading recent RELAY_SERVER_DESCRIPTORS ...
2016-06-02 02:53:01,224 INFO o.t.o.u.DescriptorSource:129 Read recent relay server descriptors
2016-06-02 02:53:01,224 DEBUG o.t.o.u.DescriptorSource:88 Reading recent RELAY_EXTRA_INFOS ...
2016-06-02 03:31:50,889 INFO o.t.o.u.DescriptorSource:132 Read recent relay extra-info descriptors
2016-06-02 03:31:50,890 DEBUG o.t.o.u.DescriptorSource:91 Reading recent EXIT_LISTS ...
2016-06-02 03:32:13,478 INFO o.t.o.u.DescriptorSource:135 Read recent exit lists
2016-06-02 03:32:13,479 DEBUG o.t.o.u.DescriptorSource:94 Reading recent RELAY_CONSENSUSES ...
2016-06-02 08:29:52,761 INFO o.t.o.u.DescriptorSource:126 Read recent relay network consensuses
2016-06-02 08:29:52,765 DEBUG o.t.o.u.DescriptorSource:97 Reading recent BRIDGE_SERVER_DESCRIPTORS ...
2016-06-02 08:31:32,294 INFO o.t.o.u.DescriptorSource:141 Read recent bridge server descriptors
2016-06-02 08:31:32,295 DEBUG o.t.o.u.DescriptorSource:101 Reading recent BRIDGE_EXTRA_INFOS ...
2016-06-02 09:22:29,247 INFO o.t.o.u.DescriptorSource:144 Read recent bridge extra-info descriptors
2016-06-02 09:22:29,247 DEBUG o.t.o.u.DescriptorSource:104 Reading recent BRIDGE_STATUSES ...
2016-06-02 09:23:44,681 INFO o.t.o.u.DescriptorSource:138 Read recent bridge network statuses
2016-06-02 09:23:44,682 INFO o.t.o.cron.Main:186 Updating internal status files.
2016-06-02 09:23:44,682 DEBUG o.t.o.u.StatusUpdateRunner:36 Begin update of NodeDetailsStatusUpdater
2016-06-02 09:25:02,021 INFO o.t.o.u.NodeDetailsStatusUpdater:379 Read node statuses
2016-06-02 09:25:12,500 INFO o.t.o.u.NodeDetailsStatusUpdater:381 Started reverse domain name lookups
2016-06-02 09:31:29,006 INFO o.t.o.u.NodeDetailsStatusUpdater:383 Looked up cities and ASes
2016-06-02 09:31:29,112 INFO o.t.o.u.NodeDetailsStatusUpdater:385 Calculated path selection probabilities
2016-06-02 09:31:29,224 INFO o.t.o.u.NodeDetailsStatusUpdater:387 Computed effective and extended families
2016-06-02 09:31:29,252 INFO o.t.o.u.NodeDetailsStatusUpdater:389 Finished reverse domain name lookups
2016-06-02 09:34:37,918 INFO o.t.o.u.NodeDetailsStatusUpdater:391 Updated node and details statuses
2016-06-02 09:34:37,918 INFO o.t.o.u.StatusUpdateRunner:38 NodeDetailsStatusUpdater updated status files
2016-06-02 09:34:37,918 DEBUG o.t.o.u.StatusUpdateRunner:36 Begin update of BandwidthStatusUpdater
2016-06-02 09:34:37,918 INFO o.t.o.u.StatusUpdateRunner:38 BandwidthStatusUpdater updated status files
2016-06-02 09:34:37,918 DEBUG o.t.o.u.StatusUpdateRunner:36 Begin update of WeightsStatusUpdater
2016-06-02 09:34:37,918 INFO o.t.o.u.StatusUpdateRunner:38 WeightsStatusUpdater updated status files
2016-06-02 09:34:37,919 DEBUG o.t.o.u.StatusUpdateRunner:36 Begin update of ClientsStatusUpdater
2016-06-02 09:47:57,798 INFO o.t.o.u.StatusUpdateRunner:38 ClientsStatusUpdater updated status files
2016-06-02 09:47:57,799 DEBUG o.t.o.u.StatusUpdateRunner:36 Begin update of UptimeStatusUpdater
2016-06-02 10:28:41,049 INFO o.t.o.u.StatusUpdateRunner:38 UptimeStatusUpdater updated status files
2016-06-02 10:28:41,049 INFO o.t.o.cron.Main:194 Updating document files.
2016-06-02 10:28:41,049 DEBUG o.t.o.w.DocumentWriterRunner:28 Writing SummaryDocumentWriter
2016-06-02 10:29:03,109 INFO o.t.o.w.SummaryDocumentWriter:97 Wrote summary document files
2016-06-02 10:29:03,110 DEBUG o.t.o.w.DocumentWriterRunner:28 Writing DetailsDocumentWriter
2016-06-02 10:33:58,870 INFO o.t.o.w.DetailsDocumentWriter:46 Wrote details document files
2016-06-02 10:33:58,870 DEBUG o.t.o.w.DocumentWriterRunner:28 Writing BandwidthDocumentWriter
2016-06-02 11:45:04,633 INFO o.t.o.w.BandwidthDocumentWriter:54 Wrote bandwidth document files
2016-06-02 11:45:04,634 DEBUG o.t.o.w.DocumentWriterRunner:28 Writing WeightsDocumentWriter
2016-06-02 12:19:42,480 INFO o.t.o.w.WeightsDocumentWriter:55 Wrote weights document files
2016-06-02 12:19:42,481 DEBUG o.t.o.w.DocumentWriterRunner:28 Writing ClientsDocumentWriter
2016-06-02 12:23:22,577 INFO o.t.o.w.ClientsDocumentWriter:84 Wrote clients document files
2016-06-02 12:23:22,577 DEBUG o.t.o.w.DocumentWriterRunner:28 Writing UptimeDocumentWriter
2016-06-02 12:39:58,477 INFO o.t.o.w.UptimeDocumentWriter:57 Wrote uptime document files
2016-06-02 12:39:58,477 INFO o.t.o.cron.Main:199 Shutting down.
2016-06-02 12:39:58,477 DEBUG o.t.o.u.DescriptorSource:204 Writing parse histories for recent descriptors...
2016-06-02 12:39:58,492 INFO o.t.o.cron.Main:202 Wrote parse histories
karsten@onionoo:/srv/onionoo.thecthulhu.com/onionoo$ java -DLOGBASE=/srv/onionoo.thecthulhu.com/onionoo/log-cron/ -Xmx4g -jar dist/onionoo-3.1.0.jar --single-run && java -DLOGBASE=/srv/onionoo.thecthulhu.com/onionoo/log-cron/ -Xmx4g -jar dist/onionoo-3.1.0.jar
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at java.util.Arrays.copyOf(Arrays.java:2367)
at java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:130)
at java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:114)
at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:415)
at java.lang.StringBuilder.append(StringBuilder.java:132)
at org.torproject.onionoo.docs.DocumentStore.writeNodeStatuses(DocumentStore.java:710)
at org.torproject.onionoo.docs.DocumentStore.flushDocumentCache(DocumentStore.java:669)
at org.torproject.onionoo.cron.Main.shutDown(Main.java:205)
at org.torproject.onionoo.cron.Main.run(Main.java:121)
at org.torproject.onionoo.cron.Main.runOrScheduleExecutions(Main.java:93)
at org.torproject.onionoo.cron.Main.main(Main.java:32)
karsten@onionoo:/srv/onionoo.thecthulhu.com/onionoo$ ls -lh in/archive/
total 6.9G
-rw-r--r-- 1 karsten karsten 2.9G May 31 23:40 bridge-descriptors-2016-05.tar
-rw-r--r-- 1 karsten karsten 1.1G May 30 23:19 consensuses-2016-05.tar
-rw-r--r-- 1 karsten karsten 1.5G May 31 15:57 extra-infos-2016-05.tar
-rw-r--r-- 1 karsten karsten 1.5G May 31 12:41 server-descriptors-2016-05.tar
```
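The stack trace shows the OOM inside `StringBuilder.append` while `DocumentStore.writeNodeStatuses` builds the whole file in memory before flushing. A hedged sketch of one possible mitigation, not the project's actual fix (method and class names hypothetical): stream each status line to disk as it is produced, keeping memory usage flat.

```java
import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class StreamingWriteSketch {
  // Hypothetical alternative to concatenating every node status into
  // one giant StringBuilder: write each line as it is produced.
  static void writeNodeStatuses(Path out, Iterable<String> statusLines)
      throws IOException {
    try (BufferedWriter w = Files.newBufferedWriter(out)) {
      for (String line : statusLines) {
        w.write(line);
        w.newLine();
      }
    }
  }

  public static void main(String[] args) throws IOException {
    Path tmp = Files.createTempFile("status", ".txt");
    writeNodeStatuses(tmp, List.of("relay1 ...", "relay2 ..."));
    System.out.println(Files.readAllLines(tmp).size()); // 2
  }
}
```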
I didn't really start investigating. Note that it takes over 24 hours to do the processing, so we cannot reproduce this bug as easily.

Issue #19154: Onionoo stopped to resolve ASN and country code for new relays
https://gitlab.torproject.org/tpo/network-health/metrics/onionoo/-/issues/19154
2020-06-27T14:24:13Z, reported by twim

Since sometime between the 17th and 18th of May, Onionoo includes no `"country"` (as well as other `"country"`-related) fields and no `"as_number"` field for new relays. Most probably it's all because `maxmind.com` installed MitM-as-a-Service (the OneMoreStep one) and the Onionoo backend cannot update the appropriate BGP databases from there [1][2].
Less probably, it's affected by this commit [3] and related to legacy/trac#18989.
[1] https://geolite.maxmind.com/download/geoip/database/GeoLite2-City-CSV.zip
[2] https://www.maxmind.com/download/geoip/database/asnum/GeoIPASNum2.zip
[3] https://gitweb.torproject.org/onionoo.git/commit/?id=d9c137c426487716213e6d67936fae485056a24e
Assignee: Karsten Loesing

Issue #19118: Add organization name to each relay
https://gitlab.torproject.org/tpo/network-health/metrics/onionoo/-/issues/19118
2020-06-27T14:24:14Z, reported by virgil

Roster quantifies things like "organization diversity" as something more stringent than mere AS diversity, i.e., AS diversity is necessary but not sufficient for organization diversity.
To do this, we are leveraging data from CAIDA.org. Particularly this data set:
* http://data.caida.org/datasets/as-organizations/
Here's my Python code for downloading the most recent AS-organizations data:
* http://dl.dropbox.com/u/3308162/download_latest_as2orgname_data.py
This Python script generates a JSON file. Here's the one for April 2016:
http://dl.dropbox.com/u/3308162/as2orgs.json.gz
This JSON file contains the organization name for each AS number. For example, here's the entry for AS number 44925, on which torproject.org is hosted:
`"44925": {"aut_name": "THE-1984-AS","changed": "","org_name": "1984 ehf","source": "RIPE"}`

The next step is to have this data inserted in Onionoo. Particularly, every Onionoo relay entry should include the 'aut_name' and 'org_name' as given in the as2orgs.json file.
Now when Onionoo clients request information on a relay, they will receive the "aut_name" and "org_name" of the organization its AS belongs to.
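Assuming as2orgs.json has already been parsed into a map keyed by AS number, annotating a relay is a single lookup. A hypothetical sketch (class and record names are illustrative, not Onionoo code), hard-coding the example entry shown above:

```java
import java.util.Map;

public class AsOrgLookup {
  // Fields mirror the as2orgs.json entry shown in the issue.
  record OrgInfo(String autName, String orgName, String source) {}

  public static void main(String[] args) {
    // In the real flow, this map would be loaded from as2orgs.json.
    Map<String, OrgInfo> as2orgs = Map.of(
        "44925", new OrgInfo("THE-1984-AS", "1984 ehf", "RIPE"));

    // Annotating a relay whose AS number is 44925:
    OrgInfo info = as2orgs.get("44925");
    System.out.println(info.autName() + " / " + info.orgName());
  }
}
```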
That's it. Just run `download_latest_as2orgname_data.py` once a month, weave the relevant JSON entries into the Onionoo data, and you're gold.
Assignee: Karsten Loesing

Issue #19027: Adding organization name for each relay
https://gitlab.torproject.org/tpo/network-health/metrics/onionoo/-/issues/19027
2020-06-27T14:24:14Z, reported by virgil

Roster quantifies things like "organization diversity" as something more sophisticated than mere AS diversity.
Particularly, AS-diversity is necessary but not sufficient for organization diversity.
To do this, we are leveraging data from CAIDA.org. Particularly this data set: http://data.caida.org/datasets/as-organizations/
Here's my Python code for downloading the most recent AS-organizations data:
* http://dl.dropbox.com/u/3308162/download_latest_as2orgname_data.py
This Python script generates a JSON file. Here's the one for April 2016:
* http://dl.dropbox.com/u/3308162/as2orgs.json.gz
For example, here's the entry for the AS number 44925 on which torproject.org is hosted:
`"44925": {"aut_name": "THE-1984-AS","changed": "","org_name": "1984 ehf","source": "RIPE"}`

The next step is to have this data included in Onionoo and refreshed every month. Particularly, every Onionoo entry for a relay should include the 'aut_name' and 'org_name' as given in the as2orgs.json file.
That's it. Just run this once a month, read the JSON into Onionoo, and include the relevant data.
Assignee: virgil

Issue #18967: Add UUID to families in Onionoo
https://gitlab.torproject.org/tpo/network-health/metrics/onionoo/-/issues/18967
2020-06-27T14:24:14Z, reported by Trac

This is an enhancement of the implementation proposed in legacy/trac#16599.
Some services that depend on Onionoo require persistence in family data. For instance, some of the proposed features of Roster, such as replacing Tor Weather, require knowing when a certain relay goes down. Analogous to Tor spec proposal legacy/trac#242, Onionoo should implement such a scheme.
Put simply, the implementation would be as follows:
* Each family will have some UUID, which would be tagged on all member relays (like an extra fingerprint).
* A new relay will be tagged by its family's UUID by looking up the ID of older relatives.
* There are two schemes for storing key-value based data. One is for looking up families via UUID, the other for looking up a UUID via relay fingerprint.
Unlike legacy/trac#16599, this implementation does not require any a priori information about the family. The UUIDs are guaranteed to be unique. Currently, Roster has a half-baked implementation of the above. Despite the simplicity of the implementation, the benefits are potentially great, as querying for and storing persistent data of families would become possible.
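The two key-value schemes described above (family members by UUID, and UUID by relay fingerprint) could be sketched as follows. This is a hypothetical illustration of the proposal, not Onionoo code; all names are invented:

```java
import java.util.*;

public class FamilyRegistry {
  // Scheme 1: look up a family's members via its UUID.
  private final Map<UUID, Set<String>> membersByFamily = new HashMap<>();
  // Scheme 2: look up a family's UUID via a relay fingerprint.
  private final Map<String, UUID> familyByFingerprint = new HashMap<>();

  /** Tag a relay with its family's UUID by looking up the ID of older
   *  relatives; mint a new UUID if none of them is known yet. */
  public UUID register(String fingerprint, Collection<String> relatives) {
    UUID family = relatives.stream()
        .map(familyByFingerprint::get)
        .filter(Objects::nonNull)
        .findFirst()
        .orElseGet(UUID::randomUUID);
    familyByFingerprint.put(fingerprint, family);
    membersByFamily.computeIfAbsent(family, k -> new HashSet<>())
        .add(fingerprint);
    return family;
  }

  public Set<String> lookupFamily(UUID family) {
    return membersByFamily.getOrDefault(family, Collections.emptySet());
  }

  public static void main(String[] args) {
    FamilyRegistry r = new FamilyRegistry();
    UUID f1 = r.register("AAAA", Collections.emptyList()); // new family
    UUID f2 = r.register("BBBB", List.of("AAAA"));         // joins AAAA's
    System.out.println(f1.equals(f2));                     // true
    System.out.println(r.lookupFamily(f1).size());         // 2
  }
}
```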
[1] https://trac.torproject.org/projects/tor/ticket/16599
[2] https://gitweb.torproject.org/torspec.git/tree/proposals/242-better-families.txt
On the uniqueness of UUIDs:
[3] https://stackoverflow.com/questions/703035/when-are-you-truly-forced-to-use-uuid-as-part-of-the-design/786541#786541
**Trac**:
**Username**: seansaito

Issue #18834: non-running relays (running=0) should have a consensus_weight of 0
https://gitlab.torproject.org/tpo/network-health/metrics/onionoo/-/issues/18834
2020-06-27T14:24:14Z, reported by cypherpunks
Current state:
there are relays which are not running but have a cw value > 0.
Can we set their cw value to 0?
Was that already the case at some point in the past?
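The requested behavior can be stated as a one-line rule. A hypothetical sketch (not Onionoo's actual code; names invented):

```java
public class ConsensusWeightFix {
  // A relay without the Running flag reports a consensus weight of 0,
  // regardless of the weight recorded in its last consensus entry.
  static long reportedWeight(boolean running, long lastKnownWeight) {
    return running ? lastKnownWeight : 0L;
  }

  public static void main(String[] args) {
    System.out.println(reportedWeight(true, 5400));   // 5400
    System.out.println(reportedWeight(false, 5400));  // 0
  }
}
```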
Issue #18723: Support registering webhooks for push updates
https://gitlab.torproject.org/tpo/network-health/metrics/onionoo/-/issues/18723
2020-06-27T14:24:14Z, reported by Trac
It would be great if onionoo were able to support push updates using webhooks, something similar to pubsubhubbub (https://pubsubhubbub.appspot.com/), which supports push updates for RSS/Atom feeds.
Having push support would make systems like a new Tor Weather much more efficient.
For example, instead of polling onionoo to see if certain nodes are down, it would get a push notification whenever a node's status changes.
It can register for specific nodes or have various other filtering conditions, but that can be discussed later.
Pubsubhubbub supports two types of pings (sent when it has something to notify registered webhooks about):
1) Thin - where it only says data was changed but doesn't send the actual data. The receiving party needs to pull the relevant data.
2) Fat - where it sends the actual data that was changed or is relevant for the ping.
This can be implemented as a single system that polls onionoo or as part of onionoo as it knows exactly when certain data changes.
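The thin/fat distinction is just the payload of the outgoing POST. A hypothetical sketch using only the JDK's `java.net.http` API (Java 11+); the URL, fingerprint, and JSON shapes are invented, not an existing Onionoo or pubsubhubbub interface:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class WebhookPing {
  // Build a ping request for a registered webhook.
  static HttpRequest buildPing(String webhookUrl, String jsonBody) {
    return HttpRequest.newBuilder(URI.create(webhookUrl))
        .header("Content-Type", "application/json")
        .POST(HttpRequest.BodyPublishers.ofString(jsonBody))
        .build();
  }

  public static void main(String[] args) {
    // Thin ping: only says *which* relay changed; receiver pulls details.
    HttpRequest thin = buildPing("https://weather.example/hook",
        "{\"fingerprint\":\"9695DFC3...\",\"changed\":true}");
    // Fat ping: includes the changed data itself.
    HttpRequest fat = buildPing("https://weather.example/hook",
        "{\"fingerprint\":\"9695DFC3...\",\"running\":false}");
    System.out.println(thin.method() + " " + thin.uri());
    System.out.println(fat.method() + " " + fat.uri());
  }
}
```

Sending the requests would then be a single `HttpClient.send()` call per registered webhook.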
**Trac**:
**Username**: erans

Issue #18663: Onionoo doesn't send certain headers on even-numbered responses
https://gitlab.torproject.org/tpo/network-health/metrics/onionoo/-/issues/18663
2020-06-27T14:24:14Z, reported by David Fifield (dcf@torproject.org)

When I load this URL, the first time I get meaningful output:
> ![1.png](uploads/1.png)
But if I hit Ctrl+R to refresh, I get this garbled (maybe compressed?) output instead:
> ![2.png](uploads/2.png)
If I refresh again, it goes back to the readable version, and if I refresh yet again, it switches back to the garbled version. I can keep switching back and forth.
The same happens if I click the refresh icon in the address bar. I tried it in Tor Browser 6.0a4 and Chromium 49.0, and it happens in both.
The garbled version additionally causes this to be printed to the console:
```
The character encoding of the plain text document was not declared. The document will render with garbled text in some browser configurations if the document contains characters from outside the US-ASCII range. The character encoding of the file needs to be declared in the transfer protocol or file needs to use a byte order mark as an encoding signature.
```
With another type of document, namely https://onionoo.torproject.org/details?lookup=88F745840F47CE0C6A4FE61D827950B06F9E4534, the text remains readable while repeatedly refreshing, but the "character encoding" message appears in the console on alternating refreshes.
Assignee: Karsten Loesing

Issue #18396: onionoo fails at ant compile
https://gitlab.torproject.org/tpo/network-health/metrics/onionoo/-/issues/18396
2020-06-27T14:24:14Z, reported by cypherpunks

Following the instructions at onionoo.git, cannot pass the "ant compile" step.
Found a prior ticket suggesting `git submodule init` and `git submodule update`; the problem persists.
```
ant compile
Buildfile: /home/metrics/onionoo/build.xml
metrics-lib:
init:
compile:
jar:
init:
compile:
[javac] Compiling 63 source files to /home/metrics/onionoo/classes
[javac] /home/metrics/onionoo/src/main/java/org/torproject/onionoo/cron/Main.java:11: error: package org.slf4j does not exist
[javac] import org.slf4j.Logger;
[javac] ^
[javac] /home/metrics/onionoo/src/main/java/org/torproject/onionoo/cron/Main.java:12: error: package org.slf4j does not exist
[javac] import org.slf4j.LoggerFactory;
[javac] ^
[javac] /home/metrics/onionoo/src/main/java/org/torproject/onionoo/cron/Main.java:27: error: cannot find symbol
[javac] private Logger log = LoggerFactory.getLogger(Main.class);
[javac] ^
[javac] symbol: class Logger
[javac] location: class Main
[javac] /home/metrics/onionoo/src/main/java/org/torproject/onionoo/util/LockFile.java:10: error: package org.slf4j does not exist
[javac] import org.slf4j.Logger;
[javac] ^
[javac] /home/metrics/onionoo/src/main/java/org/torproject/onionoo/util/LockFile.java:11: error: package org.slf4j does not exist
[javac] import org.slf4j.LoggerFactory;
[javac] ^
[javac] /home/metrics/onionoo/src/main/java/org/torproject/onionoo/updater/DescriptorSource.java:13: error: package org.slf4j does not exist
[javac] import org.slf4j.Logger;
[javac] ^
[javac] /home/metrics/onionoo/src/main/java/org/torproject/onionoo/updater/DescriptorSource.java:14: error: package org.slf4j does not exist
[javac] import org.slf4j.LoggerFactory;
[javac] ^
[javac] /home/metrics/onionoo/src/main/java/org/torproject/onionoo/docs/DocumentStore.java:24: error: package org.apache.commons.lang3 does not exist
[javac] import org.apache.commons.lang3.StringUtils;
[javac] ^
[javac] /home/metrics/onionoo/src/main/java/org/torproject/onionoo/docs/DocumentStore.java:25: error: package org.slf4j does not exist
[javac] import org.slf4j.Logger;
[javac] ^
[javac] /home/metrics/onionoo/src/main/java/org/torproject/onionoo/docs/DocumentStore.java:26: error: package org.slf4j does not exist
[javac] import org.slf4j.LoggerFactory;
[javac] ^
[javac] /home/metrics/onionoo/src/main/java/org/torproject/onionoo/docs/DocumentStore.java:31: error: package com.google.gson does not exist
[javac] import com.google.gson.Gson;
[javac] ^
[javac] /home/metrics/onionoo/src/main/java/org/torproject/onionoo/docs/DocumentStore.java:32: error: package com.google.gson does not exist
[javac] import com.google.gson.GsonBuilder;
[javac] ^
[javac] /home/metrics/onionoo/src/main/java/org/torproject/onionoo/docs/DocumentStore.java:33: error: package com.google.gson does not exist
[javac] import com.google.gson.JsonParseException;
[javac] ^
[javac] /home/metrics/onionoo/src/main/java/org/torproject/onionoo/updater/StatusUpdateRunner.java:7: error: package org.slf4j does not exist
[javac] import org.slf4j.Logger;
[javac] ^
[javac] /home/metrics/onionoo/src/main/java/org/torproject/onionoo/updater/StatusUpdateRunner.java:8: error: package org.slf4j does not exist
[javac] import org.slf4j.LoggerFactory;
[javac] ^
[javac] /home/metrics/onionoo/src/main/java/org/torproject/onionoo/writer/DocumentWriterRunner.java:5: error: package org.slf4j does not exist
[javac] import org.slf4j.Logger;
[javac] ^
[javac] /home/metrics/onionoo/src/main/java/org/torproject/onionoo/writer/DocumentWriterRunner.java:6: error: package org.slf4j does not exist
[javac] import org.slf4j.LoggerFactory;
[javac]                 ^
[javac] /home/metrics/onionoo/src/main/java/org/torproject/onionoo/util/LockFile.java:15: error: cannot find symbol
[javac] private final static Logger log = LoggerFactory.getLogger(
[javac] ^
[javac] symbol: class Logger
[javac] location: class LockFile
[javac] /home/metrics/onionoo/src/main/java/org/torproject/onionoo/updater/DescriptorSource.java:20: error: cannot find symbol
[javac] private static final Logger log = LoggerFactory.getLogger(
[javac] ^
[javac] symbol: class Logger
[javac] location: class DescriptorSource
[javac] /home/metrics/onionoo/src/main/java/org/torproject/onionoo/updater/DescriptorQueue.java:15: error: package org.slf4j does not exist
[javac] import org.slf4j.Logger;
[javac] ^
[javac] /home/metrics/onionoo/src/main/java/org/torproject/onionoo/updater/DescriptorQueue.java:16: error: package org.slf4j does not exist
[javac] import org.slf4j.LoggerFactory;
[javac] ^
[javac] /home/metrics/onionoo/src/main/java/org/torproject/onionoo/docs/DocumentStore.java:42: error: cannot find symbol
[javac] private static Logger log = LoggerFactory.getLogger(
[javac] ^
[javac] symbol: class Logger
[javac] location: class DocumentStore
[javac] /home/metrics/onionoo/src/main/java/org/torproject/onionoo/docs/NodeStatus.java:18: error: package org.apache.commons.lang3 does not exist
[javac] import org.apache.commons.lang3.StringUtils;
[javac] ^
[javac] /home/metrics/onionoo/src/main/java/org/torproject/onionoo/docs/NodeStatus.java:19: error: package org.slf4j does not exist
[javac] import org.slf4j.Logger;
[javac] ^
[javac] /home/metrics/onionoo/src/main/java/org/torproject/onionoo/docs/NodeStatus.java:20: error: package org.slf4j does not exist
[javac] import org.slf4j.LoggerFactory;
[javac] ^
[javac] /home/metrics/onionoo/src/main/java/org/torproject/onionoo/docs/SummaryDocument.java:13: error: package org.apache.commons.codec does not exist
[javac] import org.apache.commons.codec.DecoderException;
[javac] ^
[javac] /home/metrics/onionoo/src/main/java/org/torproject/onionoo/docs/SummaryDocument.java:14: error: package org.apache.commons.codec.binary does not exist
[javac] import org.apache.commons.codec.binary.Base64;
[javac] ^
[javac] /home/metrics/onionoo/src/main/java/org/torproject/onionoo/docs/SummaryDocument.java:15: error: package org.apache.commons.codec.binary does not exist
[javac] import org.apache.commons.codec.binary.Hex;
[javac] ^
[javac] /home/metrics/onionoo/src/main/java/org/torproject/onionoo/docs/SummaryDocument.java:16: error: package org.apache.commons.codec.digest does not exist
[javac] import org.apache.commons.codec.digest.DigestUtils;
[javac] ^
[javac] /home/metrics/onionoo/src/main/java/org/torproject/onionoo/updater/StatusUpdateRunner.java:12: error: cannot find symbol
[javac] private static final Logger log = LoggerFactory.getLogger(
[javac] ^
[javac] symbol: class Logger
[javac] location: class StatusUpdateRunner
[javac] /home/metrics/onionoo/src/main/java/org/torproject/onionoo/updater/LookupService.java:25: error: package org.slf4j does not exist
[javac] import org.slf4j.Logger;
[javac]                 ^
[javac] /home/metrics/onionoo/src/main/java/org/torproject/onionoo/updater/LookupService.java:26: error: package org.slf4j does not exist
[javac] import org.slf4j.LoggerFactory;
[javac] ^
[javac] /home/metrics/onionoo/src/main/java/org/torproject/onionoo/writer/DocumentWriterRunner.java:10: error: cannot find symbol
[javac] private final static Logger log = LoggerFactory.getLogger(
[javac] ^
[javac] symbol: class Logger
[javac] location: class DocumentWriterRunner
[javac] /home/metrics/onionoo/src/main/java/org/torproject/onionoo/updater/DescriptorQueue.java:24: error: cannot find symbol
[javac] private final static Logger log = LoggerFactory.getLogger(
[javac] ^
[javac] symbol: class Logger
[javac] location: class DescriptorQueue
[javac] /home/metrics/onionoo/src/main/java/org/torproject/onionoo/docs/NodeStatus.java:24: error: cannot find symbol
[javac] private final static Logger log = LoggerFactory.getLogger(
[javac] ^
[javac] symbol: class Logger
[javac] location: class NodeStatus
[javac] /home/metrics/onionoo/src/main/java/org/torproject/onionoo/updater/LookupService.java:31: error: cannot find symbol
[javac] private final static Logger log = LoggerFactory.getLogger(
[javac] ^
[javac] symbol: class Logger
[javac] location: class LookupService
[javac] /home/metrics/onionoo/src/main/java/org/torproject/onionoo/docs/BandwidthStatus.java:10: error: package org.slf4j does not exist
[javac] import org.slf4j.Logger;
[javac] ^
[javac] /home/metrics/onionoo/src/main/java/org/torproject/onionoo/docs/BandwidthStatus.java:11: error: package org.slf4j does not exist
[javac] import org.slf4j.LoggerFactory;
[javac] ^
[javac] /home/metrics/onionoo/src/main/java/org/torproject/onionoo/docs/BandwidthStatus.java:17: error: cannot find symbol
[javac] private static Logger log = LoggerFactory.getLogger(
[javac] ^
[javac] symbol: class Logger
[javac] location: class BandwidthStatus
[javac] /home/metrics/onionoo/src/main/java/org/torproject/onionoo/docs/ClientsHistory.java:9: error: package org.slf4j does not exist
[javac] import org.slf4j.Logger;
[javac] ^
[javac] /home/metrics/onionoo/src/main/java/org/torproject/onionoo/docs/ClientsHistory.java:10: error: package org.slf4j does not exist
[javac] import org.slf4j.LoggerFactory;
[javac] ^
[javac] /home/metrics/onionoo/src/main/java/org/torproject/onionoo/docs/ClientsHistory.java:14: error: cannot find symbol
[javac] private final static Logger log = LoggerFactory.getLogger(
[javac] ^
[javac] symbol: class Logger
[javac] location: class ClientsHistory
[javac] /home/metrics/onionoo/src/main/java/org/torproject/onionoo/docs/ClientsStatus.java:9: error: package org.slf4j does not exist
[javac] import org.slf4j.Logger;
[javac] ^
[javac] /home/metrics/onionoo/src/main/java/org/torproject/onionoo/docs/ClientsStatus.java:10: error: package org.slf4j does not exist
[javac] import org.slf4j.LoggerFactory;
[javac] 100 errors
BUILD FAILED
/home/metrics/onionoo/build.xml:83: Compile failed; see the compiler error output for details.
```
```
dpkg -l | grep slf4j
ii libslf4j-java 1.7.12-2 all Simple Logging Facade for Java
```
```
apt-get install libslf4j-java
Reading package lists... Done
Building dependency tree
Reading state information... Done
libslf4j-java is already the newest version.
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
```
```
ls /usr/share/java/*slf4j* | cat
gossip-bootstrap-slf4j-1.8.jar
gossip-bootstrap-slf4j.jar
gossip-slf4j-1.8.jar
gossip-slf4j.jar
jcl-over-slf4j-1.7.12.jar
jcl-over-slf4j.jar
jul-to-slf4j-1.7.12.jar
jul-to-slf4j.jar
log4j-over-slf4j-1.7.12.jar
log4j-over-slf4j.jar
log4j-to-slf4j-2.2.jar
log4j-to-slf4j.jar
slf4j-api-1.7.12.jar
slf4j-api.jar
slf4j-jcl-1.7.12.jar
slf4j-jcl.jar
slf4j-jdk14-1.7.12.jar
slf4j-jdk14.jar
slf4j-log4j12-1.7.12.jar
slf4j-log4j12.jar
slf4j-migrator-1.7.12.jar
slf4j-migrator.jar
slf4j-nop-1.7.12.jar
slf4j-nop.jar
slf4j-simple-1.7.12.jar
slf4j-simple.jar
```
Any hints?

https://gitlab.torproject.org/tpo/network-health/metrics/onionoo/-/issues/18354
flush cached host_name on IP change
2020-06-27T14:24:15Z
cypherpunks

The description (somehow) implies that the host_name field is reset on IP change:
```
Host name as found in a reverse DNS lookup of the relay IP address. This field is updated at most once in 12 hours, unless the relay IP address changes. Omitted if the relay IP address was not looked up or if no lookup request was successful yet.
```
this would make sense but that does not seem to be the case:
```
+-----------------+-----------------+
| IP | host_name |
+-----------------+-----------------+
| 162.218.239.125 | 162.218.233.43 |
| 160.166.216.122 | 160.163.106.82 |
| 198.96.94.98 | 155.94.246.179 |
| 198.96.94.122 | 155.94.246.178 |
| 197.242.119.107 | 154.118.35.110 |
| 151.20.142.35 | 151.64.24.192 |
| 151.45.92.96 | 151.45.211.193 |
| 178.33.156.144 | 149.202.233.205 |
| 129.21.101.45 | 129.21.102.240 |
| 117.254.79.156 | 117.252.95.186 |
| 104.192.0.18 | 104.192.0.22 |
| 103.44.149.45 | 103.44.149.43 |
```
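The stale pairs above are consistent with the lookup cache keeping results for a relay even after its address changes. A hedged Java sketch of the fix being requested (hypothetical names, not Onionoo's actual code): key the cache on the address itself, so a relay whose IP changes simply misses the cache and gets a fresh lookup.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch, not Onionoo's implementation: cache reverse DNS
// results keyed by the *address* that was looked up. A relay whose IP
// changes then misses the cache instead of serving a stale host_name.
public class RdnsCache {
  private final Map<String, String> hostNameByAddress = new ConcurrentHashMap<>();

  /** Returns the cached host name for this address, or null if this
   *  address was never looked up (e.g. because it just changed). */
  public String cachedHostName(String ipAddress) {
    return hostNameByAddress.get(ipAddress);
  }

  /** Records the result of a successful reverse lookup. */
  public void store(String ipAddress, String hostName) {
    hostNameByAddress.put(ipAddress, hostName);
  }
}
```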
Additionally:
I would suggest to omit the host_name record if IP == host_name (saves space).

https://gitlab.torproject.org/tpo/network-health/metrics/onionoo/-/issues/18342
Provide more accurate reverse DNS results
2020-06-27T14:24:15Z
cypherpunks

What DNS server does onionoo use for reverse DNS lookups to generate the host_name entries?
https://onionoo.torproject.org/details?search=SGGS
An example of onionoo's reverse lookup results for SG.GS relays, all of them bare IPs:
```
+----------+--------------+
| nickname | host_name |
+----------+--------------+
| SGGSUK4 | 124.6.36.196 |
| SGGSHK0 | 124.6.32.230 |
| SGGSUK6 | 124.6.36.198 |
| SGGSUK0 | 124.6.36.230 |
| SGGSUK3 | 124.6.36.195 |
| SGGSUK7 | 124.6.36.199 |
| SGGSUK1 | 124.6.36.193 |
| SGGSUK8 | 124.6.36.200 |
| SGGSUK5 | 124.6.36.197 |
| SGGSUK2 | 124.6.36.194 |
| SGGSLAX0 | 124.6.40.230 |
| SGGSNYC0 | 124.6.44.230 |
| SGGSUK9 | 124.6.36.201 |
+----------+--------------+
```
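One plausible explanation for the bare-IP values above (an assumption about how the lookups might be done, not confirmed by the source): `InetAddress.getCanonicalHostName()` silently falls back to returning the textual IP when no PTR record resolves or the lookup is refused, so a resolver that can't see the PTR records would produce exactly this output. A minimal sketch:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

// Illustrative only: how the JDK exposes reverse lookups. On any failure
// the JDK (and this sketch) falls back to the textual IP, which would
// yield the bare-IP host_name values listed in the table above.
public class ReverseLookup {
  public static String reverseLookup(String ip) {
    try {
      return InetAddress.getByName(ip).getCanonicalHostName();
    } catch (UnknownHostException e) {
      return ip;  // mirror the JDK's own fallback behaviour
    }
  }
}
```

Pointing the lookup at a resolver that can actually see the relevant PTR zones (rather than one that times out or is filtered) would be one way to get the richer results torstatus shows.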
compare that with
http://torstatus.blutmagie.de/index.php
Onionoo 1.15.0
irl

https://gitlab.torproject.org/tpo/network-health/metrics/onionoo/-/issues/18270
flush cached hostname when a relay's IP changes
2020-06-27T14:24:15Z
cypherpunks

If the IP address of a relay changes, its DNS PTR record is most likely also different; use the IP change event as a trigger to flush the cached hostname of that relay.

https://gitlab.torproject.org/tpo/network-health/metrics/onionoo/-/issues/18036
OnionOO ignores history age limit
2020-06-27T14:24:15Z
teor

In legacy/trac#18035, we modify the updateFallbackDirectories.py script in tor because OnionOO is ignoring the end of the "days ago" range, returning data much older than this limit.
Can this be fixed on the OnionOO side as well?
See https://lists.torproject.org/pipermail/tor-relays/2016-January/008504.html

https://gitlab.torproject.org/tpo/network-health/metrics/onionoo/-/issues/18026
Remove deprecated and optional "family" field from details documents
2020-06-27T14:24:15Z
Karsten Loesing

On August 25, 2015, we added the optional `"alleged_family"` and `"indirect_family"` fields and deprecated the likewise optional `"family"` field in details documents. As of yesterday, Atlas (legacy/trac#16961), Globe (legacy/trac#16962), and Compass (legacy/trac#17720) stopped using the `"family"` field and switched over to the new fields. It's time to finally remove the `"family"` field from details documents.

https://gitlab.torproject.org/tpo/network-health/metrics/onionoo/-/issues/17939
Optimize the construction of details documents with field constraints
2020-06-27T14:24:16Z
Trac

In a [recent post to metrics-team@](https://lists.torproject.org/pipermail/metrics-team/2015-December/000026.html), Karsten pointed toward an expensive operation within the response builder:
> Once per hour, the updater fetches new data and in the end produces JSON-formatted strings that it writes to disk. The servlet reads a (comparatively) small index to memory that it uses to handle requests, and when it builds responses, it tries hard to avoid (de-)serializing JSON.
>
> The only situation where this fails is when [a] request [to the /details endpoint] contains the fields parameter. Only in that case we'll have to deserialize, pick the fields we want, and serialize again. I could imagine that this shows up in profiles pretty badly, and I'd love to fix this, I just don't know how.
I think we can exploit a few properties of the updater to handle this case in a more efficient manner.
It seems safe to assume: (1) that the produced response is always the concatenation of a sequence of substrings of the written document [1]; (2) that the documents on disk are legal JSON and correctly typed (having been written by the updater, which we trust and control); and (3) that the contents of the file are trivially parsed (belonging to a restriction of JSON with known and non-redundant keys, the grammar is at most context-free).
I believe these conditions admit a relatively efficient parser/generator pair that avoids request-time de-serialisation. Given a request, the parser would produce a sequence of index pairs marking the boundaries of each field; the generator would then reproduce the input verbatim, minus the text regions corresponding to fields excluded by the request.
No patch yet, but I've hacked together a small (inefficient mess of a..) proof of concept that hopefully illustrates the basic idea:
http://hack.rs/~vi/onionoo/IndexJSON.hs
sha256: 14a09f26fadab8d989263dc76d368e41e63ba6c5279d37443878d6c1d0c87834
http://www.webcitation.org/6e3NEOLJg
```
% jq . 96B16C78BB54BA0F56EEA8721781C9BD01B7E9AE
{
"nickname": "Unnamed",
"hashed_fingerprint": "96B16C78BB54BA0F56EEA8721781C9BD01B7E9AE",
"or_addresses": [
"10.103.224.131:443"
],
"last_seen": "2015-11-23 03:40:44",
"first_seen": "2015-11-20 04:38:22",
"running": false,
"flags": [
"Valid"
],
"last_restarted": "2015-11-22 01:23:06",
"advertised_bandwidth": 49168,
"platform": "Tor 0.2.4.22 on Windows 8"
}
% index-json 96B16C78BB54BA0F56EEA8721781C9BD01B7E9AE
("nickname",(2,21,22))
("hashed_fingerprint",(23,85,86))
("or_addresses",(87,123,124))
("last_seen",(125,157,158))
("first_seen",(159,192,193))
("running",(194,208,209))
("flags",(210,226,227))
("last_restarted",(228,265,266))
("advertised_bandwidth",(267,294,295))
("platform",(296,333,333))
% cut -c1 -c23-158 -c194- 96B16C78BB54BA0F56EEA8721781C9BD01B7E9AE | jq .
{
"hashed_fingerprint": "96B16C78BB54BA0F56EEA8721781C9BD01B7E9AE",
"or_addresses": [
"10.103.224.131:443"
],
"last_seen": "2015-11-23 03:40:44",
"running": false,
"flags": [
"Valid"
],
"last_restarted": "2015-11-22 01:23:06",
"advertised_bandwidth": 49168,
"platform": "Tor 0.2.4.22 on Windows 8"
}
```
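The same index/concatenate idea can be sketched in Java (illustrative names, not Onionoo's code; assumes the trusted, updater-written JSON of a details document, whose top-level field names contain no escapes): one scan records the offsets of every top-level `"key": value` pair, and the filtered response is rebuilt by substring concatenation alone.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch of the index/concatenate idea, not Onionoo's code.
public class FieldIndexer {

  /** Maps each top-level field name to the [start, end) offsets of its
   *  "key": value span, found in a single scan with no JSON parse. */
  static LinkedHashMap<String, int[]> index(String json) {
    LinkedHashMap<String, int[]> spans = new LinkedHashMap<>();
    StringBuilder keyBuf = new StringBuilder();
    boolean inString = false, readingKey = false;
    int depth = 0, start = -1;
    String key = null;
    for (int i = 0; i < json.length(); i++) {
      char c = json.charAt(i);
      if (inString) {
        if (c == '\\') {
          i++;  // skip the escaped character (could be a quote)
        } else if (c == '"') {
          inString = false;
          if (readingKey) { key = keyBuf.toString(); readingKey = false; }
        } else if (readingKey) {
          keyBuf.append(c);
        }
      } else if (c == '"') {
        inString = true;
        if (depth == 1 && key == null) {  // this quote opens a field name
          start = i;
          readingKey = true;
          keyBuf.setLength(0);
        }
      } else if (c == '{' || c == '[') {
        depth++;
      } else if (c == '}' || c == ']') {
        if (c == '}' && depth == 1 && key != null) {  // final pair ends here
          spans.put(key, new int[] {start, i});
          key = null;
        }
        depth--;
      } else if (c == ',' && depth == 1 && key != null) {  // pair ends here
        spans.put(key, new int[] {start, i});
        key = null;
      }
    }
    return spans;
  }

  /** Rebuilds the document keeping only the requested fields, by
   *  concatenating the recorded spans; no (de-)serialisation involved. */
  static String filter(String json, Set<String> fields) {
    StringBuilder out = new StringBuilder("{");
    for (Map.Entry<String, int[]> e : index(json).entrySet()) {
      if (!fields.contains(e.getKey())) continue;
      if (out.length() > 1) out.append(',');
      out.append(json, e.getValue()[0], e.getValue()[1]);
    }
    return out.append('}').toString();
  }
}
```

The index could be computed once by the updater and stored alongside the document, making the per-request cost proportional to the number of kept fields.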
What do you think?
[1] There's an element of surprise in the treatment of nullable properties, but it turns out that the existing behaviour works in our favour: GSON removes null'ed fields when writing documents to disk; e.g., note the absence of an AS number here:
```
% pwd
/srv/onionoo.torproject.org/onionoo/out/details
% jq . $(ls | shuf -n1)
{
"nickname": "Unnamed",
"hashed_fingerprint": "CE0A4E1B6C545FF9F25A9CAF5926732559A2C0FE",
"or_addresses": [
"10.190.9.13:443"
],
"last_seen": "2015-12-16 22:41:56",
"first_seen": "2015-11-11 21:01:43",
"running": true,
"flags": [
"Fast",
"Valid"
],
"last_restarted": "2015-12-16 02:13:40",
"advertised_bandwidth": 59392,
"platform": "Tor 0.2.4.23 on Windows 8"
}
```
But it *also* excludes them from /details responses, even when specified by name using the 'fields' parameter:
```
% curl -s 'http://onionoo.local/details?lookup=CE0A4E1B6C545FF9F25A9CAF5926732559A2C0FE&fields=hashed_fingerprint,as_number' | jq .bridges[]
{
"hashed_fingerprint": "CE0A4E1B6C545FF9F25A9CAF5926732559A2C0FE"
}
```
So it doesn't seem necessary to add any text atop the persisted serialisation, even in this case.
**Trac**:
**Username**: fmap

https://gitlab.torproject.org/tpo/network-health/metrics/onionoo/-/issues/17938
Pagination would be useful when dealing with large numbers of results
2020-06-27T14:24:16Z
Trac

I'm the operator of the 'cocoanaut' and 'cocoadrome' relays. Changing the filter in Globe using:
* type=relay
* country=Germany
and sorting by bandwidth (desc), my relay 'cocoadrome' is never displayed. I'm providing ~8 Mbit of bandwidth and have 69 days of uptime (as of today).
**Trac**:
**Username**: phranck
Onionoo 3.2-1.1.0

https://gitlab.torproject.org/tpo/network-health/metrics/onionoo/-/issues/17919
Document history objects better
2020-06-27T14:24:16Z
Trac

In the [documentation of Onionoo](https://onionoo.torproject.org/protocol.html#history), all objects except for the **history** object have links to example requests (e.g., [bandwidth object example](https://onionoo.torproject.org/bandwidth?limit=4)).
I'm wondering if the history object should have an analogous link so that people (like myself presently) can understand how to query for and use history objects.
Here are a few questions I have about the history object:
* Can I query the history object for individual relays?
* What data does a history object contain?
* How far back does a history object go?
Thanks!
**Trac**:
**Username**: seansaito
Karsten Loesing