The Tor Project issues (https://gitlab.torproject.org/groups/tpo/-/issues), updated 2020-06-27T14:04:06Z

https://gitlab.torproject.org/tpo/core/tor/-/issues/9333
Illegal nickname "PPriv@last-listed" in family line (reported by cypherpunks)

I'm using the current(?) version of the Tor Browser bundle (version 2.3.25-10, I think; how do I find out which version I am running?) on GNU/Linux.
After a system crash (the laptop battery ran out, so the machine failed to shut down properly) while Tor Browser was running, I was unable to restart Tor Browser after reboot. I got two error messages from Vidalia: one saying that Vidalia was unable to connect to the control port, the other saying that the tor process had exited immediately after launch. The last line in the tor logfile was a warning:
[Warning] Illegal nickname "PPriv@last-listed" in family line
Grepping through the Data/Tor/ directory, I found such a line in the cached-microdescs file. After deleting this file I was able to launch Tor Browser again.
Tor Browser/Vidalia should recover from this kind of error automatically, without manual user intervention.

Milestone: Tor: 0.2.5.x-final. Assignee: Mike Perry.

https://gitlab.torproject.org/tpo/core/tor/-/issues/9323
Option to start as a bridge by default, but change to relay if bw is super-high (reported by Griffin Boyce)

There are some minor ongoing problems with Tor speed, which have cascade effects on user adoption and accessibility.
If there were a type of node that starts as a middle relay and changes into a bridge if average speed in the past day were unacceptably low, that could dramatically improve mean transfer time. And then, if conditions improved, a convertible node would change from a bridge back to a relay.
Given that bridges become unusable in the most high-risk areas in just a few days (Winter & Lindskog 2012), this design's side effect of increasing the bridge population would be a very positive thing.

https://gitlab.torproject.org/tpo/core/tor/-/issues/9321
Load balance right when we have higher guard rotation periods (reported by Roger Dingledine)

Here's our plan:
1) Directory authorities need to track how much of the past n months each relay was around and had the Guard flag.
2) They vote a percentage for each relay in their vote, and the consensus has a new keyword on the w line so clients can learn how Guardy each relay has been.
3) Clients change their load balancing algorithm to consider how Guardy you've been, rather than just treating Guard status as binary (legacy/trac#8453).
4) Raise the guard rotation period a lot (legacy/trac#8240).

Milestone: Tor: 0.2.6.x-final.

https://gitlab.torproject.org/tpo/core/tor/-/issues/9320
Assign bandwidth to new relays faster (reported via Trac)

Tor is very unfriendly to relays that have been running only a short time: it takes too long until a new relay starts to pull real workload. Currently there is no point in running a relay for only a few hours, and maintaining long uptime (with as few restarts as possible) is critical for getting bandwidth assigned by the authorities.
Example: a fresh new relay with 1d4h of uptime and 100 Mbit/s of bandwidth gets assigned too little bandwidth weight (around 350 KB/s), and even with that, it sees only about 400 MB of traffic in each direction on its first day. It takes about 6 hours until a new relay's bandwidth gets measured, and even after that it is assigned a very small percentage of its real weight.
The question "why do I get so little bandwidth" is often asked on IRC.
Look at Tor status (http://torstatus.blutmagie.de/index.php?SR=Uptime&SO=Desc): about 1/4 of relays have < 1 day of uptime. It would be good to actually make use of their available bandwidth.
**Trac**:
**Username**: hsn

Milestone: Tor: unspecified.

https://gitlab.torproject.org/tpo/core/tor/-/issues/9309
Broken canonical connection detection in channel_matches_target_addr_for_extend() (reported by George Kadianakis)

`skruffy` on IRC believes that the negation in `!channel_matches_target_addr_for_extend` is wrong.
He/she said that the old pre-channel code has the correct behavior.

Milestone: Tor: 0.2.4.x-final.

https://gitlab.torproject.org/tpo/core/tor/-/issues/9304
Internal Error (reported via Trac)

"microdesc_free(): Bug: microdesc_free() called, but md was still referenced 1 node(s); held_by_nodes == 1"
**Trac**:
**Username**: ulli

Milestone: TorBrowserBundle 2.3.x-stable.

https://gitlab.torproject.org/tpo/core/tor/-/issues/9299
Dump stack traces on assertion, crash, or general trouble (reported by Nick Mathewson)

It's so easy to dump stack traces these days!
I have a "backtrace" branch right now that can dump stack traces on assertion failures. It works on glibc/ELF, and on OS X. We should expand it to work on Windows too, and BSD if we can.
Other fixes to make before it's ready:
* ~~It should be able to log a stack trace too.~~
* ~~It should log the stack trace on an assertion.~~
* ~~There should be an option to tell it not to log to the stack_dumps file, perhaps.~~
* ~~Perhaps the logfile should be pid-controlled?~~
* It should support Windows.
* ~~It should handle deadly signals (SEGV, etc) as well.~~
* ~~It should indicate to the user somehow (if it can) that stuff might be saved to the stack_dumps file.~~
* ~~It should have tests.~~

Milestone: Tor: 0.2.5.x-final.

https://gitlab.torproject.org/tpo/core/tor/-/issues/9298
Recent tor regression: segfault on startup (reported by Damian Johnson)

After legacy/trac#9295 I tried rerunning the integ tests with a password-authenticated tor instance, but no joy. Four of the last five times I started tor with a HashedControlPassword it segfaulted...
```
atagar@morrigan:~/Desktop/stem$ cat ~/.tor/torrc
SocksPort 0
ControlPort 9051
Exitpolicy reject *:*
FetchUselessDescriptors 1
DisableDebuggerAttachment 0
# password: pw
HashedControlPassword 16:6175C1B2491BD88D605B5F65597E1CD3A7336D5715686DE1ED2145F2E6
atagar@morrigan:~/Desktop/stem$ /home/atagar/Desktop/tor/tor/src/or/tor -f /home/atagar/.tor/torrc
Jul 19 08:47:03.654 [notice] Tor v0.2.5.0-alpha-dev (git-e1d3b444952d861e) running on Linux with Libevent 1.4.13-stable and OpenSSL 0.9.8o.
Jul 19 08:47:03.655 [notice] Tor can't help you if you use it wrong! Learn how to be safe at https://www.torproject.org/download/download#warning
Jul 19 08:47:03.655 [notice] This version is not a stable Tor release. Expect more bugs than usual.
Jul 19 08:47:03.655 [notice] Read configuration file "/home/atagar/.tor/torrc".
Jul 19 08:47:03.667 [notice] Opening Control listener on 127.0.0.1:9051
Jul 19 08:47:03.000 [notice] Not disabling debugger attaching for unprivileged users.
Jul 19 08:47:03.000 [notice] Parsing GEOIP IPv4 file /usr/local/share/tor/geoip.
Jul 19 08:47:04.000 [notice] Parsing GEOIP IPv6 file /usr/local/share/tor/geoip6.
Jul 19 08:47:04.000 [notice] Your OpenSSL version seems to be 0.9.8o. We recommend 1.0.0 or later.
Jul 19 08:47:05.000 [notice] This version of Tor (0.2.5.0-alpha-dev) is newer than any recommended version, according to the directory authorities. Recommended versions are: 0.2.2.39,0.2.3.24-rc,0.2.3.25,0.2.4.5-alpha,0.2.4.6-alpha,0.2.4.7-alpha,0.2.4.8-alpha,0.2.4.9-alpha,0.2.4.10-alpha,0.2.4.11-alpha,0.2.4.12-alpha,0.2.4.13-alpha,0.2.4.14-alpha,0.2.4.15-rc
Jul 19 08:47:09.000 [notice] We now have enough directory information to build circuits.
Jul 19 08:47:09.000 [notice] Bootstrapped 80%: Connecting to the Tor network.
Jul 19 08:47:10.000 [notice] Bootstrapped 85%: Finishing handshake with first hop.
Jul 19 08:47:10.000 [notice] We weren't able to find support for all of the TLS ciphersuites that we wanted to advertise. This won't hurt security, but it might make your Tor (if run as a client) more easy for censors to block.
Jul 19 08:47:10.000 [notice] To correct this, use a more recent OpenSSL, built without disabling any secure ciphers or features.
Jul 19 08:47:11.000 [notice] Bootstrapped 90%: Establishing a Tor circuit.
Jul 19 08:47:13.000 [notice] Tor has successfully opened a circuit. Looks like client functionality is working.
Jul 19 08:47:13.000 [notice] Bootstrapped 100%: Done.
Segmentation fault
```
This is with tor commit e1d3b44.

Milestone: Tor: 0.2.5.x-final.

https://gitlab.torproject.org/tpo/core/tor/-/issues/9297
Error while compiling the latest git (seccomp2-related) (reported by cypherpunks)
src/common...- Debian unstable 32
- gcc version 4.8.1 (Debian 4.8.1-6)
```
In file included from src/common/sandbox.c:30:0:
src/common/sandbox.c:117:5: error: ‘__NR_accept4’ undeclared here (not in a function)
     SCMP_SYS(accept4),
     ^
src/common/sandbox.c:133:5: error: ‘__NR_setsockopt’ undeclared here (not in a function)
     SCMP_SYS(setsockopt),
     ^
```

Milestone: Tor: 0.2.5.x-final.

https://gitlab.torproject.org/tpo/core/tor/-/issues/9296
seg fault in cell_queue_append() (reported by Roger Dingledine)

moria1 running git master (e1d3b444) seg faults reliably, soon after startup.
```
#0 0x000000000042181f in cell_queue_append (queue=0x56e9cf8,
cell=0x7fffad841db0, wide_circ_ids=1, use_stats=0) at src/or/relay.c:2141
#1 cell_queue_append_packed_copy (queue=0x56e9cf8, cell=0x7fffad841db0,
wide_circ_ids=1, use_stats=0) at src/or/relay.c:2181
#2 0x000000000048003d in circuitmux_append_destroy_cell (chan=0x56e9b70,
cmux=0x56e9cd0, circ_id=2147507178, reason=<value optimized out>)
at src/or/circuitmux.c:1874
#3 0x000000000046ae09 in channel_send_destroy (circ_id=2147507178,
chan=0x56e9b70, reason=<value optimized out>) at src/or/channel.c:2687
#4 0x000000000047f39c in circuit_mark_for_close_ (circ=0x53d7170, reason=0,
line=1250, file=0x53f9fb "src/or/circuituse.c")
at src/or/circuitlist.c:1568
#5 0x0000000000478db8 in circuit_send_next_onion_skin (circ=0x53d7170)
at src/or/circuitbuild.c:808
#6 0x000000000042595a in connection_edge_process_relay_cell (
cell=0x7fffad842970, circ=0x53d7170, conn=<value optimized out>,
layer_hint=<value optimized out>) at src/or/relay.c:1443
#7 0x00000000004264a0 in circuit_receive_relay_cell (cell=0x7fffad842970,
circ=0x53d7170, cell_direction=CELL_DIRECTION_IN) at src/or/relay.c:226
#8 0x000000000048d9ae in command_process_relay_cell (chan=0x56e9b70,
cell=0x7fffad842970) at src/or/command.c:462
#9 command_process_cell (chan=0x56e9b70, cell=0x7fffad842970)
at src/or/command.c:148
#10 0x000000000047249b in channel_tls_handle_cell (cell=0x7fffad842970,
conn=0x56e9dd0) at src/or/channeltls.c:924
#11 0x00000000004af256 in connection_or_process_cells_from_inbuf (
conn=0x56e9dd0) at src/or/connection_or.c:1972
#12 0x00000000004a4008 in connection_handle_read_impl (conn=0x56e9dd0)
at src/or/connection.c:2949
#13 connection_handle_read (conn=0x56e9dd0) at src/or/connection.c:2990
#14 0x000000000040c076 in conn_read_callback (fd=<value optimized out>,
event=8112, _conn=0x1) at src/or/main.c:716
#15 0x00007f5b3a481344 in event_base_loop () from /usr/lib/libevent-1.4.so.2
#16 0x0000000000409e81 in do_main_loop () at src/or/main.c:1996
#17 0x000000000040a1dd in tor_main (argc=<value optimized out>,
argv=<value optimized out>) at src/or/main.c:2720
#18 0x00007f5b39732c8d in __libc_start_main (main=<value optimized out>,
argc=<value optimized out>, ubp_av=<value optimized out>,
init=<value optimized out>, fini=<value optimized out>,
rtld_fini=<value optimized out>, stack_end=0x7fffad8430b8)
at libc-start.c:228
#19 0x0000000000408789 in _start ()
```
```
(gdb) print *queue
$1 = {head = {sqh_first = 0x362c323700000000, sqh_last = 0x1799620},
n = 24820072, insertion_times = 0x17bd00424603d237}
```
First noticed on legacy/trac#9286 (unrelated), and you can see another very similar backtrace over there.

Milestone: Tor: 0.2.5.x-final.

https://gitlab.torproject.org/tpo/core/tor/-/issues/9295
Recent tor regression: config_get_assigned_option(): Bug: src/or/confparse.c:591 (reported by Damian Johnson)

Hi Nick. As spotted by weasel, stem's integ tests are presently busted. The issue looks to be a recent regression with password authentication. While I was looking into this I realized that '--hash-password' no longer works. Good chance it's related to whatever the tests are choking on. :)
```
atagar@morrigan:~/Desktop/stem$ /home/atagar/Desktop/tor/tor/src/or/tor --hash-password pw
Jul 18 19:10:49.803 [notice] Tor v0.2.5.0-alpha-dev (git-f45e1fbd5b25735c) running on Linux with Libevent 1.4.13-stable and OpenSSL 0.9.8o.
Jul 18 19:10:49.803 [notice] Tor can't help you if you use it wrong! Learn how to be safe at https://www.torproject.org/download/download#warning
Jul 18 19:10:49.803 [notice] This version is not a stable Tor release. Expect more bugs than usual.
Jul 18 19:10:49.806 [err] config_get_assigned_option(): Bug: src/or/confparse.c:591: config_get_assigned_option: Assertion options && key failed; aborting.
src/or/confparse.c:591 config_get_assigned_option: Assertion options && key failed; aborting.
Aborted
```
Cheers! -Damian

Milestone: Tor: 0.2.4.x-final.

https://gitlab.torproject.org/tpo/core/tor/-/issues/9290
Use something other than "known relay" to decide on rate in connection_or_update_token_buckets_helper() on authorities (reported by Nick Mathewson)

On #tor-dev, Beeps says:
```
13:18 < Beeps> connection_or_update_token_buckets_helper() will not limit speed
if relay knows desc. You can upldoad desc to any auth. Before
limit speed you need protect all auths or limit speed for them.
5 of them are victims for cheaters for now.
```
In other words, anybody can get the higher limit from an authority by uploading a descriptor with their ID, whether they're really a relay or not. That's annoying.
One fix would be to change the behavior of connection_or_digest_is_known_relay to require that the relay be present in the consensus. (Would this hurt bandwidth measurement?)

https://gitlab.torproject.org/tpo/core/tor/-/issues/9288
Invalid memory read in `pt_configure_remaining_proxies()` (reported by George Kadianakis)
```
void
pt_configure_remaining_proxies(void)
...
    /* If the proxy is not fully configured, try to configure it
       futher. */
    if (!proxy_configuration_finished(mp))
      configure_proxy(mp);

    if (proxy_configuration_finished(mp))
      at_least_a_proxy_config_finished = 1;
```
If the managed proxy is destroyed during `configure_proxy()` (by going to `handle_finished_proxy()`), then it is passed to `proxy_configuration_finished()` which reads `mp->conf_state`. This is an invalid memory read since the memory area of `mp` was freed.
Not too hard to fix. An inelegant fix would be to make `configure_proxy()` return an int that warns `pt_configure_remaining_proxies()` when it destroys the managed proxy.
Bug present since 0.2.4.x. Doesn't seem threatening, so we can fix it just in 0.2.5.x. The bug triggers when something bad happens during the managed-proxy configuration protocol and we have to destroy the managed proxy.

Milestone: Tor: 0.2.4.x-final.

https://gitlab.torproject.org/tpo/core/tor/-/issues/9286
ordb1 uses milliseconds in its descriptor, spec says it can't (reported by Roger Dingledine)
```
router ordb1 213.246.53.127 8002 0 0
platform Tor 0.2.3.25 on Linux x86_64
opt protocols Link 1 2 Circuit 1
published 2013-07-17 13:38:46.992
```
But dir-spec.txt says
```
"published" YYYY-MM-DD HH:MM:SS NL
[Exactly once]
The time, in UTC, when this descriptor (and its corresponding
extra-info document if any) was generated.
```
It looks like it's violating the spec. Should we (i.e. the directory authorities) have validated and refused the descriptor?
Is it our Tor implementation that does this on a weird edge case, or did somebody mess with something?
(Noticed because contrib/exitlist can't handle it.)

Milestone: Tor: 0.2.6.x-final.

https://gitlab.torproject.org/tpo/core/tor/-/issues/9282
Dir-spec should note the max queryable fingerprints/hashes (reported by Damian Johnson)
% git cat-file -p 5218585
...
Noting maximum number of fingerprints/hashes that can be queried
In our tor-dev@ discussion Karsten mentioned that we could request at most 96
descriptors at a time when polling by their fingerprints...
https://lists.torproject.org/pipermail/tor-dev/2013-June/005005.html
This is a hardcoded limit in tor, so noting it in our spec...
https://gitweb.torproject.org/tor.git/blob/HEAD:/src/or/routerlist.c#l4435

https://gitlab.torproject.org/tpo/core/tor/-/issues/9273
Brainstorm tradeoffs from moving to 2 (or even 1) guards (reported by Roger Dingledine)

There are now many conflicting issues to consider when changing the default number of guards. I'd like to write a proposal suggesting we move to 2 (or even 1), but I don't think I'm ready to write the analysis section yet.
Here's a start:
Pro 1: Reduces chance of using an adversary's guard. This argues for 1, but 2 would still be a lot better. See Tariq's WPES 2012 paper for details.
Pro 2: Reduces impact from guard fingerprinting: if the adversary learns that you have the following n guards, and later sees an anonymous user with the same guards, how likely is it to be you? Said another way, a trio of guards produces a cubic, whereas a duo of guards produces a quadratic. Somebody should do the math to sort out the chance of having all possible trios of guards, followed by the expected uniqueness of a trio. I expect moving to 2 gives the majority of the benefit here.
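One way to start the requested math, under the simplifying assumption that each guard $g$ is selected independently with probability $p_g$ proportional to its weight (real guard selection samples without replacement, so this is only an approximation):

$$
\Pr[\text{at least one adversarial guard}] \;=\; 1 - (1 - f)^{n},
\qquad f = \sum_{g \in A} p_g,
$$

where $A$ is the adversary's set of guards and $n$ is the number of guards per client; this probability shrinks as $n$ shrinks (Pro 1). For fingerprinting (Pro 2), the probability that a client holds one particular guard set is

$$
\Pr[\{g_1, \dots, g_n\}] \;\propto\; \prod_{i=1}^{n} p_{g_i},
$$

which is cubic in the (typically small) $p_{g_i}$ for a trio and quadratic for a duo, so each additional guard sharply reduces the number of users expected to share a given guard list.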
Con 1: Increases the variance of performance. The more guards you have, the closer to average performance you'll be. Whereas if you have just one guard, your performance will be impacted a lot by that choice. It would seem that we need to raise the bar on getting the Guard flag if we move people to having just one guard.
Con 2: Moving to 1 guard will rule out a Conflux-style design. But 2 guards would still work fine.
What did I miss?

Milestone: Tor: 0.2.6.x-final.

https://gitlab.torproject.org/tpo/core/tor/-/issues/9272
Dirport lacks a method for fetching all microdescriptors (reported by Damian Johnson)

As discussed in legacy/trac#9271 we intended to have a dirport method for querying all microdescriptors, but it was never implemented. This is still a good idea though.

Milestone: Tor: unspecified.

https://gitlab.torproject.org/tpo/core/tor/-/issues/9271
Portions of dir-spec unimplemented (reported by Damian Johnson)

As observed by Karsten [in ticket #3038](https://trac.torproject.org/projects/tor/ticket/3038#comment:4), the `/tor/micro/all[.z]` and `/tor/status-vote/(current|next)/consensus-index[.z]` dirport methods presently are not implemented.
I'm a little surprised that we don't have a tracking ticket for their implementation. Considering that Karsten discovered these were missing over a year ago I think it's safe to assume we should drop them [from the spec](https://gitweb.torproject.org/torspec.git/blob/HEAD:/dir-spec.txt#l2503). No information is better than information for something that's unlikely to ever exist. :)
[ note: I only checked that '/tor/micro/all' is presently unimplemented, I didn't double check the other ]

Milestone: Tor: 0.2.5.x-final.

https://gitlab.torproject.org/tpo/core/tor/-/issues/9267
Usability tweaks for code coverage (reported by Andrea Shepard)

The new coverage tools introduced in legacy/trac#8949 have a few rough edges:
* The cov-diff script wants two directories full of .gcov files, but the coverage script dumps them all in the repo root. There should be a way to specify a target directory to put them in.
* If you modify the code, rebuild, and re-run the test suite, the gcov instrumentation tries to merge the new stats into the existing gcda file and fails. There should be a makefile target to reset the coverage data by deleting just the gcda (not the gcno) files.

Milestone: Tor: 0.2.5.x-final.

https://gitlab.torproject.org/tpo/core/tor/-/issues/9265
test_pt_parsing doesn't check managed_proxy_t contents (reported by Nick Mathewson)

Have a look at test_pt_parsing(). It looks to see whether the parse_*() functions succeed or fail... but it doesn't actually check that they store the right results in managed_proxy_t! That's not what a unit test should do.

Milestone: Tor: 0.2.5.x-final.