Closed: We bootstrap from different primary guards when we start with a non-live consensus and not enough guards in the state file

Issue created by Roger Dingledine

    I started my Tor client (0.3.1.1-alpha (git-ab9976b7)) with these three guards (and only these three guards) in my state file:

    Guard in=default rsa_id=BB119A5A4D5DA2BBB7B796ECC50E3C0F1D4FD910 nickname=onslaught sampled_on=2017-04-29T09:28:05 sampled_by=0.3.1.0-alpha-dev listed=1 confirmed_on=2017-05-02T22:48:40 confirmed_idx=1
    Guard in=default rsa_id=7369A0E14C59B11D78357EC5AFE573A259264BBD nickname=yono1 sampled_on=2017-05-02T20:44:35 sampled_by=0.3.1.0-alpha-dev listed=1 confirmed_on=2017-04-25T13:14:11 confirmed_idx=0 pb_use_attempts=3.000000 pb_use_successes=3.000000 pb_circ_attempts=105.000000 pb_circ_successes=105.000000 pb_successful_circuits_closed=95.000000 pb_collapsed_circuits=10.000000 pb_timeouts=8.000000
    Guard in=default rsa_id=62DA0256BBC28992D41CBAFB549FFD7C9B846A99 nickname=IRejectTorFoundatnN sampled_on=2017-05-21T02:25:50 sampled_by=0.3.1.0-alpha-dev listed=1 confirmed_on=2017-05-23T22:28:05 confirmed_idx=2

    I figured that since there were three, and they were all confirmed, they would be my primary guards.
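(For reference, each of those Guard lines in the state file is just a space-separated list of key=value fields. Here's a minimal parsing sketch in Python; the field names come from the lines above, but the function itself is hypothetical, not Tor's real parser:)

```python
def parse_guard_line(line):
    """Parse a 'Guard key=value key=value ...' state-file line into a
    dict. Minimal sketch; the real parser tolerates unknown fields and
    does type conversion, which we skip here."""
    prefix = "Guard "
    assert line.startswith(prefix)
    fields = {}
    for item in line[len(prefix):].split():
        key, _, value = item.partition("=")
        fields[key] = value
    return fields

guard = parse_guard_line(
    "Guard in=default rsa_id=BB119A5A4D5DA2BBB7B796ECC50E3C0F1D4FD910 "
    "nickname=onslaught listed=1 confirmed_idx=1")
# guard["nickname"] is "onslaught"; all values stay as strings
```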

    Also, my cached-microdesc-consensus file was about a day old.

    When Tor starts, it says

    May 25 20:58:06.155 [info] sampled_guards_update_from_consensus(): No live consensus; can't update sampled entry guards.

    Ok, great, it's not going to go deleting or modifying any of my guards before it's even tried to load my consensus file from disk. Makes sense.

    Then it goes through to update my three guards to say that they're not working:

    May 25 20:58:06.155 [debug] entry_guard_set_filtered_flags(): Updated sampled guard onslaught ($BB119A5A4D5DA2BBB7B796ECC50E3C0F1D4FD910): filtered=0; reachable_filtered=0.
    May 25 20:58:06.155 [debug] entry_guard_set_filtered_flags(): Updated sampled guard yono1 ($7369A0E14C59B11D78357EC5AFE573A259264BBD): filtered=0; reachable_filtered=0.
    May 25 20:58:06.156 [debug] entry_guard_set_filtered_flags(): Updated sampled guard IRejectTorFoundatnN ($62DA0256BBC28992D41CBAFB549FFD7C9B846A99): filtered=0; reachable_filtered=0.

I'm guessing that happened because of entry_guards_update_filtered_sets(), which got called from entry_guards_update_all(), but there are several possible paths to that function, so I'm not sure which one it was. Maybe it's decided they're all down, since it hasn't even loaded a consensus yet, so they're all missing from the nonexistent consensus?
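Whatever the exact call path, the effect is consistent with a filter that requires each sampled guard to be listed in the current consensus. A sketch of that reading (my reconstruction of the logic, not the actual Tor code):

```python
def update_filtered_flags(guards, consensus_router_ids):
    """Hypothetical sketch: a sampled guard passes the filter only if
    its identity appears among the routers of the currently loaded
    consensus. With no consensus loaded, that set is empty and every
    guard ends up with filtered=0 and reachable_filtered=0."""
    for guard in guards:
        listed = guard["rsa_id"] in consensus_router_ids
        guard["filtered"] = listed
        guard["reachable_filtered"] = listed

sample = [{"rsa_id": "BB119A5A4D5DA2BBB7B796ECC50E3C0F1D4FD910"},
          {"rsa_id": "7369A0E14C59B11D78357EC5AFE573A259264BBD"}]
update_filtered_flags(sample, consensus_router_ids=set())  # nothing loaded yet
# every guard now has filtered=0, matching the log lines above
```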

    Then we get to:

    May 25 20:58:06.156 [info] sample_reachable_filtered_entry_guards(): Trying to sample a reachable guard: We know of 0 in the USABLE_FILTERED set.
    May 25 20:58:06.156 [info] sample_reachable_filtered_entry_guards():   (That isn't enough. Trying to expand the sample.)
    May 25 20:58:06.156 [info] entry_guards_expand_sample(): Expanding the sample guard set. We have 3 guards in the sample, and 0 eligible guards to extend it with.
    May 25 20:58:06.156 [info] entry_guards_expand_sample(): Not expanding the guard sample any further; just ran out of eligible guards
    May 25 20:58:06.156 [info] sample_reachable_filtered_entry_guards():   (After filters [b], we have 0 guards to consider.)

    Ok, great, we refuse to expand the guard list now. That's good because we don't have any consensus loaded yet.

Then I finish loading the state file, and load other stuff from my DataDirectory, like the cached-microdesc-consensus file:

    May 25 20:58:06.644 [info] A consensus needs 5 good signatures from recognized authorities for us to accept it. This one has 8 (dannenberg tor26 longclaw maatuska moria1 dizum gabelmoo Faravahar).
    May 25 20:58:06.797 [info] microdesc_cache_reload(): Reloaded microdescriptor cache. Found 7298 descriptors.
    May 25 20:58:06.812 [info] update_consensus_networkstatus_fetch_time_impl(): No live microdesc consensus; we should fetch one immediately.
    May 25 20:58:06.812 [info] cell_ewma_set_scale_factor(): Enabled cell_ewma algorithm because of value in CircuitPriorityHalflifeMsec in consensus; scale factor is 0.793701 per 10 seconds
    May 25 20:58:07.046 [info] networkstatus_set_current_consensus(): Loaded an expired consensus. Discarding.
    May 25 20:58:07.046 [info] router_reload_consensus_networkstatus(): Couldn't load unverified consensus microdesc networkstatus from cache

    Ok, fine, it's expired so we discarded it, no problem.

    May 25 20:58:07.182 [info] update_consensus_bootstrap_attempt_downloads(): Launching microdesc bootstrap mirror networkstatus consensus download.
    May 25 20:58:07.182 [notice] Starting with guard context "default"
    May 25 20:58:07.182 [info] sample_reachable_filtered_entry_guards(): Trying to sample a reachable guard: We know of 0 in the USABLE_FILTERED set.
    May 25 20:58:07.182 [info] sample_reachable_filtered_entry_guards():   (That isn't enough. Trying to expand the sample.)

    Sounds good, I agree it's time to go get a fresh consensus.

    May 25 20:58:07.182 [info] entry_guards_expand_sample(): Expanding the sample guard set. We have 3 guards in the sample, and 2220 eligible guards to extend it with.

    Wait, what? Where did these 2220 eligible guards come from?

    Did they get added to a list of potential eligible guards, when we were examining the old microdesc consensus -- the one that we then discarded because it was too old?

    That theory sounds plausible:

    $ grep "^s " cached-microdesc-consensus |grep Guard|grep V2Dir|wc -l
    2224

    That's bug number one.
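If that theory is right, a fix would be to check that the consensus is actually live before counting its routers as eligible for sample expansion. A sketch of the missing check (hypothetical data structure, not Tor's real types):

```python
def eligible_guard_count(consensus, now):
    """Hypothetical sketch: an expired (or absent) consensus should
    contribute zero eligible guards to sample expansion, even if it
    was parsed from disk before being discarded."""
    if consensus is None:
        return 0
    if not (consensus["valid_after"] <= now <= consensus["valid_until"]):
        return 0  # not live: same as having no consensus at all
    return sum(1 for router in consensus["routers"]
               if "Guard" in router["flags"] and "V2Dir" in router["flags"])

# A day-old consensus, examined well past its valid_until time:
expired = {"valid_after": 1000, "valid_until": 2000,
           "routers": [{"flags": ["Guard", "V2Dir", "Running"]}] * 2224}
count = eligible_guard_count(expired, now=90000)  # should be 0, not 2224
```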

So we proceed to add an eligible guard to the sample, since I have 0 usable guards in my set:

    May 25 20:58:07.182 [debug] compute_weighted_bandwidths(): Generated weighted bandwidths for rule weight as guard based on weights Wg=0.594700 Wm=0.594700 We=0.000000 Wd=0.000000 with total bw 0
    May 25 20:58:07.182 [info] entry_guard_add_to_sample(): Adding $9285B22F7953D7874604EEE2B470609AD81C74E9~0x3d005 at 62.138.7.171 as to the entry guard sample set.
    May 25 20:58:07.182 [debug] entry_guard_set_filtered_flags(): Updated sampled guard 0x3d005 ($9285B22F7953D7874604EEE2B470609AD81C74E9): filtered=1; reachable_filtered=1.

    In fact, we add 20 of them like this. And these ones are filtered=1, reachable_filtered=1, because we just picked them out of, I guess, a consensus that said they were Running.

    After we add the 20, we hit:

    May 25 20:58:07.185 [info] sample_reachable_filtered_entry_guards(): Trying to sample a reachable guard: We know of 20 in the USABLE_FILTERED set.
    May 25 20:58:07.185 [info] sample_reachable_filtered_entry_guards():   (After filters [b], we have 20 guards to consider.)
    May 25 20:58:07.185 [info] sample_reachable_filtered_entry_guards():   (Selected 0x3d005 ($9285B22F7953D7874604EEE2B470609AD81C74E9).)
    May 25 20:58:07.185 [info] sample_reachable_filtered_entry_guards(): Trying to sample a reachable guard: We know of 20 in the USABLE_FILTERED set.
    May 25 20:58:07.185 [info] sample_reachable_filtered_entry_guards():   (After filters [b], we have 19 guards to consider.)
    May 25 20:58:07.185 [info] sample_reachable_filtered_entry_guards():   (Selected colosimo ($51939625169E2C7E0DC83D38BAE628BDE67E9A22).)
    May 25 20:58:07.185 [info] sample_reachable_filtered_entry_guards(): Trying to sample a reachable guard: We know of 20 in the USABLE_FILTERED set.
    May 25 20:58:07.185 [info] sample_reachable_filtered_entry_guards():   (After filters [b], we have 18 guards to consider.)
    May 25 20:58:07.185 [info] sample_reachable_filtered_entry_guards():   (Selected Tungurahua ($7CB15FA5CD985B9BBD6AC386C62C206A1F391C23).)

    Ok, we've picked three. Then we see:

    May 25 20:58:07.185 [info] entry_guards_update_primary(): Primary entry guards have changed. New primary guard list is: 
    May 25 20:58:07.185 [info] entry_guards_update_primary():   1/3: 0x3d005 ($9285B22F7953D7874604EEE2B470609AD81C74E9)
    May 25 20:58:07.185 [info] entry_guards_update_primary():   2/3: colosimo ($51939625169E2C7E0DC83D38BAE628BDE67E9A22)
    May 25 20:58:07.185 [info] entry_guards_update_primary():   3/3: Tungurahua ($7CB15FA5CD985B9BBD6AC386C62C206A1F391C23)

I had three perfectly fine primary guards earlier, but we had marked them all down, so I guess we get three new ones.
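My understanding of how the primary list gets rebuilt, sketched below (a reconstruction from the prop271 design, not the real entry_guards_update_primary()): confirmed guards come first in confirmed_idx order, then the rest of the filtered sample in order, and anything with filtered=0 is skipped entirely, which is why my three confirmed guards lost their slots:

```python
def update_primary(sample, n_primary=3):
    """Hypothetical sketch: primaries are the first n_primary usable
    guards, preferring confirmed ones (ordered by confirmed_idx) and
    then falling back to the rest of the filtered sample in order.
    Guards with filtered=0 never qualify."""
    usable = [g for g in sample if g["filtered"]]
    confirmed = sorted((g for g in usable if g["confirmed_idx"] is not None),
                       key=lambda g: g["confirmed_idx"])
    rest = [g for g in usable if g["confirmed_idx"] is None]
    return (confirmed + rest)[:n_primary]

sample = [  # my three confirmed guards, currently filtered out...
    {"nickname": "yono1", "confirmed_idx": 0, "filtered": False},
    {"nickname": "onslaught", "confirmed_idx": 1, "filtered": False},
    {"nickname": "IRejectTorFoundatnN", "confirmed_idx": 2, "filtered": False},
    # ...plus three freshly sampled, unconfirmed guards:
    {"nickname": "0x3d005", "confirmed_idx": None, "filtered": True},
    {"nickname": "colosimo", "confirmed_idx": None, "filtered": True},
    {"nickname": "Tungurahua", "confirmed_idx": None, "filtered": True},
]
primaries = [g["nickname"] for g in update_primary(sample)]
```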

    But then we proceed to ignore those three new ones too!

    May 25 20:58:07.185 [info] sample_reachable_filtered_entry_guards():   (After filters [7], we have 17 guards to consider.)
    May 25 20:58:07.185 [info] sample_reachable_filtered_entry_guards():   (Selected Yuman ($3265FB6279E142BFBB8BE08AA0582011D23ED2EF).)
    May 25 20:58:07.185 [info] select_entry_guard_for_circuit(): No primary or confirmed guards available. Selected random guard Yuman ($3265FB6279E142BFBB8BE08AA0582011D23ED2EF) for circuit. Will try other guards before using this circuit.

    That's bug number two.
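For contrast, here is the selection order as I understand it is supposed to work (a sketch of the design, not select_entry_guard_for_circuit() itself): primaries first, then confirmed guards, and a random sampled guard only as a last resort. The log above shows us reaching the last resort moments after rebuilding the primary list:

```python
def select_entry_guard(primary, confirmed, sample):
    """Hypothetical sketch of the intended preference order. Returns
    the chosen guard plus a tag saying which tier it came from."""
    for guard in primary:
        if guard["reachable_filtered"]:
            return guard, "primary"
    for guard in confirmed:
        if guard["reachable_filtered"]:
            return guard, "confirmed"
    usable = [g for g in sample if g["reachable_filtered"]]
    # (first usable kept for determinism; the real code picks randomly)
    return (usable[0], "random") if usable else (None, "none")

primary = [{"nickname": "0x3d005", "reachable_filtered": True}]
guard, tier = select_entry_guard(primary, confirmed=[], sample=primary)
# with a nonempty primary list, we should never fall through to "random"
```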

    It looks like it launches a consensus download attempt to Yuman smoothly:

    May 25 20:58:07.185 [info] directory_send_command(): Downloading consensus from 37.187.177.2:9001 using /tor/status-vote/current/consensus-microdesc/0232AF+14C131+23D15D+49015F+D586D1+E8A9C4+ED03BB+EFCBE7.z

    And it launches a parallel fetch to an authority too, just like it's supposed to:

    May 25 20:58:07.185 [info] directory_send_command(): Downloading consensus from 154.35.175.225:443 using /tor/status-vote/current/consensus-microdesc/0232AF+14C131+23D15D+49015F+D586D1+E8A9C4+ED03BB+EFCBE7.z

    Faravahar is way more responsive than whoever Yuman is, so by the time I get my one-hop circuit open to Yuman, I'm already partway through fetching the consensus from Faravahar:

    May 25 20:58:07.824 [info] internal circ (length 1, last hop Yuman): $3265FB6279E142BFBB8BE08AA0582011D23ED2EF(open)
    May 25 20:58:07.824 [debug] command_process_created_cell(): Moving to next skin.
    May 25 20:58:07.824 [debug] circuit_send_next_onion_skin(): starting to send subsequent skin.
    May 25 20:58:07.824 [info] circuit_send_next_onion_skin(): circuit built!
    May 25 20:58:07.824 [info] connection_ap_handshake_attach_circuit(): Closing extra consensus fetch (to 37.187.177.2) since one is already downloading.

    But then

    May 25 20:58:07.825 [debug] conn_close_if_marked(): Cleaning up connection (fd -1).
    May 25 20:58:07.825 [info] entry_guards_note_guard_failure(): Recorded failure for guard Yuman ($3265FB6279E142BFBB8BE08AA0582011D23ED2EF)

That's not very nice of us, since the guard didn't actually fail; we just opted not to use that circuit because another circuit was faster.

    That's bug number three.
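A sketch of the obvious fix (hypothetical names; the real accounting lives in entry_guards_note_guard_failure() and its callers): distinguish a circuit that failed to build from one we closed ourselves because a parallel fetch won the race, and only penalize the guard in the first case:

```python
def note_circuit_outcome(guard, outcome):
    """Hypothetical sketch: only count a guard failure when the circuit
    genuinely failed to build. Closing a successfully built circuit
    because a redundant directory fetch already succeeded elsewhere is
    not the guard's fault."""
    if outcome == "build_failed":
        guard["failures"] = guard.get("failures", 0) + 1
    elif outcome == "closed_as_redundant":
        pass  # the circuit built fine; record nothing against the guard

yuman = {"nickname": "Yuman"}
note_circuit_outcome(yuman, "closed_as_redundant")  # Tor today records a failure here
failed_guard = {"nickname": "example"}
note_circuit_outcome(failed_guard, "build_failed")
```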

    Ok, after a bunch of lines, I get my microdesc consensus from Faravahar:

    May 25 20:58:16.296 [debug] Received response from directory server '154.35.175.225:443': 200 "OK" (purpose: 14, response size: 629424, compression: 2)

    And we start acting on it:

    May 25 20:58:16.764 [info] launch_descriptor_downloads(): Launching 5 requests for 2262 microdescs, 500 at a time
    May 25 20:58:16.766 [info] select_entry_guard_for_circuit(): Selected primary guard Tungurahua ($7CB15FA5CD985B9BBD6AC386C62C206A1F391C23) for circuit.

    and we happily fetch the missing microdescriptors using our 3 (new) primary guards.

    There is this interesting tidbit:

    May 25 20:58:18.032 [debug] fetch_from_buf_http(): headerlen 197, bodylen 127544.
    May 25 20:58:18.032 [debug] Received response from directory server '109.236.90.209:443': 200 "OK" (purpose: 19, response size: 127741, compression: 2)
    May 25 20:58:18.032 [info] make_guard_confirmed(): Marking colosimo ($51939625169E2C7E0DC83D38BAE628BDE67E9A22) as a confirmed guard (index 3)
    May 25 20:58:18.032 [info] entry_guards_update_primary(): Primary entry guards have changed. New primary guard list is: 
    May 25 20:58:18.032 [info] entry_guards_update_primary():   1/3: colosimo ($51939625169E2C7E0DC83D38BAE628BDE67E9A22) (confirmed)
    May 25 20:58:18.032 [info] entry_guards_update_primary():   2/3: 0x3d005 ($9285B22F7953D7874604EEE2B470609AD81C74E9)
    May 25 20:58:18.032 [info] entry_guards_update_primary():   3/3: Tungurahua ($7CB15FA5CD985B9BBD6AC386C62C206A1F391C23)
    May 25 20:58:18.032 [info] entry_guards_note_guard_success(): Recorded success for primary confirmed guard colosimo ($51939625169E2C7E0DC83D38BAE628BDE67E9A22)

We marked our guard confirmed because we used it to fetch directory info? I believe that's what we programmed it to do, but maybe our definition of 'confirmed' should be that we attached a 'real' (non-dir-fetch) stream to a 'real' (3-hop-or-more) circuit.

    (We can call that bug number four if we like, but it's really more like feature request number one.)
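The stricter rule suggested above is easy to state (a sketch of the proposal, not existing Tor behavior):

```python
def should_confirm_guard(circuit_hops, is_dir_fetch):
    """Hypothetical sketch of feature request number one: only mark a
    guard confirmed when a real (non-dir-fetch) stream is attached to
    a real (three-or-more-hop) circuit."""
    return circuit_hops >= 3 and not is_dir_fetch

# Under this rule, the directory fetch through colosimo above would
# presumably not have confirmed it:
dir_fetch_confirms = should_confirm_guard(circuit_hops=1, is_dir_fetch=True)
real_stream_confirms = should_confirm_guard(circuit_hops=3, is_dir_fetch=False)
```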

    Eventually we get enough microdescs to convince us we should start making circuits:

    May 25 20:58:18.453 [debug] compute_frac_paths_available(): f_guard: 0.86, f_mid: 0.86, f_exit: 0.88
    May 25 20:58:18.453 [notice] Bootstrapped 80%: Connecting to the Tor network
    May 25 20:58:18.454 [debug] update_guard_selection_choice(): Staying with guard context "default" (no change)
    May 25 20:58:18.454 [info] sampled_guards_update_from_consensus(): Updating sampled guard status based on received consensus.
    May 25 20:58:18.454 [debug] sampled_guards_update_from_consensus(): Sampled guard onslaught ($BB119A5A4D5DA2BBB7B796ECC50E3C0F1D4FD910) is still listed.
    May 25 20:58:18.454 [debug] sampled_guards_update_from_consensus(): Sampled guard yono1 ($7369A0E14C59B11D78357EC5AFE573A259264BBD) is still listed.

and we see those sampled_guards_update_from_consensus() lines for each of our 23 guards, followed by 23 of these lines, one for each guard:

    May 25 20:58:18.454 [debug] entry_guard_set_filtered_flags(): Updated sampled guard onslaught ($BB119A5A4D5DA2BBB7B796ECC50E3C0F1D4FD910): filtered=1; reachable_filtered=1.
    May 25 20:58:18.454 [debug] entry_guard_set_filtered_flags(): Updated sampled guard yono1 ($7369A0E14C59B11D78357EC5AFE573A259264BBD): filtered=1; reachable_filtered=1.

    And then, oh hey, we get our original primary guards back!

    May 25 20:58:18.455 [info] entry_guards_update_primary(): Primary entry guards have changed. New primary guard list is: 
    May 25 20:58:18.455 [info] entry_guards_update_primary():   1/3: yono1 ($7369A0E14C59B11D78357EC5AFE573A259264BBD) (confirmed)
    May 25 20:58:18.455 [info] entry_guards_update_primary():   2/3: onslaught ($BB119A5A4D5DA2BBB7B796ECC50E3C0F1D4FD910) (confirmed)
    May 25 20:58:18.455 [info] entry_guards_update_primary():   3/3: IRejectTorFoundatnN ($62DA0256BBC28992D41CBAFB549FFD7C9B846A99) (confirmed)

    And then we build a 3-hop circuit with yono1, as expected:

May 25 20:58:19.738 [info] exit circ (length 3, last hop digineo3): $7369A0E14C59B11D78357EC5AFE573A259264BBD(open) $0269165034C5586D6551CCCFA8940D2305948754(open) $B21211A1A2C68F2D9E57E3C7AEAF4F04AFC10E7F(open)
    May 25 20:58:19.738 [info] entry_guards_note_guard_success(): Recorded success for primary confirmed guard yono1 ($7369A0E14C59B11D78357EC5AFE573A259264BBD)
    May 25 20:58:19.738 [notice] Tor has successfully opened a circuit. Looks like client functionality is working.
