Trac issues
https://gitlab.torproject.org/legacy/trac/-/issues

https://gitlab.torproject.org/legacy/trac/-/issues/2510
bridge users who configure the non-canonical address of a bridge switch to its canonical address
Reported by Roger Dingledine (updated 2023-07-16)

If I run a bridge with
```
Address 128.31.0.34
ORListenAddress 128.31.0.39
```
and then somebody runs their Tor client with
```
bridge 128.31.0.39
```
then it will connect, fetch the bridge descriptor, try to build a circuit by using 128.31.0.34, fail, and then sit there circuitless and bridgeless.
This bug is important because it means that if you run a multihomed bridge, all the clients will immediately switch to using its single canonical address, ignoring all the other addresses you configured. If that canonical address gets blocked, the other addresses don't matter even if they'd still work.

Milestone: Tor: 0.2.2.x-final

https://gitlab.torproject.org/legacy/trac/-/issues/19969
tor client does not immediately open new circuits after standby
Reported by weasel (Peter Palfrader) (updated 2022-10-17)

Viktor writes via https://bugs.debian.org/835119:
I use tor only as a client to connect icedove to the tor network with
the extension Torbirdy (on port 9050). With the tor version 0.2.8.6 I
can't immediately connect to any mail server or news feed after the pc
woke up from standby ("long" time in standby) and I started icedove. I
have to wait for several minutes in order to connect successfully, but
the timespan seems to be random. This does not occur after a (re)boot.
The first version I remember to have this issue is 0.2.8.6-2, I did an
upgrade from 0.2.7.6-1 to 0.2.8.6-2, so I skipped the alpha and rc
versions and the first upload to unstable. I am very sure that the issue
didn't occur in version 0.2.7.6-1 which I used for several months. I can
exclude network connectivity problems because e.g. I can immediately
start the Tor Browser after standby.
Today I purged tor, installed version 0.2.7.6-1, copied the old "state"
file to /var/lib/tor, and set the pc in standby mode for a couple of
minutes. After waking up from standby I immediately tried to connect to
a mail server which worked. Then I upgraded step by step to every
version of tor 0.2.8 which I could find on snapshot.debian.org and tried
to connect to a mail server immediately after waking up from standby.
Unfortunately I could not reproduce the bug then. Finally with version
0.2.8.6-3 the bug occurred again, but only after a "long" standby time
(almost 90 minutes).
Attached are two log files from the weekend and the complete log from
today after the installation of version 0.2.7.6-1.
As you can see, the bug is not easily reproducible, and the logs don't
show any particular reason for why tor does not open new circuits
immediately. Please tell me what I can do to give you more information
about the bug.

Milestone: Tor: 0.2.8.x-final

https://gitlab.torproject.org/legacy/trac/-/issues/26578
Do clients request new consensus documents more often than we expect?
Reported by Roger Dingledine (updated 2022-10-11)

In our user count estimates, we used the reasoning that clients fetch a new consensus document every 2 to 4 hours, or on average 3 hours, so that represents 8 fetches per day on average.
But in reality, it seems that clients fetch consensus documents way more frequently than that: looking at just my local Tor client, I see
```
Jun 28 21:11:52.190 [info] update_consensus_networkstatus_downloads(): Launching microdesc standard networkstatus consensus download.
Jun 28 22:43:52.355 [info] update_consensus_networkstatus_downloads(): Launching microdesc standard networkstatus consensus download.
Jun 28 23:59:52.417 [info] update_consensus_networkstatus_downloads(): Launching microdesc standard networkstatus consensus download.
Jun 29 01:42:52.501 [info] update_consensus_networkstatus_downloads(): Launching microdesc standard networkstatus consensus download.
Jun 29 03:33:52.601 [info] update_consensus_networkstatus_downloads(): Launching microdesc standard networkstatus consensus download.
Jun 29 05:09:52.699 [info] update_consensus_networkstatus_downloads(): Launching microdesc standard networkstatus consensus download.
Jun 29 06:04:52.754 [info] update_consensus_networkstatus_downloads(): Launching microdesc standard networkstatus consensus download.
Jun 29 07:54:52.874 [info] update_consensus_networkstatus_downloads(): Launching microdesc standard networkstatus consensus download.
Jun 29 08:56:52.946 [info] update_consensus_networkstatus_downloads(): Launching microdesc standard networkstatus consensus download.
Jun 29 10:32:53.036 [info] update_consensus_networkstatus_downloads(): Launching microdesc standard networkstatus consensus download.
Jun 29 12:36:53.121 [info] update_consensus_networkstatus_downloads(): Launching microdesc standard networkstatus consensus download.
Jun 29 14:06:53.186 [info] update_consensus_networkstatus_downloads(): Launching microdesc standard networkstatus consensus download.
Jun 29 14:53:53.215 [info] update_consensus_networkstatus_downloads(): Launching microdesc standard networkstatus consensus download.
Jun 29 15:52:53.256 [info] update_consensus_networkstatus_downloads(): Launching microdesc standard networkstatus consensus download.
Jun 29 17:15:53.319 [info] update_consensus_networkstatus_downloads(): Launching microdesc standard networkstatus consensus download.
Jun 29 18:20:53.367 [info] update_consensus_networkstatus_downloads(): Launching microdesc standard networkstatus consensus download.
```
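The gaps between the launches above can be checked with a short script. This is a sketch, not Tor code; the year is made up, added only so the timestamps parse across the day rollover:

```python
from datetime import datetime

# Timestamps copied from the log excerpt above; the year is arbitrary,
# chosen only so strptime can order Jun 28 before Jun 29.
stamps = [
    "Jun 28 21:11:52", "Jun 28 22:43:52", "Jun 28 23:59:52",
    "Jun 29 01:42:52", "Jun 29 03:33:52", "Jun 29 05:09:52",
    "Jun 29 06:04:52", "Jun 29 07:54:52", "Jun 29 08:56:52",
    "Jun 29 10:32:53", "Jun 29 12:36:53", "Jun 29 14:06:53",
    "Jun 29 14:53:53", "Jun 29 15:52:53", "Jun 29 17:15:53",
    "Jun 29 18:20:53",
]
times = [datetime.strptime("2018 " + s, "%Y %b %d %H:%M:%S") for s in stamps]
gaps = [(b - a).total_seconds() / 3600 for a, b in zip(times, times[1:])]
print(f"{len(gaps)} intervals, mean {sum(gaps) / len(gaps):.2f} hours")
# 15 intervals, mean 1.41 hours
```

That is roughly half the 3-hour average interval the user-count heuristic assumes.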
So first, this means maybe our user counting algorithms are off, since they involve heuristics like "divide by 10 where 10 is an estimate of the average daily consensus fetches from a client."
And second, does it mean that we are putting more load on the network than we expected, or need? How often do clients need a new consensus document really?

Milestone: Tor: unspecified

https://gitlab.torproject.org/legacy/trac/-/issues/22453
Relays should regularly do a larger bandwidth self-test
Reported by Roger Dingledine (updated 2022-08-25)

Inspired by #8247 ("In sum. a vestigial tiny bw self-test seems silly to keep around"), I wonder if we're at the point where we can just take out all the bandwidth self-test infrastructure.
In favor of ripping it out: there's some complexity at relay startup where we try to delay publishing our descriptor until we've done the self-test, since we know we'll have a newer bandwidth number to include soon. We've had bugs in this delay step.
In favor of ripping it out: in the current design we try to build 4 separate circuits, without using our guards in order to have actually independent paths, for pushing our 500KB. Relays that aren't reachable end up with hundreds or thousands of connections open, because they keep making new circuits and each one probably is to a new relay. Not a big deal but kind of unfortunate.
In favor of ripping it out: 50KB, which is the most that the current bandwidth test can tell you, is super tiny compared to current descriptor bandwidths and current consensus weights. In fact, as prophesied in #8247, the threshold for the Fast flag is now above 50KB, so publishing 0 vs 50 is essentially just moving you around within the "don't use, they're too slow" bucket.
In favor of keeping it: maybe the bandwidth authorities have some sort of psychotic behavior in the face of relays that have a 0 in their descriptor? Like, they multiply the 0 by a factor for how much better than the other 0's they are? I have no idea. In case they do, I propose that we proceed with ripping out the self-test, and simply replace it with the number "20KB" to guard against psychotic bwauth behavior. (I picked that number because the directory authorities already use the number 20 when assigning a weight to a relay that (A) is unmeasured and (B) self-declares at least 20KB in its descriptor.)
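The replacement proposed here is tiny. A sketch with hypothetical names (the constant mirrors the dirauths' 20KB unmeasured-relay threshold mentioned above):

```python
# Hypothetical sketch of the proposed floor: instead of self-testing,
# never publish a descriptor bandwidth below 20KB/s, so bwauths never
# see a literal 0 for an unmeasured relay.
MIN_ADVERTISED_BW = 20 * 1024  # bytes/sec

def advertised_bandwidth(observed_bw: int) -> int:
    """Bandwidth to publish: the observed rate, floored at 20KB/s."""
    return max(observed_bw, MIN_ADVERTISED_BW)

print(advertised_bandwidth(0))          # 20480: idle relay, floor applies
print(advertised_bandwidth(5_000_000))  # 5000000: real client load wins
```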
Note: if we do keep it in, here's a better design:
https://trac.torproject.org/projects/tor/ticket/22453#comment:35
But what about bridges, you might ask? Public relays might have the bwauths to measure them remotely, but bridges don't have that. I think nothing uses the bandwidths in bridge descriptors. Are there any plans for that to change in the future? Even if there are, I think raising the floor from 0 to 20, and retaining the behavior where we publish a bigger number if we actually see a bigger number due to client load, should make us compatible with whatever these plans might be.

Milestone: Tor: unspecified. Assigned to juga.

https://gitlab.torproject.org/legacy/trac/-/issues/31292
please sign Tor releases with an OpenPGP tool that includes Issuer Fingerprint subpackets
Reported by dkg (updated 2022-07-09)

The OpenPGP signatures on distributed tor software currently have only an unhashed "issuer" subpacket, which contains only the 64-bit keyid of the public key used to create the signature.
Modern versions of GnuPG (version 2.1.16 or later) produce an "issuer fingerprint" subpacket in each signature by default, which includes the full fingerprint of the issuing public key.
The "issuer fingerprint" subpacket provides a much stronger linkage between the signature and the OpenPGP key used to make it.
This is not a core security concern -- that is, lack of an "issuer fingerprint" subpacket doesn't make it possible to forge signatures or do anything comparably serious -- but the story we tell about verifying signatures is cleaner if the full fingerprint is present in each signature.
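For illustration, here is a sketch of how the two subpacket types differ on the wire, using the RFC 4880 subpacket layout (one-octet lengths assumed; the fingerprint is just sample data):

```python
def parse_subpackets(area: bytes) -> dict:
    """Parse an OpenPGP signature subpacket area (RFC 4880 sec. 5.2.3.1),
    assuming every subpacket length fits in one octet (< 192)."""
    subpackets = {}
    i = 0
    while i < len(area):
        length = area[i]             # covers the type octet plus the body
        ptype = area[i + 1] & 0x7F   # mask off the "critical" bit
        subpackets[ptype] = area[i + 2 : i + 1 + length]
        i += 1 + length
    return subpackets

# Sample data: a type-16 "issuer" (64-bit keyid) followed by a type-33
# "issuer fingerprint" (key version octet + full 20-byte v4 fingerprint).
fpr = bytes.fromhex("ef6e286dda85ea2a4ba7de684e2c6e8793298290")
keyid = fpr[-8:]  # the keyid is only the low 64 bits of the fingerprint
area = bytes([9, 16]) + keyid + bytes([22, 33, 4]) + fpr

subs = parse_subpackets(area)
print(subs[33][1:] == fpr)  # True: the signature pins the full fingerprint
```

With only the type-16 subpacket, a verifier can pin just the 64-bit keyid, which is why the fingerprint variant tells a cleaner story.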
If it is possible to upgrade the version of GnuPG (or any other modern OpenPGP implementation) that signs Tor releases to one that generates these subpackets, that would be a good thing.

Milestone: Tor: 0.4.2.x-final. Assigned to Nick Mathewson.

https://gitlab.torproject.org/legacy/trac/-/issues/30172
Always send PADDING_NEGOTIATE if middle supports it
Reported by Mike Perry (updated 2022-06-24)

We should define some kind of NULL machine for whatever hop is most common in our padding machine list, and negotiate that machine if no other machines apply to the current circuit. This machine shouldn't take up a slot or count as negotiated, though, so we can still negotiate other machines at later points if the circuit purpose changes, etc.
Similarly, this NULL machine should (maybe) set should_negotiate_end and send a PADDING_NEGOTIATE at circuit close.
We need to do this so that there isn't an obvious PADDING_NEGOTIATE cell request/response pair whose timing makes it obvious that it went to the middle node (since the PADDING_NEGOTIATED response will come back faster than all other responses on the circuit). See also #30092 for similar motivating reasoning.

Milestone: Tor: unspecified

https://gitlab.torproject.org/legacy/trac/-/issues/25705
Refactor circuit_build_failed to separate build vs path failures
Reported by Mike Perry (updated 2022-06-24)

We should not give up on the network, our TLS conn, or our guard in the event of path failures (which can happen if we're low on mds, and/or if the user set a bunch of path-restricting torrc options).
I think this might want to be a backport. It should also handle #25347.

Milestone: Tor: 0.3.3.x-final

https://gitlab.torproject.org/legacy/trac/-/issues/24487
Reverse path selection (choose outer hops first)
Reported by Mike Perry (updated 2022-06-24)

Because Tor's path selection chooses inner nodes first, and then excludes those nodes from being used in outer hops, over many circuits, outer hops get information about the choice of inner hops/guards.
We need to reverse the selection of nodes in the loop circuit_establish_circuit() in order to fix this.
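A toy sketch of the reversed order (hypothetical helper, not Tor's actual API): pick the exit first, then the middle, then the guard, excluding already-chosen nodes at each inner step, so exclusion information no longer flows outward:

```python
import random

def build_path_reversed(relays):
    """Choose hops outermost-first; each inner hop excludes the outer ones."""
    exit_node = random.choice(relays)
    middle = random.choice([r for r in relays if r != exit_node])
    guard = random.choice([r for r in relays if r not in (exit_node, middle)])
    return [guard, middle, exit_node]

relays = [f"relay{i}" for i in range(10)]
path = build_path_reversed(relays)
print(len(set(path)) == 3)  # True: all three hops are distinct
```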
This isn't as bad as it might otherwise be, because the last hop is already chosen first in that function, so it is a little tricky to take advantage of this info leak.

Milestone: Tor: unspecified

https://gitlab.torproject.org/legacy/trac/-/issues/25753
Check/enforce path restrictions for each path position
Reported by Mike Perry (updated 2022-06-24)

For the vanguard torrc options, we may want to check that each layer has at least one node from a different /16 and different node family than others in that layer, to ensure that a path can always be built using the vanguard set.
We may also want to do the same thing for Tor's Primary Guard set from Prop271, to ensure that an adversary can't force the user to pick guards randomly from Sampled Guards.
Doing both of these things at once should allow us to drop #24487.
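A sketch of the per-layer check described above, under an assumed data layout where each candidate node is an (ip, family) pair:

```python
from ipaddress import ip_address

def same_slash16(a: str, b: str) -> bool:
    """True if two IPv4 addresses share the same /16 prefix."""
    return ip_address(a).packed[:2] == ip_address(b).packed[:2]

def layer_is_buildable(layer, committed):
    """True if some node in `layer` differs in both /16 and node family
    from every node already committed to the path."""
    return any(
        all(not same_slash16(ip, c_ip) and fam != c_fam
            for c_ip, c_fam in committed)
        for ip, fam in layer
    )

layer2 = [("1.2.3.4", "famA"), ("5.6.7.8", "famB")]
committed = [("1.2.9.9", "famA")]  # shares /16 and family with the first
print(layer_is_buildable(layer2, committed))  # True: 5.6.7.8 still works
```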
See also: https://gitweb.torproject.org/torspec.git/tree/proposals/291-two-guard-nodes.txt#n33

Milestone: Tor: unspecified

https://gitlab.torproject.org/legacy/trac/-/issues/15951
FairPretender: Pretend as any hidden service in passive mode
Reported by twim (updated 2022-06-21)

This flaw in the Tor protocol makes it possible to re-sign any hidden service descriptor with one's own private key. An adversary who does so can upload the re-signed descriptor to the HS Directory and act as a frontend to the hidden service whose introduction point data was re-signed. They can spread the .onion address of their frontend hidden service as the real one over the Internet (phishing) and then perform a DoS attack on the chosen hidden service, or redirect traffic to replicas they control and mount a man-in-the-middle attack.
This is just a brief explanation. For more info see attached paper.
I have an idea for how to fix this by introducing a "backward permanent key signature":
https://github.com/mark-in/tor/tree/backward-permkey-signature
https://github.com/mark-in/torspec/tree/backward-permkey-signature
It would be great to hear more ideas from you on how to fix it better.

Assigned to twim.

https://gitlab.torproject.org/legacy/trac/-/issues/572
fallback-consensus file impractical to use
Reported by Roger Dingledine (updated 2022-06-17)

We can put the fallback-consensus in the tarball that results from 'make dist',
but it breaks 'make dist-rpm'.
Right now (0.2.0.12-alpha) it's commented out of src/config/Makefile.am
We need whatever voodoo it takes to let make dist-rpm do its thing too, before
we can reenable it.
[Automatically added by flyspray2trac: Operating System: All]

Milestone: Tor: 0.2.4.x-final

https://gitlab.torproject.org/legacy/trac/-/issues/20657
prop224: Implement service support.
Reported by David Goulet (dgoulet@torproject.org) (updated 2022-06-17)

This ticket is the parent one for anything related to service implementation for proposal 224.
As we break down functionalities and needed features, we'll add more child tickets.

Milestone: Tor: 0.3.2.x-final. Assigned to David Goulet (dgoulet@torproject.org).

https://gitlab.torproject.org/legacy/trac/-/issues/8244
The HSDirs for a hidden service should not be predictable indefinitely into the future
Reported by Roger Dingledine (updated 2022-06-17)

When a hidden service chooses which HSDir relays to publish its hidden service descriptor to, it does so in a deterministic way based on the day and the .onion address. That way clients can do the same calculation to decide which HSDirs to contact when fetching the hidden service descriptor.
But a flaw in this approach is that anybody can predict what six HSDir relays will be responsible for a given hidden service, 22 days from now. There's no reason to have that property, and it makes attacks to temporarily censor a hidden service much more effective since you can e.g. choose the identity keys of your Sybils such that there exists a day in the next 30 days where you'll be running all six of the HSDirs for your target hidden service.
One solution would be for the directory authorities to produce a periodic random string that is unpredictable until they have produced it. Then put that string in the consensus.
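A sketch of how such a consensus random string could enter the HSDir calculation (hypothetical hash layout, for illustration only; proposal 224 later specified a real format):

```python
import hashlib

def hsdir_index(onion_pubkey: bytes, day: int, shared_rand: bytes,
                replica: int) -> bytes:
    """Position of one descriptor replica on the HSDir hash ring."""
    h = hashlib.sha256()
    h.update(onion_pubkey)
    h.update(day.to_bytes(4, "big"))
    h.update(shared_rand)          # the authorities' daily random string
    h.update(replica.to_bytes(1, "big"))
    return h.digest()

srv_today = bytes(32)              # stand-in for today's consensus value
a = hsdir_index(b"service-key", 19000, srv_today, 0)
b = hsdir_index(b"service-key", 19000, bytes([1]) * 32, 0)
print(a != b)  # True: without tomorrow's string, tomorrow's index is unknowable
```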
The first issue is whether a single authority can play tricks where it waits to vote until it sees the votes from the other authorities, and then chooses its random string to produce the desired consensus random string. This issue is actually really serious, since I bet for any six adversarial HSDirs, there exists a random string that puts them in charge of the target hidden service. See all the contortions we went through in http://freehaven.net/anonbib/#casc-rep about generating a consensus random number; I hope we don't need as many contortions here.
The second issue is how we should handle transitions between epochs. One option is to post two random strings (today's and tomorrow's), and then each hidden service uses both of them. Surely there's a more efficient answer here.
I guess issue number three is how the directory authorities should vote on a thing that doesn't have granularity of one vote period. Do they all just vote the random string that they voted at the beginning of the day, for the whole day? If I'm an authority and I missed the first hour of the day, do I get to add my vote on the second hour (I think the answer has to be no)? What if there weren't enough votes to make a consensus in the first hour? If I come up as an authority and can't get a proper recent consensus, but now it's time to vote, what do I vote?
And lastly, how do we transition? I think hidden services would publish to the old ones and the new ones, until clients that don't know about the new way are obsolete. In the mean time that increases the exposure of the hidden service to an adversary who just wants to get one of the n HSDirs for the hidden service for that period. (Is getting some-but-not-all that bad?)
(This ticket is inspired by rpw's upcoming Oakland paper.)

https://gitlab.torproject.org/legacy/trac/-/issues/8710
Sybil selection should prefer measured over advertised bw
Reported by Nick Mathewson (updated 2022-06-17)

When choosing between two nodes on the same IP, we base the choice on bandwidth. But right now, we use dirserv_get_bandwidth_for_router(), which looks at measured bw and falls back to advertised when measured isn't present. Probably it makes more sense, if there are two nodes, one of which has measured bw and one of which doesn't, to prefer the one with measured bw.

Milestone: Tor: 0.2.5.x-final

https://gitlab.torproject.org/legacy/trac/-/issues/8435
Ignore advertised bandwidths for flags once we have enough measured bandwidths
Reported by Andrea Shepard (updated 2022-06-17)

Once a dirauth sees a large enough fraction of nodes with measured bandwidths, it should ignore advertised bandwidths for purposes of assigning flags. That is, nodes without measured bandwidths should never get the Fast, Guard, HSDir flags, etc.
This is a follow-on to #8273.

Milestone: Tor: 0.2.4.x-final

https://gitlab.torproject.org/legacy/trac/-/issues/31115
tor returns first 4 bytes of IPv6 address only when using SOCKS command "F0"
Reported by cypherpunks (updated 2022-06-17)

Context:
Tor has a custom extension to the SOCKS protocol, defined in:
https://gitweb.torproject.org/torspec.git/tree/socks-extensions.txt#n48
that allows resolving hostnames.
exitmap makes use of this SOCKS extension.
When the answer is an IPv6 address (ATYP=04) only the first 4 bytes are contained in the response instead of the entire IPv6 address.
Expected behavior: The entire IPv6 address should be in the response (128 bit instead of 32 bit).
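A sketch of reading the reply address correctly, following the RFC 1928 reply layout that Tor's extension reuses: the address length must be chosen from ATYP rather than hardcoded to 4 bytes.

```python
import socket

def parse_socks5_reply_addr(buf: bytes) -> str:
    """Extract BND.ADDR from a SOCKS5 reply: VER REP RSV ATYP ADDR... PORT."""
    atyp = buf[3]
    if atyp == 0x01:                     # IPv4: 4 address bytes
        return socket.inet_ntop(socket.AF_INET, buf[4:8])
    if atyp == 0x04:                     # IPv6: 16 address bytes, not 4
        return socket.inet_ntop(socket.AF_INET6, buf[4:20])
    raise ValueError(f"unexpected ATYP {atyp:#x}")

reply = (bytes([5, 0, 0, 4])
         + socket.inet_pton(socket.AF_INET6, "2001:db8::1")
         + bytes(2))                     # trailing 2-byte port field
print(parse_socks5_reply_addr(reply))   # 2001:db8::1
```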
https://lists.torproject.org/pipermail/tor-dev/2019-July/013931.html

Milestone: Tor: 0.4.2.x-final

https://gitlab.torproject.org/legacy/trac/-/issues/30092
Add a probability-to-apply field for circuitpadding machines
Reported by Mike Perry (updated 2022-06-16)

In #28634, we realized that we may want to make some fraction of pre-built GENERAL and HS_VANGUARDS circuits look like padded onion service circuits, as a defense in depth against a classifier that can still recognize our specially padded onion service circuits as, well, special, and still interesting.
But we don't want to make all general circuits look this way, just some fraction. So it would be nice if the machine conditions could somehow toss a coin to decide whether to apply the machine to a circuit. Unfortunately, right now the conditions are memoryless, so we have nothing that can say "you already tossed the coin", but we could special-case just this with a flag on the circuit or something.

Milestone: Tor: unspecified

https://gitlab.torproject.org/legacy/trac/-/issues/2681
brainstorm ways to let Tor clients use yesterday's consensus more safely
Reported by Roger Dingledine (updated 2022-03-22)

Right now Tor clients won't use a consensus that's 25 hours old. But if the directory authorities don't agree on a consensus for a day, things can go bad. We need to investigate other tradeoffs in this space than the one we've currently picked.
For instance: if you got your directory consensus info when it was valid, but you haven't been able to get any new consensus, perhaps you should be more forgiving about the timestamp on the consensus you have. That's a slightly different scenario than believing a new consensus that's 48 hours old.
Another option is just to change 24 to 48, which probably doesn't put clients at much greater harm, but gives us a lot more breathing room for mistakes.
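The two options sketch out like this (hypothetical names and thresholds, not Tor's actual logic): a consensus we fetched while it was still live could get a longer grace window than one that was already stale when we got it.

```python
# Hypothetical freshness check for the ideas above.
NORMAL_MAX_AGE_S = 24 * 3600      # today's rule, roughly
EXTENDED_MAX_AGE_S = 48 * 3600    # proposed grace window

def consensus_usable(age_s: float, was_live_when_fetched: bool) -> bool:
    """Be more forgiving about age if we once held this consensus live."""
    limit = EXTENDED_MAX_AGE_S if was_live_when_fetched else NORMAL_MAX_AGE_S
    return age_s <= limit

print(consensus_usable(30 * 3600, True))   # True: grace window applies
print(consensus_usable(30 * 3600, False))  # False: we never saw it live
```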
The implementation side of this will be tricky, because we'll need to make sure that clients can handle descriptors that are 36 hours out of date too. We started implementing that feature several times, but I think we've never finished it.

Milestone: Tor: unspecified

https://gitlab.torproject.org/legacy/trac/-/issues/18517
meek is broken in Tor Browser 6.0a3
Reported by Georg Koppen (updated 2022-03-22)

meek no longer works in Tor Browser 6.0a3. It seems this is caused by an underlying bug in tor. After some amount of testing and bisecting, commit 23b088907fd23da417f5caf2b7b5f664f317ef4a is the first that introduces the new behavior. Trying to start meek with it results in
```
Mar 10 13:50:53.000 [notice] Ignoring directory request, since no bridge nodes are available yet.
Mar 10 13:50:54.000 [notice] Delaying directory fetches: No running bridges
```
and nothing thereafter: the startup is stalled.

Milestone: Tor: 0.2.8.x-final. Assigned to teor.

https://gitlab.torproject.org/legacy/trac/-/issues/247
Tor doesn't seem to work with Network configuration
Reported by Trac (updated 2022-03-22)

Running OS X 10.3.9 with Tor installed; Network settings pointing to Privoxy on 127.0.0.1, port 8118 for HTTP,
HTTPS, and Gopher. Using Safari 1.3, Camino 1.0b1 and Firefox 1.5 with SwitchProxy and the browser's Connection
settings pointing to Privoxy as well.
Connecting to ipid.shat.net/ using Safari or Camino shows an ip address in a different part of the country or
world, as expected. Using Firefox 1.5 with SwitchProxy turned off (Network settings for OS X are still enabled), I get an
ip address for my ISP, even though they're not running any Tor servers. If I use Network Utility to do a Whois lookup,
it also says I'm coming from my ISP; same if I go to www.dnsstuff.com (Java and JavaScript are off). Turning on SwitchProxy
(even while Network settings are enabled) and trying again gets me an IP address in a different part of the country or world,
as it should be.
This makes no sense, and it doesn't inspire confidence, since I get different results depending on which browser I'm using.
I followed the directions for installation and setup, so I can only presume it's a bug of some sort, perhaps with OS X, or Tor.
[Automatically added by flyspray2trac: Operating System: OSX 10.4 Tiger]
[Trac username: dedwards]