Despite updating general.useragent.override to match ESR 60 (done according to comment:16:ticket:25543) the platform part is not spoofed to Windows on my Linux box.
So, we probably should not set general.useragent.override at all anymore and just rely on the settings we get with privacy.resistFingerprinting? Because if we explicitly set it to the Windows UA but then don't get that, this is weird.
This is also bad for anonymity. With Tor Browser 8.0a9 the results on panopticlick.eff.org look as expected, except for "Platform" and "User Agent" which reveal the OS (Linux in my case).
> This is also bad for anonymity. With Tor Browser 8.0a9 the results on panopticlick.eff.org look as expected, except for "Platform" and "User Agent" which reveal the OS (Linux in my case).
The platform/OS on which Tor Browser is running can be detected multiple ways, spoofing the user agent is simply a low-hanging fruit for obscuring it. The usefulness of this is debatable, but we should minimize the differences between platforms when it is possible.
> The platform/OS on which Tor Browser is running can be detected multiple ways, spoofing the user agent is simply a low-hanging fruit for obscuring it. The usefulness of this is debatable, but we should minimize the differences between platforms when it is possible.
I understand. Ideally the user agent should be the same for all platforms, but I see that the platform can be identified at least via the fonts at the moment.
> The platform/OS on which Tor Browser is running can be detected multiple ways, spoofing the user agent is simply a low-hanging fruit for obscuring it. The usefulness of this is debatable, but we should minimize the differences between platforms when it is possible.
> I understand. Ideally the user agent should be the same for all platforms, but I see that the platform can be identified at least via the fonts at the moment.
Not everyone does OS detection through installed fonts, though everyone collects user agents. Also, with JS disabled, OS detection is MUCH harder. Giving away free entropy like that is intolerable.
Expecting
Mozilla/5.0 (Windows NT 6.1; rv:60.0) Gecko/20100101 Firefox/60.0
If Firefox with RFP or Tor Browser used this format it would not match Firefox's (non-RFP) format and would be easily identifiable.
Don't mix FF w/RFP with TBB! RTFM of TBB ;)
Actually, Mozilla/5.0 (Unknown; rv:60.0) Gecko/20100101 Firefox/60.0 is everything TBB should expose until we can delete it altogether.
By the way let's smash this type of thinking once and for all, "X can be found using Y anyway in Z circumstances, so let's expose X directly". Ok, Tor Browser can be identified using exit IP, fingerprint and other stuff, so let's expose that information directly, how about Mozilla/5.0 (Windows NT 6.1; rv:8.0) Gecko/20100101 TorBrowser/8.0 as UA?
This is unacceptable.
I can't stress this enough: as I said in the blog comments, the original Mozilla Bugzilla report makes no sense, since with a proxy (such as Tor) network-level OS fingerprinting is moot, so why fall for it? And it's such a letdown to see great privacy people on the Mozilla side (whom I won't mention out of respect) fall for such a cheap trap. (PS: As mentioned earlier, the argument "X can be found using Y anyway in Z circumstances, so let's expose X directly" is itself false.)
Anyone willing to make a followup bugzilla report to get this fixed on the Mozilla side as well?
(I am not the previous cyberpunks. I am just using the public account to comment.)
Does the 'perfect is the enemy of good' argument apply here?
Surely, there are many fingerprinting tactics available if you look hard enough, since most web technology was not designed with privacy in mind. (I would even argue that it is exactly the opposite.) However, this doesn't mean we should give away identifying information ourselves! I consider this a regression.
By the way, #27520 (moved) is a possible duplicate of this.
If there is any possible reason to not spoof the user agent, why not do that only at lower security levels? Assuming we fix the browser so it actually spoofs when we tell it to spoof, we can get the best of both worlds by still preventing those who have JS disabled and do not expect advanced / active fingerprinting attacks from inadvertently leaking more identifying information about their system.
The actual goal is to not split user groups even more. The current defaults are a tradeoff between usability across platforms and not being fingerprintable.
As I understood it, the fear is that new users open TB, visit their favourite sites, find them broken and switch back to another browser. So essentially the current approach is to make it easy for them and have a UI that invites them to change browsing habits by getting more interested in anonymity.
I am not sure if it is possible to not break google's (and other) apps without harming anonymity, but this is the current approach.
Energy is probably best spent finding out how to fix existing fingerprinting issues that allow guessing your OS (see keyword) and writing guides to help others. Trying to convince developers to improve the settings for some users at the expense of others may be a waste of time for all.
For the moment, interested users have to experiment with UA spoofing addons for themselves. Hopefully the current OS-specific UAs will be replaced by something better (like a split into two groups: desktop / mobile, or popups offering to lower security for websites known to be broken, etc.).
I read both of those. Unfortunately, they really don't have much discussion in them. While I think it's fine to make trade-offs at the lower security levels, doing it at the higher security levels is really a very bad idea. Note that I am only talking about Windows vs OSX vs Linux, not about desktop vs mobile (which is completely new ground).
After all, the high security levels already break a number of sites. It's clearly meant to be used for those who do not mind the occasional site being non-functional in exchange for better anonymity and privacy.
Whatever we do with respect to the user agent won't be included in the security slider. If that's an issue then it's a privacy issue and Tor Browser's privacy guarantees are or should be the same for everyone regardless of the security level. The slider is for adjusting the security against browser exploitation.
> Tor Browser includes a “Security Slider” that lets you increase your security by disabling certain web features that can be used to attack your security and anonymity.
It would make sense to leave general.useragent.override as is and set privacy.resistFingerprinting to false when the security slider is moved to "Safest".
Reasoning:
A user-agent which differs from general.useragent.override is an anonymity issue
Javascript is disabled when security slider is moved to "Safest"
privacy.resistFingerprinting deals with privacy issues which are relevant only when javascript is enabled
This would give those who want to trade a bit of anonymity and security for a better browsing experience the option while not affecting those who want the highest/safest level of anonymity and security.
Also, this would not silently change the user-agent on update of those who have already moved the slider to "Safest", which is what prompted my previous ticket. https://trac.torproject.org/projects/tor/ticket/27495
> Whatever we do with respect to the user agent won't be included in the security slider. If that's an issue then it's a privacy issue and Tor Browser's privacy guarantees are or should be the same for everyone regardless of the security level. The slider is for adjusting the security against browser exploitation.
This is very precise reasoning; everyone who has minimal knowledge of Tor Browser's history and philosophy will come to these same conclusions. The comment above does not presuppose the security slider's function (the TB devs collect all the bugs that affected a Firefox ESR to determine which things to restrict at each slider level). Also, what if someone uses a website foo.com with the lowest slider value and then changes it to the highest? Does he deserve to have his UA leaked and then changed back? What kind of treatment is that?
Personally, I think TBB should send a Windows User-Agent from desktop browsers and Android from mobile, since it strikes a good balance between feature detection and a minor benefit to fingerprinting resistance. Regardless of the TBB default, I think users should be able to override it by configuring general.useragent.override for themselves, since they may have specific reasons for doing so.
> Tor Browser includes a “Security Slider” that lets you increase your security by disabling certain web features that can be used to attack your security and anonymity.
Yes, security breaches may easily affect your anonymity (Tor's protections can be bypassed that way), which is why both concepts are used in the same sentence. It has nothing to do with fingerprinting resistance here. If you feel that's not clear enough, please file a ticket at this bug tracker so we can clarify our manual.
> It would make sense to leave general.useragent.override as is and set privacy.resistFingerprinting to false when the security slider is moved to "Safest".
> Reasoning:
> A user-agent which differs from general.useragent.override is an anonymity issue
> Javascript is disabled when security slider is moved to "Safest"
> privacy.resistFingerprinting deals with privacy issues which are relevant only when javascript is enabled
privacy.resistFingerprinting deals with way more than the user agent, but more importantly, even if you have JavaScript disabled you are not safe against fingerprinting risks. That ship sailed long ago as the web has gotten more powerful over the years.
> This would give those who want to trade a bit of anonymity and security for a better browsing experience the option while not affecting those who want the highest/safest level of anonymity and security.
I think you are misunderstanding something here: Tor Browser's privacy guarantees hold for everyone. It's the security side that some of our users need/want to adjust to their specific needs.
During a brief chat I had with gk on IRC, we were discussing how this impacts Linux as opposed to OSX. While there are clear problems caused by user agent spoofing that affects OSX users, there seem to be none of consequence for Linux users. One possibility that turned up was Google Translate which used to give an impossible captcha on Tails, but I tested it on Windows, Tails, and Debian Linux using both the 8.0 and outdated browsers, and found that that is no longer the case regardless of the status of user agent spoofing. Whatever was causing it, Google fixed it for everyone.
So far, while it may be beneficial to OSX users to disable spoofing (it is supposedly bad enough that some of them are actually leaving), I would like to see spoofing done for Linux users, in part because we are far less common than OSX users and stand out more in access logs with their default log level.
> So far, while it may be beneficial to OSX users to disable spoofing (it is supposedly bad enough that some of them are actually leaving), I would like to see spoofing done for Linux users, in part because we are far less common than OSX users and stand out more in access logs with their default log level.
I think that's not true. The point here is not to count the fraction of Linux users per se but rather the fraction of Tor Browser users who are on Linux. And I think that fraction is actually larger than the macOS one, at least if we believe the fractions shown for fresh Tor Browser bundle downloads (AFAICT we don't have per-OS update ping statistics anywhere yet).
Good to know. I'd be interested in learning the actual statistics.
My overall point is more that Linux users may not suffer as much as OSX users from using a spoofed user agent. This is partially due to the fact that most websites lack special handling for Linux user agents and only recognize Windows, OSX, and a few mobile agents. Linux user agents are typically unknown and the site falls back to Windows behavior. In addition, Linux users are far more used to websites that do not support them. In fact, there are a number of websites that actually refuse to offer full functionality to non-Windows user agents, which is at least one benefit of keeping user agent spoofing on that platform.
On my regular browser (which is Chromium), I actually spoof my user agent to display Windows since it tends to give me a better browsing experience. While I don't know if the same applies to OSX users, for me as a Linux user, pretending to be Windows results in better compatibility.
Okay, here are the meeting results in condensed form:
We keep the mobile UA for Android as the breakage would be too high without it.
We investigate whether we can avoid the broken experience on macOS by sending a uniform UA header for all desktop platforms but do not lie about the JS navigator property as the OS detection might happen by JS and not via the HTTP header.
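To make the second point concrete, here is a minimal standalone sketch (not Gecko code; the function names and strings below are illustrative assumptions) of what "uniform HTTP header, truthful navigator property" would mean for the two places a UA string is consumed. In the real tree the header value comes from nsHttpHandler::UserAgent(), while script sees whatever Navigator::GetUserAgent() returns, which today just asks the HTTP handler for the same string.

```cpp
// Standalone illustration only; compile and run with any C++ compiler.
#include <iostream>
#include <string>

// The HTTP User-Agent header would be identical on every desktop platform.
std::string HttpHeaderUserAgent() {
  return "Mozilla/5.0 (Windows NT 6.1; rv:60.0) Gecko/20100101 Firefox/60.0";
}

// navigator.userAgent would keep naming the real platform so sites can pick
// the right modifier key (Cmd vs. Ctrl) and similar platform-specific behaviour.
std::string NavigatorUserAgent(const std::string& realPlatform) {
  return "Mozilla/5.0 (" + realPlatform + "; rv:60.0) Gecko/20100101 Firefox/60.0";
}

int main() {
  std::cout << "Header:              " << HttpHeaderUserAgent() << "\n";
  std::cout << "navigator.userAgent: "
            << NavigatorUserAgent("Macintosh; Intel Mac OS X 10.13") << "\n";
}
```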
> Okay, here are the meeting results in condensed form:
> We keep the mobile UA for Android as the breakage would be too high without it.
> We investigate whether we can avoid the broken experience on macOS by sending a uniform UA header for all desktop platforms but do not lie about the JS navigator property as the OS detection might happen by JS and not via the HTTP header.
I agree that keeping the mobile UA for Android makes sense.
What was the consensus on the situation for Linux users, or was that not discussed?
> What was the consensus on the situation for Linux users, or was that not discussed?
I think the idea is to make all desktop browsers use the same HTTP User-Agent header. The biggest potential breakage is on macOS due to the command key vs. control key difference. Kathy and I just completed some tests and the news is mixed. In a few minutes, I will attach the patch we applied, but the summary is that we changed the User-Agent header and navigator.userAgent in JavaScript to Mozilla/5.0 (Windows NT 6.1; rv:60.0) Gecko/20100101 Firefox/60.0
GitHub: In the source code editor, Cmd+F works as expected. However, in the comment editor Cmd+B does not work to format text as bold (Ctrl+B does).
Google Docs: Command keys do not work; control is recognized as the modifier key for making text bold, italic, etc.
Our conclusion is that if we want to maintain compatibility with these kinds of sites, Tor Browser needs to make the platform available. At this point we do not know if GitHub and Google Docs are looking at the User-Agent header or if they are using JavaScript to test against navigator.userAgent.
I'm sorry - I was wrong about navigator.userAgent. We wanted to test with that one reporting the correct platform; and I thought your patch would do that - but I was wrong.
I think the fastest thing to do for testing purposes would be to strip the '!nsContentUtils::ShouldResistFingerprinting' guarding the 'general.useragent.override' pref and then set 'general.useragent.override' to 'Macintosh; Intel Mac OS X 10.13'
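If it helps, here is a tiny standalone model (made-up names, not the actual nsHttpHandler logic) of the precedence change that test hack amounts to. Today privacy.resistFingerprinting wins over general.useragent.override; the hack drops that guard so the override wins again, e.g. when set to "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.13; rv:60.0) Gecko/20100101 Firefox/60.0".

```cpp
// Standalone model of how the UA string is picked; not Gecko code.
#include <string>

std::string ChooseUserAgent(bool resistFingerprinting,
                            const std::string& overridePref,  // general.useragent.override
                            const std::string& spoofedUA,     // RFP's uniform UA
                            const std::string& builtInUA) {   // real platform UA
  // Current behaviour: the override is ignored whenever RFP is enabled.
  // (This is the branch the suggested test hack removes.)
  if (resistFingerprinting) {
    return spoofedUA;
  }
  if (!overridePref.empty()) {
    return overridePref;
  }
  return builtInUA;
}
```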
As a Tor Browser user highly concerned with this change, I have two questions based on the dialogue I'm seeing on the comments section of the Tor blog about this subject:
The biggest reason this change seems to be promoted by some (particularly gk) as "not a big deal anyway", even in the context of disabled Javascript where potential OS detection methods are minimized, is because your OS can apparently be detected anyway by what fonts you have (as Tor Browser ships with different fonts depending on the version it seems). My question is how the server communicates this information back to itself after detection without using Javascript. I can find no website, browser uniqueness analyzer, fingerprint analyzer, anonymity analyzer, Panopticlick-style test, etc. that can actually detect anything about my fonts with Javascript disabled in Tor Browser. I can only find a small reference in Whonix documentation to detecting fonts via "CSS introspection". Can gk or somebody else provide more information about how this works?
If this is really all on behalf of fonts, is there a reason not to ship the same fonts with every version of Tor Browser on every platform?
...
> I think the fastest thing to do for testing purposes would be to strip the '!nsContentUtils::ShouldResistFingerprinting' guarding the 'general.useragent.override' pref and then set 'general.useragent.override' to 'Macintosh; Intel Mac OS X 10.13'
Thanks for the guidance. Kathy and I hacked Navigator::GetUserAgent() to respect general.useragent.override and set that pref to Mozilla/5.0 (Macintosh; Intel Mac OS X 10.13; rv:60.0) Gecko/20100101 Firefox/60.0. The result is that our tests with both GitHub and Google Docs were successful: the command key is correctly recognized on macOS.
I am not sure what the next step is; it looks like it will not be trivial to create a shippable patch (since Navigator::GetUserAgent() expects to get the userAgent string from the HTTP protocol handler, but we want HTTP to use a spoofed User-Agent).
> I am not sure what the next step is; it looks like it will not be trivial to create a shippable patch (since Navigator::GetUserAgent() expects to get the userAgent string from the HTTP protocol handler, but we want HTTP to use a spoofed User-Agent).
I'm not sure what OSCPU is supposed to be without fingerprinting, but in RFP mode it's the same as the User Agent. So if RFP is enabled, you could go grab the value from Navigator::GetOscpu and return that instead of querying the HTTP header...
> As a Tor Browser user highly concerned with this change, I have two questions based on the dialogue I'm seeing on the comments section of the Tor blog about this subject:
> The biggest reason this change seems to be promoted by some (particularly gk) as "not a big deal anyway", even in the context of disabled Javascript where potential OS detection methods are minimized, is because your OS can apparently be detected anyway by what fonts you have (as Tor Browser ships with different fonts depending on the version it seems). My question is how the server communicates this information back to itself after detection without using Javascript. I can find no website, browser uniqueness analyzer, fingerprint analyzer, anonymity analyzer, Panopticlick-style test, etc. that can actually detect anything about my fonts with Javascript disabled in Tor Browser. I can only find a small reference in Whonix documentation to detecting fonts via "CSS introspection". Can gk or somebody else provide more information about how this works?
Anything that triggers a conditional load based on the size of other objects could be used to communicate it back. But it's more work and not as fun to program so I'm not surprised it's not common in POCs.
Besides fonts, other JS-free ways to detect the platform could be media support/streaming. But yeah, without using JS it definitely gets tougher. (There are a lot more network-level tricks that Tor is immune to but that affect Firefox.)
> I'm not sure what OSCPU is supposed to be without fingerprinting, but in RFP mode it's the same as the User Agent. So if RFP is enabled, you could go grab the value from Navigator::GetOscpu and return that instead of querying the HTTP header...
Unfortunately, when privacy.resistFingerprinting is true, there is no way for code outside of nsHttpHandler.cpp to access the string. One possible solution would be to add a new attribute to nsIHttpProtocolHandler such as unspoofedUserAgent.
A question for tom: Is Mozilla likely to switch to this approach and accept such a patch?
A question for gk: Do we want to try this change in Tor Browser? Secondarily, should Kathy and I work on a patch this week or someone else or should we wait?
> A question for gk: Do we want to try this change in Tor Browser? Secondarily, should Kathy and I work on a patch this week or someone else or should we wait?
Yes, this looks promising in the sense that we can avoid the breakage AND the splitting of our desktop users based on User Agent. And, yes, I'd like to have a fix for this in 8.0.1 if possible, so please have a look at it.
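For reference, a rough sketch of what the unspoofedUserAgent idea above could look like. This is not a reviewed patch: the IDL lines are only indicative, and everything except the names already mentioned in this ticket is an assumption.

```cpp
// Sketch only. The idea: let the HTTP handler expose the real UA through a
// new readonly attribute, e.g. in nsIHttpProtocolHandler.idl:
//     readonly attribute ACString userAgent;           // existing; spoofed under RFP
//     readonly attribute ACString unspoofedUserAgent;  // proposed; always the real UA
//
// A standalone model of the handler side (made-up member names):
#include <string>

struct HttpHandlerModel {
  std::string realUserAgent;     // built from the actual platform
  std::string spoofedUserAgent;  // uniform desktop UA sent on the wire under RFP

  // What the network stack uses for the User-Agent header.
  const std::string& UserAgent(bool resistFingerprinting) const {
    return resistFingerprinting ? spoofedUserAgent : realUserAgent;
  }

  // What the proposed GetUnspoofedUserAgent() would return, so that
  // Navigator::GetUserAgent() can report the platform without reaching
  // into nsHttpHandler.cpp internals.
  const std::string& UnspoofedUserAgent() const { return realUserAgent; }
};
```

Whether something along these lines could also land upstream is exactly the open question for tom above.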
> Anything that triggers a conditional load based on the size of other objects could be used to communicate it back. But it's more work and not as fun to program so I'm not surprised it's not common in POCs.
> Besides fonts, other JS-free ways to detect the platform could be media support/streaming. But yeah, without using JS it definitely gets tougher. (There are a lot more network-level tricks that Tor is immune to but that affect Firefox.)
Well this is probably another dumb question, but is there any reason that all platforms can't ship the same fonts? Or would the differences in rendering them between the various platforms make this pointless anyway?
Also I'm curious about how you use media streaming to detect the OS. Is it the way the video is rendered, detection of the audio/video interface names, or what?
> Well this is probably another dumb question, but is there any reason that all platforms can't ship the same fonts? Or would the differences in rendering them between the various platforms make this pointless anyway?
I believe the reason was to preserve the look and feel of the operating system. I think there are also technical issues that make it hard to ship the same fonts, including size constraints (correct me if I'm wrong). It's not too big of a deal though, since the font set is not recorded by the average access log, whereas the user agent is.
> I believe the reason was to preserve the look and feel of the operating system. I think there are also technical issues that make it hard to ship the same fonts, including size constraints (correct me if I'm wrong). It's not too big of a deal though, since the font set is not recorded by the average access log, whereas the user agent is.
But according to some, possible font detection is the reason that we're supposed to consider our OSes compromised anyway. This whole issue is starting to seem a lot like an invented problem.
Also it seems obvious to me that fingerprinting defenses should take precedence over aesthetics in any case.
> I can only find a small reference in Whonix documentation to detecting fonts via "CSS introspection". Can gk or somebody else provide more information about how this works?