OS-level fingerprinting probably is not a terribly high priority. The only situation where this is high priority is if different OS versions and library versions end up producing different results for these functions.
This may be easy for someone who has a bunch of different OSes and writes some test JS to print out values from these functions. Well, easy to test for differences, anyway.
I did not test all functions, just some, using https://people.torproject.org/~gk/misc/MathHighPrecisionAPI.html. We have differences between OSes, and it seems we have differences between 32-bit and 64-bit architectures as well. I tested on OS X 10.6.8, 32-bit Debian testing, 32-bit Ubuntu Precise, 64-bit Ubuntu Precise, 64-bit Windows 7, and 64-bit Windows 8. Small differences between Linux/OS X/Windows aside, the 64-bit Ubuntu Precise gives me:
And the OS X values differ only in cosh(10), too. I wonder whether we could get bigger differences by picking better input values for these functions (I would almost bet this is the case).
Just to add to my last comment: Windows 7/8 and Ubuntu Precise 32/64 run on the same computer without a VM. Debian and OS X are running on two other machines.
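For anyone who wants to reproduce this, a probe along these lines dumps the raw doubles. The function and input choices below are my own arbitrary picks, not necessarily the ones gk's test page uses:

```javascript
// Minimal probe in the spirit of the MathHighPrecisionAPI.html page.
// toString() gives the shortest round-trippable decimal, so any
// platform difference in the underlying double shows up in the output.
const probes = [
  ["cosh(10)", () => Math.cosh(10)],
  ["sinh(1)",  () => Math.sinh(1)],
  ["tanh(1)",  () => Math.tanh(1)],
  ["expm1(1)", () => Math.expm1(1)],
  ["log1p(1)", () => Math.log1p(1)],
  ["cbrt(2)",  () => Math.cbrt(2)],
];
for (const [label, fn] of probes) {
  console.log(label + " = " + fn().toString());
}
```

Diffing that output across machines is enough to spot the OS/arch buckets described above.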
Hrmm.. Sounds like we may need to pick a library we like and include its versions of these functions rather than calling out to the OS.
I wonder why there are 32 vs 64bit differences here, though.. I guess the Linux versions probably use 'long' instead of something standard like int64_t... Bleh.
Trac: Keywords: tbb-fingerprinting deleted, tbb-fingerprinting-os added. Summary changed from "Determine if high-precision Math routines are fingerprintable" to "High-precision Math routines are OS fingerprintable".
I guess the next question is to determine if this is any worse than just an OS+arch fingerprinting vector, so we can decide how to prioritize this versus other fingerprinting issues.
We should ask the SpiderMonkey folks for the best way to fingerprint users via the Math object. They might know some really good corner cases due to the algorithms they chose. https://bugzilla.mozilla.org/show_bug.cgi?id=892671 could be relevant here, too.
More to puzzle over. Output of a test sample linked against the x86 libm (not reproducible when cross compiling):
{{{
Before FIX_FPU: tan(-1e300) = -1.421449, tan(strtod('-1e300')) = -4.802497
After FIX_FPU: tan(-1e300) = -1.421449, tan(strtod('-1e300')) = 0.883149
}}}
Code of sample:
{{{
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

static inline void FIX_FPU() {
  short control;
  asm("fstcw %0" : "=m" (control) : );
  control &= ~0x300; // Clear bits 8 and 9 (precision control).
  control |= 0x2f3;  // Set the exception mask bits and bit 9 (double precision).
  asm("fldcw %0" : : "m" (control) );
}

int main(void) {
  printf("Before FIX_FPU: tan(-1e300) = %f, tan(strtod('-1e300')) = %f\n",
         tan(-1e300), tan(strtod("-1e300", NULL)));
  FIX_FPU();
  printf("After FIX_FPU: tan(-1e300) = %f, tan(strtod('-1e300')) = %f\n",
         tan(-1e300), tan(strtod("-1e300", NULL)));
  return 0;
}
}}}
A quick check with the browser console gives me the impression that simple JS math expressions are evaluated with 64-bit intermediaries (as opposed to 80-bit). I am uncertain about the JS JIT behavior. Test expression: `(1.0 + Number.EPSILON * 0.5) + Number.EPSILON * 0.5`
Assuming calls are made to libm (or equivalent) blindly, the results on each system are library version and implementation dependent. A particularly egregious example would be the output of double sin(double x); being flat out wrong for glibc < 2.19 for certain values. MS's VC++ runtime is less wrong for a different set of certain values, but is still wrong. This probably applies to most transcendental functions.
Even if we fix the JS paths that call into libm, higher-level APIs that just happen to do math are not guaranteed to give the correct results, depending on how the native code they call into is written or built. Even if we can assume that x87 is never used at all, we'd still need to check for things like rsqrtss.
**part 2: math.cos Windows: FF vs TB**

results: see attachment
test: https://thorin-oakenpants.github.io/testing/ (for as long as I leave it there)

I do not know if that ticket/patch causes this, but there is a difference between TB and FF for no discernible reason (e.g. Linux doesn't differ between FF and TB). Look at the first result. FF: `minus 0.374...` vs TB: `plus 0.840...`

**part 3: math.cos reveals platform**

Finally, to the meat and potatoes. See attachment. I'm using math.cos because it always returns a value between -1 and 1 (i.e. no NaN or Infinity). The following tests show that, so far, the last four values can be used to detect Windows or Linux, and so far one Android major version (v5.*). I am fully expecting the first four values to betray other Android and macOS/OS X versions. My testing is incomplete, but enough to prove OS FP'ing.
and
Thanks :) Yup, that was the ticket. Wow, 4 years. That ticket is about the functions added in FF25+, e.g. those in https://ghacksuserjs.github.io/TorZillaPrint/TorZillaPrint.html#math, which don't **seem** to differ in 60+ anyway (those FF25+ functions probably need more testing, I guess).

Also note that sin() can also have differences; I'm just not sure which input values on which platforms produce the desired results (and I could probably find more functions). I'm sure the solution for this would fix all affected functions, so I'm not going to dig any further (except to show combos for mac and other Android versions using cos).

Edit: per https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Math#Browser_compatibility, `cos`, `sin` etc. have been supported since FF version 1.
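To illustrate the kind of probe being discussed: inputs far outside [-π, π] force libm-style argument reduction, which is where implementations have historically disagreed. The specific inputs below are my own arbitrary picks, not the test vectors from any attachment:

```javascript
// cos/sin of huge arguments stress the library's argument reduction;
// the exact doubles returned have varied between OS math libraries,
// which is the whole fingerprinting problem. For finite inputs the
// results are always finite values in [-1, 1], so they compare cleanly.
const inputs = [1e300, -1e300, 1e251, 1e140, 1e12];
for (const x of inputs) {
  console.log("cos(" + x + ") = " + Math.cos(x));
  console.log("sin(" + x + ") = " + Math.sin(x));
}
```

The printed doubles are platform-dependent by design, so there is no single "expected" output; the point is to diff them across machines.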
I was worried that we might be exposing hardware information via this.
I compared the numbers I got on my x64 Nightly running on a Surface Go and got the same results as those of Thorin's Win7/Win10 x64 on x64 here: https://github.com/ghacksuserjs/TorZillaPrint/issues/30 . I'm reasonably convinced this means that hardware is not a factor in these values, and it comes down to OS and related libraries.
FYI: https://bugzilla.mozilla.org/show_bug.cgi?id=1380031 (FF68+) introduced a change (over my head) that reduced some entropy (namely the precision, in the number of decimal places) in some ECMAScript Edition 6 functions.
The one test affected in my PoC was expm1(1); Windows example below:
1.7182818284590455 FF67 or lower
1.718281828459045 FF68+
But this is not enough to affect overall FP'ing of 32- vs 64-bit builds and platforms. The combined Edition 1 (the set of cos tests) and Edition 6 (3 tests) results are still enough.
Alrighty! I've been trying to re-find that ticket for quite a few weeks. I used it many months ago and promptly lost it. Thanks. I will pass the ticket number on to the Tor Uplift guys.
It's possible to detect some distros this way.
Comment 18 was 5 years ago. So far (my resources are limited, and as an upstream problem this affects more than Tor Browser, which already changes its math FP), I have found nothing that leaks anything more than the major platform (win/linux/mac) and, in some instances, 32/64-bit builds or OS architecture (some by default: e.g. a 64-bit build must be on a 64-bit OS).
I want to get a ticket opened at Bugzilla. I have no idea how much work, complexity, and potential breakage lies in using the same math libraries on all platforms (which is what Chrome seems to be doing: it has the same FP regardless of anything I test on).
I have done more testing and improved the output in my test, including a red info marker if I haven't seen the hash before. I have now found that TB on Linux actually has more entropy than originally thought. After testing 5 distros (a mix of flavors and architectures) I have 3 distinct Linux buckets (not enough to distinguish the actual platform, at least not in all cases, yet). I will be adding more distros to investigate further.
Can't believe it's been two years... Bugzilla 531915 has landed in FF93+.
This solves all known desktop entropy in Firefox by using fdlibm's sin, cos and tan in jsmath. Big thanks to sanketh and tom.
Side note: hooking this up to the webaudio math should hopefully also eliminate entropy there (which has a similar number of results).
Here is a new math test created about a year ago (which gained more entropy than my previous endeavors, based on Saito et al., 2018), but TBH we only gained it on Android.
RFP is on, the version is 91+, AND it is TB (if TB backports the patch).
TZP itself uses a very small subset of cos/sin/tan values that has just as much entropy for desktop as the big research one. I have updated the code for RFP compliance.
going naked with no RFP
after putting my RFP pants on
NOTE for any readers: this will not hide your OS and isn't meant to. The patch removes entropy within an OS (e.g. mac, windows, linux) and happens to make all desktop OSes the same (that we know of).
I don't have Android data, and I've only got ~13k samples from the US, but the dataset I have shows only one user reporting a different value for the math operations we collect (and that one user is on Darwin and reports .8178819121159087 instead of .8178819121159085).
RFP covers the trig functions. But for the other functions (from memory, they are all polyfills), my original dataset showed that the remaining entropy all came from Android.
I am not a hardware expert, so my best guess is that this remaining known entropy on Android, IIRC 4 results (edit: assuming it still exists), comes from architecture: 32-bit, 64-bit, ARM (excuse my lack of correct terms)? And maybe that's already exposed.