Open the network monitor for the page (Ctrl+Shift+E).
Load the local file below (someone should probably confirm this also works over HTTPS).
Initially no image is loaded.
Scroll to the blue area and place your pointer on it.
Image is now loaded.
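The steps above can be reproduced with a minimal page like this (a hedged sketch; the image URL is a placeholder for any server whose access logs you can watch):

```html
<!-- Minimal PoC: no JavaScript, and no image request until the pointer
     enters the blue box. "example.org/pixel.png" is a made-up beacon URL. -->
<style>
  .bait {
    width: 300px;
    height: 300px;
    margin-top: 150vh;   /* force the user to scroll before they can hover */
    background: blue;
  }
  .bait:hover::after {
    content: url("https://example.org/pixel.png"); /* fetched only on hover */
  }
</style>
<div class="bait"></div>
```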
Basically, we use a CSS `content` rule pointing to a remote URL, which only activates on `:hover`. The same works with:

- `:focus`, `:focus-within` and `:focus-visible`.
- `:target`.
- Placing the element with `content: url()` in `<details>`: it only loads when the `<summary>` is clicked.
- With ESR 128, adding the `popover` attribute to the element with `content: url()`, plus a `<button popovertarget>`: it only loads when the button is clicked.
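Hedged sketches of the two click-driven variants described above (all URLs are made-up placeholders):

```html
<!-- Variant 1: the beacon is fetched only when <summary> is clicked and
     the hidden content becomes displayed. -->
<details>
  <summary>More info</summary>
  <span style="content: url('https://example.org/details-beacon.png')"></span>
</details>

<!-- Variant 2 (Firefox ESR 128+): the beacon is fetched only when the
     button shows the popover. -->
<div id="pop" popover>
  <span style="content: url('https://example.org/popover-beacon.png')"></span>
</div>
<button popovertarget="pop">Open</button>
```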
There are probably others. I couldn't get an example to work with :visited or :link, so they may be protected.
Basically, it seems the remote resource will only be fetched when the content property applies and the element is displayed (hence why the <details> example works). So any rule that can change the display or content in reaction to some user interaction can track that interaction. The main limitation is that this only fires once.
Interestingly (ugh!) this doesn't affect Firefox + NoScript, at least for 3rd party resources, because of the forced CSS prefetching protection against PP0.
Unfortunately, though,
```js
if (!(ns.policy.isTorBrowser || ns.allows("unchecked_css"))) {
  // protection against CSS PP0, not needed on the Tor Browser because of its
  // noisy DNS resolution: https://orenlab.sise.bgu.ac.il/p/PP0
```
So my proposal to fix this would be enabling this protection on Tor Browser too, including same host requests.
We should probably upstream this concern to mozilla, and ask for a test to cover this. Similarly regarding https://gitlab.torproject.org/tpo/applications/tor-browser/-/issues/42828, and they should be able to confirm what is happening at the low level. In principle, any of these delayed loads can be used in a similar way, and we'll probably get more in the future as well. So it would be nice if they had a low-level solution for us and some test coverage so this doesn't get regressed as they add more optimizations.
Security and networking is not really my area though, so I'm not sure I'm the best person to follow this through. @ma1 could you do it? That way you would by default be able to see the confidential bugzilla discussion.
Before we go down these routes (prefetching all the CSS images and asking for upstream help), one philosophical question: is the "Safest" security level actually supposed to block this kind of behavior under our threat model / specification?
Once you start clicking stuff (i.e. <summary>, <button>), you kinda expect to send signals to the other side of the web.
Triggering a request on :hover and :focus* is a bit more unexpected and therefore concerning?
My understanding is that disabling scripts in "Safest" is meant more as a last-line defense against 0-day browser exploits, and maybe to mitigate sophisticated, fine-grained, invasive user interaction tracking / fingerprinting (e.g. measuring typing or mouse movement patterns). This technique seems too fuzzy and coarse-grained to actually matter (even though I can imagine ways to overcome the "fires once" limitation, for instance).
Yeah, I wasn't sure, but the HTML spec for the lazy loading of img and iframe specifically carved out the no-scripting exception for this reason. I guess it would be of interest to whoever implemented this if this can achieve a similar result.
So this is more of a privacy issue than a security issue, and so not applicable to just Safest.
I think an argument can be made that hovering a traditionally non-interactable element (e.g. a `<div>` or an `<img>`, versus a `<button>`) shouldn't have hidden side effects.
If we do fix this (i.e. pre-fetch everything pre-fetchable), I think it should apply universally, not just on Safest, even though the same effect can be achieved by doing the request in JS on Standard.
What's the main source of concern here? Gaze/mouse tracking telemetry, basically?
Also agree that if we do fix this, it should happen as low as possible, with upstream tests.
This is an anti-tracking measure, because if a user agent supported lazy loading when scripting is disabled, it would still be possible for a site to track a user's approximate scroll position throughout a session, by strategically placing images in a page's markup such that a server can track how many images are requested and when.
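The scenario the spec describes can be sketched like this (a hypothetical illustration; the beacon URLs are made up):

```html
<!-- With loading="lazy" and no script at all, each image request tells the
     server roughly how far down the page the user has scrolled, and when. -->
<article>
  <p>…first screenful of text…</p>
  <img loading="lazy" src="https://example.org/read-25.png" alt="">
  <p>…more text…</p>
  <img loading="lazy" src="https://example.org/read-50.png" alt="">
  <p>…more text…</p>
  <img loading="lazy" src="https://example.org/read-75.png" alt="">
</article>
```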
In the past we had already decided not to do anything for lazy loading (#40056 (closed)).
@thorin I think this is not really a fingerprinting issue. It is a different privacy concern. It is more the idea that a scriptless webpage should never be able to inform the website how the user is interacting with it. I.e. it should act more like a PDF document: initial download, then only further network activity when you follow a hyperlink.
E.g. if your web page contains a list of hyperlinks to third party domains, the host would be able to guess which third party link you clicked.
Obviously you could fill your website with internal hyperlinks and all sorts of tracking parameters to force the user to reveal their path and exactly what they are trying to do, but this would be fairly awkward and transparent to the user since they can read a hyperlink before clicking it.
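The hyperlink example above can be made concrete with a per-link `:hover` beacon, which is the part the user cannot read before acting (a hedged sketch; all URLs are made up):

```html
<!-- Without any script, the first-party host learns which outbound link
     the user is about to click: hovering each link fetches a distinct
     first-party beacon before the navigation ever happens. -->
<style>
  #a:hover::after { content: url("https://host.example/hovered-a.png"); }
  #b:hover::after { content: url("https://host.example/hovered-b.png"); }
</style>
<a id="a" href="https://third-party-one.example/">Partner A</a>
<a id="b" href="https://third-party-two.example/">Partner B</a>
```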
thanks henry ... that's what I thought - it's not universal, i.e. all sites [1] - pretty much a low target, if any. I have deliberately ignored these sorts of things - e.g. observer
[1] now I say that, but some things can be used for correlation - for example, you cannot hide that you have a mouse once you start moving the pointer around (99% sure; I have a nasty PoC in the works that takes the order of all events), etc - so some user actions are worthy FP metrics, e.g. the fullScreen API and so on - they have consequences that result in universal metrics.
If I've understood correctly, this feature effectively allows tracking of what the user reads/consumes when JavaScript is not enabled, which is unexpected and a privacy concern unrelated to linkability.
@morgan (sorry I accidentally edited your comment, rather than quoting/replying, it should be restored now):
> If I've understood correctly, this feature effectively allows tracking of what the user reads/consumes when JavaScript is not enabled, which is unexpected and a privacy concern unrelated to linkability.
Yes. In my understanding it actually belongs to a set of features which I'd call "user interaction dependent subresource loading", including the "lazy" loading attribute (#40056 (closed)), to be reconsidered in this light.
Yes, as you know I was already working at a stopgap countermeasure to be shipped during the weekend in NoScript 11.4.35 (see below).
It seems the consensus is that no, this isn't a Security Level consideration.
Not a Security Level consideration per se, but related to Safest because users who explicitly disable JavaScript may not expect interaction within the page (like mouse hovering or scrolling, with no apparent navigation) to be traceable.
we should always disable CSS pre-fetching.
You probably rather mean "force subresource preloading" (which is quite the opposite of disabling pre-fetching).
And rather than "always", given both the potential performance implications and the assumption that JavaScript provides hostile pages with many more effective means to reach the same tracking goal, we should preload when JavaScript is user-disabled (in our case, at the Safest security level).
Therefore, as discussed last week on IRC, I'm provisionally shipping a NoScript-based mitigation by extending the "unrestricted CSS" capability semantics to any "user interaction dependent subresource loading" we know of (or we may learn about in the future), forcing such subresources to be preloaded when the capability is disabled and relying on the fact that we already disable this capability at the Safest Level.
@tjr is this something that you believe could realistically be translated in a built-in Firefox feature at some point in the future?
This isn't a Security Slider thing. The Security Slider is for Security, defense against 0-days. Not Privacy. You accidentally get more privacy via the slider, because some features like JS are disabled, but once you add privacy-only behaviors to the slider you're starting down a slippery slope. Less so in the sense of "This will hurt our users" and more so in the sense of "This means we're making decisions that are kind of ancillary or inconsistent with our threat model of what we try to protect against and what we don't, and it will result in more conversations (like this one) and more effort and brainpower being drawn from the things that are more directly in line with our threat model (and goals in usability)."
What you're trying to do is treat a webpage as a static piece of content that once loaded, provides no feedback to the server about user behavior. If that is something you want to add to the threat model as an explicit goal, then so be it. It's going to require a concerted effort to identify the ways it can be broken (e.g. lazy image loading!). If you want to support such a feature, it might be easier to load the page, wait for the loading to be 'done' (don't ask me how you know that, but let's pretend you do), take a screenshot (I think that feature is still in the browser, right?), and then replace the page with the screenshot and discard the context.
I don't think this is a worthwhile goal to have. Under some use cases I suppose it might be nice, but has anyone in user studies ever said they wanted it? I doubt it's something they've ever considered. That said, if you asked someone "If a website added tracking information to a page to learn that a user was scrolling down it - not that you personally are doing it, but whoever the anonymous user is, they're scrolling down and reading more of the page - is that a concerning privacy leak to you?" - what would they say?
"is this something that you believe could realistically be translated in a built-in Firefox feature at some point in the future" - maybe I've biased myself by having the above opinions, but I think the relevant engineers would consider this an unnecessarily complex feature. That said, if you showed up with a patch and the patch wasn't that complicated - maybe they'd take it anyway.
We get both loading="lazy" neutering and CSS resources prefetching for free in NoScript 11.4.35, and it's now (since rc3) configurable via two distinct capabilities (lazy_load and unrestricted_css).
Both capabilities get automatically disabled on scriptless presets unless the user explicitly changes them, so Tor Browser with NoScript 11.4.35 will get this configuration automatically on Safest (and on individual script-disabled pages, if any), like all the other NoScript users, unless we decide to explicitly handle those capabilities (e.g. re-enable them, in order not to force any eager loading) in our SecurityLevel module.
Regardless of the conclusion in Tor Browser, we should still upstream this information to Mozilla since this is an easy work-around for a protection they explicitly added as part of the HTML specification.
IIUC you suggest to open a (confidential?) mozbug noting the contradiction between obeying the specification by disabling loading="lazy" on pages where "scripting is disabled" as "an anti-tracking measure", and neglecting to do the same for CSS features which are equivalent in their potential for abuse, and let Mozilla folks discuss. I'm fine with that.
On a side note, since the definition of "scripting is disabled" used in the spec and in the implementations does not include CSP-based script blocking like NoScript's, I'd still argue that users who aren't aware of this quite arcane difference may find the lazy loading behavior of pages script-blocked by NoScript < 11.4.35 quite confusing / unexpected.
The mitigation we discussed is included in the just released NoScript 11.4.35rc2. If no disaster is reported by beta testers, I'll submit 11.4.35 stable to AMO over the weekend or early next week, in time for inclusion in next Tor Browser releases.