As part of the tails merge, we want to adopt a common security policy. Considering we don't have a formal security policy in tor at all (tpo/team#41), perhaps the simplest would be to just adopt the Tails security policy directly.
There are many questions to answer here, however. First is compliance of TPA to "level B" of the security policy, which should perhaps be checked individually with all current TPA members. But then, if we adopt this policy, perhaps we need to actually make it public like our other policies, something which tails folks might not feel comfortable with. If not, then we need to figure out how to have secret policies.
We also need to figure out if we adopt the compliance framework and other policies they have out there (like the "Data Storage and Retention Policy"), and whether tails wants to make some changes to their policy before it reaches wider adoption.
This ticket, in any case, is secret because the merge is still a secret anyways...
In any case, here's a draft checklist:
check compliance of TPA members to Tails' security policy, level B (which includes level A); a copy of those policies was sent to tpa-team@ for review, delegated to #41962
decide on the compliance strategy (currently, tails does a "compliance check" but it's marked as "broken for years" so perhaps it's not the best thing to import?)
decide if the policy can be made public (@anarcat@groente@zen others?) if so, it should be documented in the wiki. either way, we use TPA-RFC-18 as a tracking number for this work. (it can, but not divulging the tails merge)
determine if we need to adopt an emergency rotation checklist... we do have a mass rotation procedure for the password manager, but not much above that.
so the above summary is what needs to be done, here's my take on each individual step, input welcome.
check compliance of TPA members to Tails' security policy, level B (which includes level A), a copy of those policies was sent to tpa-team@ for review
I'm in compliance, and I suspect at least @lavamind and @lelutin are. I suspect all of TPA might be.
If we aren't, I would suggest seriously considering revoking access to people that cannot comply in a timely manner, or at least discussing the matter with the affected admins. At least two members have shown openness to reviewing their status inside the team, so it is not as big a deal as it sounds.
check if tails needs to make changes before wider adoption ( @groente@zen )
The big thing, IMHO, is the level B notes, which show a typical limit of the policy: we routinely execute untrusted code when we build software. In my case, I try to restrict this to virtual machines, building all Debian packages inside qemu, but this is not always the case, and there are many other similar vulnerabilities that can be leveraged.
Git repositories, for example, could in the past be abused to run arbitrary code when cloned from untrusted sources. A lot of those issues have been patched in git, but I suspect a lot of that still remains.
I would treat this as a separate problem for now: this is something TPA is somewhat involved in, but I feel most of the problems in this area lie somewhat outside of TPA's responsibility...
In any case, it doesn't feel like a blocker for the merge.
decide on the compliance strategy (currently, tails does a "compliance check" but it's marked as "broken for years" so perhaps it's not the best thing to import?)
I would not do a regular compliance check, but would add one in the onboarding procedure.
decide if the policy can be made public ( @anarcat@groente@zen others?) if so, it should be documented in the wiki. either way, we use TPA-RFC-18 as a tracking number for this work.
I would make the policy public.
determine if we need to adopt an emergency rotation checklist... we do have a mass rotation procedure for the password manager, but not much above that.
We have the retire-a-user procedure, perhaps we could improve it based on the above, but i would otherwise not consider this as part of the security policy here.
I would treat this as a separate problem for now: this is something TPA is somewhat involved in, but I feel most of the problems in this area lie somewhat outside of TPA's responsibility...
In any case, it doesn't feel like a blocker for the merge.
I agree this is a separate problem.
check if tails needs to make changes before wider adoption ( @groente@zen )
The big thing, IMHO, is the level B notes, which show a typical limit of the policy: we routinely execute untrusted code when we build software. In my case, I try to restrict this to virtual machines, building all Debian packages inside qemu, but this is not always the case, and there are many other similar vulnerabilities that can be leveraged.
Git repositories, for example, could in the past be abused to run arbitrary code when cloned from untrusted sources. A lot of those issues have been patched in git, but I suspect a lot of that still remains.
Do you have a proposal on how to deal with this? Personally, I think the spirit of the policy (apply caution and isolation when dealing with untrusted sources) covers this scenario, but I'm open to being more explicit.
decide on the compliance strategy (currently, tails does a "compliance check" but it's marked as "broken for years" so perhaps it's not the best thing to import?)
I would not do a regular compliance check, but would add one in the onboarding procedure.
I'd suggest compliance checks on onboarding and on every policy update (which is probably every couple of years).
decide if the policy can be made public ( @anarcat@groente@zen others?) if so, it should be documented in the wiki. either way, we use TPA-RFC-18 as a tracking number for this work.
I would make the policy public.
I see no reason why the policy should be secret, but I'll discuss it with the other Tails folks. The notes on the shortcomings of the policy I would like to keep confidential.
I see no reason why the policy should be secret, but I'll discuss it with the other Tails folks. The notes on the shortcomings of the policy I would like to keep confidential.
That is fine by me.
About making the notes public, I think we should aim to fix and enforce the policies, but I don't oppose keeping the shortcomings secret meanwhile.
I see no reason why the policy should be secret, but I'll discuss it with the other Tails folks.
Great, let me know!
The notes on the shortcomings of the policy I would like to keep confidential.
No problem, although I would probably include some "caveats" section or something in the final proposal, which might cover some of those concerns. We can draft something confidential and publish only when we're happy with the contents.
About making the notes public, I think we should aim to fix and enforce the policies, but I don't oppose keeping the shortcomings secret meanwhile.
To protect your emails from online sniffing, you: - MUST use a "trusted" email provider.
It might be wise to specify a bit more what we mean by trusted. In particular, I am left wondering if gmail is in or out. I bet reasonable people could disagree. (I would personally choose 'out'.)
Consider using challenge/response 2FA to Gitlab for all workers, including GA
That's certainly something we could make broader in Tor, but it's a little complicated by the size of the orga ("all workers" means what? TPI employees? core contributors? people with access to projects in gitlab?)...
i think in the short term, i would make this a team-specific policy decision.
Consider using touch-based smart cards for SSH access for:
this is an informal requirement we have, currently enforced during onboarding; i would really like to write this down as well.
Consider using touch-based smart cards for PGP for:
same as above.
Ensure board members don't install sketchy software (pirated versions, scripts from an unknown source, etc.)
yeaaah... that would be nice. :p
... and about:
Accounting Team
Sysadmins
Release Managers
Board Members
FT
Helpdesk PGP keys
Just to make sure we're on the same page here, I'm strictly restricting TPA-RFC-18 to the second group in there (sysadmins, AKA "TPA team", AKA "people with root access" in Puppet; see also TPA-RFC-7: root access, which could perhaps be merged into TPA-RFC-18?).
I think it might be worthwhile for tails folks to think about the scope of that policy with the merge in mind: perhaps it could be limited to the "tails team" (AKA FT) until we figure out the tor-wide security policy, which I consider out of scope here (see tpo/team#41)... I understand this might feel like a huge security regression for tails folks, but things are what they are and I don't know if we can realistically expect everyone to change all of those things in the short term.
not explicitly by name that i know of, but it probably falls under a ton of shady anti-terrorism laws. so let's rephrase that to: countries where using or working on tails might get you in trouble (russia, belarus, turkmenistan, iran, etc.).
could you expand on this? why does one need a browser plugin? wouldn't it be enough for a copy-paste password manager to be used here?
a browser plugin has the advantage that it won't automagically show you the password on a phishing domain (gitlab.torbroject.org or shit like that), giving the user a good hint that something's not right.
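To make the point above concrete, here is an illustrative sketch (not any particular plugin's actual implementation) of why a browser-integrated password manager catches lookalike domains that a human copy-pasting from a standalone manager can easily miss:

```python
# Illustrative sketch: a browser-integrated password manager only offers
# credentials when the page's origin matches the stored origin exactly,
# so it stays silent on a lookalike (phishing) domain.

def should_autofill(stored_origin: str, current_origin: str) -> bool:
    """Only offer credentials when the origin matches exactly."""
    return stored_origin == current_origin

# The plugin's silence on the lookalike domain is the hint that
# something is not right:
assert should_autofill("https://gitlab.torproject.org",
                       "https://gitlab.torproject.org")
assert not should_autofill("https://gitlab.torproject.org",
                           "https://gitlab.torbroject.org")  # note the 'b'
```

A human comparing the two strings by eye has no such hard guarantee, which is exactly the advantage being described.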
what's "GA"?
that's the, i guess by now defunct, tails general assembly
I think it might be worthwhile for tails folks to think about the scope of that policy with the merge in mind: perhaps it could be limited to the "tails team" (AKA FT) until we figure out the tor-wide security policy, which I consider out of scope here (see tpo/team#41)... I understand this might feel like a huge security regression for tails folks, but things are what they are and I don't know if we can realistically expect everyone to change all of those things in the short term.
let's start with TPA so we can feel comfortable sharing access to each other's infra, and worry about the rest of TPI later
The pseudonym I use is easily linked to my birth name, so I don't care much anymore about hiding my name. This point is a "may", but yeah, I guess I don't fit into it.
I'd love to connect to email servers via onion service, but I use thunderbird and the TorBirdy extension stopped working a long while ago... I could possibly fiddle with the .desktop launcher to wrap thunderbird with torsocks...
level b:
I don't have an apparmor profile loaded for firefox and the debian package for firefox doesn't come with one. I wonder if we could share an example profile that could serve as the basis for that in the team
I use coc.nvim with extensions which download LSP servers from random places. I could/probably should work on containerizing the LSP servers but the coc extensions are generally not built for letting you use containers. I don't know how much work this is going to require and if it'll keep my editor stable
Same thing goes for vimspector which downloads DAP servers from random places, but that one I could simply uninstall since I don't use a debugger all that often. For what it's worth for vimspector, I know that it checks the sha512 sum of the archives it downloads.
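For reference, the checksum verification mentioned above boils down to something like this (a hashlib-based sketch, not vimspector's actual code):

```python
# Sketch of verifying a downloaded archive against a known SHA-512 sum.
# Note: a checksum only helps if the expected sum itself comes from a
# trusted channel; if both archive and sum come from the same untrusted
# source, it only protects against corruption, not tampering.
import hashlib

def verify_sha512(data: bytes, expected_hex: str) -> bool:
    """Compare the SHA-512 digest of downloaded bytes against a known sum."""
    return hashlib.sha512(data).hexdigest() == expected_hex

payload = b"some downloaded archive"
good_sum = hashlib.sha512(payload).hexdigest()
assert verify_sha512(payload, good_sum)
assert not verify_sha512(b"tampered archive", good_sum)
```

That caveat matters here: if vimspector fetches both the archive and its sha512 from the same place, the check adds integrity but not much trust.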
I use a good number of vim plugins, all from git checkouts. Do I need to audit all of the plugins every time I install/update one? or would I be better off containerizing vim with its plugins?
I can generally tick off all of the MUST points except for one and most of the SHOULD points. I guess the last point above for level b means I'm currently not conforming to that level's requirements.
I don't have an apparmor profile loaded for firefox and the debian package for firefox doesn't come with one. I wonder if we could share an example profile that could serve as the basis for that in the team
that would be really nice. i know at least @micah runs firefox inside of a container (snaps, but whatever) for that reason.
i am not sure, however, that containerising the entire browser is worth it: most modern browsers use "user namespaces" to do their own container thingies, to isolate tabs/sites from each other: if something manages to cross that boundary, you're already in big trouble because that shady image sharing website you just opened by mistake just got access to gitlab.tpo and you're owned. it doesn't matter if firefox can't escape ~/.mozilla that much.
still, it's nice to limit that blast radius, and i would very much welcome that myself.
that said, i should note that torbrowser-launcher does ship with apparmor profiles, perhaps we could review those, if tails folks don't have anything peculiar to start with
I use coc.nvim with extensions which download LSP servers from random places. I could/probably should work on containerizing the LSP servers but the coc extensions are generally not built for letting you use containers. I don't know how much work this is going to require and if it'll keep my editor stable
i disabled that stuff in emacs's LSP, and would like you to do the same, actually. it's not that much work to set those up by hand (and, basically, TOFU) when you encounter new things... it's not as good as containerizing, but it's better than yolo...
Same thing goes for vimspector which downloads DAP servers from random places, but that one I could simply uninstall since I don't use a debugger all that often. For what it's worth for vimspector, I know that it checks the sha512 sum of the archives it downloads.
that does seem like a less attractive tradeoff...
I use a good number of vim plugins, all from git checkouts. Do I need to audit all of the plugins every time I install/update one? or would I be better off containerizing vim with its plugins?
i have a similar problem with emacs packages, for what it's worth. i consider MELPA as a trustworthy source, part of my attack surface, but naturally that's kind of a problem in itself.
same applies to browser extensions as well, BTW... not sure how we should articulate this in the policy without opening the floodgates completely to arbitrary code...
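One way to make the TOFU approach discussed above a little more systematic: record each plugin's commit hash on first install, then flag any checkout whose hash changed so it can be reviewed before the update is trusted. The names and pin-file shape below are made up for illustration:

```python
# Hypothetical TOFU-style pinning check for editor plugins installed
# from git checkouts. "pinned" is what we reviewed and trusted before;
# "observed" is what is currently checked out on disk.

def check_pins(pinned: dict[str, str], observed: dict[str, str]) -> list[str]:
    """Return plugins whose checkout no longer matches the pinned hash,
    including plugins never seen before (which need a first review)."""
    return sorted(
        name for name, commit in observed.items()
        if pinned.get(name) != commit
    )

pins = {"vim-fugitive": "abc123", "vimspector": "def456"}
seen = {"vim-fugitive": "abc123", "vimspector": "fff999", "new-plugin": "0a0a0a"}
# only the changed or new checkouts are flagged for review
assert check_pins(pins, seen) == ["new-plugin", "vimspector"]
```

This doesn't audit the code, but it at least turns "git pull and hope" into a deliberate review step per update.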
anarcat marked the checklist item decide if the policy can be made public (@anarcat@groente@zen others?) if so, it should be documented in the wiki. either way, we use TPA-RFC-18 as a tracking number for this work. as completed
Perhaps one area the policy could be improved is where it applies. I use a desktop, laptop and a phone.
Does everything in the policy apply to all three devices?
Does it apply only conditionally (eg. if I push code... somewhere, from that device?)
Could there be parts that apply to some devices only (eg. full-disk encryption on Android?), and other parts to all devices (eg. using software update mechanisms)?
i hope this has become clearer now. tails never had a policy about phones, because we were all quite phone-reluctant people. i do think it makes sense to have some policies for phones, but i wonder: does anyone use their phone for sysadmin work? and if so, what do you use it for?
personally, the only thing i use my phone for is calendaring.
perhaps we might want to clearly state that phones are out of scope, but that we assume you don't do sysadmin shit on them.
i do send passwords over Signal from Signal-desktop (which percolates out to the phone!) when pgp fails, that said... so maybe we do need some sort of policy there.
personally, the only thing i use my phone for is calendaring.
Same here, right now.
FTR, I used to use aNag for Nagios/Icinga2 notifications on the phone, but I stopped because of too much noise and not enough added value for my use case.
i'm not sure we should be using the opsec templates stuff, to be quite frank.
it's a little jarring, in the document, to jump from the "normal" TPA-RFC template into what seems to be an entirely different document, with yet another copy of the table of contents.
if we're going to source this from elsewhere, let's just generate the document as is, and throw that entire file as the RFC. maybe we could prefix the front matter.
we also should have some documentation on how this was generated, and how we'll keep it in sync with the templates... really, i wonder if perhaps TPA-RFC-18 should just be a pointer to the document generated in the opsec pipelines, at this point, as otherwise we have two sources of truth for this and it will get confusing quick.
This document contains the baseline security procedures for protecting both an organization, its employees, it's contributors and the community in general.
This jargon is a little unclear to me: the top of the policy says the scope is only TPA, but then this talks about "the community in general" for example and seems to imply it might apply to all of tor.
this is copy-pasted from the template, so if you want this adjusted, we should adjust it there. personally, i don't really see the problem: the line you quote describes what/whom the policy is meant to protect, not to whom the policy applies.
well you're saying two things here: above you said "let's agree on the policy and not worry about how it's built", and here you're saying "let's fix this upstream"... so which one is it? :)
what i meant with 'not worry about how it's built' is 'not worry about the technicalities of how it's built and figure that out later', not 'not worry about (the possibility of) syncing up with upstream'
Is this list supposed to be exhaustive? Because if we're going to do an assessment of contents here, we're missing quite a few things. bridges.tpo, in particular, is quite sensitive, and so is logging in various places.
NextCloud, in itself, has varying security levels from PUBLIC, but certainly also "top secret" stuff like HR.
It's also not entirely clear to me what the difference is between PRIVATE, SECRET, and TOP SECRET, or why those are in upper case... At first I thought that was defined in the traffic light protocol, but it doesn't look like it.
Also:
It's RECOMMENDED that each document has a version and an INFOSEC status on it's beginning.
That is kind of a massive undertaking: does that mean each wiki page should have such a statement associated to it? or could we scope it at the repository (e.g. "wiki", "gitlab project X", "Nextcloud share Y") level?
I also found this section a bit unclear, @rhatto can you clarify what the intention is here?
It's RECOMMENDED that each document has a version and an INFOSEC status on it's beginning.
That is kind of a massive undertaking: does that mean each wiki page should have such a statement associated to it? or could we scope it at the repository (e.g. "wiki", "gitlab project X", "Nextcloud share Y") level?
IMHO, a pragmatic approach of simply starting to tag new documents and adding an infosec status if we edit documents is good enough.
couldn't we perhaps assume a "PUBLIC" status so that we don't pollute everything we publish with such a header by default? that seems like an unrealistic expectation...
it's not a blocker, but i don't think we should recommend people add a header to every single document out there.
for example, do we actually recommend that every single page in the TPA wiki should have such a header? that seems really verbose, especially since the entire wiki is public... some of that stuff is implicit no?
maybe we could just say we RECOMMEND adding the header for non-public stuff only?
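The "assume PUBLIC by default" reading could look something like this in practice: only documents carrying an explicit header get a non-public status, and everything else is treated as PUBLIC without having to add anything. The header syntax here is an assumption for illustration, not something the policy specifies:

```python
# Sketch of a default-PUBLIC classifier: look for an "INFOSEC:" header
# near the top of a document; absent that, the document counts as PUBLIC,
# so public pages never need to carry a header at all.

def infosec_status(document: str) -> str:
    """Return the INFOSEC status from the first lines, defaulting to PUBLIC."""
    for line in document.splitlines()[:5]:  # header must be near the top
        if line.upper().startswith("INFOSEC:"):
            return line.split(":", 1)[1].strip().upper()
    return "PUBLIC"

assert infosec_status("# Some wiki page\n\nNothing special.") == "PUBLIC"
assert infosec_status("INFOSEC: SECRET\n\n# HR notes") == "SECRET"
```

With a convention like this, the RECOMMENDED header only ever needs to appear on the non-public minority of documents.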
As an example, procedures for Online Support work begins with the OnSup prefix, so the first policy from the level 0 is referred as OnSup.0.1 etc.
I guess here we hit the meat of the opsec templates stuff... I find it kind of noisy, and it makes the document harder to read. It also looks like OIDs, which, in my psyche, is kind of traumatic, but that might be something I can live with. :p
Perhaps those could be embedded as comments in the markdown source instead, to make it easier to read?
Follow any existing organization-wide, baseline security policies.
considering we don't have any org-wide policies, this seems a little odd. It also seems strange coming from opsec-templates where each team adopts a subset of such policies...
During onboard, make the newcomers to be the reviewers of the security policies, templates and HOWTOs for one month, and encourage them to submit merge requests to fix any issues and outdated documentation.
Nice, but those two will require changes to our onboarding procedure (new-person.md) in the wiki. Not a big deal, but something that should be fixed before adoption, so this actually sticks.
Encryption passphrase SHOULD be considered strong and MUST NOT be used for other purposes.
Could we expand that? I manage multiple personal servers; should I really have to use a different LUKS secret key for all of those machines, and memorize all of them?
What does "strong" mean? Perhaps we could set an entropy target here, like "78 bits of entropy" (which is what a 6-word diceware passphrase gives you).
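The entropy figure mentioned above is easy to verify as a back-of-the-envelope calculation: a standard diceware list has 6^5 = 7776 words, so a 6-word passphrase gives log2(7776^6) ≈ 77.5 bits, i.e. roughly that "78 bits" figure.

```python
# Back-of-the-envelope diceware entropy: each uniformly chosen word from
# a list of N words contributes log2(N) bits, and words are independent.
import math

def diceware_entropy_bits(words: int, wordlist_size: int = 7776) -> float:
    return words * math.log2(wordlist_size)

assert round(diceware_entropy_bits(6), 1) == 77.5
assert diceware_entropy_bits(7) > 90  # one extra word adds ~12.9 bits
```

This only holds for passphrases generated by actual dice rolls (or a CSPRNG); a "6 words I picked myself" passphrase has far less entropy.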
Adopt safe procedures for handling key material (Onion Service keys, HTTPS certificates, SSH keys etc), including generation, storage, transmission, sharing, rollover, backing up and destruction.
I'm not sure what that actually means, so it should probably be clarified.
I would ditch this entire section: we don't need travel advice in TPA's security policy. If the company adopts such a policy, perhaps we could include it here if it's not already referred to by pid:general_security_policies, but even then, I find this a bit out of context and, to a certain extent, patronizing.
Use desktop isolation/sandboxing whenever possible (such as Qubes) (which threat models and roles it would apply etc), but not imposing this as a requirement.
The dreaded Qubes requirement crept in here! First, I don't like those double parens... but more importantly, it's not clear to me exactly what we're asking here... is it a MAY? Is it a MUST? we're "not imposing this as a requirement", why is this there then?
The tails policy looked better here: it was specifically providing various compatible solutions for this requirement, of which qubes was one option...
Isolated using either:
A virtual machine
A desktop session separate from the one that has the privileged access this policy is about, running under a non-privileged UID (not sudoer and not PolicyKit admin for example).
Flatpak with sandboxing enabled, running inside a Wayland desktop session.
Snap, running inside a Wayland desktop session, on a system with AppArmor enabled.
Containers (Docker, LXC, you name it).
You SHOULD configure your containers to make it harder for malicious code to escape the container… and to make the consequences of a successful escape less catastrophic.
Perhaps we could reuse this phrasing somehow?
Use a Hardware Security Token: 1.1. For Yubikeys, refer to the Yubikey HOWTO.
Let's expand on this significantly. I think the policy right now is that you MUST use a yubikey to store your encryption keys to be granted access to the password manager. You also MUST (or is it a SHOULD?) use a yubikey to be granted global root access through puppet.
considering we don't have any org-wide policies, this seems a little odd. It also seems strange coming from opsec-templates where each team adopts a subset of such policies...
it's just boilerplate, really, if we do ever get org-wide policies, this tells us we shouldn't ignore those just because we have our own little policy.
Could we expand that? I manage multiple personal servers; should I really have to use a different LUKS secret key for all of those machines, and memorize all of them?
this policy doesn't apply to your personal servers, does it?
What does "strong" mean? Perhaps we could set an entropy target here, like "78 bits of entropy" (which is what a 6-word diceware passphrase gives you).
i'm not a big fan of defining fast-changing things like this in a policy. imho a policy like this should lean more towards the tactical, rather than describing things on an operational level. we can later choose whether we want to add and maintain operational notes.
Adopt safe procedures for handling key material (Onion Service keys, HTTPS certificates, SSH keys etc), including generation, storage, transmission, sharing, rollover, backing up and destruction.
I'm not sure what that actually means, so it should probably be clarified.
if so, the clarification should imho be done in a separate operational note.
I would ditch this entire section: we don't need travel advice in TPA's security policy. If the company adopts such a policy, perhaps we could include it here if it's not already referred to by pid:general_security_policies, but even then, I find this a bit out of context and, to a certain extent, patronizing.
ah, i hadn't seen that MR yet, that looks more suitable to our needs. i've replaced the check-in section with a subset of the basic device policy (removing duplicate requirements for FDE and removing the controversial rule on bringing your primary device).
The dreaded Qubes requirement crept in here! First, I don't like those double parens... but more importantly, it's not clear to me exactly what we're asking here... is it a MAY? Is it a MUST? we're "not imposing this as a requirement", why is this there then?
sorry this triggered some trauma, but it's explicitly not a requirement and qubes is merely an example. to be more clear on the not-a-requirement-scale, i've set this to OPTIONAL now.
Let's expand on this significantly. I think the policy right now is that you MUST use a yubikey to store your encryption keys to be granted access to the password manager. You also MUST (or is it a SHOULD?) use a yubikey to get granted global root access through puppet.
okay, i've changed this to REQUIRED and bumped the level (for whatever that means).