The Tor Project / TPA / TPA team · Issues · #40476 · Closed
Issue created Oct 21, 2021 by anarcat (Owner)

persistent GitLab runner volumes for shadow simulations

We're having disk and reliability issues of various kinds in the shadow simulations because the amount of data to process is so large. Simulations sometimes stop, and it's hard to examine their state from the artifacts, which sometimes fail to upload because they are too big or because GitLab is unavailable.

The solution we've picked is to have a shared volume that jnewsome could access.
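As a rough sketch, a shared, persistent volume could be wired into the runner's `config.toml` for the Docker executor. This is only an illustration, not our actual config: the runner name and the host path `/srv/shadow-cache` are hypothetical, and the `environment` entry assumes feature flags can be passed that way:

```toml
[[runners]]
  name = "shadow-runner"          # hypothetical runner name
  executor = "docker"
  # assumption: enable the umask feature flag via a job environment variable
  environment = ["FF_DISABLE_UMASK_FOR_DOCKER_EXECUTOR=1"]
  [runners.docker]
    image = "debian:bullseye"
    # bind-mount a host directory into the containers so simulation
    # state survives job teardown; host path is hypothetical
    volumes = ["/srv/shadow-cache:/cache:rw"]
```

With a bind mount like this, the data stays on the host under a known path instead of an anonymous per-project cache volume, which is what would make it inspectable from a shell on the host.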

From #40340 (comment 2755709):

We discussed this a little bit today. The proposed solution is:

  • jnewsome to use the already-mounted /cache volume for persistent storage
  • give jnewsome some kind of remote shell access to the host machine
  • currently all cache volumes are world-accessible, so giving jnewsome even unprivileged access to the host would implicitly give access to other projects' cache volumes. We discussed temporarily disabling the 'public' runner on this host machine as a stop-gap, but maybe we can instead enable FF_DISABLE_UMASK_FOR_DOCKER_EXECUTOR?
  • Add some way for jnewsome to inspect the shadow cache volume as an unprivileged user.[...]
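The world-accessible concern above comes down to the umask the runner uses when writing into the cache volume: with umask 0000, every file is created world-writable. A quick demo of the difference (file names are arbitrary; run in any scratch directory):

```shell
# With umask 0000 (what makes cache volumes world-accessible),
# new files come out mode 666: any user on the host can modify them.
umask 0000
touch world_writable.txt
# With a restrictive umask, files are private to the creating user.
umask 0077
touch owner_only.txt
stat -c '%a %n' world_writable.txt owner_only.txt
# 666 world_writable.txt
# 600 owner_only.txt
```

This is why disabling the forced umask (via FF_DISABLE_UMASK_FOR_DOCKER_EXECUTOR) would let per-project cache files keep ordinary ownership and modes, so unprivileged host access for jnewsome wouldn't implicitly expose other projects' caches.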