Trac · Issue #2755

Closed (moved)
Created Mar 14, 2011 by Karsten Loesing (@karsten)

Reconsider BridgeDB's pool assignment file implementation and deployment

While deploying the new BridgeDB feature that dumps bridge pool assignments to disk, I had two ideas for improving it. Neither is critical, and I want to see how well the current implementation works before refining it further. Maybe we should revisit these ideas in a month or two.

  • We only write to assignments.log after parsing a new network status file, but not after dumping unallocated bridges to file buckets. That means assignments.log can lag by up to 30 minutes: if, say, Twitter starts distributing new bridges between a dump to file buckets and the next network status parse, we don't record that fact until the next parse. Does that matter? Should we also write to assignments.log after dumping to file buckets?

  • Appending to a single assignments.log file and rsyncing that to the host that sanitizes it won't scale forever. We could run a monthly or weekly cronjob that runs "mv assignments.log assignments.log.old". metrics-db can handle multiple input files and will read both files, as long as they are rsync'ed correctly.
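The rotation cronjob from the second bullet could be as simple as the following sketch. The paths here are a self-contained demo, not the actual BridgeDB deployment layout; the only real piece is the "mv assignments.log assignments.log.old" step, after which metrics-db reads both files on the next rsync.

```shell
# Stand-in for BridgeDB's working directory (illustrative only).
LOGDIR=$(mktemp -d)
printf 'sample assignment line\n' > "$LOGDIR/assignments.log"

# The cronjob body: rotate only if the log exists, so an empty
# interval doesn't produce an error or clobber the previous .old file.
[ -f "$LOGDIR/assignments.log" ] && \
    mv "$LOGDIR/assignments.log" "$LOGDIR/assignments.log.old"
```

After the rotation, BridgeDB's next append recreates assignments.log, and both assignments.log.old and the fresh assignments.log get picked up by the rsync to the sanitizing host.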
