Addressing Trac performance issues
We have been having issues with Trac for a while now. These are mostly caused by a number of crawlers walking our entire repository history through Trac's web interface.
AFAIK we are not sure whether these bots are rogue crawlers or search engines doing their normal duty. We could watch the logs while the issue is happening and try to spot known IP addresses or User-Agent strings.
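As a starting point, something like the following could summarize who is hitting us hardest. This is a sketch assuming an Apache-style combined log format; the log path and the sample entries are made up for illustration, so substitute the real front-end log location.

```shell
# Hypothetical log path -- replace with the real Trac front-end access log.
LOG=/tmp/trac-access.log

# Illustrative sample entries so the pipelines below have something to chew on.
printf '%s\n' \
  '203.0.113.5 - - [01/Jan/2024:00:00:01 +0000] "GET /timeline HTTP/1.1" 200 512 "-" "ExampleBot/1.0"' \
  '203.0.113.5 - - [01/Jan/2024:00:00:02 +0000] "GET /changeset/1 HTTP/1.1" 200 512 "-" "ExampleBot/1.0"' \
  '198.51.100.7 - - [01/Jan/2024:00:00:03 +0000] "GET /wiki HTTP/1.1" 200 512 "-" "Mozilla/5.0"' \
  > "$LOG"

# Top client IPs by request count.
awk '{print $1}' "$LOG" | sort | uniq -c | sort -rn | head

# Top User-Agent strings (the sixth double-quote-delimited field).
awk -F'"' '{print $6}' "$LOG" | sort | uniq -c | sort -rn | head
```

If a handful of addresses or one User-Agent dominates, that tells us whether we are dealing with search engines (which honor robots.txt) or rogue bots (which generally don't).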
Regardless, there are a few things that we could do:
Implement the Crawl-delay directive: https://en.wikipedia.org/wiki/Robots_exclusion_standard#Crawl-delay_directive There is a good chance this won't change anything (especially with rogue bots), but it doesn't affect users and it can be implemented right away.
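A minimal robots.txt along these lines would do it. The 10-second delay is arbitrary, and the Disallow entries are an optional extra that keeps well-behaved bots out of Trac's history-heavy views entirely (these are standard Trac URL prefixes, but check them against our install):

```
User-agent: *
Crawl-delay: 10
Disallow: /timeline
Disallow: /changeset
Disallow: /browser
Disallow: /log
Disallow: /search
```

Note that Crawl-delay is non-standard and ignored by some major crawlers, so this is cheap insurance rather than a fix.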
This could be DB-related, i.e. a crawler triggers some expensive SQL query and the Trac process dies on a timeout or similar. We know Trac search isn't fast either, so this would be an easy explanation.
Could this be a hardware issue? The host seems happy, but maybe we could easily upgrade the specs of the machine where Trac lives?
Try rate limiting with iptables, or put something like HAProxy in front? This would have to be discussed, as it means changing our policy on what we log: we wouldn't have to store logs for a long time, but we would probably need to keep some state (addresses, counters, etc.) for a few minutes. I would consider this a last resort.
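For the iptables route, a sketch using the "recent" match would look like this. The thresholds (20 new connections per source IP per 60 seconds) are illustrative, not a recommendation, and the rules assume plain HTTP on port 80 -- adjust for HTTPS or a non-standard port:

```
# Record each new connection to port 80 per source address...
iptables -A INPUT -p tcp --dport 80 -m state --state NEW \
         -m recent --set --name TRAC
# ...and drop sources exceeding 20 new connections in the last 60 seconds.
iptables -A INPUT -p tcp --dport 80 -m state --state NEW \
         -m recent --update --seconds 60 --hitcount 20 --name TRAC -j DROP
```

The state the kernel keeps here is exactly the short-lived per-address tracking mentioned above, which is why the logging/privacy policy question needs settling first.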