Onion service Proof of Work effort limit on the client side should not be hard-coded
Summary
Right now there's a hardcoded definition in hs_client.c:
#define CLIENT_MAX_POW_EFFORT 10000
I put this in after some limited experimentation on the machines I have handy, intending it as a temporary stand-in for an effort level that takes roughly a couple of minutes to solve.
Steps to reproduce:
On an especially slow client, this hardcoded limit means hs_client could end up spending an unacceptably long time on just the PoW-solving portion of an onion service introduction.
On an especially fast client with a service under heavy attack, this hardcoded limit may be insufficient.
What is the current bug behavior?
Effort values over 10000 are clamped to 10000, with a log notice. This value can only be changed by recompiling tor.
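For illustration, the clamping behaves roughly like the sketch below. This is not the actual hs_client.c source and the variable name is made up; only CLIENT_MAX_POW_EFFORT and the notice-level log are taken from the report above.
/* Illustrative sketch only, not the actual hs_client.c code: the effort
 * suggested by the service is clamped to the compile-time maximum before
 * the solver runs. */
if (suggested_effort > CLIENT_MAX_POW_EFFORT) {
  log_notice(LD_REND, "Clamping PoW effort %u down to %u.",
             (unsigned)suggested_effort, (unsigned)CLIENT_MAX_POW_EFFORT);
  suggested_effort = CLIENT_MAX_POW_EFFORT;
}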
What is the expected behavior?
One thought was that we should run a benchmark at some point, but I'd argue there's no good time to do one: benchmarking at startup is wasteful since most users won't need it, and benchmarking on first use adds significant latency to even a low-effort connection.
I propose a limited initial max effort (perhaps 1000 or so) along with an independent mechanism that keeps a running benchmark of the equix_solve() function. That benchmark can be used to calculate an approximate effort limit from a desired time limit.
That time limit can be more legitimately hardcoded, since it's related to other hardcoded timers and not to the client's CPU performance.
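As a rough sketch of what that could look like (the function names, the exponential moving average, and the fallback to 1000 are assumptions for illustration, not existing tor code): record the measured solve rate after each completed solve, then convert a hard-coded time budget into an effort cap.
#include <stdint.h>

/* Hypothetical sketch: exponential moving average of solver throughput,
 * measured in effort units per second of wall-clock time. */
static double pow_effort_per_sec = 0.0;

/* Record one completed solve: 'effort' was solved in 'seconds'. */
static void
pow_benchmark_note_solve(uint64_t effort, double seconds)
{
  if (seconds <= 0.0)
    return;
  double rate = (double)effort / seconds;
  if (pow_effort_per_sec == 0.0)
    pow_effort_per_sec = rate;
  else
    pow_effort_per_sec = 0.9 * pow_effort_per_sec + 0.1 * rate;
}

/* Convert a hard-coded time budget into an approximate effort limit,
 * using the limited initial max effort until we have any measurement. */
static uint64_t
pow_client_effort_limit(double budget_seconds)
{
  if (pow_effort_per_sec == 0.0)
    return 1000;
  return (uint64_t)(pow_effort_per_sec * budget_seconds);
}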
Relatedly, we should have a mechanism for cancelling queued PoW solves. If we had a robust cancellation mechanism, could we remove the client max effort entirely? That would simply trust the service's suggested effort, and if that turns out to be an arbitrarily large number, the worst outcome is that we waste CPU until the request is cancelled.
I think this would be ideal, avoiding any kind of preset upper limit.
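For the cancellation piece, something as simple as a flag checked between solver attempts would do. The following is a hypothetical sketch, not the current solver loop; pow_attempt_once() stands in for one call into the Equi-X solver and is not an existing function.
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical: one solver attempt at the given nonce; true on success. */
static bool pow_attempt_once(uint64_t effort, uint64_t nonce);

typedef struct pow_job_t {
  atomic_bool cancelled;   /* set by the main thread when the intro request
                            * is abandoned or a timeout fires */
  uint64_t effort;         /* effort suggested by the service */
} pow_job_t;

/* Returns true once a solution is found, false if the job was cancelled.
 * With a prompt cancellation path like this, even an arbitrarily large
 * suggested effort only wastes CPU until the request is dropped. */
static bool
pow_solve_cancellable(pow_job_t *job)
{
  for (uint64_t nonce = 0; ; nonce++) {
    if (atomic_load(&job->cancelled))
      return false;
    if (pow_attempt_once(job->effort, nonce))
      return true;
  }
}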