channelpadding_get_netflow_inactive_timeout_ms is fragile when high_timeout < low_timeout.

See channelpadding_get_netflow_inactive_timeout_ms and channelpadding_update_padding_for_channel.

So far as I can tell, nothing in channelpadding_get_netflow_inactive_timeout_ms (the function that generates the next timeout) actually enforces high_timeout > low_timeout. For example, if somebody sends a PADDING_NEGOTIATE cell with low_timeout set very high, we can get a negative value when we compute high_timeout - low_timeout.

This is bad! When we pass that value to crypto_rand_int(), we trip the assertion that the input is less than INT_MAX - 1, which exists to detect exactly this kind of underflow.

Fortunately, I think we cannot reach that situation via an errant PADDING_NEGOTIATE cell: channelpadding_update_padding_for_channel makes sure that the high timeout on each channel is at least the low timeout as it processes those cells. (I tested this with some code that sent those cells on a chutney network.)

Nevertheless, I would like to apply the following patch for defense-in-depth:

diff --git a/src/core/or/channelpadding.c b/src/core/or/channelpadding.c
index 47a04e5248caec..1f559f6c420d6c 100644
--- a/src/core/or/channelpadding.c
+++ b/src/core/or/channelpadding.c
@@ -186,7 +186,7 @@ channelpadding_get_netflow_inactive_timeout_ms(const channel_t *chan)
     high_timeout = MAX(high_timeout, chan->padding_timeout_high_ms);
   }
 
-  if (low_timeout == high_timeout)
+  if (low_timeout >= high_timeout)
     return low_timeout; // No randomization

What do you think?

I'm going to leave this ticket as confidential until somebody else can confirm that there isn't a way to trigger this in practice.

cc @mikeperry

Found while investigating #40591.