In message <199609260135.SAA05454@toad.com>, John Gilmore writes:
>swapped-out majordomo processes fighting for a lock. It takes each of
>them more than ten seconds to swap in and look -- and only ONE of them
>actually has the lock, so the other 79 are just getting in that one's
>way. They should all back off, rather than delaying the same amount
>after each successive failure to grab the lock.)
I agree a simple random 1-10 second sleep is not sufficient. Even with a
few rapid bogus unsubscribes I get warnings from Majordomo about not finding
L.Log -- apparently a race condition? (Or maybe shlock is warning
on a harmless condition?)
Anyway, exponential or even linear backoff is good, but you need a
random component in there to keep the backed-off processes from fighting
with each other again.
The best of both worlds would be to add a random value to the sleep
interval on each try. Perhaps something in the range of 5-15 seconds?
I see the advantage of having a sharp initial backoff -- it makes
sense when you're bombarded with lots of requests. But if it's too sharp,
you get a lot of dead time after the first few requests are
processed, and then another spike when all those sleep(100)s wake up.