Conversation
This is an interesting idea! But I want to note that even setting concurrency = 1 is not a guarantee you won't hit rate limits, e.g. for endpoints where the limit is 1 request per second, we may still make 3-4 requests in that second, just not concurrently. So rather than tuning concurrency, which is a bit of a blunt instrument, one way or another we need a proper rate limiter. The other thing I note here is that this is currently only implemented for the DFS scheduler, which (IIRC) is not used most of the time nowadays.
All other schedulers except the queue one wrap the DFS scheduler (they re-order the table-client pairs before calling the DFS logic). Agreed that in some cases concurrency won't be good enough. I think the best approach would actually be to handle rate limiting in each service SDK, since there we can read the rate-limit headers, see how much quota we have left, and avoid exceeding it. If the service doesn't expose rate-limit data, we should fall back to a requests-per-second rate limiter. We also know that relying on statically documented rate limits is not so reliable, as the limits aren't consistent between accounts and the docs sometimes don't reflect reality.
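For the fallback case (no rate-limit headers exposed by the service), a minimal sketch of what a requests-per-second limiter could look like in Go is below, using `golang.org/x/time/rate`. This is only an illustration of the idea, not the SDK's implementation; `apiCall` and the 1 req/sec limit are hypothetical placeholders.

```go
package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/time/rate"
)

func main() {
	// Hypothetical limit: 1 request per second with a burst of 1, so even
	// concurrent callers line up behind the limiter instead of firing at once.
	limiter := rate.NewLimiter(rate.Every(time.Second), 1)

	ctx := context.Background()
	for i := 0; i < 3; i++ {
		// Wait blocks until the limiter allows the next request.
		if err := limiter.Wait(ctx); err != nil {
			fmt.Println("rate limiter:", err)
			return
		}
		apiCall(i)
	}
}

// apiCall stands in for a real SDK request.
func apiCall(i int) {
	fmt.Printf("request %d at %s\n", i, time.Now().Format(time.RFC3339Nano))
}
```

When the service does expose rate-limit headers, the limiter's rate could be adjusted dynamically from the remaining-quota values instead of a static documented limit.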
Summary
Use the following steps to ensure your PR is ready to be reviewed
- Run `go fmt` to format your code 🖊
- Run `golangci-lint run` 🚨 (install golangci-lint here)