How to retry a Polly rate limit when the rate limit has been exceeded?
Question
I've got the following policy setup:
```csharp
// Slack api for postMessage is generally 1 per channel per second.
// Some workspace specific limits may apply.
// We just limit everything to 1 second.
var slackApiRateLimitPerChannelPerSecond = 1;
var rateLimit = Policy.RateLimitAsync(slackApiRateLimitPerChannelPerSecond, TimeSpan.FromSeconds(slackApiRateLimitPerChannelPerSecond),
    (retryAfter, _) => retryAfter.Add(TimeSpan.FromSeconds(slackApiRateLimitPerChannelPerSecond)));
```
This should:
- Rate limit requests to 1 req/s
- Retry when rate limited
I can't wrap my head around wrapping this into a second policy that would retry...
I could retry this like so:
```csharp
try
{
    _policy.Execute(...)
}
catch (RateLimitRejectedException ex)
{
    // Policy.Retry with ex.RetryAfter
}
```
But that does not seem right.
I'd like to retry this a couple (3?) of times so the method is a bit more resilient - how would I do that?
Answer 1
Score: 4
I might be late to the party but let me put in my 2 cents.
Rate limiter
This policy's engine implements the token bucket algorithm in a lock-free fashion. This has an implication: it does not work the way you might intuitively think.
For instance, from this policy's perspective 1 request / second is the same as 60 requests / minute.
In reality the latter should not impose an even distribution (but it does)!
So, you can't use it like this:
- issue 50 requests in the first 10 seconds
- 45 seconds without any requests
- in the last 5 seconds 9 more requests can be issued without reaching the limit
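A minimal console sketch of that behaviour, assuming Polly 7.2+ (the `EvenDistributionDemo` name and the message strings are just illustrative): a limiter declared as 60 executions per minute still rejects an immediate burst, because tokens trickle in one per second.

```csharp
using System;
using Polly;
using Polly.RateLimit;

class EvenDistributionDemo
{
    static void Main()
    {
        // Declared as 60 executions per minute, which the engine treats as one token per second.
        var limiter = Policy.RateLimit(60, TimeSpan.FromMinutes(1));

        for (var i = 1; i <= 5; i++)
        {
            try
            {
                var n = i;
                limiter.Execute(() => Console.WriteLine($"request {n} allowed"));
            }
            catch (RateLimitRejectedException ex)
            {
                // With the default bucket size only the first request of the burst succeeds;
                // the rest are rejected together with a RetryAfter hint.
                Console.WriteLine($"request {i} rejected, retry after {ex.RetryAfter}");
            }
        }
    }
}
```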
Rate limiter as shared policy
In Polly most of the policies are stateless. This means two executions do not need to share anything.
But in the case of the Circuit Breaker there is state inside a controller, so you should use the same instance across multiple executions.
In the case of the Bulkhead and Rate Limiter policies the state is not so obvious; it is hidden inside the implementation. But the same rule applies here: you should share the same policy instance between multiple threads to achieve the desired outcome.
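For instance, a sketch of sharing one instance (the `SlackClient` wrapper and its members are hypothetical, not a Polly API): the policy lives in a single static field and every call goes through it.

```csharp
using System;
using System.Threading.Tasks;
using Polly;
using Polly.RateLimit;

public class SlackClient
{
    // One shared instance: the token bucket state lives inside this policy,
    // so creating a new policy per call would defeat the rate limiting.
    private static readonly AsyncRateLimitPolicy SharedLimiter =
        Policy.RateLimitAsync(1, TimeSpan.FromSeconds(1));

    // `sendMessage` stands in for the actual Slack call.
    public Task PostAsync(Func<Task> sendMessage) =>
        SharedLimiter.ExecuteAsync(sendMessage);
}
```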
Rate limiter vs Rate gate
The rate limiter itself can be used on both the client and the server side. The server side can proactively refuse excess requests to mitigate flooding, whereas the client side can proactively self-restrict its outgoing requests to obey the contract between the server and the client.
This policy is more suitable for the server side (see the RetryAfter property). On the client side a rate gate implementation might be more appropriate, which delays outgoing requests by utilizing queues and timers.
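Polly does not ship a rate gate, but a hypothetical minimal one (the `RateGate` class and its `WaitAsync` method are invented here purely for illustration) could delay callers instead of rejecting them:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

// A minimal rate gate: instead of rejecting excess calls,
// it delays them so that at most one call starts per interval.
public sealed class RateGate
{
    private readonly TimeSpan _interval;
    private readonly SemaphoreSlim _mutex = new(1, 1);
    private DateTime _nextSlotUtc = DateTime.MinValue;

    public RateGate(TimeSpan interval) => _interval = interval;

    public async Task WaitAsync(CancellationToken ct = default)
    {
        await _mutex.WaitAsync(ct);
        try
        {
            var now = DateTime.UtcNow;
            var wait = _nextSlotUtc - now;

            // Reserve the next slot for the caller after this one.
            _nextSlotUtc = (wait > TimeSpan.Zero ? _nextSlotUtc : now) + _interval;

            // If the current slot is in the future, wait for it instead of failing.
            if (wait > TimeSpan.Zero)
                await Task.Delay(wait, ct);
        }
        finally
        {
            _mutex.Release();
        }
    }
}

// Usage: await gate.WaitAsync(); then issue the outgoing request.
```

The point is the difference in behaviour: the limiter says "no, come back later", while the gate says "wait here until your slot comes up".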
Rate limiter with retry
If retry and rate limiter both live on the client side:
```csharp
var retryPolicy = Policy
    .Handle<RateLimitRejectedException>()
    .WaitAndRetry(
        3,
        (int _, Exception ex, Context __) => ((RateLimitRejectedException)ex).RetryAfter,
        (_, __, ___, ____) => { });
```
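To connect this back to the original question, the retry can then be wrapped around the rate limiter (a sketch; `PostMessage`, `channel` and `text` are placeholders for the actual Slack call):

```csharp
// Retry must be the outer policy so that it catches the
// RateLimitRejectedException thrown by the inner rate limiter.
var rateLimit = Policy.RateLimit(1, TimeSpan.FromSeconds(1));
var resilient = retryPolicy.Wrap(rateLimit);

resilient.Execute(() => PostMessage(channel, text));
```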
If the retry resides on the client side whereas the rate limiter is on the server side:
```csharp
var retryPolicy = Policy<HttpResponseMessage>
    .HandleResult(res => res.StatusCode == HttpStatusCode.TooManyRequests)
    .WaitAndRetry(
        3,
        (int _, DelegateResult<HttpResponseMessage> res, Context __)
            => res.Result.Headers.RetryAfter.Delta ?? TimeSpan.FromSeconds(0));
```
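A usage sketch for that case, assuming .NET 5+ for the synchronous HttpClient.Send (the endpoint URL and the empty request are placeholders): the HTTP call runs through the retry policy, so a 429 response is retried after the server-provided Retry-After delay.

```csharp
using var http = new HttpClient();

// Each attempt goes through retryPolicy; a 429 triggers a wait and a retry.
var response = retryPolicy.Execute(() =>
    http.Send(new HttpRequestMessage(HttpMethod.Post, "https://slack.com/api/chat.postMessage")));
```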
Answer 2
Score: 1
You can omit the factory and wrap the rate-limiting policy into another one:
```csharp
var ts = TimeSpan.FromSeconds(1);
var rateLimit = Policy.RateLimit(1, ts);

var policyWrap = Policy.Handle<RateLimitRejectedException>()
    .WaitAndRetry(3, _ => ts) // note that you might want to use a more advanced back-off policy here
    .Wrap(rateLimit);

policyWrap.Execute(...);
```
If you want to respect the returned RetryAfter, then the try-catch approach is the way to go, based on the documentation example:
```csharp
public async Task SearchAsync(string query, HttpContext httpContext)
{
    var rateLimit = Policy.RateLimitAsync(20, TimeSpan.FromSeconds(1), 10);

    try
    {
        var result = await rateLimit.ExecuteAsync(() => TextSearchAsync(query));
        var json = JsonConvert.SerializeObject(result);

        httpContext.Response.ContentType = "application/json";
        await httpContext.Response.WriteAsync(json);
    }
    catch (RateLimitRejectedException ex)
    {
        string retryAfter = DateTimeOffset.UtcNow
            .Add(ex.RetryAfter)
            .ToUnixTimeSeconds()
            .ToString(CultureInfo.InvariantCulture);

        httpContext.Response.StatusCode = 429;
        httpContext.Response.Headers["Retry-After"] = retryAfter;
    }
}
```
UPD
There is a WaitAndRetry overload with a sleepDurationProvider that also receives the exception, so it can be used with the Wrap approach as well:
```csharp
var policyWrap = Policy.Handle<RateLimitRejectedException>()
    .WaitAndRetry(5,
        sleepDurationProvider: (_, ex, _) => (ex as RateLimitRejectedException)?.RetryAfter.Add(TimeSpan.From....) ?? TimeSpan.From...,
        onRetry: (ex, _, i, _) => { Console.WriteLine($"retry: {i}"); })
    .Wrap(rateLimit);
```