Fix Redis rate limiter sliding window cleanup #133
base: master
Conversation
gemini-code-assist left a comment

Summary of Changes: This pull request resolves a significant issue in the Redis-based rate limiter where the sliding window mechanism failed to properly clean up old entries. Previously, the incorrect Redis command was used for cleanup, causing rate-limited keys to remain perpetually blocked. The change replaces the command with the correct one, ensuring that the rate limiter functions as designed and allowing keys to recover after their configured time window has elapsed.
MarcEspiard left a comment
Please build and test your changes before submitting a PR.

The issue is real, so I'll keep this open so you can finish addressing it.

Thank you
```diff
 // Remove all elements older than our window
 let _: () = conn
-    .zrevrangebyscore(&redis_key, 0, window_start as i64)
+    .zremrangebyscore(&redis_key, 0, window_start as i64)
```
The build doesn't compile: `zremrangebyscore` doesn't exist in redis-rs. The equivalent redis-rs function is `zrembyscore`.
Force-pushed: 95eba0b to 99fcea7
Hey @MarcEspiard, good catch: the Redis command is `ZREMRANGEBYSCORE`, but the equivalent redis-rs method is `zrembyscore`. I have updated the code to use that method and ran `cargo check` to confirm it compiles.

Yesterday, I noticed that Sockudo threw a rate limit error behind Traefik when `trusted_hops = 0` during the health check, so I thought it would be a good idea to fix it.
Summary
This Pull Request fixes a bug in the Redis-based rate limiter (both single-node and cluster variants) where old entries in the sliding window were never removed. Under sustained traffic, this could cause individual rate-limit keys to remain permanently "full" and reject all subsequent requests with HTTP 429, even after the configured time window had elapsed.
Background
The Redis rate limiter uses a sorted set per key to implement a sliding-window algorithm:

- every request adds a member whose score is the current timestamp,
- entries with scores older than the window should be pruned,
- `ZCARD` counts the remaining entries to decide whether the request is allowed,
- `EXPIRE` is set periodically so idle keys are eventually cleaned up.
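The per-key algorithm can be sketched with an in-memory stand-in for the sorted set (the type and method names here are hypothetical, for illustration; the real implementation issues the equivalent Redis commands against a live server):

```rust
use std::collections::BTreeMap;

/// Toy model of one key's Redis sorted set: member id -> timestamp score (ms).
struct SlidingWindow {
    entries: BTreeMap<u64, u64>,
    next_id: u64,
    window_ms: u64,
    max_requests: usize,
}

impl SlidingWindow {
    fn new(window_ms: u64, max_requests: usize) -> Self {
        Self { entries: BTreeMap::new(), next_id: 0, window_ms, max_requests }
    }

    /// One rate-limit check, mirroring the intended Redis command sequence:
    /// ZREMRANGEBYSCORE (prune), ZCARD (count), ZADD (record).
    fn allow(&mut self, now_ms: u64) -> bool {
        let window_start = now_ms.saturating_sub(self.window_ms);
        // Prune everything at or before window_start -- the step the bug skipped.
        self.entries.retain(|_, ts| *ts > window_start);
        if self.entries.len() >= self.max_requests {
            return false; // over the limit inside this window
        }
        self.entries.insert(self.next_id, now_ms);
        self.next_id += 1;
        true
    }
}

fn main() {
    let mut w = SlidingWindow::new(1_000, 2); // 2 requests per second
    assert!(w.allow(10_000));
    assert!(w.allow(10_100));
    assert!(!w.allow(10_200)); // third request inside the window is rejected
    assert!(w.allow(11_500));  // window has moved on: old entries pruned, key recovers
    println!("sliding-window model ok");
}
```

With correct pruning, the final request at `t = 11_500` succeeds because the two entries from `t = 10_000` and `t = 10_100` fall outside the one-second window and are removed before counting.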
In both `RedisRateLimiter` and `RedisClusterRateLimiter`, the cleanup step was implemented with `ZREVRANGEBYSCORE`, which only reads the matching members; it does not delete them. Because the result was ignored and there was no additional cleanup, old timestamps were never removed from the ZSET. When combined with the periodic `EXPIRE` on the key, a hot key could end up in a state where:

- `ZCARD` eventually returns a value greater than or equal to `max_requests`.
- Continued traffic keeps refreshing the key's `EXPIRE`, so it never naturally expires.
- The limiter computes `remaining = 0` and rejects all new requests indefinitely for that key.
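The failure mode is easy to reproduce with a toy model (helper names here are hypothetical): `prune_readonly` mimics the old `ZREVRANGEBYSCORE` call, which reads stale members and discards the result, while `prune_delete` mimics `ZREMRANGEBYSCORE`, which actually removes them:

```rust
use std::collections::BTreeMap;

// Toy ZSET for one rate-limit key: member id -> timestamp score.
type ZSet = BTreeMap<u64, u64>;

/// Old behavior: the stale members are read, but the set never shrinks.
fn prune_readonly(zset: &ZSet, window_start: u64) -> usize {
    zset.values().filter(|&&ts| ts <= window_start).count() // read-only
}

/// Fixed behavior: stale members are deleted from the set.
fn prune_delete(zset: &mut ZSet, window_start: u64) {
    zset.retain(|_, ts| *ts > window_start);
}

fn main() {
    // Five requests recorded at timestamps 1000..1004, all long past.
    let mut zset: ZSet = (0..5).map(|i| (i, 1_000 + i)).collect();
    let window_start = 2_000; // the whole set is stale by now

    let stale = prune_readonly(&zset, window_start);
    assert_eq!(stale, 5);      // all five entries are stale...
    assert_eq!(zset.len(), 5); // ...yet the set is still "full": remaining = 0 forever

    prune_delete(&mut zset, window_start);
    assert_eq!(zset.len(), 0); // the key recovers once stale entries are removed
    println!("read-only cleanup left 5 entries; delete cleanup left 0");
}
```

The read-only variant leaves `ZCARD` permanently at or above `max_requests`, which is exactly the stuck state described above.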
`DEL` on the key (e.g., via `reset()` or manual intervention in Redis).

Changes
The fix is minimal and targeted:
In `src/rate_limiter/redis_limiter.rs` and `src/rate_limiter/redis_cluster_limiter.rs`, replace the cleanup call `.zrevrangebyscore(&redis_key, 0, window_start as i64)` with `.zrembyscore(&redis_key, 0, window_start as i64)`.
No configuration fields, public types, or external interfaces are changed.
Impact
With this change:

- stale entries are actually removed from the sorted set on each check,
- rate-limited keys recover once the configured time window has elapsed,
- sustained traffic no longer drives a key into a permanently "full" state that rejects every request with HTTP 429.
This change is backward compatible from a configuration and API perspective and only affects the internal cleanup semantics of the rate limiter.
Testing
- `cargo check` passes locally.
- Manually sent more than `max_requests` within the window and observed `429` as expected.