feat: add APIError, rate limit parsing, and retry backoff #15
base: main
Conversation
guys?
Hey @salemalem, thank you for your PR 🙏 The whole team is currently out of office at a conference, so we won't be able to review this until next week. Sorry about the delay!
@TheEdgeOfRage which conference are you referring to? ETH DevCon Argentina?
```go
if maxRetries != 0 && errAttempts >= maxRetries {
	return nil, fmt.Errorf("%w. %s", ErrorRetriesExhausted, err.Error())
}
fmt.Fprintln(os.Stderr, "failed to retrieve results. Retrying...\n", err)
```
Why remove the print here?
dune/http.go (Outdated)
```go
remStr := h.Get("X-RateLimit-Remaining")
resetStr := h.Get("X-RateLimit-Reset")

var lim, rem int
```
No need to shorten variable names :)
```diff
- var lim, rem int
+ var limit, remaining int
```
dune/http.go (Outdated)
```go
InitialBackoff: 500 * time.Millisecond,
MaxBackoff:     5 * time.Second,
```
Given that rate limits apply on a minute basis, I think it makes sense to bump these a bit higher, as it's very possible that you'll still be rate limited after waiting only a couple of seconds
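For illustration, a minimal sketch with minute-scale defaults; the RetryPolicy struct shape is assumed from the fields visible in this diff, and the values are only suggestions:

```go
package dune

import "time"

// RetryPolicy as assumed from this diff; only the two fields under
// discussion are shown here.
type RetryPolicy struct {
	InitialBackoff time.Duration
	MaxBackoff     time.Duration
}

// Defaults sized for a per-minute rate-limit window rather than a
// per-second one (illustrative values only).
var defaultRetryPolicy = RetryPolicy{
	InitialBackoff: 5 * time.Second,
	MaxBackoff:     time.Minute,
}
```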
dune/http.go (Outdated)
```go
	return &RateLimit{Limit: lim, Remaining: rem, Reset: reset}
}

func nextBackoff(attempt int, p RetryPolicy) time.Duration {
```
And would you mind moving the retry and backoff code into a separate file please (e.g. dune/retries.go) 🙏
```diff
- func nextBackoff(attempt int, p RetryPolicy) time.Duration {
+ func (p RetryPolicy) nextBackoff(attempt int) time.Duration {
```
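A sketch of what dune/retries.go could contain, reusing the RetryPolicy struct sketched above; the doubling-and-capping formula is an assumption, not necessarily the PR's actual implementation:

```go
package dune

import "time"

// nextBackoff returns the delay before the given zero-based attempt,
// doubling InitialBackoff once per attempt and capping at MaxBackoff.
func (p RetryPolicy) nextBackoff(attempt int) time.Duration {
	backoff := p.InitialBackoff
	for i := 0; i < attempt; i++ {
		backoff *= 2
		if backoff >= p.MaxBackoff {
			return p.MaxBackoff
		}
	}
	return backoff
}
```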
dune/http.go (Outdated)
```go
if err != nil {
	return nil, fmt.Errorf("failed to read error response body: %w", err)
}
snippetBytes, _ := io.ReadAll(io.LimitReader(resp.Body, 1024))
var er ErrorResponse
```
```diff
- var er ErrorResponse
+ var errorResp ErrorResponse
```
dune/http.go (Outdated)
```go
	return nil, fmt.Errorf("failed to read error response body: %w", err)
}
snippetBytes, _ := io.ReadAll(io.LimitReader(resp.Body, 1024))
var er ErrorResponse
_ = json.Unmarshal(snippetBytes, &er)
```
This unmarshal could fail, since the body is capped at 1024 bytes, so you'll need to handle the error here in that case
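One possible shape for that handling, as a sketch; the ErrorResponse field and the parseErrorBody helper are assumptions made for illustration:

```go
package dune

import (
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

// ErrorResponse is assumed here with a single field for illustration.
type ErrorResponse struct {
	Error string `json:"error"`
}

// parseErrorBody (hypothetical helper) reads at most 1 KiB of the
// error body and falls back to the raw snippet when the truncated
// JSON fails to unmarshal.
func parseErrorBody(resp *http.Response) error {
	snippetBytes, _ := io.ReadAll(io.LimitReader(resp.Body, 1024))
	var errorResp ErrorResponse
	if err := json.Unmarshal(snippetBytes, &errorResp); err != nil {
		// The 1024-byte cap can cut the JSON off mid-document, so the
		// raw snippet is the best error detail available.
		return fmt.Errorf("request failed with status %d: %s", resp.StatusCode, snippetBytes)
	}
	return fmt.Errorf("request failed with status %d: %s", resp.StatusCode, errorResp.Error)
}
```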
@salemalem I'm sorry about this, I seem to have accidentally deleted the notification for this PR and forgot about it 😬 Yeah, it was DevConnect :)
@TheEdgeOfRage OK, let me review your suggestions and get back to you.
…; use method-based backoff
PR Summary
Adds a configurable retry policy with exponential backoff and enhances HTTP/error handling (APIError, rate limits, Retry-After), applying backoff in execution result polling.
Written by Cursor Bugbot for commit 7646e87.
@TheEdgeOfRage thanks for the thorough review. I've addressed your points and aligned with the house style. I'll resolve the current merge conflicts and push the updates. If you'd like a different jitter strategy or different backoff defaults, I can adjust quickly. Thanks again!
dune/http.go (Outdated)
```go
}
time.Sleep(sleep)
attempt++
continue
```
Bug: POST request body empty on retry
When retrying a request after receiving a retryable status code (429, 5xx), the code reuses the same http.Request object without resetting its body. The request body (req.Body) is consumed by http.DefaultClient.Do() on the first attempt. On subsequent retry attempts, the body is empty because it has already been read. POST requests with JSON payloads (like QueryExecute, SQLExecute, QueryPipelineExecute) will send empty bodies on retry, causing silent failures. The fix would require calling req.GetBody() to regenerate the body before each retry.
Additional Locations (1)
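A sketch of that fix; the resetBody helper name is hypothetical, but http.Request.GetBody is the standard-library hook for exactly this (it is set automatically by http.NewRequest for common body types and is nil for bodiless requests):

```go
package dune

import (
	"fmt"
	"net/http"
)

// resetBody regenerates req.Body before a retry attempt so a POST
// payload is not sent empty the second time around.
func resetBody(req *http.Request) error {
	if req.GetBody == nil {
		return nil // nothing to rewind for bodiless requests
	}
	body, err := req.GetBody()
	if err != nil {
		return fmt.Errorf("failed to rewind request body: %w", err)
	}
	req.Body = body
	return nil
}
```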
dune/http.go (Outdated)
```go
}

if resp.StatusCode != 200 {
	defer resp.Body.Close()
```
Bug: Defer in loop accumulates unclosed response bodies
The defer resp.Body.Close() statement is inside the retry loop. Each time the loop iterates with a non-200 status code, a new defer is added without the previous response body being closed immediately. This keeps multiple response bodies and their associated resources (TCP connections, file descriptors) open until the function returns, which could exhaust connection pool slots or cause "too many open files" errors during extended retry sequences.
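A sketch of a loop shape that avoids the accumulation, closing each retryable response immediately and leaving only the returned response open; the names and the fixed sleep are placeholders, and the body-rewind issue from the previous comment still applies:

```go
package dune

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// doWithRetry illustrates closing response bodies inside the loop
// instead of deferring, so only the returned response stays open.
// NOTE: a real implementation would also rewind req.Body between
// attempts (see the previous Bugbot comment).
func doWithRetry(req *http.Request, maxAttempts int) (*http.Response, error) {
	for attempt := 0; attempt < maxAttempts; attempt++ {
		resp, err := http.DefaultClient.Do(req)
		if err != nil {
			return nil, err
		}
		if resp.StatusCode == 200 {
			return resp, nil // caller is responsible for closing this one
		}
		// Drain a little and close immediately so retries don't pile
		// up open bodies, TCP connections, and file descriptors.
		io.Copy(io.Discard, io.LimitReader(resp.Body, 4096))
		resp.Body.Close()
		time.Sleep(time.Second) // placeholder; real code would use nextBackoff
	}
	return nil, fmt.Errorf("giving up after %d attempts", maxAttempts)
}
```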
…s; move RetryPolicy
No description provided.