
Conversation

@gullmar (Collaborator) commented Nov 11, 2025

  • Always try to start the run, and parse the returned error.
  • When the "retry" option is enabled and an appropriate error is matched, put the request back in the queue (it will become last!).
  • Do not log the total account memory.

Here is a test run exercising this implementation with a release candidate of apify-orchestrator: https://console.apify.com/admin/users/Ae6Jrtb88GD32gKfK/actors/runs/zfCeR6pjKczkE46AF#log
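As a rough illustration (not the actual implementation), the behavior described above could look like the following TypeScript sketch. Only parseStartRunError, retryOnInsufficientResources, and MAIN_LOOP_COOLDOWN_MS are names that appear in this PR; RunRequest, startRun, sleep, and the stubbed error matching are hypothetical placeholders.

```ts
// Sketch only: the real logic lives in src/clients/apify-client.ts
// and src/utils/apify-client.ts.
interface RunRequest { actorId: string; memoryMbytes: number }

const MAIN_LOOP_COOLDOWN_MS = 5_000; // assumed value

const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

// Stub: the real code starts the run through the Apify client.
async function startRun(request: RunRequest): Promise<void> {
    throw new Error(`Not enough memory to start ${request.actorId}`);
}

// Stub: the real parser inspects the Apify API error response.
function parseStartRunError(error: unknown): 'insufficient-resources' | 'other' {
    const message = error instanceof Error ? error.message : '';
    return /memory|concurrent/i.test(message) ? 'insufficient-resources' : 'other';
}

async function mainLoopIteration(
    queue: RunRequest[],
    retryOnInsufficientResources: boolean,
): Promise<void> {
    const nextRunRequest = queue.shift();
    if (!nextRunRequest) return;
    try {
        // Optimistically try to start the run instead of checking
        // /users/me/limits first.
        await startRun(nextRunRequest);
    } catch (error) {
        if (retryOnInsufficientResources && parseStartRunError(error) === 'insufficient-resources') {
            // Put the request back in the queue (it becomes last) and
            // cool down before the next attempt.
            queue.push(nextRunRequest);
            await sleep(MAIN_LOOP_COOLDOWN_MS);
        } else {
            throw error;
        }
    }
}
```

With the retry option enabled, a request that fails on resources cycles to the back of the queue, which is the behavior discussed in the review threads below.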

@gullmar gullmar requested a review from Copilot November 11, 2025 13:52
@gullmar gullmar self-assigned this Nov 11, 2025
Copilot finished reviewing on behalf of gullmar November 11, 2025 13:56
Copilot AI left a comment

Pull Request Overview

This PR refactors the resource management approach to avoid using the sensitive /users/me/limits endpoint. Instead of proactively checking resource limits before attempting to start runs, the implementation now optimistically tries to start runs and parses error responses to determine if resources are insufficient.

Key changes:

  • Implements an optimistic "try first, handle errors" approach instead of checking limits upfront
  • Adds error parsing logic to detect memory and concurrent job limit errors from the Apify API (sketched after this list)
  • When retry is enabled and resource errors are detected, failed requests are re-enqueued at the end of the queue
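A hedged sketch of what that error parsing could look like. ApifyApiError is the error class thrown by the apify-client package, but the matching patterns below are assumptions, not the API's actual error wording.

```ts
import { ApifyApiError } from 'apify-client';

// Illustrative guess at the parser in src/utils/apify-client.ts: the
// regexes stand in for whatever the real error matching checks.
export function parseStartRunError(error: unknown): 'insufficient-resources' | 'other' {
    if (!(error instanceof ApifyApiError)) return 'other';
    const hitsMemoryLimit = /memory/i.test(error.message);
    const hitsConcurrentJobsLimit = /concurrent/i.test(error.message);
    return hitsMemoryLimit || hitsConcurrentJobsLimit ? 'insufficient-resources' : 'other';
}
```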

Reviewed Changes

Copilot reviewed 12 out of 13 changed files in this pull request and generated no comments.

| File | Description |
| --- | --- |
| src/utils/apify-client.ts | New utility function to parse start run errors and identify insufficient resource conditions |
| src/utils/apify-api.ts | Removed getUserLimits function that called the sensitive limits endpoint |
| src/errors.ts | Updated InsufficientMemoryError to only track required memory (not available memory) and changed units from GBs to MBs |
| src/clients/apify-client.ts | Refactored main scheduler loop to optimistically attempt runs and handle errors, re-enqueuing on insufficient resources |
| test/utils/apify-client.test.ts | Added tests for the new parseStartRunError function |
| test/utils/apify-api.test.ts | Removed tests for deleted getUserLimits function |
| test/retry-on-insufficient-resources.test.ts | Updated tests to mock parseStartRunError instead of getUserLimits |
| test/index.test.ts | Updated test to verify ActorClient.start is called instead of getUserLimits |
| test/clients/run-client.test.ts | Removed getUserLimits mock setup |
| test/clients/apify-client.test.ts | Updated scheduler tests to mock parseStartRunError and verify new behavior |
| test/clients/actor-client.test.ts | Removed getUserLimits mock helper function |
| package.json | Bumped version to 0.7.0-rc.1 and added axios as dev dependency |
| package-lock.json | Updated lock file with new version and axios dependency |
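Based on the src/errors.ts row above, the updated error plausibly has a shape like the following sketch; the message text is an assumption.

```ts
// Hypothetical shape of the updated error: it now carries only the
// required memory, expressed in MBs rather than GBs.
export class InsufficientMemoryError extends Error {
    constructor(readonly requiredMemoryMbytes: number) {
        super(`Not enough memory available: ${requiredMemoryMbytes}MB required`);
        this.name = 'InsufficientMemoryError';
    }
}
```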


A review thread on this snippet from the scheduler loop:

```ts
    `Not enough resources: waiting ${MAIN_LOOP_COOLDOWN_MS}ms before trying again`,
    { availableMemoryGBs, availableActorJobs },
);
this.runRequestsQueue.enqueue(nextRunRequest);
```
@gullmar (Collaborator, Author) commented:

Here we put the request back at the bottom of the queue. I'm thinking of a refactor that would make it easier to customize this behavior, but for now this was the simplest implementation.
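One possible shape for such a refactor, purely illustrative and not part of this PR: accept a pluggable re-enqueue strategy.

```ts
// Hypothetical: let callers choose where a retried request goes.
type RequeueStrategy<T> = (queue: T[], request: T) => void;

const requeueAtEnd: RequeueStrategy<unknown> = (queue, request) => {
    queue.push(request); // current behavior: the request becomes last
};

const requeueInOrder: RequeueStrategy<unknown> = (queue, request) => {
    queue.unshift(request); // alternative: keep processing in order, as the reply below suggests
};
```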

A reviewer replied:

Why would you put it at the end? The reason it's done that way in Crawlee was probably the idea that it helps with blocking when there is a delay on the same URL, but here it makes more sense to just continue in order.

@gullmar gullmar requested a review from halvko November 11, 2025 14:39
A second thread on this change to the scheduler loop (old lines vs. new lines):

```diff
-        } else {
-            // Wait for sometime before checking again
+        } else if (
+            this.retryOnInsufficientResources &&
```

The reviewer commented:

I know this isn't related to this PR, but was there ever a use case for not retrying? It feels like an option we preemptively added and might now keep with us, but perhaps I'm wrong :)
