
Conversation

@durran
Member

@durran durran commented Oct 28, 2025

Description

Adds awaitMinPoolSizeMS to the unified runner.

Summary of Changes

If awaitMinPoolSizeMS is provided, the unified runner waits up to that many milliseconds for each client's connection pools to reach the configured min pool size, and raises an error if the pools are not populated within that window.
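
For illustration, a client entity in a unified test might opt in like this (the exact schema placement is an assumption, inferred from the entity.client.awaitMinPoolSizeMS reference in the review below; the uriOptions values are made up):

// Hypothetical createEntities excerpt; schema placement is an assumption.
{
  client: {
    id: 'client0',
    uriOptions: { minPoolSize: 3 },
    awaitMinPoolSizeMS: 2000
  }
}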

What is the motivation for this change?

NODE-7143

Double check the following

  • Lint is passing (npm run check:lint)
  • Self-review completed using the steps outlined here
  • PR title follows the correct format: type(NODE-xxxx)[!]: description
    • Example: feat(NODE-1234)!: rewriting everything in coffeescript
  • Changes are covered by tests
  • New TODOs have a related JIRA ticket

@durran durran marked this pull request as ready for review October 28, 2025 14:01
@durran durran requested a review from a team as a code owner October 28, 2025 14:01
@baileympearson baileympearson self-assigned this Oct 28, 2025
@baileympearson baileympearson added the Primary Review In Review with primary reviewer, not yet ready for team's eyes label Oct 28, 2025
Comment on lines +749 to +753
return new Promise(resolve => {
while (pool.options.minPoolSize < pool.totalConnectionCount) {
// Just looping until the min pool size is reached.
}
resolve(true);
Contributor

How is this passing CI? Won't this block the event loop indefinitely, and we'll never be able to establish connections because this loop will be running synchronously forever?

Contributor

Ah, I think this might be working because upon instantiation the connection pool immediately starts the background thread if minPoolSize > 0, creating one connection right away. If I'm right, using a minPoolSize > 1 would result in this hanging.

Member Author

Ah good catch. Let me go back to the drawing board here.
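
One non-blocking direction, sketched under the assumption that polling is acceptable here: yield to the event loop between checks instead of spinning. The pool property names come from the snippet above; the 10 ms interval is arbitrary.

// Sketch: poll on a timer so connection establishment can make progress
// between checks, instead of blocking the event loop in a while loop.
function checkMinPoolSize(pool) {
  return new Promise(resolve => {
    const poll = setInterval(() => {
      if (pool.totalConnectionCount >= pool.options.minPoolSize) {
        clearInterval(poll);
        resolve(true);
      }
    }, 10);
  });
}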

Comment on lines +631 to +646
for (const server of client.topology.s.servers.values()) {
const pool = server.pool;
try {
await Promise.race([checkMinPoolSize(pool), timeout]);
} catch (error) {
if (TimeoutError.is(error)) {
throw new AssertionError(
`Timed out waiting for min pool size to be populated within ${entity.client.awaitMinPoolSizeMS}ms`
);
}
throw error;
} finally {
timeout.clear();
}
}
}
Contributor

This will wait sequentially for each pool, potentially waiting n * awaitMinPoolSizeMS in total if there are n servers in the topology.

It isn't terribly important, because I think in practice we'll always reach min pool size pretty fast, but for correctness we probably instead want to collect an array of min pool size population promises and race the timeout against Promise.allSettled of that array.
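
A minimal sketch of that shape, reusing the names from the snippet above (checkMinPoolSize, timeout, TimeoutError, AssertionError), with the caveat that Promise.allSettled never rejects, so individual failures would need to be read out of its results:

// Sketch: populate all pools concurrently and race a single shared timeout.
const populations = [...client.topology.s.servers.values()].map(server =>
  checkMinPoolSize(server.pool)
);
try {
  // NOTE: allSettled never rejects; check its results if per-pool errors matter.
  await Promise.race([timeout, Promise.allSettled(populations)]);
} catch (error) {
  if (TimeoutError.is(error)) {
    throw new AssertionError(
      `Timed out waiting for min pool size to be populated within ${entity.client.awaitMinPoolSizeMS}ms`
    );
  }
  throw error;
} finally {
  timeout.clear();
}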

Member Author

Yeah, I was trying to avoid that refactor, but I think it's the only way now. Our min pool size population uses callbacks rather than promises, though the refactor probably needed to happen at some point anyway.
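
For illustration only, bridging a callback-based population into a promise might look like this; populateMinPoolSize and its (error) => void callback signature are assumed names, not the driver's actual internal API:

// Hypothetical wrapper around an assumed callback-based internal.
function populateMinPoolSizeAsync(pool) {
  return new Promise((resolve, reject) => {
    pool.populateMinPoolSize(error => (error ? reject(error) : resolve()));
  });
}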

Contributor

How come the connection pool's internals are relevant here? I was imagining something like:

const populations$ = [...client.topology.s.servers.values()].map(({ pool }) => checkMinPoolSize(pool));
