test(NODE-7143): optin await min pool size in utr #4761
base: main
Conversation
```js
return new Promise(resolve => {
  while (pool.options.minPoolSize < pool.totalConnectionCount) {
    // Just looping until the min pool size is reached.
  }
  resolve(true);
});
```
How is this passing CI? Won't this block the event loop indefinitely, and we'll never be able to establish connections because this loop will be running synchronously forever?
Ah, I think this might be working because upon instantiation the connection pool immediately starts the background thread if minPoolSize > 0, creating one connection right away. If I'm right, using a minPoolSize > 1 would result in this hanging.
Ah good catch. Let me go back to the drawing board here.
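For reference, one non-blocking shape the check could take: poll on a timer so the event loop stays free to actually establish connections. A minimal sketch, assuming the same `pool.options.minPoolSize` and `pool.totalConnectionCount` properties as the diff above (the 10ms interval is an arbitrary choice):

```js
function checkMinPoolSize(pool) {
  return new Promise(resolve => {
    // Re-check on a timer instead of spinning, yielding to the event loop
    // between checks so connection establishment can make progress.
    const interval = setInterval(() => {
      if (pool.totalConnectionCount >= pool.options.minPoolSize) {
        clearInterval(interval);
        resolve(true);
      }
    }, 10);
  });
}
```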
```js
for (const server of client.topology.s.servers.values()) {
  const pool = server.pool;
  try {
    await Promise.race([checkMinPoolSize(pool), timeout]);
  } catch (error) {
    if (TimeoutError.is(error)) {
      throw new AssertionError(
        `Timed out waiting for min pool size to be populated within ${entity.client.awaitMinPoolSizeMS}ms`
      );
    }
    throw error;
  } finally {
    timeout.clear();
  }
}
```
This will wait sequentially for each pool, potentially waiting n * awaitMinPoolSizeMS if there are n servers in the topology.
It isn't terribly important, because I think in practice we'll always reach min pool size population pretty fast, but for correctness we probably instead want to collect an array of min pool size population promises and do something like `Promise.race([timeout, Promise.allSettled(populationPromises)])`.
Yeah, I was trying to avoid that refactor, but I think it's the only way now. Our min pool size population uses callbacks rather than promises, but that change probably needed to happen at some point anyway.
How come the connection pool's internals are relevant here? I was imagining something like:
```js
const populations$ = client.topology.s.servers.values().map(({ pool }) => checkMinPoolSize(pool));
```
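Combining that with the `Promise.race` shape from the earlier comment, the whole wait could look something like this sketch (assuming `checkMinPoolSize(pool)` returns a promise that resolves once the pool is populated, and reusing the `timeout`, `TimeoutError`, `AssertionError`, and `entity` names from the diff above):

```js
// Kick off every pool's population check up front, then bound the whole
// parallel wait with a single shared timeout.
const populations = [...client.topology.s.servers.values()].map(({ pool }) =>
  checkMinPoolSize(pool)
);
try {
  await Promise.race([timeout, Promise.allSettled(populations)]);
} catch (error) {
  if (TimeoutError.is(error)) {
    throw new AssertionError(
      `Timed out waiting for min pool size to be populated within ${entity.client.awaitMinPoolSizeMS}ms`
    );
  }
  throw error;
} finally {
  timeout.clear();
}
```

Note that `Promise.allSettled` never rejects, so only the timeout can throw here; if individual population failures should fail the test, `Promise.all` would surface them instead.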
Description

Adds `awaitMinPoolSizeMS` to the unified runner.

Summary of Changes

If `awaitMinPoolSizeMS` is provided, the unified runner waits up to the specified time for the min pool size to be reached, and errors if it is not reached within that window.
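For illustration, a hypothetical client entity using the new option; the field placement is inferred from the `entity.client.awaitMinPoolSizeMS` access in the diff above, not confirmed against the unified test format spec:

```js
// Hypothetical unified-test client entity (names illustrative only).
const entity = {
  client: {
    id: 'client0',
    uriOptions: { minPoolSize: 2 },
    // Fail the test if minPoolSize isn't reached within 2 seconds.
    awaitMinPoolSizeMS: 2000
  }
};
```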
What is the motivation for this change?

NODE-7143
Double check the following

- Ran the lint check (`npm run check:lint`)
- PR title follows the correct format: `type(NODE-xxxx)[!]: description`, e.g. `feat(NODE-1234)!: rewriting everything in coffeescript`