quiesce needs to keep track of blueprint ids #8919
Conversation
Some sample output from the new code, at each stage (output blocks elided):
- Initial state
- Enabled blueprint execution
- Created a demo saga
- Start quiescing
- Complete the demo saga
nexus/types/src/quiesce.rs (outdated)
/// whether a saga recovery operation is ongoing, and if one is:
/// - what `reassignment_generation` was when it started
/// - which blueprint id we'll be fully caught up to upon completion
#[serde(skip)] // XXX-dap
Is the `XXX` here because we don't want to skip this field?
Yes -- sorry I missed that! This is a problem because we don't support a tuple in this context in the OpenAPI spec. I will replace it with a struct.
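For illustration, a minimal sketch of what that replacement could look like. The field names are hypothetical; only the struct name `PendingRecovery` appears later in this thread, and stand-in types are used to keep the sketch self-contained:

```rust
use serde::{Deserialize, Serialize};

// Stand-ins for the project's real types, for a self-contained example.
type Generation = u64;
type BlueprintUuid = String;

/// Replaces the tuple `(Generation, BlueprintUuid)`: the OpenAPI spec
/// generation can't represent a tuple in this context, but a named
/// struct serializes as an ordinary JSON object.
#[derive(Clone, Debug, Deserialize, Serialize)]
struct PendingRecovery {
    /// what `reassignment_generation` was when recovery started
    generation: Generation,
    /// which blueprint id we'll be fully caught up to upon completion
    blueprint_id: BlueprintUuid,
}
```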
nexus/types/src/quiesce.rs (outdated)
    }
};

q.latch_drained_blueprint_id();
Is it correct to latch this even if `quiescing` is false?
Yes, the function checks that.

edit: to be precise, it is not correct to latch the value in this case. The function `latch_drained_blueprint_id` is intended to be called at any time and will only latch the state if appropriate, and it checks that. Is there a better name for that?
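A sketch of the guard being described, on a plain struct for illustration (the real handle keeps its state behind a watch channel, and the exact conditions here are assumptions):

```rust
type BlueprintUuid = String; // stand-in for the project's typed UUID

struct QuiesceState {
    quiescing: bool,
    sagas_fully_drained: bool,
    current_blueprint_id: Option<BlueprintUuid>,
    drained_blueprint_id: Option<BlueprintUuid>,
}

impl QuiesceState {
    /// Safe to call at any time: latches the blueprint id only if we're
    /// quiescing and fully drained, and only latches it once.
    fn latch_drained_blueprint_id(&mut self) {
        if !self.quiescing || !self.sagas_fully_drained {
            return; // nothing to latch yet
        }
        if self.drained_blueprint_id.is_none() {
            self.drained_blueprint_id = self.current_blueprint_id.clone();
        }
    }
}
```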
`latch_blueprint_id_if_drained()`, maybe?
Force-pushed from dc85999 to 6c84ded.
Force-pushed from 73fff49 to 6fa9d9d.
I've had to force push this for the same reason as in #8875. The diff-of-diffs is virtually empty -- nothing notable changed in the sync-up. @jgallagher do you want to take another look? (I don't think it's necessary, but it's up to you!)
nexus/types/src/quiesce.rs (outdated)
assert_eq!(qq.fully_drained_blueprint(), Some(blueprint3_id));
assert!(qq.is_fully_drained());

// Fully drained case 3: quiescing itself causes us to immediately
Suggested change:
- // Fully drained case 3: quiescing itself causes us to immediately
+ // Fully drained case 4: quiescing itself causes us to immediately
Could some of these cases become different, smaller tests?
Nice catch on the comment.

> Could some of these cases become different, smaller tests?

I think this is easier said than done. It's definitely possible, but I'm worried it'd be quite a lot more code, with a lot of duplication, that'd be easier to get wrong. That's because, depending on how fine-grained you look at it, the test currently covers about 12 behaviors (basically, each commented hunk between blank lines is a behavior). Many of those depend on complex state set up by the previous steps, and that setup would wind up duplicated.
Subjectively, although I know it can be a little annoying when you first see a new assertion failure in a test like this, I still prefer the single straight-line test that reflects a realistic sequence to a bunch of separate-but-pretty-overlapping cases. I just wind up with higher confidence that we've covered everything.
The specific one you commented on is probably the easiest to fork off into a new test because it starts from a fresh handle. But then it'd be the only one not in the same test 🤷
@@ -353,7 +475,10 @@ impl SagaQuiesceHandle {
         "recovery_start() called twice without intervening \
             recovery_done() (concurrent calls to recover()?)",
     );
-    q.recovery_pending = Some(q.reassignment_generation);
+    q.recovery_pending = Some(PendingRecovery {
(only semi-related to this PR) I'm noticing that `pub async fn recover` really needs that "only-one-caller-at-a-time" property to not break preconditions, trip assertions, etc. But the signature is `&self`, so Rust would be happy to allow concurrent calls. WDYT about making it a `&mut self` API? Would that be too onerous? It would help ensure that future usage of the API also cannot be done concurrently.
Same question for `pub async fn reassign_sagas`.
I like this idea, but worry it will be pretty onerous. If this ends up being owned by a `&Nexus` or some similar top-level object, it might be pretty painful; those end up cloned and shared, I think? Worth double-checking, though.
We explored this in an earlier iteration and ran into trouble with the recovery code path holding multiple `&mut` references to itself (one for its own purposes, and one needed to call `recover()`). That seems to no longer be true. I was surprised that this does compile (leaving out the test diffs, which are mechanical):
diff --git a/nexus/src/app/background/tasks/saga_recovery.rs b/nexus/src/app/background/tasks/saga_recovery.rs
index d9f2ed00f..5e56b986b 100644
--- a/nexus/src/app/background/tasks/saga_recovery.rs
+++ b/nexus/src/app/background/tasks/saga_recovery.rs
@@ -199,7 +199,7 @@ impl<N: MakeSagaContext> BackgroundTask for SagaRecovery<N> {
async {
// We don't need the future that's returned by activate_internal().
// That's only used by the test suite.
- let _ = self.inner.activate_internal(opctx, &self.quiesce).await;
+ let _ = self.inner.activate_internal(opctx, &mut self.quiesce).await;
serde_json::to_value(&self.inner.status).unwrap()
}
.boxed()
@@ -237,7 +237,7 @@ impl<N: MakeSagaContext> SagaRecoveryInner<N> {
async fn activate_internal(
&mut self,
opctx: &OpContext,
- quiesce: &SagaQuiesceHandle,
+ quiesce: &mut SagaQuiesceHandle,
) -> Option<(
BoxFuture<'static, Result<(), Error>>,
nexus_saga_recovery::LastPassSuccess,
@@ -743,7 +743,7 @@ mod test {
);
let Some((completion_future, last_pass_success)) =
- task.inner.activate_internal(&opctx, &task.quiesce).await
+ task.inner.activate_internal(&opctx, &mut task.quiesce).await
else {
panic!("saga recovery failed");
};
@@ -821,7 +821,7 @@ mod test {
);
let Some((_, last_pass_success)) =
- task.inner.activate_internal(&opctx, &task.quiesce).await
+ task.inner.activate_internal(&opctx, &mut task.quiesce).await
else {
panic!("saga recovery failed");
};
diff --git a/nexus/types/src/quiesce.rs b/nexus/types/src/quiesce.rs
index b336cd037..d7f59edff 100644
--- a/nexus/types/src/quiesce.rs
+++ b/nexus/types/src/quiesce.rs
@@ -454,7 +454,7 @@ impl SagaQuiesceHandle {
// because it's harder to mis-use (e.g., by forgetting to call
// `recovery_done()`). But we keep the other two functions around because
// it's easier to write tests against those.
- pub async fn recover<F, T>(&self, f: F) -> T
+ pub async fn recover<F, T>(&mut self, f: F) -> T
where
F: AsyncFnOnce(&SagaRecoveryInProgress) -> (T, bool),
{
@@ -468,7 +468,7 @@ impl SagaQuiesceHandle {
///
/// Only one of these may be outstanding at a time. The caller must call
/// `saga_recovery_done()` before starting another one of these.
- fn recovery_start(&self) -> SagaRecoveryInProgress {
+ fn recovery_start(&mut self) -> SagaRecoveryInProgress {
self.inner.send_modify(|q| {
assert!(
q.recovery_pending.is_none(),
But this doesn't guarantee what you'd think it would based on the signature. The `SagaQuiesceHandle` is just a handle: it's freely cloneable, and all clones refer to the same underlying state. So you can absolutely call `recover` concurrently from different code paths.

Maybe a safer way to enforce this would be to create a few distinct wrappers: one for saga recovery (which only exposes the recovery stuff and takes `&mut self`), one for re-assignment (which only exposes the re-assignment stuff and could get the same treatment), and one for saga creation (which only exposes the methods for that). For this to work, none of these (including the underlying handle) could be cloneable. Instead, you construct them all at once and get exactly one of each, which we then pass to the right place. This might work, but it's not a small amount of work. (I don't think this is a big practical risk because there is only one non-test code path that calls `recover`, and it's in a singleton context by design.)
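A minimal sketch of that wrapper idea (all names here are hypothetical, not code from this PR):

```rust
use std::sync::Arc;

struct QuiesceInner; // stand-in for the shared quiesce state

// Role-specific wrappers. None of them derive Clone, and each is
// constructed exactly once, so at most one value of each role exists.
struct RecoveryHandle(Arc<QuiesceInner>);
struct ReassignmentHandle(Arc<QuiesceInner>);
struct SagaCreateHandle(Arc<QuiesceInner>);

impl RecoveryHandle {
    // `&mut self` on a non-Clone singleton means concurrent calls to
    // `recover` can't even be expressed, unlike `&self` on a cloneable
    // handle.
    async fn recover(&mut self) {
        // ... recovery bookkeeping against the shared state ...
    }
}

/// Construct all three handles at once; each one is then handed to the
/// single place that needs it.
fn quiesce_handles() -> (RecoveryHandle, ReassignmentHandle, SagaCreateHandle) {
    let inner = Arc::new(QuiesceInner);
    (
        RecoveryHandle(inner.clone()),
        ReassignmentHandle(inner.clone()),
        SagaCreateHandle(inner),
    )
}
```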
 /// Returns whether sagas are fully drained
 ///
 /// This condition is not permanent. New sagas can be re-assigned to this
 /// Nexus.
-pub fn is_fully_drained(&self) -> bool {
+fn is_fully_drained(&self) -> bool {
Is it possible this returns true if we have never tried to reassign sagas?
I don't think so; it checks for `first_recovery_complete`, which should be false if we've never tried, right?
But recovery is a distinct step from re-assignment, right? Someone could call `pub async fn recover` without ever calling `pub async fn reassign_sagas`.
@smklein That's correct (your statement is correct, and I believe it's also the correct behavior). Think of `is_fully_drained()` as "there are no sagas running, and none will ever run again unless there's a subsequent re-assignment". It is not "there are no sagas running and none will ever run again" -- that's the condition we spent so much time in RFD 588 trying to achieve, but we found it at odds with the idea that you might need to expunge a Nexus at any point. If you want that stronger condition, you really want `fully_drained_blueprint()`.
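To make the distinction concrete, a toy model of the two queries as I read them (field and method shapes here are assumptions, not the PR's code):

```rust
type BlueprintUuid = String; // stand-in for the project's typed UUID

struct Quiesce {
    // True once the initial saga recovery pass has finished.
    first_recovery_complete: bool,
    sagas_running: usize,
    // Latched while quiescing and fully drained (see
    // `latch_drained_blueprint_id` above).
    drained_blueprint_id: Option<BlueprintUuid>,
}

impl Quiesce {
    /// "Drained for now": nothing running, and nothing will run again
    /// unless a later re-assignment hands this Nexus more sagas.
    fn is_fully_drained(&self) -> bool {
        self.first_recovery_complete && self.sagas_running == 0
    }

    /// The stronger condition: the blueprint we were fully caught up to
    /// when we drained, if we have latched one.
    fn fully_drained_blueprint(&self) -> Option<&BlueprintUuid> {
        self.drained_blueprint_id.as_ref()
    }
}
```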
This is the first part of #8859: this PR adds the logic to keep track of the relevant blueprint ids. Once we have `db_metadata_nexus` records (currently #8845), the last bit of #8859 will be to update those records whenever this value changes.
This is still a work in progress. I need to add some new tests and also put this into `omdb`.