Erase transient attributes in attribute manager instead of controller #3955
base: main
Conversation
Looks pretty good. The remaining questions/uncertainties, which I don't expect us to have answers for:
- CIB caching discrepancies: #3801 (comment)
  - Might be worth adding a comment about our uncertainty here like we did for the boolean argument
- Whether constructing the `GRegex` on each function call is a significant enough amount of work to just do it once: #3801 (comment)
- Consistency among nodes and daemons: #3801 (comment)
- More about caching (you've already added a TODO comment): #3801 (comment)
And I'd like to try to come up with a commit message improvement related to #3801 (comment).
There's a merge conflict that needs to be resolved too. That's all I've got.
Pushed once to address review comments.
Previously, when the attribute manager purged a node, it would purge the node's transient attributes only from memory, and assumed the controller would purge them from the CIB. Now, the writer will purge them from the CIB as well.

This happens by setting the values to NULL rather than dropping them from memory. If there is a writer, it will also erase the node's entire transient attributes section. If there is no writer, the next elected writer will erase each value as part of its normal election-win write-out. In either case, all nodes will drop the NULL values from memory after getting the notification that the erasure succeeded.

This fixes a variety of timing issues when multiple nodes, including the attribute writer, are shutting down. If the writer leaves before some other node, the DC wipes that other node's attributes from the CIB when that other node leaves the controller process group (or all other nodes do, if the DC is the leaving node). If a new writer (possibly even the node itself) is elected before the node's attribute manager leaves the cluster layer, it will write the attributes back to the CIB. Once the other node leaves the cluster layer, all attribute managers remove its attributes from memory, but they are now "stuck" in the CIB.

As of this commit, the controller still erases the attributes from the CIB when the node leaves the controller process group, which is redundant but doesn't cause any new problems and will be corrected in an upcoming commit.

Note: This will cause an insignificant regression if backported to Pacemaker 2. The Pacemaker 2 controller purges attributes from the CIB for leaving DCs only if they are at version 1.1.13 or later, because earlier DCs will otherwise get fenced after a clean shutdown. Since the attribute manager doesn't know the DC or its version, the attributes would now always be wiped, so old leaving DCs will get fenced. The fencing would occur only in the highly unlikely situation of a rolling upgrade from Pacemaker 2-supported versions 1.1.11 or 1.1.12, and the upgrade would still succeed without any negative impact on resources.

Co-Authored-By: Reid Wahl <[email protected]>
Co-Authored-By: Chris Lumens <[email protected]>

Fixes T138
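For anyone following along, the section being erased here is the transient_attributes block under each node's node_state entry in the CIB status section. One way to inspect it while testing (the node name is just an example):

```sh
# Dump node2's transient attributes as currently stored in the CIB status section
cibadmin --query --xpath "//node_state[@uname='node2']/transient_attributes"
```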
This hasn't been needed since attrd_cib_updated_cb() started ignoring changes from trusted sources (!cib__client_triggers_refresh()). It's especially useless now that the attribute manager handles clearing of leaving nodes' attributes from the CIB.
Now that the attribute manager will erase transient attributes from the CIB when purging a node, we don't need to do that separately in the controller.
…from cache

Nothing uses the new capability yet.
With recent changes, the attribute manager now handles it when the node leaves the cluster, so the controller purge is redundant. This does alter the timing somewhat, since the controller's purge occurred when the node left the controller process group, while the attribute manager's purge occurs when it leaves the cluster, but that shouldn't make a significant difference.

This fixes a problem when a node's controller crashes and is respawned while fencing is disabled. Previously, another node's controller would remove that node's transient attributes from the CIB, but they would remain in the attribute managers' memory. Now, the attributes are correctly retained in the CIB in this situation.

Fixes T137
Fixes T139

Co-Authored-By: Chris Lumens <[email protected]>
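A rough way to exercise the crash-and-respawn case described above; the attribute name, node names, and fencing toggle are only illustrative, not taken from this PR:

```sh
# On node2: disable fencing and set a transient attribute
crm_attribute --name stonith-enabled --update false
attrd_updater --node node2 --name test-attr --update 1

# On node2: crash the controller; pacemakerd respawns it
kill -9 "$(pidof pacemaker-controld)"

# On another node: compare the CIB's view with the attribute manager's view
crm_attribute --node node2 --name test-attr --lifetime reboot --query
attrd_updater --node node2 --name test-attr --query
```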
... instead of wiping from CIB directly

Co-Authored-By: Chris Lumens <[email protected]>
It now boils down to a bool for whether we want only unlocked resources
... to controld_delete_node_history(), and controld_node_state_deletion_strings() to controld_node_history_deletion_strings(), since they only delete history now
This has only ever had two values, which basically just means it's a bool.
And again to rebase on main and resolve the merge conflicts.
Additionally:
Tested both full Pacemaker nodes (https://projects.clusterlabs.org/T132#16531) and Pacemaker Remote nodes (https://bugzilla.redhat.com/show_bug.cgi?id=2030869#c21), first on main and then with these patches. Everything appears to be working as it should. I haven't thought of a way to automate these tests yet.
Tested T137, T138, T139, and the original shutdown-related attribute issue (bugs linked in T137). I think at this point all that's left is checking out the rolling upgrade thing.
I'm working on the rolling upgrade testing right now, so I want to leave some notes for myself (and anyone else who cares) here for future reference. Consider this comment from #3801:
My setup is four nodes: three full Pacemaker nodes and one remote node. Let's just call them node1, node2, node3, and remote1. remote1 doesn't really matter for testing this, but having a remote node is part of my standard test cluster, so whatever. I also think it's important to note that for me, node3 is the DC, and I think we don't want to touch the DC at all during testing.

First, build packages from the main branch and install them on all nodes. Start the cluster. Then, switch to the branch for this PR, add a patch that increases the minor-minor version of the feature set, and build new packages.

Set a transient attribute on node2:
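For example, something along these lines (the attribute name is only illustrative, and either tool should work):

```sh
# Set a transient (reboot-lifetime) attribute for node2
crm_attribute --node node2 --name test-attr --update 1 --lifetime reboot
# or, via the attribute manager directly:
# attrd_updater --node node2 --name test-attr --update 1
```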
Put node1 into standby, wait for any resources on it to migrate off, stop the cluster on it, and then copy over the new packages and install them. These are basically the rolling upgrade instructions (https://clusterlabs.org/projects/pacemaker/doc/2.1/Pacemaker_Administration/html/upgrading.html#rolling-node-by-node). Now start the cluster on node1 and take it out of standby. You should now have a partially upgraded cluster.

Now, put node2 into standby, wait for resources to migrate off, and stop the cluster on it. DO NOT upgrade it to the new packages. Query the value of the transient attribute on both node1 (which was upgraded to the packages built from this PR) and on node3 (the DC, which was not):
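For example, run on node1 and then again on node3 and compare the output (reusing the illustrative attribute name from above):

```sh
# Ask the local attribute manager and the CIB for node2's attribute
attrd_updater --node node2 --name test-attr --query
crm_attribute --node node2 --name test-attr --lifetime reboot --query
```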
We would expect that node1 would have the same output as node3, so that's what Ken is referring to in that comment. I haven't tested this yet, but I believe the debug cycle for this looks like:
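Presumably something along these lines (only a sketch; the build and install commands are placeholders for whatever packaging workflow is in use):

```sh
# Hypothetical iteration loop (RPM-based setup assumed; the package glob is a placeholder)
# 1. Rebuild packages from the PR branch (with the feature-set bump patch applied)
# 2. Reinstall them on node1 and restart the cluster there:
scp pacemaker*.rpm node1: && ssh node1 'dnf install -y ./pacemaker*.rpm && systemctl restart pacemaker'
# 3. Repeat the standby/stop/query steps above and compare node1 against node3
```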
This is exactly the same as #3801 except for changing three calls to deprecated functions and rebasing on main. I'll be making additional changes from patch review here in the future. I don't see any way to do this that isn't confusing.