Improve NodesManager locking #3803
base: master
Conversation
    except Exception:
        # Ignore errors when closing the connection
        pass
This causes issues with the change I made to `_get_or_create_cluster_node`, which now makes a new `ClusterNode` (sharing the `redis_connection` of the old node) instead of mutating the old one in place.
`Redis.__del__` already closes the connection pool when the object is dropped, so I think this code was always unnecessary anyhow.
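For context, here is a minimal sketch of the new-node approach described above. It assumes the `ClusterNode` and `get_node_name` helpers from `redis.cluster`; the function name and body are illustrative, not the exact PR diff:

```python
from redis.cluster import ClusterNode, get_node_name


def get_or_create_cluster_node(nodes_cache, tmp_nodes_cache, host, port, role):
    # Illustrative sketch: if a node already exists for host:port but with a
    # different role, build a *new* ClusterNode that reuses the old node's
    # redis_connection instead of flipping server_type on the shared object.
    node_name = get_node_name(host, port)
    existing = tmp_nodes_cache.get(node_name) or nodes_cache.get(node_name)
    if existing is None:
        node = ClusterNode(host, port, role)
    elif existing.server_type != role:
        node = ClusterNode(
            host, port, role, redis_connection=existing.redis_connection
        )
    else:
        node = existing
    tmp_nodes_cache[node_name] = node
    return node
```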
        if node.server_type == server_type
    ]

    def populate_startup_nodes(self, nodes):
This was unused, and I don't think this is considered part of the public interface? We could also just protect this with self._lock, but it's easier to delete it if we don't need it.
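For reference, the locked alternative mentioned above would look roughly like this. This is a sketch only; it assumes `self._lock` is the `NodesManager` lock this PR uses everywhere else, and that the method simply merges nodes into `startup_nodes`:

```python
def populate_startup_nodes(self, nodes):
    # Sketch of the alternative to deletion: guard the startup_nodes mutation
    # with the same lock that protects the rest of the NodesManager state.
    with self._lock:
        for node in nodes:
            self.startup_nodes[node.name] = node
```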
    host, port, PRIMARY, tmp_nodes_cache

    startup_nodes = random.sample(
        list(self.startup_nodes.values()), k=len(self.startup_nodes)
    )
I can pull this out to a separate PR if preferred, but another thing that I've observed is that all the redis clients init from the same node, which can hammer that node if you have a lot of clients. Randomizing the order should allow this to scale better.
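Concretely, the randomization in the hunk above just shuffles the iteration order before the existing discovery loop. A sketch of the pattern as it would sit inside `NodesManager.initialize` (the loop body is elided):

```python
import random

# Sketch: visit startup nodes in a random order so that many clients
# initializing at once spread their CLUSTER SLOTS queries across the
# cluster instead of all hitting the first configured node.
startup_nodes = random.sample(
    list(self.startup_nodes.values()), k=len(self.startup_nodes)
)
for startup_node in startup_nodes:
    # ... existing CLUSTER SLOTS fetch / error handling for this node ...
    pass
```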
Hi @praboud, thank you for your contribution! We will review it soon.
Pull Request check-list

Please make sure to review and check all of these items:

NOTE: these things are not required to open a PR and can be done afterwards / while the PR is open.
Description of change
`NodesManager` can mutate state (i.e. the nodes cache, slots cache, startup nodes, and default node) from multiple client threads, both when re-initializing from `CLUSTER SLOTS` and when following a `MOVED`/`ASK` redirect. Right now, there isn't proper synchronization of that state across multiple threads, which can result in the state getting corrupted, or the `NodesManager` otherwise behaving weirdly:

1. `update_moved_exception` just sets an exception on the `NodesManager`, which we expect to trigger an update to the state the next time we fetch a node from the `NodesManager`. But `_moved_exception` isn't synchronized. Suppose two threads A & B sequence like: A calls `update_moved_exception`, B calls `update_moved_exception`, A calls `get_node_from_slot`, B calls `get_node_from_slot`. A's update to `_moved_exception` gets lost, and when A calls `get_node_from_slot`, it doesn't actually follow the redirect. To avoid this problem, I've changed the slot-move logic to immediately apply the update to the slot state, rather than queueing it up for later.
2. `_get_or_create_cluster_node` can mutate the `role` of a `ClusterNode`, but the node is referenced from the `slots_cache`. Because we expect `slots_cache[slot][0]` to always be the primary, this can cause strange behavior for readers of `slots_cache` between the time `_get_or_create_cluster_node` is called and when `initialize` resets `slots_cache` at the end of the update.
3. `initialize` & `_update_moved_slots` both mutate `slots_cache`, and aren't synchronized with each other. This can cause the slots cache to get into a weird state where, e.g., nodes are deleted from the slots cache or duplicated.
4. `initialize` allows multiple callers to initialize concurrently, which is both extra load on the cluster and can cause strange behavior in corner cases.

To fix all of these:
- Take `self._lock` around all places where any mutation happens in `NodesManager`.
- Replace `update_moved_exception` & `_update_moved_slots` with just `move_slot`, to avoid racing multiple slot updates (a rough sketch of this is below).
- Add `_initialize_lock` to serialize / deduplicate calls to `initialize`.

I've added some tests to try to exercise the situations above, and verified that they fail before / pass after this change.
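To make the `move_slot` change concrete, here is a rough sketch of applying a `MOVED` redirect to the slot table immediately, under the lock. The method name comes from this PR, but the body is simplified (for example, it does not show preserving the slot's replica entries) and the exact diff differs:

```python
from redis.cluster import PRIMARY


def move_slot(self, moved_error):
    # Sketch: apply the MOVED redirect to slots_cache right away, under the
    # same lock that guards every other NodesManager mutation, instead of
    # stashing the exception for the next get_node_from_slot call to notice.
    with self._lock:
        node = self._get_or_create_cluster_node(
            moved_error.host, moved_error.port, PRIMARY, self.nodes_cache
        )
        # Simplified: install the redirect target as the slot's primary so the
        # first entry for the slot is always the primary, as readers expect.
        self.slots_cache[moved_error.slot_id] = [node]
```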
This PR mainly focuses on the sync Redis client, but I've tried to update the asyncio one as well. It doesn't (as far as I can tell) suffer from most of these issues, other than issue 4, so the changes there are a bit lighter. This PR is pretty hefty already - I'm happy to split the asyncio changes out into a separate PR if that's easier for review.
Also, in general, this PR is a lot easier to review with "ignore whitespace" turned on in the diff options, because a lot of these changes mean indenting big blocks of code inside a `with self._lock:` block.