
Conversation

praboud commented Oct 17, 2025

Pull Request check-list

Please make sure to review and check all of these items:

  • Do tests and lints pass with this change?
  • Do the CI tests pass with this change (enable it first in your forked repo and wait for the github action build to finish)?
  • Is the new or changed code fully tested?
  • Is a documentation update included (if this change modifies existing APIs, or introduces new ones)?
  • Is there an example added to the examples folder (if applicable)?

NOTE: these things are not required to open a PR and can be done
afterwards / while the PR is open.

Description of change

NodesManager can mutate state (i.e. the nodes cache, slots cache, startup nodes, and default node) from multiple client threads, both when re-initializing from CLUSTER SLOTS and when following a MOVED/ASK redirect. Right now there isn't proper synchronization of that state across threads, which can corrupt the state or otherwise make the NodesManager behave unexpectedly:

  1. update_moved_exception just sets an exception on the NodesManager, which is expected to trigger a state update the next time a node is fetched from the NodesManager. But _moved_exception isn't synchronized. Suppose two threads A and B interleave like this: A calls update_moved_exception, B calls update_moved_exception, A calls get_node_from_slot, B calls get_node_from_slot. A's update to _moved_exception is lost, so when A calls get_node_from_slot it never actually follows its redirect (see the sketch after this list). To avoid this problem, I've changed the slot-move logic to apply the update to the slot state immediately, rather than queueing it up for later.
  2. _get_or_create_cluster_node can mutate the role of a ClusterNode, but the node is referenced from the slots_cache. Because slots_cache[slot][0] is expected to always be the primary, this can cause strange behavior for readers of slots_cache between the time _get_or_create_cluster_node is called and the time initialize resets slots_cache at the end of the update.
  3. initialize & _update_moved_slots both mutate slots_cache, and aren't synchronized with each other. This can leave the slots cache in an inconsistent state, e.g. with nodes deleted from or duplicated in the cache.
  4. initialize allows multiple callers to re-initialize concurrently, which both puts extra load on the cluster and can cause strange behavior in corner cases.
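
A minimal sketch of the lost-update race in point 1, using a toy stand-in for the real class (ToyNodesManager and its method bodies are simplified placeholders, not the actual redis-py code):

class ToyNodesManager:
    def __init__(self):
        self._moved_exception = None  # shared between threads, no lock

    def update_moved_exception(self, exc):
        # Last writer wins: an earlier pending redirect is silently dropped.
        self._moved_exception = exc

    def get_node_from_slot(self, slot):
        if self._moved_exception is not None:
            exc = self._moved_exception
            self._moved_exception = None
            return f"followed redirect to {exc}"
        return "stale node"

mgr = ToyNodesManager()
mgr.update_moved_exception("node-A")   # thread A records its MOVED target
mgr.update_moved_exception("node-B")   # thread B overwrites it before A reads
print(mgr.get_node_from_slot(42))      # followed redirect to node-B
print(mgr.get_node_from_slot(42))      # stale node: thread A's redirect was lost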

To fix all of these (a rough sketch of the resulting pattern follows this list):

  1. I've ensured that we hold self._lock around all places where any mutation happens in NodesManager.
  2. I've replaced update_moved_exception & _update_moved_slots with just move_slot, to avoid racing multiple slot updates.
  3. I've added _initialize_lock to serialize / deduplicate calls to initialize.
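
As a rough sketch of the synchronization pattern this ends up with (names mirror the ones above, but the bodies are simplified placeholders rather than the actual redis-py implementation):

import threading

class NodesManagerSketch:
    def __init__(self):
        self._lock = threading.Lock()             # guards nodes/slots/startup-node caches
        self._initialize_lock = threading.Lock()  # serializes calls to initialize()
        self.slots_cache = {}
        self.nodes_cache = {}

    def move_slot(self, slot, host, port):
        # Apply a MOVED redirect immediately, under the lock, instead of
        # stashing an exception for a later reader to process.
        with self._lock:
            node = self.nodes_cache.setdefault((host, port), (host, port))
            self.slots_cache[slot] = [node]

    def initialize(self):
        # Only one thread rebuilds the topology at a time; concurrent callers
        # wait here and then see the freshly built caches.
        with self._initialize_lock:
            new_slots, new_nodes = self._fetch_cluster_slots()
            with self._lock:
                self.slots_cache = new_slots
                self.nodes_cache = new_nodes

    def _fetch_cluster_slots(self):
        # Placeholder for the CLUSTER SLOTS query.
        return {}, {}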

I've added some tests that try to exercise the situations above, and verified that they fail before and pass after this change.

This PR mainly focuses on the sync Redis client, but I've tried to update the asyncio one as well. As far as I can tell, it doesn't suffer from most of these issues, other than issue 4, so the changes there are a bit lighter. This PR is pretty hefty already, so I'm happy to split the asyncio changes out into a separate PR if that's easier to review.

Also, this PR is a lot easier to review with "ignore whitespace" turned on in the diff options, because many of these changes just indent large blocks of code inside a with self._lock: block.


except Exception:
# Ignore errors when closing the connection
pass

praboud (Author) commented:

This causes issues with the change I made to _get_or_create_cluster_node, which now makes a new ClusterNode (sharing the redis_connection of the old node) instead of mutating the old one in place (a toy sketch of that approach follows below).

Redis.__del__ already closes the connection pool when the object is dropped, so I think this code was always unnecessary anyhow.
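
A toy sketch of the new-node approach (ToyClusterNode and get_or_create_node are hypothetical stand-ins for ClusterNode and _get_or_create_cluster_node, simplified for illustration):

from dataclasses import dataclass
from typing import Optional

@dataclass
class ToyClusterNode:
    host: str
    port: int
    server_type: str
    redis_connection: Optional[object] = None

def get_or_create_node(nodes_cache, host, port, role):
    key = f"{host}:{port}"
    existing = nodes_cache.get(key)
    if existing is not None and existing.server_type == role:
        return existing
    # Build a replacement that reuses the old node's connection; readers still
    # holding the old node object never see its role change underneath them,
    # and the shared connection must not be closed here.
    conn = existing.redis_connection if existing is not None else None
    node = ToyClusterNode(host, port, role, conn)
    nodes_cache[key] = node
    return node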

if node.server_type == server_type
]

def populate_startup_nodes(self, nodes):
praboud (Author) commented:

This was unused, and I don't think it's considered part of the public interface. We could also just protect it with self._lock, but it's easier to delete it if we don't need it.

host, port, PRIMARY, tmp_nodes_cache
startup_nodes = random.sample(
list(self.startup_nodes.values()), k=len(self.startup_nodes)
)
praboud (Author) commented:

I can pull this out into a separate PR if preferred, but another thing I've observed is that all the Redis clients initialize from the same startup node, which can hammer that node if you have a lot of clients. Randomizing the order should allow this to scale better (a small illustration follows below).
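
A small illustration of the randomized startup order (the dict below is a toy stand-in for self.startup_nodes):

import random

startup_nodes = {
    "10.0.0.1:6379": "node-1",
    "10.0.0.2:6379": "node-2",
    "10.0.0.3:6379": "node-3",
}

# Each client tries the startup nodes in a shuffled order, so the CLUSTER SLOTS
# load during initialization spreads across the cluster instead of always
# landing on the first configured node.
for node in random.sample(list(startup_nodes.values()), k=len(startup_nodes)):
    print("querying CLUSTER SLOTS via", node)
    break  # in practice, stop at the first node that responds successfully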

petyaslavova (Collaborator) commented:

Hi @praboud, thank you for your contribution! We will review it soon.

