graph/db: add zombie channels cleanup routine #10015
Conversation
Summary of Changes
Hello @GustavoStingelin, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request addresses a potential memory leak in the GraphCache by implementing a "zombie channel" cleanup mechanism. It introduces a dedicated index to track channels that cannot be immediately removed due to incomplete node information and a background process to periodically prune these channels, ensuring the cache remains consistent and efficient.
Highlights
- Zombie Channel Tracking: Introduced a zombieIndex within the GraphCache to temporarily store channel IDs that cannot be fully removed immediately due to missing node information (e.g., a zeroVertex).
- Background Cleanup Process: Added a new background goroutine (zombieCleaner) that periodically (every 24 hours) attempts to clean up channels listed in the zombieIndex, ensuring they are eventually removed from the cache.
- Robust Channel Removal: Modified the RemoveChannel method to identify channels where one or both associated nodes are unknown (represented by a zeroVertex) and add them to the zombieIndex for deferred cleanup, preventing potential memory leaks.

A rough sketch of how these pieces fit together is shown below.
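The following is a minimal, self-contained sketch of the pattern described in these highlights, not the actual lnd implementation: the names zombieIndex, zombieCleanerInterval, zombieCleaner, and cleanupZombies follow the summary above, while the surrounding types (Vertex, nodeChannels) are simplified assumptions standing in for route.Vertex and the real cache layout.

```go
package graphcache

import (
	"sync"
	"time"
)

// Vertex is a simplified stand-in for route.Vertex; the zero value means the
// node on that side of the channel is unknown.
type Vertex [33]byte

// GraphCache is a pared-down sketch of the cache described in this PR.
type GraphCache struct {
	mtx sync.Mutex

	// nodeChannels maps a node to the set of channel IDs it takes part in.
	nodeChannels map[Vertex]map[uint64]struct{}

	// zombieIndex holds channel IDs that could not be fully removed
	// because one or both node pubkeys were unknown at removal time.
	zombieIndex map[uint64]struct{}

	// zombieCleanerInterval is how often the background cleaner runs.
	zombieCleanerInterval time.Duration

	quit chan struct{}
}

// RemoveChannel drops the channel from both nodes' entries. If either node is
// the zero vertex, the channel ID is parked in the zombie index so the
// background cleaner can remove it later.
func (c *GraphCache) RemoveChannel(node1, node2 Vertex, chanID uint64) {
	c.mtx.Lock()
	defer c.mtx.Unlock()

	var zero Vertex
	if node1 == zero || node2 == zero {
		c.zombieIndex[chanID] = struct{}{}
		return
	}

	delete(c.nodeChannels[node1], chanID)
	delete(c.nodeChannels[node2], chanID)
}

// zombieCleaner runs in a background goroutine and periodically sweeps the
// cache for channels that were parked in the zombie index.
func (c *GraphCache) zombieCleaner() {
	ticker := time.NewTicker(c.zombieCleanerInterval)
	defer ticker.Stop()

	for {
		select {
		case <-ticker.C:
			c.cleanupZombies()
		case <-c.quit:
			return
		}
	}
}

// cleanupZombies scans every node's channel set, deletes entries whose ID is
// marked as a zombie, and then resets the index.
func (c *GraphCache) cleanupZombies() {
	c.mtx.Lock()
	defer c.mtx.Unlock()

	for node, channels := range c.nodeChannels {
		for chanID := range channels {
			if _, ok := c.zombieIndex[chanID]; ok {
				delete(c.nodeChannels[node], chanID)
			}
		}
	}

	c.zombieIndex = make(map[uint64]struct{})
}
```

In the real cache the per-node data is richer than a set of channel IDs, but the control flow under discussion is the same: park IDs with unknown nodes in the zombie index, then sweep them on a timer.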
Code Review
This pull request introduces a background process to clean up zombie channels from the graph cache, addressing a potential memory leak. The implementation includes a dedicated goroutine and a zombie index to track channels awaiting removal. The review suggests improvements for maintainability and performance, such as making the cleanup interval configurable and optimizing the cleanup logic.
Thanks for the PR @GustavoStingelin!
Force-pushed from 8acd2d2 to 0142868
@ellemouton ready!
Well done! 👏
I ran the tests and everything LGTM ✅
Here are the benchmark results on my machine:
goos: linux
goarch: amd64
pkg: github.com/lightningnetwork/lnd/graph/db
cpu: Intel(R) Core(TM) i7-4870HQ CPU @ 2.50GHz
=== RUN BenchmarkGraphCacheCleanupZombies
BenchmarkGraphCacheCleanupZombies
BenchmarkGraphCacheCleanupZombies-8 5 207848522 ns/op 207.8 ms/op 31292878 B/op 245935 allocs/op
PASS
ok github.com/lightningnetwork/lnd/graph/db 9.526s
Looking great so far! Thanks for this 🙏
Force-pushed from 6a1bd16 to 46d2623
Thanks for the PR! My main question is: does it cost more if we just remove it directly inside RemoveChannel? And if the zombies are only cleaned every X hours, does that mean pathfinding may fail due to the zombies in the meantime?
Force-pushed from 46d2623 to 4f05f0d
Force-pushed from 4f05f0d to 918e871
Just rebased. Let me know if there are any additional comments!
@saubyk, could you assign me to this?
@yyforyongyu: review reminder
lgtm! thanks for this 🎉
graph/db/graph_cache.go (Outdated)
@@ -305,6 +410,9 @@ func (c *GraphCache) getChannels(node route.Vertex) []*DirectedChannel {
		i++
	}

	// Copy the slice to clean up the unused pre allocated tail entries.
	copy(channelsCopy, channelsCopy[:i])
I think we can just return channelsCopy[:i] - in Go, re-slicing with [:i] already creates a new slice header that points to the same underlying array as channelsCopy, but with length i. The copy will do nothing here.
"And the copy will do nothing here"
It's a slice trick, 😃.
My original intent here was to free the underlying array, since it could still hold unused tail entries that the GC wouldn't reclaim as long as the slice referenced them. By using copy, we force a new backing array to be created, which allows those tail elements to be freed earlier and potentially saves some bytes in the current cycle.
After rethinking this, I realized it might not be worth the extra cost. The zombie cleaner will eventually release memory anyway, and the "tail" entries are likely irrelevant compared to the overhead of copying and allocating a new array.
So I switched to simply using [:i]. This means the unused entries remain in the underlying array, but they're not visible through the slice header, and the tradeoff avoids the extra copy and allocation cost.
"Because the "deleted" value is referenced in the underlying array, the deleted value is still "reachable" during GC, even though the value cannot be referenced by your code. If the underlying array is long-lived, this represents a leak."
for j := range numChannels / 10 {
	cache.RemoveChannel(zeroVertex, zeroVertex,
		uint64(j*1000*10))
	cache.RemoveChannel(zeroVertex, zeroVertex,
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
So we are marking 20% of the channels as zombies? What does the +5 mean here?
I did a small refactor in this bench to make it more readable. The setup is:
- 10% of existing channels are marked as zombies.
- Another 10% worth of entries are marked as zombies using IDs that do not exist in the map.
So the run includes 10% real zombies and 10% "ghost zombies". The +5 is just to generate IDs outside the existing range (see the sketch below).
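For concreteness, here is a hedged sketch of what that setup might look like, reusing the toy GraphCache and Vertex types from the earlier sketch. The helper name markZombies and anything beyond what the snippet above shows are assumptions, not the PR's actual benchmark code.

```go
// Sketch of the benchmark setup described above. Real channels are assumed to
// use IDs of the form j*1000*10; adding 5 yields IDs that cannot exist in the
// cache, producing the "ghost zombies".
func markZombies(cache *GraphCache, numChannels int) {
	var zeroVertex Vertex

	for j := range numChannels / 10 {
		// 10% real zombies: channel IDs that exist in the cache.
		cache.RemoveChannel(zeroVertex, zeroVertex,
			uint64(j*1000*10))

		// 10% "ghost zombies": channel IDs outside the existing range.
		cache.RemoveChannel(zeroVertex, zeroVertex,
			uint64(j*1000*10+5))
	}
}
```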
graph/db/graph_cache.go (Outdated)
@@ -83,6 +95,9 @@ func NewGraphCache(preAllocNumNodes int) *GraphCache {
			map[route.Vertex]*lnwire.FeatureVector,
			preAllocNumNodes,
		),
		zombieIndex:           make(map[uint64]struct{}),
		zombieCleanerInterval: time.Hour,
We should make this time.Hour a var above for easy reference.
added
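A minimal sketch of that change, under the assumption that the variable is named defaultZombieCleanerInterval (the actual name in the PR may differ):

```go
// defaultZombieCleanerInterval is the default interval at which the
// background cleaner sweeps the zombie index. NewGraphCache would then set
// zombieCleanerInterval: defaultZombieCleanerInterval instead of time.Hour.
var defaultZombieCleanerInterval = time.Hour
```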
I would like to be sure I understand the solution space a bit better. Builder.MarkZombieEdge is only called in validateFundingTransaction, which leads to the only call of ChannelGraph.MarkEdgeZombie with zero pubkeys. Are edges even added to the graph (and cache) if they fail validateFundingTransaction, and is there thus a problem at all?
Additionally, there seems to be only a single chain of call sites, Builder.MarkZombieEdge -> ChannelGraph.MarkEdgeZombie -> GraphCache.RemoveChannel, so would it work if we passed the pubkeys in to Builder.MarkZombieEdge and changed ChannelGraph.MarkEdgeZombie to still call V1Store.MarkEdgeZombie(chanID, zero, zero), but had the real pubkeys available in GraphCache.RemoveChannel?
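For illustration only, here is one way that rearrangement could be shaped, again reusing the toy GraphCache and Vertex types from the earlier sketch. The zombieStore type and the markEdgeZombie signature are simplified assumptions and do not mirror lnd's actual Builder, ChannelGraph, or V1Store APIs.

```go
// zombieStore is a toy stand-in for the persistent V1Store zombie index.
type zombieStore struct {
	zombies map[uint64][2]Vertex
}

// markEdgeZombie sketches the suggested split: persist the zombie entry with
// zero pubkeys (unchanged from today), but evict the in-memory cache entry
// using the real pubkeys so nothing lingers in the GraphCache.
func markEdgeZombie(s *zombieStore, c *GraphCache, chanID uint64,
	real1, real2 Vertex) {

	var zero Vertex

	// Persist with zero pubkeys, as the store does today.
	s.zombies[chanID] = [2]Vertex{zero, zero}

	// Remove from the in-memory cache using the real pubkeys.
	c.RemoveChannel(real1, real2, chanID)
}
```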
Yeah, this would fix the leaky cache completely, but it would also open an attack surface such that a malicious node could remove other nodes from our graph cache. The attack scenario:
Pending @bitromortac's confirmation of the analysis, otherwise LGTM 👏
Force-pushed from 918e871 to d5785c4
Fix a bug that leaks zombie channels in the in-memory graph, resulting in incorrect pathfinding and excess memory usage.
Force-pushed from d5785c4 to 0f62b18
Hmmm, great point!!
I'm fairly sure my suggested workaround isn't relevant, because if validateFundingTransaction (the only call site for RemoveChannel with empty pubkeys) errors, we won't add the edge to the graph and therefore don't have issues with those channels in the cache. So unless an issue with this can be demonstrated, I think we should not pursue this PR, to keep the complexity out (although the code itself looks good and is well tested).
@GustavoStingelin based on the last comment, I am pulling this out of release 0.20's scope. If you agree with @bitromortac's assessment, we can close this PR and revisit it in the future if an issue arises. Thanks.
Agreed.
Closing the PR based on the above comment.
This PR addresses issue #9524, which caused zombie channels to remain in the in-memory graph cache. This led to incorrect pathfinding behavior and unnecessary memory consumption.
Benchmark
To evaluate the performance impact of the cleanup logic, I added a benchmark simulating a node graph with 50,000 nodes and 500,000 channels. On my machine, the cleanup took approximately 120 ms, which I think is acceptable for a daily cleanup routine. Additionally, we could potentially improve this by using the channelCache struct, but it appears underutilized.