Description
In block_cache.c, block_cache_check_cancel() contains this line:
r = entry->dirty;
With this line in place, I see behaviour where a block undergoing
constant changes (e.g. block 0) never gets pushed to S3,
because the in-flight PUT is always aborted:
2025-10-21T11:56:14.969847-07:00 localhost s3backer: PUT https://s3.amazonaws.com/mybucket/testix66/00000000
2025-10-21T11:56:15.011921-07:00 localhost s3backer: write aborted: PUT https://s3.amazonaws.com/mybucket/testix66/00000000
2025-10-21T11:56:15.037777-07:00 localhost s3backer: PUT https://s3.amazonaws.com/mybucket/testix66/00000000
2025-10-21T11:56:15.064385-07:00 localhost s3backer: write aborted: PUT https://s3.amazonaws.com/mybucket/testix66/00000000
2025-10-21T11:56:15.089897-07:00 localhost s3backer: PUT https://s3.amazonaws.com/mybucket/testix66/00000000
2025-10-21T11:56:15.125574-07:00 localhost s3backer: write aborted: PUT https://s3.amazonaws.com/mybucket/testix66/00000000
...
Even though each PUT is aborted, it still generates spurious
HTTP traffic and ties up a worker thread.
If I comment the "r = entry->dirty" line out, the PUT is never aborted,
and instead we get the more reasonable behaviour where the dirty
block is written to S3 and then handled by this code in block_cache_worker_main():
// Block was modified while being written (WRITING2), so it stays DIRTY
TAILQ_INSERT_TAIL(&priv->dirties, entry, link);
entry->timeout = now + priv->dirty_timeout; // update for 2nd write timing conservatively
I'm continuing my quest to dig deeper, but I figured I'd start
a new issue for this more specific topic. Please let me know if
I should be communicating differently; I'm new at this.
Thanks,
-Truxton