Conversation

@hawkw (Member) commented Aug 4, 2025

No description provided.

hawkw added 29 commits July 8, 2025 14:18
Currently, `snitch-core` will panic on reads if we have lost data and
there is no space for an additional loss record in the buffer. This is
because reads while the store is in the `Losing` state will attempt to
insert a loss record to report that data has been lost; if there is no
space in the queue, inserting the loss record panics.

This commit fixes the panic by reserving space for one loss record at
all times. We no longer admit new messages into the buffer if admitting
them would leave no room to encode a loss record when more messages
come in. This, unfortunately, means that we can no longer admit entries
that fill the entire queue, since the reservation must stay free.
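
A minimal sketch of this reservation scheme, using hypothetical names
(`Store`, `State::Losing`, `LOSS_RECORD_LEN`) rather than the actual
`snitch-core` API, with a `Vec<u8>` standing in for the fixed-size queue:

```rust
struct Store {
    buf: Vec<u8>,    // stand-in for the fixed-size queue
    capacity: usize, // total queue capacity in bytes
    state: State,
}

enum State {
    Normal,
    /// Data has been lost; the next read must insert a loss record.
    Losing { lost: u32 },
}

/// Space that must stay free so a loss record can always be encoded.
const LOSS_RECORD_LEN: usize = 8;

impl Store {
    /// Admit a new message only if, afterwards, there is still room for
    /// a loss record. Otherwise, drop it and count it as lost.
    fn try_push(&mut self, msg: &[u8]) -> bool {
        let free = self.capacity - self.buf.len();
        // Reserve LOSS_RECORD_LEN at all times: a message is rejected if
        // admitting it would leave no room to later encode a loss
        // record, even if the message itself would fit. (For simplicity,
        // this sketch also rejects messages while data is being lost.)
        let fits = free >= msg.len() + LOSS_RECORD_LEN;
        if fits && matches!(self.state, State::Normal) {
            self.buf.extend_from_slice(msg);
            true
        } else {
            self.state = match self.state {
                State::Normal => State::Losing { lost: 1 },
                State::Losing { lost } => State::Losing { lost: lost + 1 },
            };
            false
        }
    }

    /// Drain the queue. If data was lost, first encode a loss record
    /// into the reserved space; this always fits, so it cannot panic.
    fn read(&mut self) -> Vec<u8> {
        if let State::Losing { lost } = self.state {
            self.buf.extend_from_slice(&u64::from(lost).to_le_bytes());
            self.state = State::Normal;
        }
        std::mem::take(&mut self.buf)
    }
}

fn main() {
    let mut store = Store { buf: Vec::new(), capacity: 16, state: State::Normal };
    assert!(store.try_push(&[1; 8]));  // fits: 8 B used + 8 B reserved == 16
    assert!(!store.try_push(&[2; 8])); // rejected: would consume the reservation
    assert_eq!(store.read().len(), 16); // loss record encoded; no panic
}
```

The key invariant is that `try_push` never consumes the reserved
`LOSS_RECORD_LEN` bytes, so the insertion in `read` is guaranteed to succeed.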
previously, we had a ringbuf entry that contained both the current and
requested restart IDs, making it 256B, so the ringbuf was an array of
16 entries of at least 257B each (including the enum discriminant).
that's about 4KiB of ringbuf, which is quite uncomfortable. by splitting
this into separate entries for "we got a request with this restart ID"
and "restart IDs didn't match", the max-sized ringbuf entry becomes
128B (plus an enum discriminant), which roughly cuts the size of the
ringbuf in half.
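
A minimal sketch of the layout change, with hypothetical type and
variant names; `RestartId` is assumed to be a 128-byte identifier to
match the sizes above, and the real entry enum has more variants than
shown:

```rust
use core::mem::size_of;

#[allow(dead_code)]
struct RestartId([u8; 128]);

// Before: one variant carrying both IDs, so every slot in the ringbuf
// array must hold 256B of payload plus the enum discriminant (257B).
#[allow(dead_code)]
enum TraceBefore {
    RestartIdMismatch {
        current: RestartId,   // 128 B
        requested: RestartId, // 128 B
    },
    Other, // other, smaller variants elided
}

// After: two variants, each carrying a single ID, so the largest
// payload is 128B plus the discriminant (129B per slot).
#[allow(dead_code)]
enum TraceAfter {
    /// Logged when a request arrives, recording its restart ID.
    Request { restart_id: RestartId },
    /// Logged when the requested ID doesn't match the current one.
    RestartIdMismatch { current: RestartId },
    Other,
}

fn main() {
    println!("before: {} B per slot", size_of::<TraceBefore>()); // 257
    println!("after:  {} B per slot", size_of::<TraceAfter>());  // 129
}
```

The trade-off is that reconstructing a mismatch presumably now means
reading two entries (the request, then the mismatch) instead of one,
but a 16-slot ringbuf shrinks from roughly 16 x 257B to 16 x 129B.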
this is intended to help test cases where ereports are generated before
VPD is set.
that should save a byte or so...
Base automatically changed from eliza/snitch-again to master August 13, 2025 16:39