Commit 2dfa799

Use RAM buffers for early returns; shard the map
When we send data back from the remote, it is already resident in RAM. Keep a handle to that buffer so we can serve it directly on another request while the chunk is still being written to disk. The map was a severe bottleneck, so shard it.
1 parent 6a513b3 commit 2dfa799
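The sharding half of this commit replaces one big map behind a single lock with several independently locked shards, so concurrent requests that hash to different shards no longer contend. The commit's actual map type isn't shown in this hunk; the following is a minimal std-only sketch of the idea, with `ShardedMap`, `shard_for`, and `get_cloned` being illustrative names, not the crate's API:

```rust
use std::collections::HashMap;
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};
use std::sync::RwLock;

/// A fixed number of independently locked shards; a key is routed to
/// one shard by hash, so operations on different shards never contend.
struct ShardedMap<K, V> {
    shards: Vec<RwLock<HashMap<K, V>>>,
}

impl<K: Hash + Eq, V> ShardedMap<K, V> {
    fn new(n: usize) -> Self {
        Self {
            shards: (0..n).map(|_| RwLock::new(HashMap::new())).collect(),
        }
    }

    /// Pick the shard index for a key from its hash.
    fn shard_for(&self, key: &K) -> usize {
        let mut h = DefaultHasher::new();
        key.hash(&mut h);
        (h.finish() as usize) % self.shards.len()
    }

    fn insert(&self, key: K, value: V) {
        let idx = self.shard_for(&key);
        self.shards[idx].write().unwrap().insert(key, value);
    }

    fn get_cloned(&self, key: &K) -> Option<V>
    where
        V: Clone,
    {
        let idx = self.shard_for(key);
        self.shards[idx].read().unwrap().get(key).cloned()
    }
}
```

Each lock now guards only `1/n` of the keys on average, which is why sharding relieves a hot-map bottleneck without changing the map's interface.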

File tree

5 files changed

+442
-237
lines changed


oxcache/src/cache/bucket.rs

Lines changed: 13 additions & 2 deletions
@@ -1,8 +1,9 @@
 use crate::request::GetRequest;
 use nvme::types::{Byte, Zone};
 use std::sync::{Arc, atomic::{AtomicUsize, Ordering}};
-use tokio::sync::Notify;
+use tokio::sync::{Notify, RwLock};
 use lru_mem::HeapSize;
+use bytes::Bytes;
 
 #[derive(Debug, Clone)]
 pub struct Chunk {
@@ -149,8 +150,18 @@ impl From<GetRequest> for Chunk {
     }
 }
 
+/// Data source for cache hits - either from disk or RAM buffer
+#[derive(Debug)]
+pub enum DataSource {
+    Disk(PinGuard),
+    Ram(Bytes),
+}
+
 #[derive(Debug)]
 pub enum ChunkState {
     Ready(Arc<PinnedChunkLocation>),
-    Waiting(Arc<Notify>),
+    Waiting {
+        notify: Arc<Notify>,
+        buffer: Arc<RwLock<Option<Bytes>>>,
+    },
 }
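The key change above is that `ChunkState::Waiting` now carries a `buffer` alongside its `notify`: while a chunk is still being written, a concurrent reader can serve the bytes straight from RAM instead of waiting for the write to finish. A std-only sketch of that early-return path, using `Vec<u8>` and `String` as stand-ins for `Bytes`, `PinGuard`, and `Arc<PinnedChunkLocation>`, and omitting the `notify` fallback a real reader would block on when the buffer is empty:

```rust
use std::sync::{Arc, RwLock};

/// Stand-in for the patch's DataSource: where the cache hit came from.
#[derive(Debug, Clone, PartialEq)]
enum DataSource {
    Disk(String),  // stand-in for PinGuard
    Ram(Vec<u8>),  // stand-in for Bytes
}

/// Stand-in for the patch's ChunkState (tokio::sync::Notify omitted).
enum ChunkState {
    Ready(String), // stand-in for Arc<PinnedChunkLocation>
    Waiting {
        buffer: Arc<RwLock<Option<Vec<u8>>>>,
    },
}

fn read_chunk(state: &ChunkState) -> Option<DataSource> {
    match state {
        // Fully written: serve from its pinned on-disk location.
        ChunkState::Ready(loc) => Some(DataSource::Disk(loc.clone())),
        // Write in flight: if the writer still holds the bytes in RAM,
        // early-return them; otherwise the caller would wait on notify.
        ChunkState::Waiting { buffer } => buffer
            .read()
            .unwrap()
            .as_ref()
            .map(|b| DataSource::Ram(b.clone())),
    }
}
```

The writer fills `buffer` before starting the disk write and clears it once the chunk flips to `Ready`, so readers either hit RAM or fall back to waiting, never a half-written location.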
