2 changes: 1 addition & 1 deletion README.md
@@ -1,7 +1,7 @@
# libCacheSim Python Binding

[![Build](https://github.com/cacheMon/libCacheSim-python/actions/workflows/build.yml/badge.svg)](https://github.com/cacheMon/libCacheSim-python/actions/workflows/build.yml)
[![Documentation](https://github.com/cacheMon/libCacheSim-python/actions/workflows/docs.yml/badge.svg)](docs.libcachesim.com/python)
[![Documentation](https://github.com/cacheMon/libCacheSim-python/actions/workflows/docs.yml/badge.svg)](https://github.com/cacheMon/libCacheSim-python/actions/workflows/docs.yml)


libCacheSim is fast with the features from [underlying libCacheSim lib](https://github.com/1a1a11a/libCacheSim):
27 changes: 11 additions & 16 deletions docs/src/en/getting_started/quickstart.md
@@ -91,11 +91,10 @@ With libcachesim installed, you can start cache simulation for some eviction alg

The above example demonstrates the basic workflow of using `libcachesim` for cache simulation:

1. Use `DataLoader` to download a cache trace file from an S3 bucket.
2. Open and efficiently process the trace file with `TraceReader`.
3. Initialize a cache object (here, `S3FIFO`) with a specified cache size (e.g., 1MB).
4. Run the simulation on the entire trace using `process_trace` to obtain object and byte miss ratios.
5. Optionally, process only a portion of the trace by specifying `start_req` and `max_req` for partial simulation.
1. Open and efficiently process the trace file with `TraceReader`.
2. Initialize a cache object (here, `S3FIFO`) with a specified cache size (e.g., 1MB).
3. Run the simulation on the entire trace using `process_trace` to obtain object and byte miss ratios.
4. Optionally, process only a portion of the trace by specifying `start_req` and `max_req` for partial simulation.

This workflow applies to most cache algorithms and trace types, making it easy to get started and customize your experiments.
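
For orientation, a minimal sketch of that workflow end to end is shown below; the exact `process_trace` signature and return values are assumptions based on the description above, not a verified API reference:

```python
import libcachesim as lcs

# Step 1: open the trace; a URI starting with s3:// is fetched from the S3 bucket
reader = lcs.TraceReader(
    trace="s3://cache-datasets/cache_dataset_oracleGeneral/2007_msr/msr_hm_0.oracleGeneral.zst",
    trace_type=lcs.TraceType.ORACLE_GENERAL_TRACE,
    reader_init_params=lcs.ReaderInitParam(ignore_obj_size=False),
)

# Step 2: initialize an S3FIFO cache with a 1 MiB capacity
cache = lcs.S3FIFO(cache_size=1024 * 1024)

# Step 3: simulate the full trace (assumed to return object and byte miss ratios)
obj_miss_ratio, byte_miss_ratio = cache.process_trace(reader)

# Step 4: optionally simulate only a slice of the trace
obj_miss_ratio, byte_miss_ratio = cache.process_trace(reader, start_req=0, max_req=1000)
```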

@@ -108,12 +107,9 @@ Here is an example demonstrating how to use `TraceAnalyzer`.
import libcachesim as lcs

# Step 1: Get one trace from S3 bucket
URI = "cache_dataset_oracleGeneral/2007_msr/msr_hm_0.oracleGeneral.zst"
dl = lcs.DataLoader()
dl.load(URI)

URI = "s3://cache-datasets/cache_dataset_oracleGeneral/2007_msr/msr_hm_0.oracleGeneral.zst"
reader = lcs.TraceReader(
trace = dl.get_cache_path(URI),
trace = URI,
trace_type = lcs.TraceType.ORACLE_GENERAL_TRACE,
reader_init_params = lcs.ReaderInitParam(ignore_obj_size=False)
)
@@ -143,12 +139,11 @@ Here is an example demonstrating how to use `TraceAnalyzer`.

The above code demonstrates how to perform trace analysis using `libcachesim`. The workflow is as follows:

1. Download a trace file from an S3 bucket using `DataLoader`.
2. Open the trace file with `TraceReader`, specifying the trace type and any reader initialization parameters.
3. Configure the analysis options with `AnalysisOption` to enable or disable specific analyses (such as request rate, size, etc.).
4. Optionally, set additional analysis parameters with `AnalysisParam`.
5. Create a `TraceAnalyzer` object with the reader, output directory, and the chosen options and parameters.
6. Run the analysis with `analyzer.run()`.
1. Open the trace file with `TraceReader`, specifying the trace type and any reader initialization parameters. A URI starting with `s3://` is downloaded automatically from the S3 bucket.
2. Configure the analysis options with `AnalysisOption` to enable or disable specific analyses (such as request rate, size, etc.).
3. Optionally, set additional analysis parameters with `AnalysisParam`.
4. Create a `TraceAnalyzer` object with the reader, output directory, and the chosen options and parameters.
5. Run the analysis with `analyzer.run()`.

After running, you can access the analysis results, such as summary statistics (`stat`) or detailed results (e.g., `example_analysis.size`).
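
A compact sketch of that analysis flow follows; the specific `AnalysisOption` field names, keyword arguments, and the output-directory argument are assumptions rather than the verified API:

```python
import libcachesim as lcs

reader = lcs.TraceReader(
    trace="s3://cache-datasets/cache_dataset_oracleGeneral/2007_msr/msr_hm_0.oracleGeneral.zst",
    trace_type=lcs.TraceType.ORACLE_GENERAL_TRACE,
    reader_init_params=lcs.ReaderInitParam(ignore_obj_size=False),
)

# Enable or disable individual analyses (field names here are illustrative)
option = lcs.AnalysisOption(req_rate=True, size=True)
param = lcs.AnalysisParam()

# Detailed results such as example_analysis.size are written under the output path
analyzer = lcs.TraceAnalyzer(reader, "example_analysis", analysis_option=option, analysis_param=param)
analyzer.run()

print(analyzer.stat)  # summary statistics
```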

9 changes: 4 additions & 5 deletions examples/plugin_cache/s3fifo.py
@@ -193,17 +193,16 @@ def cache_free_hook(cache):
cache_name="S3FIFO",
)

URI = "cache_dataset_oracleGeneral/2007_msr/msr_hm_0.oracleGeneral.zst"
dl = lcs.DataLoader()
dl.load(URI)
URI = "s3://cache-datasets/cache_dataset_oracleGeneral/2007_msr/msr_hm_0.oracleGeneral.zst"

# Step 2: Open trace and process efficiently
# Open trace
reader = lcs.TraceReader(
trace=dl.get_cache_path(URI),
trace=URI,
trace_type=lcs.TraceType.ORACLE_GENERAL_TRACE,
reader_init_params=lcs.ReaderInitParam(ignore_obj_size=True),
)

# Use native S3FIFO for reference
ref_s3fifo = S3FIFO(cache_size=1024, small_size_ratio=0.1, ghost_size_ratio=0.9, move_to_main_threshold=2)

# for req in reader:
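
The commented-out loop hints at a request-by-request comparison against the native cache; one way it might look is sketched below, where `plugin_s3fifo` (the plugin cache built above) and a boolean-returning `get(req)` are assumptions, not names taken from this file:

```python
# Hypothetical comparison loop: assumes both caches expose get(req) -> bool (hit/miss)
n_reqs = 0
n_mismatch = 0
for req in reader:
    hit_plugin = plugin_s3fifo.get(req)  # plugin-based S3FIFO (name assumed)
    hit_native = ref_s3fifo.get(req)     # native reference S3FIFO
    n_reqs += 1
    if hit_plugin != hit_native:
        n_mismatch += 1

print(f"{n_mismatch}/{n_reqs} requests differ between plugin and native S3FIFO")
```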
6 changes: 2 additions & 4 deletions examples/trace_analysis.py
@@ -1,12 +1,10 @@
import libcachesim as lcs

# Step 1: Get one trace from S3 bucket
URI = "cache_dataset_oracleGeneral/2007_msr/msr_hm_0.oracleGeneral.zst"
dl = lcs.DataLoader()
dl.load(URI)
URI = "s3://cache-datasets/cache_dataset_oracleGeneral/2007_msr/msr_hm_0.oracleGeneral.zst"

reader = lcs.TraceReader(
trace=dl.get_cache_path(URI),
trace=URI,
trace_type=lcs.TraceType.ORACLE_GENERAL_TRACE,
reader_init_params=lcs.ReaderInitParam(ignore_obj_size=False),
)
21 changes: 13 additions & 8 deletions src/exception.cpp
@@ -17,36 +17,41 @@ void register_exception(py::module& m) {
static py::exception<CacheException> exc_cache(m, "CacheException");
static py::exception<ReaderException> exc_reader(m, "ReaderException");

// Single exception translator with catch blocks ordered from most-specific to least-specific
py::register_exception_translator([](std::exception_ptr p) {
try {
if (p) std::rethrow_exception(p);
} catch (const CacheException& e) {
exc_cache(e.what());
// Custom exception: CacheException
py::set_error(exc_cache, e.what());
} catch (const ReaderException& e) {
exc_reader(e.what());
}
});

py::register_exception_translator([](std::exception_ptr p) {
try {
if (p) std::rethrow_exception(p);
// Custom exception: ReaderException
py::set_error(exc_reader, e.what());
} catch (const std::bad_alloc& e) {
// Memory allocation error
PyErr_SetString(PyExc_MemoryError, e.what());
} catch (const std::invalid_argument& e) {
// Invalid argument error
PyErr_SetString(PyExc_ValueError, e.what());
} catch (const std::out_of_range& e) {
// Out of range error
PyErr_SetString(PyExc_IndexError, e.what());
} catch (const std::domain_error& e) {
// Domain error
PyErr_SetString(PyExc_ValueError,
("Domain error: " + std::string(e.what())).c_str());
} catch (const std::overflow_error& e) {
// Overflow error
PyErr_SetString(PyExc_OverflowError, e.what());
} catch (const std::range_error& e) {
// Range error
PyErr_SetString(PyExc_ValueError,
("Range error: " + std::string(e.what())).c_str());
} catch (const std::runtime_error& e) {
// Generic runtime error
PyErr_SetString(PyExc_RuntimeError, e.what());
} catch (const std::exception& e) {
// Catch-all for any other std::exception
PyErr_SetString(PyExc_RuntimeError,
("C++ exception: " + std::string(e.what())).c_str());
}
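
On the Python side, the translated exceptions can be caught like ordinary Python exceptions. A hedged sketch, assuming the binding exports `ReaderException` at the package top level and that an unreadable trace raises it:

```python
import libcachesim as lcs

try:
    reader = lcs.TraceReader(
        trace="does-not-exist.oracleGeneral",
        trace_type=lcs.TraceType.ORACLE_GENERAL_TRACE,
    )
except lcs.ReaderException as e:
    # Raised via py::set_error(exc_reader, ...) in the translator above
    print(f"reader error: {e}")
except (ValueError, IndexError, MemoryError, OverflowError, RuntimeError) as e:
    # std:: exceptions are mapped onto built-in Python exception types
    print(f"translated C++ exception: {e}")
```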