Decouple background source in ResultsHandler #436
base: master
Conversation
Pull Request Overview

This PR refactors the results-handling logic by introducing a dedicated `PickleCache` class to manage pickle I/O and adds an optional `background_source` parameter to allow separate loading and merging of background trials.

- Introduced `PickleCache` to encapsulate pickle directory (read/merge/clean) logic.
- Added `background_source` to `ResultsHandler` to load and merge background data separately.
- Refactored `ResultsHandler` to use `PickleCache` and removed the legacy `merge_pickle_data` method.
```python
class PickleCache:
    def __init__(self, pickle_path: Path, background_only: bool = False):
        self.path = pickle_path
        # self.pickle_path.mkdir(parents=True, exist_ok=True)
```
[nitpick] Remove or implement the commented-out directory creation logic in `PickleCache.__init__` to avoid dead code.
```diff
-        # self.pickle_path.mkdir(parents=True, exist_ok=True)
+        self.path.mkdir(parents=True, exist_ok=True)
```
```python
if scale_label == background_label and background_label in output_dict:
    logger.info(
        f"Appending f{n_pending} background data to {len(output_dict[background_label]['TS'])} existing trials ({scale_label=})"
    )
```
There's an extra 'f' before `{n_pending}` causing the log to print 'f'; remove it so the message reads `{n_pending}`.
```diff
-        f"Appending f{n_pending} background data to {len(output_dict[background_label]['TS'])} existing trials ({scale_label=})"
+        f"Appending {n_pending} background data to {len(output_dict[background_label]['TS'])} existing trials ({scale_label=})"
```
```python
self.pickle_cache.merge_and_load(output_dict=self.results)
# Load the background trials. Will override the existing one.
if self.pickle_cache_bg is not None:
    print("NOTE!!!! Loading BG")
```
Replace this `print` statement with a `logger` call (e.g., `logger.info`) to maintain consistent logging practices.
```diff
-    print("NOTE!!!! Loading BG")
+    logger.info("NOTE!!!! Loading BG")
```
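The pattern the reviewer is alluding to is a module-level logger instead of bare `print` calls. A minimal sketch; the `load_background` helper is hypothetical, purely to show the shape:

```python
import logging

# Module-level logger, named after the module, as is conventional.
logger = logging.getLogger(__name__)


def load_background(pickle_cache_bg):
    """Hypothetical helper mirroring the branch above: logger calls
    replace the print statements, at levels matching the severity."""
    if pickle_cache_bg is not None:
        logger.info("Loading background trials from separate source")
        return True
    logger.warning("No background pickle cache configured")
    return False
```

Using `logger.warning` for the missing-cache branch lets downstream users filter or silence the message via standard `logging` configuration, which `print` does not allow.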
```python
    print("NOTE!!!! Loading BG")
    self.pickle_cache_bg.merge_and_load(output_dict=self.results)
else:
    print("NOTE!!!! No BG pickle cache")
```
Replace this `print` statement with a `logger` call (e.g., `logger.warning`) for consistency and better control over output.
```diff
-    print("NOTE!!!! No BG pickle cache")
+    logger.warning("NOTE!!!! No BG pickle cache")
```
```python
    sys.exit()
if not scale_shortener(0.0) in self.results:
    logger.error(
        f"No key equal to '0' in results! Keys are {self.results.keys()}"
    )
    sys.exit()
```
Avoid calling `sys.exit` during initialization; consider raising an exception so callers can handle errors more gracefully.
```diff
-        sys.exit()
+        raise RuntimeError("No data was found by ResultsHandler object!")
     if not scale_shortener(0.0) in self.results:
         logger.error(
             f"No key equal to '0' in results! Keys are {self.results.keys()}"
         )
-        sys.exit()
+        raise KeyError(f"No key equal to '0' in results! Keys are {self.results.keys()}")
```
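To illustrate why raising is preferable: a caller can catch the exception and recover, whereas `sys.exit()` tears down the whole process. A simplified sketch, where the hypothetical `ResultsError` type and the literal `"0.0"` key stand in for the real `scale_shortener(0.0)` logic:

```python
class ResultsError(RuntimeError):
    """Hypothetical error type; raising it lets callers decide what to do."""


def check_results(results: dict) -> None:
    # Instead of logging and calling sys.exit(), raise so the caller
    # can handle the failure (or let it propagate with a traceback).
    if "0.0" not in results:
        raise ResultsError(
            f"No key equal to '0' in results! Keys are {list(results)}"
        )


def safe_check(results: dict) -> str:
    # A caller can now recover gracefully instead of the process dying.
    try:
        check_results(results)
        return "ok"
    except ResultsError:
        return "missing-zero"
```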
```python
    sys.exit()
if not scale_shortener(0.0) in self.results:
    logger.error(
        f"No key equal to '0' in results! Keys are {self.results.keys()}"
    )
    sys.exit()
```
Avoid calling `sys.exit` during initialization; consider raising an exception so callers can handle errors more gracefully.
```diff
-        sys.exit()
+        raise RuntimeError("No data was found by ResultsHandler object!")
     if not scale_shortener(0.0) in self.results:
         logger.error(
             f"No key equal to '0' in results! Keys are {self.results.keys()}"
         )
-        sys.exit()
+        raise RuntimeError("No key equal to '0' in results!")
```
In order to be able to load the background from a different analysis, the pickle directory structure can no longer be semi-statically defined at the `ResultsHandler` level. Instead, it is defined, starting from a base path, by means of a utility class, `PickleCache`. Likewise, the entire management logic (read/merge/write/clean) of the pickle cache is now encoded there.

A `background_source` parameter can be passed to the `ResultsHandler` constructor, so background data can additionally be loaded (and merged).

The solution is still not the cleanest, but it works. Still at RFC stage.