FEAT: add support for multimodal data from HarmBench #1110
base: main
Conversation
This looks very good. A few small comments but otherwise ready to merge!
        )
        prompts.append(image_prompt)
    except Exception as e:
        logger.warning(f"Failed to fetch image for behavior {behavior_id}: {e}. Skipping this example.")
Does this happen? We have a similar case in @AdrGav941's PR #1098, which apparently fails to fetch most images because they're simply not available. I would definitely print a warning at the end with the number of prompts that failed to fetch (if there are multiple).
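A minimal sketch of the suggested pattern, assuming the fetch loop iterates over (behavior_id, image_url) pairs and using hypothetical fetch_image and build_image_prompt helpers in place of the PR's actual fetching logic:

```python
import logging

logger = logging.getLogger(__name__)

prompts = []
failed_behaviors: list[str] = []

for behavior_id, image_url in behaviors:  # hypothetical iterable of (id, url) pairs
    try:
        image_bytes = fetch_image(image_url)  # placeholder for the PR's fetch logic
        prompts.append(build_image_prompt(behavior_id, image_bytes))
    except Exception as e:
        logger.warning(f"Failed to fetch image for behavior {behavior_id}: {e}. Skipping this example.")
        failed_behaviors.append(behavior_id)

# Single summary warning at the end, as suggested above.
if failed_behaviors:
    logger.warning(
        f"Failed to fetch {len(failed_behaviors)} of {len(failed_behaviors) + len(prompts)} "
        f"HarmBench images; those examples were skipped."
    )
```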
Awesome! Just one question really and then we're ready to merge.
This PR adds a new function under pyrit/datasets for fetching multimodal examples from the HarmBench dataset: fetch_harmbench_multimodal_dataset_async.

Resolves #355
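A minimal usage sketch, assuming the function can be awaited with default arguments and that the returned object exposes a list of prompts in the style of PyRIT's other fetch_*_dataset helpers (the exact signature and return type are defined in this PR):

```python
import asyncio

from pyrit.datasets import fetch_harmbench_multimodal_dataset_async


async def main():
    # Assumed: default arguments are sufficient; the PR may expose parameters
    # such as a cache location or source URL.
    dataset = await fetch_harmbench_multimodal_dataset_async()

    # Assumed attribute name, following the pattern of other dataset fetchers.
    print(f"Fetched {len(dataset.prompts)} multimodal HarmBench prompts")


if __name__ == "__main__":
    asyncio.run(main())
```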
Tests and Documentation
✔️ The new function is documented & has relevant unit tests