```shell
# create / activate your venv or conda env first
pip install -r requirements.txt
# (optional) install a GPU wheel if you have CUDA 12.1
pip install --upgrade torch --index-url https://download.pytorch.org/whl/cu121
```

The pinned versions avoid the numpy 2.0 / transformers padding bug that throws “expected np.ndarray (got numpy.ndarray)”.
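As a quick sanity check that the pins resolved as intended, a small helper can confirm the installed numpy is still pre-2.0. The helper and its name are illustrative, not part of this repo:

```python
import importlib.metadata as md

def numpy_is_pre_2(version=None):
    """Return True if the given (or installed) numpy version is below 2.0."""
    v = version or md.version("numpy")
    return int(v.split(".")[0]) < 2

if __name__ == "__main__":
    # e.g. "1.26.4" -> True, "2.0.1" -> False
    print("numpy pre-2.0:", numpy_is_pre_2())
```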
VL-DNP is based on Qwen2.5-VL-7B-Instruct and Stable Diffusion v1.4.
```shell
# VLM evaluation during diffusion sampling
python code/dpm_with_VLM.py \
    --path results \
    --vlm_step 5 6 7 9 12 16 21 27 34 42 \
    --obj ring-a-bell-16 \
    --neg_guidance 15
```

- `--path`: directory to save generated images.
- `--vlm_step`: sampling steps at which the VLM generates a negative prompt.
- `--obj`: evaluation prompt set. `coco` is for normal prompts; `ring-a-bell` is for adversarial prompts.
- `--neg_guidance`: negative guidance scale.
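To illustrate what `--vlm_step` controls, here is a minimal, self-contained sketch of the dynamic negative-prompting loop: at the scheduled steps the VLM inspects the partially denoised image and may update the negative prompt used for the remaining steps. `query_vlm` and `denoise_step` are hypothetical stand-ins for the repo's actual VLM call and DPM solver step, not its real API:

```python
# Sketch of dynamic VLM-guided negative prompting (illustrative only).
VLM_STEPS = {5, 6, 7, 9, 12, 16, 21, 27, 34, 42}  # --vlm_step
NEG_GUIDANCE = 15.0                                # --neg_guidance

def query_vlm(intermediate_image):
    # Placeholder: a real VLM would inspect the decoded latent and return
    # a negative prompt describing unwanted content (or "" if clean).
    return "nudity" if "unsafe" in intermediate_image else ""

def denoise_step(trace, step, neg_prompt, scale):
    # Placeholder denoiser: records which negative prompt was active.
    return trace + [(step, neg_prompt, scale)]

def sample(total_steps=50):
    trace, neg_prompt = [], ""
    for step in range(total_steps):
        if step in VLM_STEPS:
            # Re-evaluate the partially denoised image with the VLM.
            neg_prompt = query_vlm("unsafe preview") or neg_prompt
        trace = denoise_step(trace, step, neg_prompt, NEG_GUIDANCE)
    return trace

trace = sample()
```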
```shell
# negative-prompting evaluation using a fixed negative guidance scale
python code/negative_prompt.py \
    --path results_neg_prompt \
    --neg_guidance 15 \
    --obj ring-a-bell-16
```

- `--path`: directory to save generated images.
- `--obj`: evaluation prompt set. `coco` is for normal prompts; `ring-a-bell` is for adversarial prompts.
- `--neg_guidance`: negative guidance scale.
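For reference, static negative prompting replaces the unconditional branch of classifier-free guidance with the negative-prompt prediction. A minimal sketch of that combination rule (illustrative, not the repo's code; toy values, no real noise predictions):

```python
def negative_prompt_guidance(eps_neg, eps_cond, scale):
    # eps = eps_neg + s * (eps_cond - eps_neg): steer away from the
    # negative prompt and toward the conditional prediction.
    return [n + scale * (c - n) for n, c in zip(eps_neg, eps_cond)]

eps_cond = [1.0, 0.0]  # toy noise prediction for the prompt
eps_neg = [0.0, 1.0]   # toy prediction for the negative prompt
out = negative_prompt_guidance(eps_neg, eps_cond, scale=15.0)
# -> [15.0, -14.0]
```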
You can download the classifier model at NudeNet Classifier. Download and place it at `classifier/`.

Evaluation can be done by:

```shell
# nudity evaluation using the NudeNet classifier
python code/nudity_eval.py \
    --dir ./results/dir
```

- `--dir`: directory of images to be evaluated. After evaluation, it outputs the Attack Success Rate and Toxic Rate.
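The Attack Success Rate can be sketched as the fraction of generated images the classifier flags as unsafe. The 0.5 threshold below is an assumption for illustration, not necessarily what `nudity_eval.py` uses:

```python
def attack_success_rate(unsafe_probs, threshold=0.5):
    # Fraction of images whose "unsafe" probability meets the threshold.
    return sum(p >= threshold for p in unsafe_probs) / len(unsafe_probs)

scores = [0.9, 0.2, 0.7, 0.1]  # per-image "unsafe" probabilities (toy data)
print(attack_success_rate(scores))  # 2 of 4 flagged -> 0.5
```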
Adversarial prompt sets: download them and place at `prompt_set/`.
If you find this code useful for your research, please cite as follows:
```bibtex
@misc{chang2025dynamicvlmguidednegativeprompting,
  title={Dynamic VLM-Guided Negative Prompting for Diffusion Models},
  author={Hoyeon Chang and Seungjin Kim and Yoonseok Choi},
  year={2025},
  eprint={2510.26052},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2510.26052},
}
```