Hi DeepTeam team 👋
I am facing an issue with most modern LLMs (e.g. Anthropic models): they frequently refuse to generate the attack prompts themselves, even though the context is a controlled red teaming evaluation.
Since they refuse to respond in the expected JSON-like format, the output cannot be parsed or used downstream.
This makes it difficult to use them as attack generators, since the simulator model is blocked by safety refusals before the target model is even tested.
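For context, here is roughly the direction I have been experimenting with: serving a less restrictive open-source model behind a local OpenAI-compatible endpoint (e.g. via vLLM or Ollama) and using that as the simulator. This is only a sketch of my setup, not DeepTeam's actual integration point; the model name and endpoint are placeholders from my environment, and how the callable should actually be handed to DeepTeam is exactly what I am unsure about.

```python
# Sketch of my current attempt: a locally hosted open-source model behind an
# OpenAI-compatible endpoint, used to generate attack prompts.
# How this callable is meant to be wired into DeepTeam as the simulator model
# is an assumption on my part, not a confirmed API.
from openai import OpenAI

# Local OpenAI-compatible server (e.g. vLLM or Ollama) hosting the model.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

def simulate_attack(prompt: str) -> str:
    """Ask the local model to produce an attack prompt as strict JSON."""
    response = client.chat.completions.create(
        model="my-local-model",  # placeholder model name on the local server
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,
    )
    return response.choices[0].message.content
```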
Questions:
- Is there a recommended way to handle this kind of issue?
- Are there specific model families or configurations known to work better as attack generators (e.g. open-source or less restrictive models)?
Thanks for the framework, any help would be appreciated!!