Deutsch | EspaΓ±ol | franΓ§ais | ζ₯ζ¬θͺ | νκ΅μ΄ | PortuguΓͺs | Π ΡΡΡΠΊΠΈΠΉ | δΈζ
DeepTeam is a simple-to-use, open-source red teaming framework for LLM systems. Think of it as penetration testing, but for LLMs.
DeepTeam simulates attacks β jailbreaking, prompt injection, multi-turn exploitation, and more β to uncover vulnerabilities like bias, PII leakage, and SQL injection in your AI agents, RAG pipelines, and chatbots. It also offers guardrails to prevent these issues in production.
DeepTeam runs locally on your machine and is built on DeepEval, the open-source LLM evaluation framework.
Important
Need a place for your red teaming results to live? Sign up to the Confident AI platform to manage risk assessments, monitor vulnerabilities in production, and share reports with your team.
Want to talk LLM security, need help picking attacks, or just to say hi? Come join our discord.
Β
-
π 50+ ready-to-use vulnerabilities (all with explanations) powered by ANY LLM of your choice. Each vulnerability uses LLM-as-a-Judge metrics that run locally on your machine to produce binary pass/fail scores with reasoning:
-
Data Privacy
- PII Leakage β disclosure of sensitive personal information
- Prompt Leakage β exposure of system prompt secrets and instructions
-
Responsible AI
- Bias β stereotypes and unfair treatment across gender, race, religion, politics
- Toxicity β harmful, offensive, or demeaning content
- Child Protection β child-related privacy and safety risks
- Ethics β violations of moral reasoning and organizational values
- Fairness β discriminatory outcomes across groups and contexts
-
Security
- BFLA β broken function-level authorization
- BOLA β broken object-level authorization
- RBAC β role-based access control bypass
- Debug Access β unauthorized access to debug modes and dev endpoints
- Shell Injection β unauthorized system command execution
- SQL Injection β database query manipulation
- SSRF β server-side request forgery to internal services
- Tool Metadata Poisoning β corrupted tool schemas and descriptions
- Cross-Context Retrieval β data access across isolation boundaries
- System Reconnaissance β probing internal architecture and configurations
-
Safety
- Illegal Activity β facilitation of fraud, weapons, drugs, or other unlawful actions
- Graphic Content β explicit, violent, or sexual material
- Personal Safety β self-harm, harassment, or dangerous advice
- Unexpected Code Execution β coerced execution of unauthorized code
-
Business
- Misinformation β factual errors and unsupported claims
- Intellectual Property β copyright, trademark, and patent violations
- Competition β competitor endorsement and market manipulation
-
Agentic
- Goal Theft β extracting or redirecting an agent's objectives
- Recursive Hijacking β self-modifying goal chains that alter objectives
- Excessive Agency β agents acting beyond their authority
- Robustness β input overreliance and prompt hijacking
- Indirect Instruction β hidden instructions in retrieved content
- Tool Orchestration Abuse β exploiting tool calling sequences
- Agent Identity & Trust Abuse β impersonating agent identity
- Inter-Agent Communication Compromise β spoofing multi-agent message passing
- Autonomous Agent Drift β agents deviating from intended goals over time
- Exploit Tool Agent β weaponizing tools for unintended actions
- External System Abuse β using agents to attack external services
-
Custom
- Custom Vulnerabilities β define and test your own criteria in a few lines of code
-
-
π₯ 20+ research-backed adversarial attack methods for both single-turn and multi-turn (conversational) red teaming. Attacks enhance baseline vulnerability probes using SOTA techniques like jailbreaking, prompt injection, and encoding-based obfuscation:
-
Single-Turn
- Prompt Injection β crafted injections that bypass LLM restrictions
- Roleplay β persona-based scenarios exploiting collaborative training
- Leetspeak β symbolic character substitution to avoid keyword detection
- ROT13 β alphabetic rotation to evade content filters
- Base64 β encoding attacks as random-looking data
- Gray Box β leveraging partial system knowledge for targeted attacks
- Math Problem β disguising attacks within mathematical inputs
- Multilingual β translating attacks to less-spoken languages
- Prompt Probing β probing the LLM to extract system prompt details
- Adversarial Poetry β transforming attacks into poetic verse with metaphor
- System Override β disguising attacks as legitimate system commands
- Permission Escalation β shifting perceived identity to bypass role restrictions
- Goal Redirection β reframing agent objectives for unauthorized outcomes
- Linguistic Confusion β semantic ambiguity to confuse language understanding
- Input Bypass β circumventing validation via exception handling claims
- Context Poisoning β injecting false background context to bias reasoning
- Character Stream β character-by-character input to bypass filters
- Context Flooding β flooding input with benign text to hide malicious instructions
- Embedded Instruction JSON β hiding attacks inside realistic JSON structures
- Synthetic Context Injection β fabricating system context to exploit long-context handling
- Authority Escalation β framing requests from positions of power
- Emotional Manipulation β high-intensity emotional pressure for unsafe compliance
-
Multi-Turn
- Linear Jailbreaking β iteratively refining attacks using target LLM responses
- Tree Jailbreaking β exploring parallel attack variations to find the best bypass
- Crescendo Jailbreaking β gradual escalation from benign to harmful prompts
- Sequential Jailbreak β multi-turn conversational scaffolding toward restricted outputs
- Bad Likert Judge β exploiting Likert scale evaluation roles to extract harmful content
-
-
ποΈ Red team against established AI safety frameworks out-of-the-box. Each framework automatically maps its categories to the right vulnerabilities and attacks:
- OWASP Top 10 for LLMs 2025
- OWASP Top 10 for Agents 2026
- NIST AI RMF
- MITRE ATLAS
- BeaverTails
- Aegis
-
π‘οΈ 7 production-ready guardrails for fast binary classification to guard LLM inputs and outputs in real time.
-
π§© Build your own custom vulnerabilities and attacks that integrate seamlessly with DeepTeam's ecosystem.
-
π Run red teaming from the CLI with YAML configs, or programmatically in Python.
-
π Access risk assessments, display in dataframes, and save locally in JSON.
Β
DeepTeam does not require you to define what LLM system you are red teaming β because neither will malicious users. All you need to do is install deepteam, define a model_callback, and you're good to go.
pip install -U deepteam
from deepteam import red_team
from deepteam.vulnerabilities import Bias
from deepteam.attacks.single_turn import PromptInjection
async def model_callback(input: str) -> str:
# Replace this with your LLM application
return f"I'm sorry but I can't answer this: {input}"
risk_assessment = red_team(
model_callback=model_callback,
vulnerabilities=[Bias(types=["race"])],
attacks=[PromptInjection()]
)Don't forget to set your OPENAI_API_KEY as an environment variable before running (you can also use any custom model supported in DeepEval), and run the file:
python red_team_llm.pyThat's it! Your first red team is complete. Here's what happened:
model_callbackwraps your LLM system and generates astroutput for a giveninput.- At red teaming time,
deepteamsimulates aPromptInjectionattack targetingBiasvulnerabilities. - Your
model_callback's outputs are evaluated using theBiasMetric, producing a binary score of 0 or 1. - The final passing rate for
Biasis determined by the proportion of scores that equal 1.
Unlike traditional evaluation, red teaming does not require a prepared dataset β adversarial attacks are dynamically generated based on the vulnerabilities you want to test for.
Β
Use established AI safety standards like OWASP and NIST instead of manually picking vulnerabilities:
from deepteam import red_team
from deepteam.frameworks import OWASPTop10
async def model_callback(input: str) -> str:
# Replace this with your LLM application
return f"I'm sorry but I can't answer this: {input}"
risk_assessment = red_team(
model_callback=model_callback,
framework=OWASPTop10()
)This automatically maps the framework's categories to the right vulnerabilities and attacks. Available frameworks include OWASPTop10, OWASP_ASI_2026, NIST, MITRE, Aegis, and BeaverTails.
Β
Once you've found your vulnerabilities, use DeepTeam's guardrails to prevent them in production:
from deepteam import Guardrails
from deepteam.guardrails import PromptInjectionGuard, ToxicityGuard, PrivacyGuard
guardrails = Guardrails(
input_guards=[PromptInjectionGuard(), PrivacyGuard()],
output_guards=[ToxicityGuard()]
)
# Guard inputs before they reach your LLM
input_result = guardrails.guard_input("Tell me how to hack a database")
print(input_result.breached) # True
# Guard outputs before they reach your users
output_result = guardrails.guard_output(input="Hi", output="Here is some toxic content...")
print(output_result.breached) # True7 guards are available out-of-the-box: ToxicityGuard, PromptInjectionGuard, PrivacyGuard, IllegalGuard, HallucinationGuard, TopicalGuard, and CybersecurityGuard. Read the full guardrails docs here.
Β
Confident AI is the all-in-one platform that integrates natively with DeepTeam and DeepEval.
- Manage risk assessments β view, compare, and track red teaming results across iterations
- Monitor in production β detect and alert on vulnerabilities hitting your live LLM system
- Share reports β generate and distribute security reports across your team
- Run from your IDE β use Confident AI's MCP server to run red teams, pull results, and inspect vulnerabilities without leaving Cursor or Claude Code
Β
Please read CONTRIBUTING.md for details on our code of conduct, and the process for submitting pull requests to us.
Β
Built by the founders of Confident AI. Contact jeffreyip@confident-ai.com for all enquiries.
Β
DeepTeam is licensed under Apache 2.0 - see the LICENSE.md file for details.

