Skip to content
Ethan Troy edited this page Dec 28, 2025 · 1 revision

Wilma Security Wiki

Welcome to the Wilma Security Wiki - your guide to understanding and securing AWS Bedrock deployments.

What is Wilma?

Wilma is an AWS Bedrock security configuration checker that helps you identify and fix security vulnerabilities in your GenAI deployments. Unlike traditional cloud security tools, Wilma focuses on threats unique to Large Language Models and generative AI systems.

Why GenAI Security is Different

Traditional application security focuses on SQL injection, XSS, and authentication bypasses. GenAI introduces entirely new attack vectors:

  • Prompt Injection: Attackers manipulate AI behavior through crafted inputs
  • Data Poisoning: Compromising training data or RAG knowledge bases
  • Model Extraction: Stealing your fine-tuned models
  • Excessive Agency: AI agents performing unauthorized actions
  • PII Leakage: Models memorizing and exposing sensitive data

Wilma checks for these and 40+ other GenAI-specific security issues.

Quick Links

Understanding the Threats

AWS Bedrock Security Deep Dives

Using Wilma

Security Best Practices

Educational Philosophy

This wiki teaches why things are insecure, not just what to fix. Each security check includes:

  1. The Threat: What attack does this prevent?
  2. Real-World Impact: What happens if exploited?
  3. How Attackers Think: Understanding the attacker's perspective
  4. Defense in Depth: Why multiple layers matter
  5. Remediation: Concrete steps to fix the issue

Contributing

Found a gap in our security coverage? Have a real-world attack example to share? Contribute to Wilma on GitHub.

License

Wilma is free and open source under GPL v3. Built by Ethan Troy for the GenAI security community.


Start Learning: GenAI Security Fundamentals →

Clone this wiki locally