| Version | Supported |
|---|---|
| latest | ✅ |
We take the security of AI Prompts Library seriously. If you discover a security vulnerability, please follow the process below. Do not disclose a vulnerability publicly until it has been addressed.
Please report security vulnerabilities by emailing:
Include the following information:
- Description of the vulnerability
- Steps to reproduce
- Potential impact
- Any suggested fixes (optional)
Response timeline:
- Initial Response: Within 48 hours
- Status Update: Within 7 days
- Resolution: Depends on severity and complexity
After you submit a report:
- We will acknowledge receipt of your report
- We will investigate and validate the issue
- We will work on a fix and coordinate disclosure
- We will credit you in the release notes (unless you prefer anonymity)
This security policy applies to:
- The AI Prompts Library repository
- All prompt files and documentation
- Configuration files and GitHub Actions workflows
The following are out of scope:
- Prompts that generate harmful content (report these as a regular issue with the `security` label)
- Third-party services linked from prompts
- User-generated content in issues/discussions
When contributing prompts, please ensure:
- No Secrets: Never include API keys, tokens, or credentials
- No PII: Avoid personal identifiable information in examples
- Safe Examples: Use placeholder values like `YOUR_API_KEY` or `example@email.com`
- Responsible AI: Consider potential misuse of prompts
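One way to enforce the "no secrets" and "safe placeholders" rules is a simple scan over prompt files before committing. The sketch below is a minimal, hypothetical pre-commit-style check (not part of this repository's tooling): the regex patterns and the `find_suspect_lines` helper are illustrative assumptions, and lines containing known placeholders such as `YOUR_API_KEY` are allowed through.

```python
import re

# Hypothetical patterns for strings that look like real credentials.
# These are illustrative, not exhaustive; real scanners use larger rule sets.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),     # AWS access key ID format
    re.compile(r"ghp_[A-Za-z0-9]{36}"),  # GitHub personal access token format
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"][^'\"]{16,}['\"]"),  # inline key
]

# Placeholder values recommended above are always acceptable.
ALLOWED_PLACEHOLDERS = {"YOUR_API_KEY", "example@email.com"}

def find_suspect_lines(text: str) -> list[tuple[int, str]]:
    """Return (line_number, line) pairs that match a secret pattern."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        if any(ph in line for ph in ALLOWED_PLACEHOLDERS):
            continue  # placeholder values are fine
        if any(p.search(line) for p in SECRET_PATTERNS):
            hits.append((lineno, line))
    return hits

sample = "api_key = 'sk-live-0123456789abcdef0123'\nUse YOUR_API_KEY here.\n"
print(find_suspect_lines(sample))  # flags line 1, skips the placeholder line
```

A check like this catches accidental pastes early; it does not replace careful review, since pattern-based scanning misses secrets in unfamiliar formats.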
We appreciate security researchers who help keep our community safe. Contributors who report valid security issues will be:
- Credited in our security advisories (if desired)
- Added to our Hall of Fame (coming soon)
- Thanked in release notes
Thank you for helping keep AI Prompts Library secure! 🙏