Runtime-secured AI tooling framework for production-grade LLM applications, protecting against prompt injection, jailbreaks, and adversarial attacks.
- Updated
- Python
![]() |
VOOZH | about |
Runtime-secured AI tooling framework for production-grade LLM applications, protecting against prompt injection, jailbreaks, and adversarial attacks.
LLM Penetration Testing Framework - Discover vulnerabilities in AI applications before attackers do. 100attacks + AI-powered adaptive mode.
🛡️ Protect LLM applications with PromptShields, a robust security framework designed to prevent prompt injection, jailbreaks, and data leakage.
A deterministic runtime security SDK for LLM applications that prevents prompt injection, data leakage, and rogue agent behavior using high-performance, auditable rule-based guards instead of probabilistic AI inference.
Add a description, image, and links to the openai-security topic page so that developers can more easily learn about it.
To associate your repository with the openai-security topic, visit your repo's landing page and select "manage topics."