Arcjet JavaScript (JS) / TypeScript SDK. Stop bots and automated attacks from burning your AI budget, leaking data, or misusing tools with Arcjet's AI security building blocks.
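For context, here is a minimal sketch of how an SDK like this is typically attached to a route handler. It assumes the @arcjet/next adapter, an ARCJET_KEY environment variable, and the shield/detectBot rules; rule names and options can vary between SDK versions, so treat it as illustrative rather than canonical.

```ts
// Minimal sketch (assumptions: @arcjet/next adapter, ARCJET_KEY env var;
// rule options may differ by SDK version, so check the Arcjet docs).
import arcjet, { detectBot, shield } from "@arcjet/next";
import { NextResponse } from "next/server";

const aj = arcjet({
  key: process.env.ARCJET_KEY!, // Arcjet site key
  rules: [
    shield({ mode: "LIVE" }),               // block common attack patterns
    detectBot({ mode: "LIVE", allow: [] }), // deny all detected bots
  ],
});

export async function GET(req: Request) {
  const decision = await aj.protect(req);
  if (decision.isDenied()) {
    return NextResponse.json({ error: "Forbidden" }, { status: 403 });
  }
  return NextResponse.json({ message: "Hello from a protected route" });
}
```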
PromptMe is an educational project that showcases security vulnerabilities in large language models (LLMs) and their web integrations. It includes 10 hands-on challenges inspired by the OWASP LLM Top 10, demonstrating how these vulnerabilities can be discovered and exploited in real-world scenarios.
A comprehensive reference for securing Large Language Models (LLMs). Covers OWASP GenAI Top-10 risks, prompt injection, adversarial attacks, real-world incidents, and practical defenses. Includes catalogs of red-teaming tools, guardrails, and mitigation strategies to help developers, researchers, and security teams deploy AI responsibly.
Open source prompt injection protection for Agents calling tools (via MCP, CLI or direct function calling). Detect and defend against prompt injection attacks. 22MB, CPU-only, < 10ms latency.
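As a sketch of the general pattern such scanners implement (not this project's actual API; detectInjection and the threshold below are hypothetical), an agent can pass untrusted tool output through a local detector before it reaches the model:

```ts
// Hypothetical sketch: screen untrusted tool output with a local detector
// before forwarding it to the agent. detectInjection and its threshold are
// illustrative placeholders, not this project's real interface.
type Detection = { score: number; matched: string[] };

async function detectInjection(text: string): Promise<Detection> {
  // Placeholder: a real detector would use a local CPU model or richer pattern set.
  const patterns = [/ignore (all|previous) instructions/i, /reveal.*system prompt/i];
  const matched = patterns.filter((p) => p.test(text)).map((p) => p.source);
  return { score: matched.length > 0 ? 1 : 0, matched };
}

async function guardedToolCall(
  callTool: () => Promise<string>,
  threshold = 0.5,
): Promise<string> {
  const output = await callTool();
  const result = await detectInjection(output);
  if (result.score >= threshold) {
    // Withhold suspected injected instructions from the agent's context.
    return `[tool output withheld: possible prompt injection (${result.matched.join(", ")})]`;
  }
  return output;
}
```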
Arcjet Python SDK. Stop bots and automated attacks from burning your AI budget, leaking data, or misusing tools with Arcjet's AI security building blocks.
[New Preprint] PISanitizer: Preventing Prompt Injection to Long-Context LLMs via Prompt Sanitization
AgentWard – Built for all, hardened for OpenClaw.
Protect your LLMs from prompt injection and jailbreak attacks. Easy-to-use Python package with multiple detection methods, CLI tool, and FastAPI integration.
Detect and sanitize prompt injection attacks in Rails apps. Protects against direct injection (users hacking your LLMs via form inputs) and indirect injection (malicious prompts stored for other LLMs to scrape). ~70 detection patterns across 7 attack categories with configurable sensitivity levels. Now includes a resource extraction detection pattern.
A CLI-driven security proxy that scans every HTTP request for threats using the Citadel AI engine — paid per request via the x402 protocol.
Silent dependency injection through AI documentation pipelines. 240 isolated Docker runs proving Context Hub's zero-sanitization MCP server lets poisoned docs compromise developer projects without warning.
A multi-layered prompt injection detection system built with Laravel.
🛡️ Explore tools for securing Large Language Models, uncovering their strengths and weaknesses in the realm of offensive and defensive security.
Linux sandbox for Cursor using bwrap and Linux namespaces
MCP Config Monitor for Claude Desktop
LangGuard Python Library
This repository serves as inspiration and a rapid-start workspace for building apps quickly. It combines experiments, starter flows, and reusable tooling in one growing repo so ideas can move into working prototypes with minimal setup.
Prompt Rejector protects your AI-powered applications from prompt injection attacks, jailbreak attempts, and traditional web vulnerabilities.
玄武 Genbu: AI defense and protection module. Prevents prompt injection, memory poisoning, chained attacks, and cross-instance contamination. Based on the LDRIT design.