VOOZH about

URL: https://github.com/topics/prompt-injection-defense

⇱ prompt-injection-defense · GitHub Topics · GitHub


Skip to content
#

prompt-injection-defense

Here are 36 public repositories matching this topic...

PromptMe is an educational project that showcases security vulnerabilities in large language models (LLMs) and their web integrations. It includes 10 hands-on challenges inspired by the OWASP LLM Top 10, demonstrating how these vulnerabilities can be discovered and exploited in real-world scenarios.

  • Updated
  • Python

A comprehensive reference for securing Large Language Models (LLMs). Covers OWASP GenAI Top-10 risks, prompt injection, adversarial attacks, real-world incidents, and practical defenses. Includes catalogs of red-teaming tools, guardrails, and mitigation strategies to help developers, researchers, and security teams deploy AI responsibly.

  • Updated

Detect and sanitize prompt injection attacks in Rails apps. Protects against direct injection (users hacking your LLMs via form inputs) and indirect injection (malicious prompts stored for other LLMs to scrape). ~70 detection patterns across 7 attack categories with configurable sensitivity levels. Now includes resource extraction detection pattern

  • Updated
  • Ruby

This repository is meant to be an inspiration and rapid-start workspace for building apps quickly. It combines experiments, starter flows, and reusable tooling in one growing repo so ideas can move into working prototypes with minimal setup.

  • Updated
  • HTML

Improve this page

Add a description, image, and links to the prompt-injection-defense topic page so that developers can more easily learn about it.

Curate this topic

Add this topic to your repo

To associate your repository with the prompt-injection-defense topic, visit your repo's landing page and select "manage topics."

Learn more

You can’t perform that action at this time.