Here are 6 public repositories matching this topic:
- Open-source LLM router & AI cost optimizer. Routes simple prompts to cheap/local models and complex ones to premium models, automatically. A drop-in OpenAI-compatible proxy for Claude Code, Codex, Cursor, and OpenClaw. Saves 40-70% on AI API costs. Self-hosted, no middleman.
- Blazing-fast, zero-cost local LLM router. Classifies and routes prompts to specialized AI models with <1ms latency using heuristic rules.
- PiPiMink: routes every prompt to the best LLM for you specifically.
- Intelligent LLM router that dynamically selects the best AI model for each prompt based on complexity, cost, and performance.
- Routes prompts to local large language models instantly, cutting costs and latency with smart, zero-overhead classification in under 1 millisecond.
- MCP server that intelligently routes prompts across Claude, Gemini, and GPT-4o based on task type, minimising token cost without sacrificing quality.
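Several of the projects above share the same core idea: classify a prompt with cheap rules, then dispatch it to a model tier. A minimal sketch of that heuristic-routing approach is below; the model names, patterns, and length threshold are illustrative assumptions, not taken from any of the listed repositories.

```python
import re

# Assumed model tiers: a cheap local model vs. a premium hosted model.
CHEAP_MODEL = "local-small"
PREMIUM_MODEL = "premium-large"

# Heuristic signals that a prompt is "complex" enough for a premium model.
COMPLEX_PATTERNS = [
    r"\bprove\b", r"\brefactor\b", r"\bdebug\b",
    r"step[- ]by[- ]step",
    r"```",  # fenced code blocks usually call for a stronger model
]

def route(prompt: str, length_threshold: int = 400) -> str:
    """Return the model name to use for this prompt, via rule-based scoring."""
    # Long prompts tend to carry more context and harder tasks.
    if len(prompt) > length_threshold:
        return PREMIUM_MODEL
    # Any complexity keyword escalates to the premium tier.
    for pattern in COMPLEX_PATTERNS:
        if re.search(pattern, prompt, flags=re.IGNORECASE):
            return PREMIUM_MODEL
    return CHEAP_MODEL

print(route("What is the capital of France?"))        # -> local-small
print(route("Debug this stack trace step by step."))  # -> premium-large
```

Because the classifier is just string length plus a handful of regexes, it adds effectively no latency, which is how routers of this kind can advertise sub-millisecond classification.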
Improve this page: add a description, image, and links to the prompt-routing topic page so that developers can more easily learn about it.

Add this topic to your repo: to associate your repository with the prompt-routing topic, visit your repo's landing page and select "manage topics."