# Manifest

> Reduce your AI costs
## What is Manifest?
Manifest is a smart model router for agents and AI applications: it sends each query to the right model, saving up to 70% on AI costs.
- 🔀 Routing based on complexity, specificity, and custom HTTP headers
- 🎛️ Mix your providers: API keys, subscriptions, local models, custom providers
- 📊 Track every single dollar, set up notifications and spending limits
- 🚑 Fall back to different models when queries fail
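The routing idea can be sketched in a few lines. This is a toy illustration, not Manifest's actual algorithm: it uses prompt length and a code-ish keyword as a crude complexity proxy and picks between two of the Anthropic models listed in the provider table below.

```python
# Toy sketch of complexity-based routing -- illustrative only;
# Manifest's real routing logic is not shown in this README.
def route(prompt: str) -> str:
    # Crude complexity proxy: long or code-heavy prompts go to a
    # stronger (pricier) model, short ones to a cheap, fast model.
    hard = len(prompt) > 500 or "def " in prompt
    return "claude-opus-4-7" if hard else "claude-haiku-4-5"

print(route("What's 2+2?"))  # a trivial query lands on the cheap model
```

In practice Manifest also weighs specificity and custom HTTP headers, per the feature list above; the point here is only that the router, not the caller, picks the model.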
## Quick start

### Cloud version
Go to [app.manifest.build](https://app.manifest.build) and follow the guide.
### Self-hosted
Manifest ships as a Docker image. One command:

```bash
bash <(curl -sSL https://raw.githubusercontent.com/mnfst/manifest/main/docker/install.sh)
```
Open http://localhost:2099 and sign up; the first account you create becomes the admin. Full self-hosting guide: [docker/DOCKER_README.md](docker/DOCKER_README.md).
The legacy `manifest` npm package is deprecated and no longer published.
## Providers
Manifest connects to 300+ models across 16 providers, plus any custom OpenAI- or Anthropic-compatible provider. Bring your own API key, reuse a paid subscription you already have, or run models locally; everything routes through the same `/auto` endpoint.
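Since custom providers speak the OpenAI or Anthropic wire format, a call through the router can be sketched with a standard chat-completions body. The exact base URL path and auth header below are assumptions for a local install, not a documented contract; adjust them to your deployment.

```python
import json
import urllib.request

# Assumed endpoint for a self-hosted instance on the default port 2099;
# the "/auto/v1/chat/completions" path is illustrative, not documented here.
BASE_URL = "http://localhost:2099/auto/v1/chat/completions"

# OpenAI-style chat-completions body; "auto" lets Manifest pick the model.
body = json.dumps({
    "model": "auto",
    "messages": [{"role": "user", "content": "Hello!"}],
}).encode()

req = urllib.request.Request(
    BASE_URL,
    data=body,
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer YOUR_KEY",  # placeholder credential
    },
)
# urllib.request.urlopen(req) would send it against a running instance.
print(req.get_full_url())
```

Because the body is plain OpenAI wire format, any OpenAI-compatible SDK pointed at your Manifest base URL would work the same way.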
| Provider | API key | Subscription | Featured models |
|---|---|---|---|
| OpenAI | ✅ | ✅ ChatGPT Plus / Pro / Team | gpt-5, gpt-5-mini, o4, o4-mini |
| Anthropic | ✅ | ✅ Claude Max / Pro | claude-opus-4-7, claude-sonnet-4-6, claude-haiku-4-5 |
| Google (Gemini) | ✅ | — | gemini-2.5-pro, gemini-2.5-flash, gemini-2.0-flash |
| xAI | ✅ | — | grok-4, grok-3, grok-code-fast |
| DeepSeek | ✅ | — | deepseek-v3.2, deepseek-r1 |
| Mistral | ✅ | — | mistral-large, codestral, magistral |
| Qwen (Alibaba) | ✅ | — | qwen3-max, qwen3-coder, qwq-32b |
| Moonshot (Kimi) | ✅ | — | kimi-k2, moonshot-v1-128k |
| MiniMax | ✅ | ✅ MiniMax Coding Plan | minimax-m2, abab7-chat-preview |
| Z.ai (Zhipu) | ✅ | ✅ GLM Coding Plan | glm-4.6, glm-4.5-air |
| OpenCode | — | ✅ Go subscription | Routes via OpenCode Go catalog |
| Ollama | 🖥️ Local | ✅ Ollama Cloud | Any GGUF model, port 11434 |
| LM Studio | 🖥️ Local | — | Any GGUF model, port 1234 |
| llama.cpp | 🖥️ Local | — | Any GGUF model, port 8080 |
| OpenRouter | ✅ | — | Routes to 300+ models across labs |
| GitHub Copilot | — | ✅ Copilot subscription | OAuth, no API key needed |
| Custom (OpenAI/Anthropic-compatible) | ✅ | — | Any /v1/chat/completions or /v1/messages endpoint |