Production-ready Multi-AI Agents framework with self-reflection. Fastest agent instantiation (3.77μs), 100+ LLM support via LiteLLM, MCP integration, agentic workflows (route/parallel/loop/repeat), built-in memory, Python & JS SDKs.
PraisonAI 🦞 — Hire a 24/7 AI Workforce. Stop writing boilerplate and start shipping autonomous, self-improving agents that research, plan, and execute tasks across your apps. From one agent to an entire organization, deployed in 5 lines of code.
curl -fsSL https://praison.ai/install.sh | bash
pip install praisonai
export TAVILY_API_KEY=xxxxx
🎯 Use Cases
AI agents solving real-world problems across industries:
| Use Case | Description |
|---|---|
| 🔍 Research & Analysis | Conduct deep research, gather information, and generate insights from multiple sources automatically |
| 💻 Code Generation | Write, debug, and refactor code with AI agents that understand your codebase and requirements |
| ✍️ Content Creation | Generate blog posts, documentation, marketing copy, and technical writing with multi-agent teams |
| 📊 Data Pipelines | Extract, transform, and analyze data from APIs, databases, and web sources automatically |
| 🤖 Customer Support | Deploy 24/7 support bots on Telegram, Discord, Slack with memory and knowledge-backed responses |
| ⚙️ Workflow Automation | Automate multi-step business processes with agents that hand off tasks, verify results, and self-correct |
🚀 Meet your first Agent (Under 1 Minute)
- Install the lightweight core SDK:
pip install praisonaiagents
export OPENAI_API_KEY="your-api-key"
- Run your first autonomous agent:
from praisonaiagents import Agent
# Give your agent a goal, and watch it work.
agent = Agent(instructions="You are a senior data analyst.")
agent.start("Analyze the top 3 tech trends of 2026 and format as a markdown table.")
🌌 The PraisonAI Ecosystem
Start simple with the core SDK, or expand to full visual builders and dashboards when you're ready.
- Core SDK (`praisonaiagents`): For pure Python development. `pip install praisonaiagents`
- 💻 PraisonAI CLI (`praisonai`): For terminal-based developers. `pip install praisonai`
- 🦞 Claw Dashboard: Connect agents directly to Telegram, Slack, or Discord. `pip install "praisonai[claw]"`
- 🔗 Flow Visual Builder: Drag-and-drop workflow creation. `pip install "praisonai[flow]"`
- 🤖 PraisonAI UI: Clean chat interface. `pip install "praisonai[ui]"`
JavaScript SDK
npm install praisonai
🧠 Supported Providers & Features
Powered by 100+ LLMs (OpenAI, Anthropic, Gemini & local models).
View all 24 providers with examples
| Provider | Example |
|---|---|
| OpenAI | Example |
| Anthropic | Example |
| Google Gemini | Example |
| Ollama | Example |
| Groq | Example |
| DeepSeek | Example |
| xAI Grok | Example |
| Mistral | Example |
| Cohere | Example |
| Perplexity | Example |
| Fireworks | Example |
| Together AI | Example |
| OpenRouter | Example |
| HuggingFace | Example |
| Azure OpenAI | Example |
| AWS Bedrock | Example |
| Google Vertex | Example |
| Databricks | Example |
| Cloudflare | Example |
| AI21 | Example |
| Replicate | Example |
| SageMaker | Example |
| Moonshot | Example |
| vLLM | Example |
"Grok 3 customer support" — Elon Musk quoting PraisonAI's tutorial
🌟 Why PraisonAI?
| Feature | How | |
|---|---|---|
| 🔌 | MCP Protocol — stdio, HTTP, WebSocket, SSE | tools=MCP("npx ...") |
| 🧠 | Planning Mode — plan → execute → reason | planning=True |
| 🔍 | Deep Research — multi-step autonomous research | Docs |
| 🤖 | External Agents — orchestrate Claude Code, Gemini CLI, Codex | Docs |
| 🔄 | Agent Handoffs — seamless conversation passing | handoff=True |
| 🛡️ | Guardrails — input/output validation | Docs |
| 🌐 | Web Search + Fetch — native browsing | web_search=True |
| 🪞 | Self Reflection — agent reviews its own output | Docs |
| 🔀 | Workflow Patterns — route, parallel, loop, repeat | Docs |
| 🧠 | Memory (zero deps) — works out of the box | memory=True |
View all 25 features
| Feature | How | |
|---|---|---|
| 💡 | Prompt Caching — reduce latency + cost | prompt_caching=True |
| 💾 | Sessions + Auto-Save — persistent state across restarts | auto_save="my-project" |
| 💭 | Thinking Budgets — control reasoning depth | thinking_budget=1024 |
| 📚 | RAG + Quality-Based RAG — auto quality scoring retrieval | Docs |
| 📊 | Model Router — auto-routes to cheapest capable model | Docs |
| 🧊 | Shadow Git Checkpoints — auto-rollback on failure | Docs |
| 📡 | A2A Protocol — agent-to-agent interop | Docs |
| 📏 | Context Compaction — never hit token limits | Docs |
| 📡 | Telemetry — OpenTelemetry traces, spans, metrics | Docs |
| 📜 | Policy Engine — declarative agent behavior control | Docs |
| 🔄 | Background Tasks — fire-and-forget agents | Docs |
| 🔁 | Doom Loop Detection — auto-recovery from stuck agents | Docs |
| 🕸️ | Graph Memory — Neo4j-style relationship tracking | Docs |
| 🏖️ | Sandbox Execution — isolated code execution | Docs |
| 🖥️ | Bot Gateway — multi-agent routing across channels | Docs |
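The workflow patterns named above (route, parallel, loop, repeat) are general orchestration ideas. As a framework-free illustration, here is a minimal sketch of routing and parallel fan-out using plain Python callables standing in for agents; the names `route`, `parallel`, and the toy handlers are hypothetical, not PraisonAI's actual API:

```python
from concurrent.futures import ThreadPoolExecutor

def route(task: str, handlers: dict, classify) -> str:
    """Route a task to the handler chosen by a classifier (the 'route' pattern)."""
    return handlers[classify(task)](task)

def parallel(task: str, workers: list) -> list:
    """Run several workers on the same task concurrently (the 'parallel' pattern)."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda w: w(task), workers))

# Toy "agents" standing in for LLM calls
handlers = {
    "math": lambda t: "42",
    "text": lambda t: t.upper(),
}
classify = lambda t: "math" if any(c.isdigit() for c in t) else "text"

print(route("what is 6*7", handlers, classify))   # → 42
print(parallel("hello", [str.upper, str.title]))  # → ['HELLO', 'Hello']
```

In PraisonAI itself these patterns are exposed through the workflow API rather than hand-rolled functions; the sketch only shows the control flow each keyword implies.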
📘 Using Python Code
1. Single Agent
from praisonaiagents import Agent
agent = Agent(instructions="You are a helpful AI assistant")
agent.start("Write a movie script about a robot on Mars")
2. Multi Agents
from praisonaiagents import Agent, Agents
research_agent = Agent(instructions="Research about AI")
summarise_agent = Agent(instructions="Summarise research agent's findings")
agents = Agents(agents=[research_agent, summarise_agent])
agents.start()
3. MCP (Model Context Protocol)
from praisonaiagents import Agent, MCP
# stdio - Local NPX/Python servers
agent = Agent(tools=MCP("npx @modelcontextprotocol/server-memory"))
# Streamable HTTP - Production servers
agent = Agent(tools=MCP("https://api.example.com/mcp"))
# WebSocket - Real-time bidirectional
agent = Agent(tools=MCP("wss://api.example.com/mcp", auth_token="token"))
# With environment variables
agent = Agent(
    tools=MCP(
        command="npx",
        args=["-y", "@modelcontextprotocol/server-brave-search"],
        env={"BRAVE_API_KEY": "your-key"}
    )
)
📖 Full MCP docs — stdio, HTTP, WebSocket, SSE transports
4. Custom Tools
from praisonaiagents import Agent, tool
@tool
def search(query: str) -> str:
    """Search the web for information."""
    return f"Results for: {query}"

@tool
def calculate(expression: str) -> float:
    """Evaluate a math expression."""
    # Note: eval() executes arbitrary code; validate or sandbox input in production.
    return eval(expression)

agent = Agent(
    instructions="You are a helpful assistant",
    tools=[search, calculate]
)
agent.start("Search for AI news and calculate 15*4")
📖 Full tools docs — BaseTool, tool packages, 100+ built-in tools
5. Persistence (Databases)
from praisonaiagents import Agent, db
agent = Agent(
    name="Assistant",
    db=db(database_url="postgresql://localhost/mydb"),
    session_id="my-session"
)
agent.chat("Hello!")  # Auto-persists messages, runs, traces
📖 Full persistence docs — PostgreSQL, MySQL, SQLite, MongoDB, Redis, and 20+ more
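To make the "auto-persists messages" idea concrete, here is a dependency-free sketch of session-scoped message storage using stdlib `sqlite3`. The `ChatStore` class and its schema are hypothetical illustrations of the concept, not PraisonAI's internal implementation:

```python
import sqlite3

class ChatStore:
    """Minimal sketch of per-session message persistence (hypothetical schema)."""

    def __init__(self, url: str = ":memory:", session_id: str = "default"):
        self.conn = sqlite3.connect(url)
        self.session_id = session_id
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS messages (session TEXT, role TEXT, content TEXT)"
        )

    def save(self, role: str, content: str) -> None:
        # Every message is written under the current session id.
        self.conn.execute(
            "INSERT INTO messages VALUES (?, ?, ?)",
            (self.session_id, role, content),
        )
        self.conn.commit()

    def history(self) -> list:
        # Replaying history later is what lets a restarted agent resume a session.
        rows = self.conn.execute(
            "SELECT role, content FROM messages WHERE session = ?",
            (self.session_id,),
        )
        return rows.fetchall()

store = ChatStore(session_id="my-session")
store.save("user", "Hello!")
print(store.history())  # → [('user', 'Hello!')]
```

The real `db()` helper additionally persists runs and traces and supports many backends; the sketch only shows why a `session_id` is the key that scopes stored state.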
6. PraisonAI Claw 🦞 (Dashboard UI)
Connect your AI agents to Telegram, Discord, Slack, WhatsApp and more — all from a single command.
pip install "praisonai[claw]"
praisonai claw
Open http://localhost:8082 — the dashboard comes with 13 built-in pages: Chat, Agents, Memory, Knowledge, Channels, Guardrails, Cron, and more. Add messaging channels directly from the UI.
📖 Full Claw docs — platform tokens, CLI options, Docker, and YAML agent mode
7. Langflow Integration 🔗 (Visual Flow Builder)
Build multi-agent workflows visually with drag-and-drop components in Langflow.
pip install "praisonai[flow]"
praisonai flow
Open http://localhost:7861 — use the Agent and Agent Team components to create sequential or parallel workflows. Connect Chat Input → Agent Team → Chat Output for instant multi-agent pipelines.
📖 Full Flow docs — visual agent building, component reference, and deployment
8. PraisonAI UI 🤖 (Clean Chat)
Lightweight chat interface for your AI agents.
pip install "praisonai[ui]"
praisonai ui
📄 Using YAML (No Code)
Example 1: Two Agents Working Together
Create agents.yaml:
framework: praisonai
topic: "Write a blog post about AI"
agents:
  researcher:
    role: Research Analyst
    goal: Research AI trends and gather information
    instructions: "Find accurate information about AI trends"
  writer:
    role: Content Writer
    goal: Write engaging blog posts
    instructions: "Write clear, engaging content based on research"
Run with:
praisonai agents.yaml
The agents automatically work together in sequence.
Example 2: Agent with Custom Tool
Create two files in the same folder:
agents.yaml:
framework: praisonai
topic: "Calculate the sum of 25 and 15"
agents:
  calculator_agent:
    role: Calculator
    goal: Perform calculations
    instructions: "Use the add_numbers tool to help with calculations"
    tools:
      - add_numbers
tools.py:
def add_numbers(a: float, b: float) -> float:
    """
    Add two numbers together.

    Args:
        a: First number
        b: Second number

    Returns:
        The sum of a and b
    """
    return a + b
Run with:
praisonai agents.yaml
💡 Tips:
- Use the function name (e.g., `add_numbers`) in the tools list, not the file name
- Tools in `tools.py` are automatically discovered
- The function's docstring helps the AI understand how to use it
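Automatic tool discovery of this kind can be illustrated with the stdlib `inspect` module: collect a module's public functions together with their docstrings so an LLM can be told what each tool does. The `discover_tools` helper below is a hypothetical sketch of the idea, not PraisonAI's loader:

```python
import inspect
import types

def discover_tools(module) -> dict:
    """Collect public functions and their docstrings from a tools module."""
    return {
        name: (fn, inspect.getdoc(fn))
        for name, fn in vars(module).items()
        if isinstance(fn, types.FunctionType) and not name.startswith("_")
    }

# Simulate a tools.py module in memory for the example
tools = types.ModuleType("tools")
exec(
    "def add_numbers(a: float, b: float) -> float:\n"
    '    """Add two numbers together."""\n'
    "    return a + b\n",
    tools.__dict__,
)

registry = discover_tools(tools)
fn, doc = registry["add_numbers"]
print(doc)         # → Add two numbers together.
print(fn(25, 15))  # → 40
```

This is why the docstring matters: it is the only description of the tool the model sees, so vague docstrings lead to vague tool use.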
🎯 CLI Quick Reference
| Category | Commands |
|---|---|
| Execution | praisonai, --auto, --interactive, --chat |
| Research | research, --query-rewrite, --deep-research |
| Planning | --planning, --planning-tools, --planning-reasoning |
| Workflows | workflow run, workflow list, workflow auto |
| Memory | memory show, memory add, memory search, memory clear |
| Knowledge | knowledge add, knowledge query, knowledge list |
| Sessions | session list, session resume, session delete |
| Tools | tools list, tools info, tools search |
| MCP | mcp list, mcp create, mcp enable |
| Development | commit, docs, checkpoint, hooks |
| Scheduling | schedule start, schedule list, schedule stop |
✨ Key Features
🤖 Core Agents
| Feature | Code | Docs |
|---|---|---|
| Single Agent | Example | 📖 |
| Multi Agents | Example | 📖 |
| Auto Agents | Example | 📖 |
| Self Reflection AI Agents | Example | 📖 |
| Reasoning AI Agents | Example | 📖 |
| Multi Modal AI Agents | Example | 📖 |
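Self-reflection, listed above, is a generate → critique → revise loop. Here is a framework-free sketch of that control flow with toy callables standing in for LLM calls; `reflect` and both stand-ins are hypothetical names, not PraisonAI's API:

```python
def reflect(generate, critique, max_rounds: int = 3):
    """Generate a draft, let a critic review it, and revise until accepted."""
    draft = generate(None)
    for _ in range(max_rounds):
        feedback = critique(draft)
        if feedback is None:  # critic is satisfied
            break
        draft = generate(feedback)  # revise using the critique
    return draft

# Toy generator/critic standing in for two LLM calls
def generate(feedback):
    return "final answer" if feedback else "rough draft"

def critique(draft):
    return None if draft == "final answer" else "be more specific"

print(reflect(generate, critique))  # → final answer
```

The bounded `max_rounds` is the important design choice: without it, a critic that never accepts would loop forever.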
🔄 Workflows
| Feature | Code | Docs |
|---|---|---|
| Simple Workflow | Example | 📖 |
| Workflow with Agents | Example | 📖 |
| Agentic Routing (route()) | Example | 📖 |
| Parallel Execution (parallel()) | Example | 📖 |
| Loop over List/CSV (loop()) | Example | 📖 |
| Evaluator-Optimizer (repeat()) | Example | 📖 |
| Conditional Steps | Example | 📖 |
| Workflow Branching | Example | 📖 |
| Workflow Early Stop | Example | 📖 |
| Workflow Checkpoints | Example | 📖 |
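The "Loop over List/CSV" pattern in the table above simply applies a worker to every row of a dataset. A stdlib-only sketch (the `loop` helper and the inline CSV are illustrative, not the framework's API):

```python
import csv
import io

def loop(rows, worker):
    """Apply a worker to every item, collecting results (the loop() pattern)."""
    return [worker(row) for row in rows]

# Inline CSV stands in for a file on disk
data = io.StringIO("name,score\nada,90\ngrace,95\n")
rows = list(csv.DictReader(data))

summaries = loop(rows, lambda r: f"{r['name']}: {r['score']}")
print(summaries)  # → ['ada: 90', 'grace: 95']
```

In the real workflow API each row would typically be handed to an agent rather than a lambda, but the iteration shape is the same.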
💻 Code & Development
| Feature | Code | Docs |
|---|---|---|
| Code Interpreter Agents | Example | 📖 |
| AI Code Editing Tools | Example | 📖 |
| External Agents (All) | Example | 📖 |
| Claude Code CLI | Example | 📖 |
| Gemini CLI | Example | 📖 |
| Codex CLI | Example | 📖 |
| Cursor CLI | Example | 📖 |
🧠 Memory & Knowledge
| Feature | Code | Docs |
|---|---|---|
| Memory (Short & Long Term) | Example | 📖 |
| File-Based Memory | Example | 📖 |
| Claude Memory Tool | Example | 📖 |
| Add Custom Knowledge | Example | 📖 |
| RAG Agents | Example | 📖 |
| Chat with PDF Agents | Example | 📖 |
| Data Readers (PDF, DOCX, etc.) | CLI | 📖 |
| Vector Store Selection | CLI | 📖 |
| Retrieval Strategies | CLI | 📖 |
| Rerankers | CLI | 📖 |
| Index Types (Vector/Keyword/Hybrid) | CLI | 📖 |
| Query Engines (Sub-Question, etc.) | CLI | 📖 |
🔬 Research & Intelligence
| Feature | Code | Docs |
|---|---|---|
| Deep Research Agents | Example | 📖 |
| Query Rewriter Agent | Example | 📖 |
| Native Web Search | Example | 📖 |
| Built-in Search Tools | Example | 📖 |
| Unified Web Search | Example | 📖 |
| Web Fetch (Anthropic) | Example | 📖 |
📋 Planning & Execution
| Feature | Code | Docs |
|---|---|---|
| Planning Mode | Example | 📖 |
| Planning Tools | Example | 📖 |
| Planning Reasoning | Example | 📖 |
| Prompt Chaining | Example | 📖 |
| Evaluator Optimiser | Example | 📖 |
| Orchestrator Workers | Example | 📖 |
👥 Specialized Agents
| Feature | Code | Docs |
|---|---|---|
| Data Analyst Agent | Example | 📖 |
| Finance Agent | Example | 📖 |
| Shopping Agent | Example | 📖 |
| Recommendation Agent | Example | 📖 |
| Wikipedia Agent | Example | 📖 |
| Programming Agent | Example | 📖 |
| Math Agents | Example | 📖 |
| Markdown Agent | Example | 📖 |
| Prompt Expander Agent | Example | 📖 |
🎨 Media & Multimodal
| Feature | Code | Docs |
|---|---|---|
| Image Generation Agent | Example | 📖 |
| Image to Text Agent | Example | 📖 |
| Video Agent | Example | 📖 |
| Camera Integration | Example | 📖 |
🔌 Protocols & Integration
| Feature | Code | Docs |
|---|---|---|
| MCP Transports | Example | 📖 |
| WebSocket MCP | Example | 📖 |
| MCP Security | Example | 📖 |
| MCP Resumability | Example | 📖 |
| MCP Config Management | Docs | 📖 |
| LangChain Integrated Agents | Example | 📖 |
🛡️ Safety & Control
| Feature | Code | Docs |
|---|---|---|
| Guardrails | Example | 📖 |
| Human Approval | Example | 📖 |
| Rules & Instructions | Docs | 📖 |
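A guardrail is essentially a validator wrapped around an agent call, with rejected output triggering a retry. The decorator below is a hypothetical sketch of that pattern using plain callables; it is not PraisonAI's guardrail API:

```python
def guardrail(validate, max_retries: int = 2):
    """Wrap an agent call so output failing validation triggers a retry."""
    def wrap(agent_call):
        def run(prompt):
            for _ in range(max_retries + 1):
                out = agent_call(prompt)
                ok, reason = validate(out)
                if ok:
                    return out
                # Feed the rejection reason back so the next attempt can improve
                prompt = f"{prompt}\n(Previous answer rejected: {reason})"
            raise ValueError("output failed validation after retries")
        return run
    return wrap

# Toy "agent" that succeeds on its second attempt
attempts = []
def flaky_agent(prompt):
    attempts.append(prompt)
    return "short" if len(attempts) == 1 else "a sufficiently long answer"

checked = guardrail(lambda out: (len(out) > 10, "too short"))(flaky_agent)
print(checked("Explain guardrails"))  # → a sufficiently long answer
```

Real guardrails also validate inputs and can enforce structured schemas; the sketch only shows the retry-on-rejection loop.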
⚙️ Advanced Features
| Feature | Code | Docs |
|---|---|---|
| Async & Parallel Processing | Example | 📖 |
| Parallelisation | Example | 📖 |
| Repetitive Agents | Example | 📖 |
| Agent Handoffs | Example | 📖 |
| Stateful Agents | Example | 📖 |
| Autonomous Workflow | Example | 📖 |
| Structured Output Agents | Example | 📖 |
| Model Router | Example | 📖 |
| Prompt Caching | Example | 📖 |
| Fast Context | Example | 📖 |
🛠️ Tools & Configuration
| Feature | Code | Docs |
|---|---|---|
| 100+ Custom Tools | Example | 📖 |
| YAML Configuration | Example | 📖 |
| 100+ LLM Support | Example | 📖 |
| Callback Agents | Example | 📖 |
| Hooks | Example | 📖 |
| Middleware System | Example | 📖 |
| Configurable Model | Example | 📖 |
| Rate Limiter | Example | 📖 |
| Injected Tool State | Example | 📖 |
| Shadow Git Checkpoints | Example | 📖 |
| Background Tasks | Example | 📖 |
| Policy Engine | Example | 📖 |
| Thinking Budgets | Example | 📖 |
| Output Styles | Example | 📖 |
| Context Compaction | Example | 📖 |
📊 Monitoring & Management
| Feature | Code | Docs |
|---|---|---|
| Sessions Management | Example | 📖 |
| Auto-Save Sessions | Docs | 📖 |
| History in Context | Docs | 📖 |
| Telemetry | Example | 📖 |
| Langfuse Tracing | Docs | 📖 |
| Project Docs (.praison/docs/) | Docs | 📖 |
| AI Commit Messages | Docs | 📖 |
| @Mentions in Prompts | Docs | 📖 |
🖥️ CLI Features
| Feature | Code | Docs |
|---|---|---|
| Slash Commands | Example | 📖 |
| Autonomy Modes | Example | 📖 |
| Cost Tracking | Example | 📖 |
| Repository Map | Example | 📖 |
| Interactive TUI | Example | 📖 |
| Git Integration | Example | 📖 |
| Sandbox Execution | Example | 📖 |
| CLI Compare | Example | 📖 |
| Profile/Benchmark | Docs | 📖 |
| Auto Mode | Docs | 📖 |
| Init | Docs | 📖 |
| File Input | Docs | 📖 |
| Final Agent | Docs | 📖 |
| Max Tokens | Docs | 📖 |
🧪 Evaluation
| Feature | Code | Docs |
|---|---|---|
| Accuracy Evaluation | Example | 📖 |
| Performance Evaluation | Example | 📖 |
| Reliability Evaluation | Example | 📖 |
| Criteria Evaluation | Example | 📖 |
💻 Using JavaScript Code
npm install praisonai
export OPENAI_API_KEY=xxxxxxxxxxxxxxxxxxxxxx
const { Agent } = require('praisonai');
const agent = new Agent({ instructions: 'You are a helpful AI assistant' });
agent.start('Write a movie script about a robot on Mars');
⚡ Performance
PraisonAI is built for speed, with agent instantiation in under 4μs. This reduces overhead, improves responsiveness, and helps multi-agent systems scale efficiently in real-world production workloads.
| Performance Metric | PraisonAI |
|---|---|
| Avg Instantiation Time | 3.77 μs |
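Micro-benchmarks like the 3.77 μs figure are typically measured with `timeit`. The sketch below shows the methodology on a plain stand-in class so it runs without PraisonAI installed; the real number depends on `praisonaiagents.Agent` itself and your hardware:

```python
import timeit

class Agent:
    """Stand-in for an agent class; only used to illustrate the measurement."""
    def __init__(self, instructions: str):
        self.instructions = instructions

n = 100_000
# Total wall time for n instantiations, averaged to microseconds per object
seconds = timeit.timeit(lambda: Agent(instructions="hi"), number=n)
print(f"avg instantiation: {seconds / n * 1e6:.2f} µs")
```

To reproduce the published figure, swap the stand-in for `from praisonaiagents import Agent` and run on an otherwise idle machine.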
⭐ Star History
🔍 Langfuse Tracing
pip install "praisonai[langfuse]"
praisonai langfuse
🎓 Video Tutorials
Learn PraisonAI through our comprehensive video series:
View all 22 video tutorials
👥 Contributing
We welcome contributions! Fork the repo, create a branch, and submit a PR → Contributing Guide.
❓ FAQ & Troubleshooting
ModuleNotFoundError: No module named 'praisonaiagents'
Install the package:
pip install praisonaiagents
API key not found / Authentication error
Ensure your API key is set:
export OPENAI_API_KEY=your_key_here
For other providers, see Models docs.
How do I use a local model (Ollama)?
# Start Ollama server first
ollama serve
# Set environment variable
export OPENAI_BASE_URL=http://localhost:11434/v1
See Models docs for more details.
How do I persist conversations to a database?
Use the db parameter:
from praisonaiagents import Agent, db
agent = Agent(
    name="Assistant",
    db=db(database_url="postgresql://localhost/mydb"),
    session_id="my-session"
)
See Persistence docs for supported databases.
How do I enable agent memory?
from praisonaiagents import Agent
agent = Agent(
    name="Assistant",
    memory=True,  # Enables file-based memory (no extra deps!)
    user_id="user123"
)
See Memory docs for more options.
How do I run multiple agents together?
from praisonaiagents import Agent, Agents
agent1 = Agent(instructions="Research topics")
agent2 = Agent(instructions="Summarize findings")
agents = Agents(agents=[agent1, agent2])
agents.start()
See Agents docs for more examples.
How do I use MCP tools?
from praisonaiagents import Agent, MCP
agent = Agent(
    tools=MCP("npx @modelcontextprotocol/server-memory")
)
See MCP docs for all transport options.
Getting Help
Made with ❤️ by the PraisonAI Team
📚 Documentation • GitHub • ▶️ YouTube • 𝕏 X • 💼 LinkedIn