A single API to route, optimize, and safely access any LLM provider. Smart fallbacks, prompt security, and unified billing in one open-source orchestrator.
import OpenAI from 'openai';

// Point the official OpenAI SDK at SmartInfer's gateway; no other code changes are needed.
const client = new OpenAI({
  baseURL: 'https://smartinfer.io/v1',
  apiKey: 'sk-si-...',
});

const completion = await client.chat.completions.create({
  model: 'smartinfer/my-pipeline', // route the request through a SmartInfer pipeline
  messages: [{ role: 'user', content: 'Hello, AI' }],
});

Route requests intelligently, enforce security policies, and monitor usage — all from one control plane.
Connect multiple providers — OpenRouter, OpenAI, Anthropic — and configure smart fallbacks. Never experience downtime when a single provider fails.
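Conceptually, the fallback behavior works like the sketch below. This is a client-side illustration only, not SmartInfer's server-side implementation; the `Provider` type and provider names are illustrative.

```typescript
// Try each configured provider in order and return the first success.
// If every provider fails, surface all of the errors at once.
type Provider = {
  name: string;
  complete: (prompt: string) => Promise<string>;
};

async function completeWithFallback(
  providers: Provider[],
  prompt: string,
): Promise<string> {
  const errors: string[] = [];
  for (const provider of providers) {
    try {
      return await provider.complete(prompt);
    } catch (err) {
      // Record the failure and fall through to the next provider.
      errors.push(`${provider.name}: ${(err as Error).message}`);
    }
  }
  throw new Error(`All providers failed: ${errors.join('; ')}`);
}
```

A request only errors out when the entire chain is exhausted, which is what makes a single provider outage invisible to callers.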
Integrated with LLM Guard to automatically scan prompts and responses for PII, secret leaks, and prompt injection attempts before they reach the provider.
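To make the pre-flight scan concrete, here is a deliberately simplified stand-in. It is not LLM Guard's actual API (LLM Guard is a Python library with far more sophisticated scanners); the patterns below only illustrate the idea of blocking a prompt before it leaves your network.

```typescript
// Toy PII scan: flag prompts that look like they contain an email address,
// a US SSN, or an API key. Pattern names and coverage are illustrative only.
const PII_PATTERNS: Record<string, RegExp> = {
  email: /[\w.+-]+@[\w-]+\.[\w.]+/,
  ssn: /\b\d{3}-\d{2}-\d{4}\b/,
  apiKey: /\bsk-[A-Za-z0-9]{8,}\b/,
};

function scanPrompt(prompt: string): { allowed: boolean; findings: string[] } {
  const findings = Object.entries(PII_PATTERNS)
    .filter(([, pattern]) => pattern.test(prompt))
    .map(([label]) => label);
  return { allowed: findings.length === 0, findings };
}
```

A flagged prompt can then be rejected or redacted before any provider ever sees it.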
Self-hostable and built for transparency. Maintain complete control over your credentials and routing logic without vendor lock-in.
Build multi-step AI pipelines with a visual editor. Chain models, add guardrails, and deploy complex workflows in minutes.
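Under the hood, a visual pipeline is essentially an ordered list of steps, each feeding its output into the next. A minimal sketch of that idea, assuming a hypothetical step shape (the `Step` type and step names below are not SmartInfer's schema):

```typescript
// Compose steps into a single callable pipeline: each step receives the
// previous step's output. Guardrails are just steps that transform or reject.
type Step = (input: string) => Promise<string>;

function pipeline(...steps: Step[]): Step {
  return async (input) => {
    let value = input;
    for (const step of steps) {
      value = await step(value); // each step feeds the next
    }
    return value;
  };
}
```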
Real-time dashboards for token usage, latency, costs, and error rates across every provider and pipeline you run.
Per-team API keys with granular rate limits, budget caps, and audit logs. Full multi-tenant isolation out of the box.
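Budget caps come down to a per-team spending ledger checked on every request. A minimal sketch, assuming an in-memory ledger (the `teamId` and `budgetUsd` names are illustrative, not SmartInfer's schema):

```typescript
// Enforce a per-team budget cap: reject any charge that would push a team
// over its limit, while leaving other teams' budgets untouched.
class TeamBudget {
  private spent = new Map<string, number>();

  constructor(private budgetUsd: number) {}

  // Record a request's cost; returns false once the cap would be exceeded.
  charge(teamId: string, costUsd: number): boolean {
    const current = this.spent.get(teamId) ?? 0;
    if (current + costUsd > this.budgetUsd) return false;
    this.spent.set(teamId, current + costUsd);
    return true;
  }
}
```

Rate limits work the same way with a time window instead of a dollar cap, and rejected charges are what feed the audit log.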
Get started in under 5 minutes. Free and open source, forever.