A zaguán is an architectural passage between the street and the home. We are the intelligent passage between your code and every AI model on earth.
Stop writing custom adapters for Anthropic, Gemini, and Mistral. Zaguán is the translation layer that lets you swap models by changing one line of code.
Each provider has different parameters, formats, and quirks. Zaguán handles all of it automatically. This is what truly differentiates us from "just a router."
Standard OpenAI SDK call - works everywhere
client.chat.completions.create({
  model: "claude-3-5-sonnet",
  messages: [{
    role: "user",
    content: "Explain quantum computing"
  }],
  temperature: 0.7
})

Automatically optimized for Anthropic's API
// Adapted for Anthropic's API
{
  model: "claude-3-5-sonnet-20241022",
  max_tokens: 4096, // ← Required parameter added
  messages: [...],
  temperature: 0.7,
  system: "You are a helpful assistant" // ← Extracted from messages and formatted
}

The Difference:
Result: Your request works perfectly, first try. No trial-and-error, no reading provider docs.
Other gateways forward your errors. We translate your intent.
Built for solo developers, small teams, and agencies who want to ship AI features fast without maintaining their own gateway.
Plug in like OpenAI. Behind the scenes, we talk to Anthropic, OpenAI, Ollama, etc. with the right options and formats.
We normalize and enrich requests per model: system prompts, parameters, safety options - so you don't have to learn every vendor's quirks.
Clear SDKs, examples, and sane defaults. No need for your own prompt router or home-grown gateway.
Logs and metrics that help you debug prompts and model choices without building your own dashboards.
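To make the normalization concrete, here is a minimal sketch of the kind of translation a gateway performs for Anthropic's API. The function name and the 4096-token default are illustrative assumptions, not Zaguán's actual implementation.

```python
# Illustrative sketch only - this is the kind of adaptation a gateway
# performs, not Zaguán's actual code.

def adapt_for_anthropic(request: dict) -> dict:
    """Translate an OpenAI-style chat request into Anthropic's shape:
    pull the system prompt out of `messages` into a top-level `system`
    field, and add the required `max_tokens` parameter when missing."""
    adapted = dict(request)
    system_parts = [m["content"] for m in request["messages"] if m["role"] == "system"]
    adapted["messages"] = [m for m in request["messages"] if m["role"] != "system"]
    if system_parts:
        adapted["system"] = "\n".join(system_parts)
    adapted.setdefault("max_tokens", 4096)  # Anthropic requires max_tokens
    return adapted

req = {
    "model": "claude-3-5-sonnet",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant"},
        {"role": "user", "content": "Explain quantum computing"},
    ],
    "temperature": 0.7,
}
adapted = adapt_for_anthropic(req)  # system extracted, max_tokens added
```

With a gateway, this translation happens server-side for every provider, so your client code never changes.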
OpenAI changes pricing? Google ships a better model? Anthropic goes down? With Zaguán, you aren't married to a provider. Switch your entire backend in seconds, not days.
• Locked into one provider's pricing
• Outages take your entire app down
• Switching requires weeks of refactoring
• No negotiating leverage with vendors
Virtual Models like zaguan/deepseek-r1-0528 intelligently route across multiple providers. Switch providers in one line of code. Automatic failover means zero downtime. You keep vendors honest.
Switch providers in seconds, not days. No code refactoring required. You own your infrastructure; your vendors don't.
Health checks, circuit breakers, and intelligent routing. Zero downtime when providers fail.
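For readers unfamiliar with the pattern, here is a minimal sketch of a circuit breaker: the mechanism that stops traffic to a failing provider and probes it again after a cool-down. The thresholds and class are illustrative assumptions, not Zaguán's actual configuration.

```python
import time

class CircuitBreaker:
    """Minimal circuit-breaker sketch. Thresholds are illustrative
    assumptions, not Zaguán's actual settings."""

    def __init__(self, failure_threshold: int = 3, reset_after: float = 30.0):
        self.failure_threshold = failure_threshold
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed (healthy)

    def allow_request(self) -> bool:
        if self.opened_at is None:
            return True
        # After the cool-down, allow a probe request through (half-open state).
        return time.monotonic() - self.opened_at >= self.reset_after

    def record_success(self):
        self.failures = 0
        self.opened_at = None  # close the circuit again

    def record_failure(self):
        self.failures += 1
        if self.failures >= self.failure_threshold:
            self.opened_at = time.monotonic()  # open: stop sending traffic
```

When a provider's breaker opens, a gateway can reroute requests to a healthy provider instead of surfacing the failure to your app.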
What's Next? We're building Premium Virtual Models for SOTA commercial LLMs (GPT-5, Claude Opus, o1). As a Founder, you'll be first to test them.
Virtual Models work with the standard OpenAI SDK
Standard Call
response = client.chat.completions.create(
    model="claude-3-haiku",
    messages=[...]
)

Virtual Model (Beta)
# Automatic failover, zero downtime
response = client.chat.completions.create(
    model="zaguan/deepseek-r1-0528",
    messages=[...]
)

Because Zaguán isn't a bloated VC startup, we don't have to pivot to crypto or ads to satisfy investors. We just serve API requests reliably. Profitable, and not going anywhere.
One flat monthly price. No token billing surprises, no hidden costs. You know exactly what you'll pay.
No API key management. Just sign up and start building. We handle the complexity for you.
Self-sustaining infrastructure. Not a side project that'll disappear. Built for the long haul.
Full access to each provider's features - Anthropic's structured prompts, Gemini's tuning, and more.
Official SDKs for Python, TypeScript, and Go. Type-safe, fully documented, and battle-tested with comprehensive test coverage.
pip install zaguan-sdk
npm i @zaguan_ai/sdk
go get zaguan-sdk-go

Clear SDKs, real examples in popular stacks (Node, Python, Go), and responsive support from the creator. Early users help shape the roadmap - you're not just a customer.
Too many developers have been burned by services that shut down. We're building for the long term:
This isn't a side project. It's my full focus.
We handle authentication, rate limits, and API versioning across all providers. You focus on shipping features.
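Rate-limit handling typically means retrying with exponential backoff and jitter when a provider returns HTTP 429. Here is a small sketch of such a retry schedule; the base delay and cap are illustrative assumptions, not Zaguán's actual settings.

```python
import random

def backoff_delays(attempts: int, base: float = 0.5, cap: float = 8.0) -> list:
    """Exponential backoff with "full jitter": each retry waits a random
    time between 0 and min(cap, base * 2**attempt) seconds. Illustrative
    sketch only - parameters are assumptions, not Zaguán's settings."""
    delays = []
    for attempt in range(attempts):
        ceiling = min(cap, base * (2 ** attempt))
        delays.append(random.uniform(0, ceiling))
    return delays

# Schedule for up to 4 retries after a 429 response.
schedule = backoff_delays(4)
```

A gateway runs this loop for you, so a transient 429 never reaches your application code.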
Zaguán is bootstrapped and profitable. By joining the Founder's Plan, you aren't just buying credits - you're funding independent infrastructure. In return, your pricing is capped at €15/mo for life.
Standard pricing will be €39/month after the first 200 founders. Your price is locked forever, no matter what features we add.
Compare:
• ChatGPT Plus: $20/month
• Claude Pro: $20/month
• Grok (xAI): $30/month
• Perplexity Pro: $20/month
• Total for 4 chatbot services: $90/month
Premium Tiers:
• ChatGPT Pro: $200/month (GPT-4o unlimited)
• Claude Max: €90/month (Claude Opus unlimited)
• Super Grok: $300/month (unlimited access)
• Total for 3 premium services: $590/month
Zaguán Founder's: €15/month (46 models, all providers)
Savings: 83-97% vs. premium subscriptions
This isn't a discount. It's a partnership. Your founder price is locked in forever.
Stop managing keys and start building. Get access to 500+ AI models through one simple API.
Today we're focused on solo devs, small teams, and agencies. If you need dedicated Enterprise features (custom contracts, on-prem deployment, SSO, dedicated support), contact us - we're planning our roadmap based on real demand.
We're actively developing:
As a Founder member, you'll be first to test new features and help shape what we build next.
Most gateways are just proxies that pass requests through. Zaguán's translation layer actively adapts your OpenAI-style calls to work optimally with each provider's specific API, parameters, and features. You get better results with less code, and you're not locked into any single vendor.
Join our newsletter for exclusive insights on AI development, new model releases, and tips for building better AI applications.