Zaguan Python SDK
v0.1.0
The official Python SDK for Zaguan CoreX. Build production-ready AI applications with both synchronous and asynchronous support, full type hints with Pydantic validation, and comprehensive coverage of OpenAI-compatible endpoints.
Installation
Install via pip:
```bash
pip install zaguan-sdk
```

Requirements:

- Python 3.8 or higher
- Dependencies: httpx and pydantic
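Since the SDK requires Python 3.8 or higher, it can help to verify the interpreter version before installing. A small stdlib sketch (not part of the SDK itself):

```python
import sys

# zaguan-sdk requires Python 3.8 or higher; fail fast with a clear message otherwise.
if sys.version_info < (3, 8):
    raise RuntimeError(
        f"zaguan-sdk requires Python 3.8+, found {sys.version.split()[0]}"
    )
print("Python version OK:", sys.version.split()[0])
```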
Quick Start
Basic Chat Completion
```python
from zaguan_sdk import ZaguanClient, ChatRequest, Message

# Initialize the client
client = ZaguanClient(
    base_url="https://api.zaguanai.com",
    api_key="your-api-key"
)

# Simple chat completion
response = client.chat(ChatRequest(
    model="openai/gpt-4o-mini",
    messages=[Message(role="user", content="What is Python?")]
))

print(response.choices[0].message.content)
```

Streaming Responses
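Each streamed chunk carries an incremental delta rather than the whole message, so reconstructing the full reply means concatenating the non-empty deltas. A self-contained sketch using stand-in chunk objects (the shapes are assumed for illustration; real chunks come from `chat_stream`):

```python
from dataclasses import dataclass
from typing import List, Optional

# Stand-in shapes mirroring the chunk objects yielded by chat_stream (hypothetical).
@dataclass
class Delta:
    content: Optional[str]

@dataclass
class Choice:
    delta: Delta

@dataclass
class Chunk:
    choices: List[Choice]

def collect_stream(chunks):
    """Concatenate non-empty deltas into the full message text."""
    parts = []
    for chunk in chunks:
        delta = chunk.choices[0].delta.content
        if delta:  # final chunks often carry an empty/None delta
            parts.append(delta)
    return "".join(parts)

mock = [Chunk([Choice(Delta(t))]) for t in ["Once ", "upon ", "a time", None]]
print(collect_stream(mock))  # → Once upon a time
```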
```python
# Stream responses in real-time
for chunk in client.chat_stream(ChatRequest(
    model="openai/gpt-4o-mini",
    messages=[Message(role="user", content="Tell me a story")]
)):
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
```

Async Support
```python
import asyncio
from zaguan_sdk import AsyncZaguanClient, ChatRequest, Message

async def main():
    async with AsyncZaguanClient(
        base_url="https://api.zaguanai.com",
        api_key="your-api-key"
    ) as client:
        response = await client.chat(ChatRequest(
            model="anthropic/claude-3-5-sonnet",
            messages=[Message(role="user", content="Hello!")]
        ))
        print(response.choices[0].message.content)

asyncio.run(main())
```

Key Features
Core Capabilities
- Sync & async clients (ZaguanClient and AsyncZaguanClient)
- Full type hints and Pydantic validation
- Real-time response streaming
- OpenAI-compatible interface
- Multi-provider support (15+ providers)
Complete API Coverage
- Chat completions & function calling
- Audio processing (Whisper, TTS)
- Image generation (DALL-E)
- Text embeddings
- Content moderation & credits tracking
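Function calling on OpenAI-compatible endpoints is driven by a JSON tool schema sent alongside the request. A sketch of one tool definition (the `get_weather` function is illustrative, not part of the SDK):

```python
import json

# An OpenAI-compatible tool schema; get_weather is an illustrative example.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}]

print(json.dumps(tools, indent=2))
```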
Advanced Examples
Multi-Provider Usage
```python
# Switch providers without changing code
models = [
    "openai/gpt-4o",
    "anthropic/claude-3-5-sonnet",
    "google/gemini-2.0-flash",
    "deepseek/deepseek-chat"
]

for model in models:
    response = client.chat(ChatRequest(
        model=model,
        messages=[Message(role="user", content="Hi!")]
    ))
    print(f"{model}: {response.choices[0].message.content}")
```

Embeddings for Semantic Search
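Semantic search ranks documents by the similarity of their embedding vectors to a query vector, typically using cosine similarity. A self-contained sketch with toy 3-dimensional vectors (real embeddings have hundreds of dimensions; no SDK call involved):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings" standing in for real model output.
query = [1.0, 0.0, 1.0]
docs = {
    "Python is great": [0.9, 0.1, 0.8],
    "I love coding": [0.0, 1.0, 0.1],
}

# Rank documents by similarity to the query, best match first.
ranked = sorted(docs, key=lambda d: cosine_similarity(query, docs[d]), reverse=True)
print(ranked[0])  # → Python is great
```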
```python
from zaguan_sdk import EmbeddingRequest

# Create embeddings
response = client.create_embeddings(EmbeddingRequest(
    model="openai/text-embedding-3-small",
    input=["Python is great", "I love coding"]
))

for embedding in response.data:
    print(f"Embedding: {embedding.embedding[:5]}...")  # First 5 dimensions
```

Audio Transcription
```python
# Transcribe an audio file
transcription = client.create_transcription(
    file_path="meeting.mp3",
    model="whisper-1",
    language="en"
)

print(transcription.text)
```

Image Generation
```python
from zaguan_sdk import ImageGenerationRequest

# Generate an image with DALL-E
response = client.create_image(ImageGenerationRequest(
    prompt="A serene mountain landscape at sunset",
    model="dall-e-3",
    size="1024x1024",
    quality="hd"
))

print(response.data[0].url)
```

Supported Providers
Access 15+ AI providers through one unified API:
- OpenAI: GPT-4o, GPT-4, GPT-3.5
- Anthropic: Claude 3.5 Sonnet, Opus
- Google: Gemini 2.0, Gemini Pro
- DeepSeek: DeepSeek-V3, Reasoner
- Alibaba: Qwen 2.5
- xAI: Grok 2
- Perplexity: Sonar
- Cohere: Command R+
- Groq: Llama 3, Mixtral
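The chat examples above address each provider through a `provider/model` identifier, so the provider can be read off the prefix. A minimal illustration of that convention (`split_model_id` is illustrative, not an SDK helper):

```python
def split_model_id(model_id: str):
    """Split a 'provider/model' identifier into its two parts."""
    provider, _, name = model_id.partition("/")
    return provider, name

print(split_model_id("anthropic/claude-3-5-sonnet"))  # → ('anthropic', 'claude-3-5-sonnet')
```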
Migration from OpenAI SDK
Switching from the OpenAI SDK? Only the client construction changes:

Before (OpenAI SDK)

```python
from openai import OpenAI

client = OpenAI(api_key="sk-...")
```

After (Zaguan SDK)

```python
from zaguan_sdk import ZaguanClient

client = ZaguanClient(
    base_url="https://api.zaguanai.com",
    api_key="your-zaguan-key"
)

# Everything else stays exactly the same! 🎉
response = client.chat.completions.create(...)
```

Resources & Support
Need help? Check out the examples directory or contact us at [email protected]