Provider plugins abstract AI model backends behind a unified interface. Every provider returns the same AIResponse shape, regardless of the upstream API. This means the agent loop, cost tracking, and observability work identically across providers.
## Installation

```bash
npm install @supaproxy/providers
```
## ProviderPlugin interface

```ts
interface ProviderPlugin<TConfig = unknown> {
  /** Unique type identifier, e.g. "anthropic", "openai" */
  type: string;
  /** Human-readable name */
  name: string;
  /** Short description */
  description: string;
  /** Zod schema for configuration fields */
  configSchema: ZodSchema<TConfig>;
  /** Send a completion request and return a normalized response */
  complete(request: AIRequest, config: TConfig): Promise<AIResponse>;
}
```
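Because every plugin satisfies the same interface, calling code can dispatch to any provider without branching on its type. A minimal sketch, with simplified local stand-ins for the library types and an illustrative `echoProvider` that is not part of the package:

```typescript
// Simplified local stand-ins for the library types (illustrative only;
// configSchema is omitted to keep the sketch dependency-free).
interface AIRequest { messages: { role: string; content: string }[] }
interface AIUsage { inputTokens: number; outputTokens: number; cost_usd: number }
interface AIResponse {
  content: string;
  toolCalls: unknown[];
  usage: AIUsage;
  stopReason: "end_turn" | "tool_use" | "max_tokens";
}
interface ProviderPlugin<TConfig = unknown> {
  type: string;
  name: string;
  description: string;
  complete(request: AIRequest, config: TConfig): Promise<AIResponse>;
}

// A toy provider that echoes the last message back.
const echoProvider: ProviderPlugin<{ prefix: string }> = {
  type: "echo",
  name: "Echo",
  description: "Echoes the last user message.",
  async complete(request, config) {
    const last = request.messages[request.messages.length - 1];
    return {
      content: `${config.prefix}${last.content}`,
      toolCalls: [],
      usage: { inputTokens: 1, outputTokens: 1, cost_usd: 0 },
      stopReason: "end_turn",
    };
  },
};

// The caller never inspects provider-specific shapes: any plugin works here.
async function runCompletion<T>(
  plugin: ProviderPlugin<T>,
  request: AIRequest,
  config: T,
): Promise<AIResponse> {
  return plugin.complete(request, config);
}

runCompletion(echoProvider, { messages: [{ role: "user", content: "hi" }] }, { prefix: "> " })
  .then((res) => console.log(res.content)); // → "> hi"
```

This is what lets the agent loop, cost tracking, and observability stay provider-agnostic: they only ever see `AIResponse`.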
## Normalized response

All providers return the same AIResponse shape. The server never handles provider-specific response formats outside the plugin.

```ts
interface AIResponse {
  /** The model's text response */
  content: string;
  /** Tool calls requested by the model */
  toolCalls: ToolCall[];
  /** Token usage and cost */
  usage: AIUsage;
  /** Stop reason */
  stopReason: "end_turn" | "tool_use" | "max_tokens";
}

interface AIUsage {
  inputTokens: number;
  outputTokens: number;
  /** Provider calculates cost based on its own pricing */
  cost_usd: number;
}
```
The `cost_usd` field is calculated by the provider plugin, not the server. Each provider owns its pricing logic, so reported costs stay accurate when a provider's pricing changes.
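For a hosted API, a plugin would typically derive `cost_usd` from its own per-token rate table. A sketch with placeholder rates (the model name and prices below are made up for illustration, not real pricing):

```typescript
// Illustrative per-million-token rates; a real plugin would keep these
// in sync with its provider's current pricing.
const PRICING = {
  "example-model": { inputPerMTok: 3.0, outputPerMTok: 15.0 },
} as const;

function computeCostUsd(
  model: keyof typeof PRICING,
  inputTokens: number,
  outputTokens: number,
): number {
  const rate = PRICING[model];
  return (
    (inputTokens / 1_000_000) * rate.inputPerMTok +
    (outputTokens / 1_000_000) * rate.outputPerMTok
  );
}

// 2M input tokens at $3/MTok plus 1M output tokens at $15/MTok:
const usage = {
  inputTokens: 2_000_000,
  outputTokens: 1_000_000,
  cost_usd: computeCostUsd("example-model", 2_000_000, 1_000_000),
};
console.log(usage.cost_usd); // → 21
```

Because this arithmetic lives inside the plugin, the server simply trusts the `AIUsage` it receives.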
## Available plugins

### anthropicProvider

Claude models via the Anthropic API.

| Config field | Type | Description |
|---|---|---|
| `apiKey` | string | Anthropic API key |
| `model` | string | Model ID (e.g. `claude-sonnet-4-20250514`) |
| `maxTokens` | number | Maximum output tokens |

```ts
import { anthropicProvider } from "@supaproxy/providers";
```
### openaiProvider

GPT models via the OpenAI API.

| Config field | Type | Description |
|---|---|---|
| `apiKey` | string | OpenAI API key |
| `model` | string | Model ID (e.g. `gpt-4o`) |
| `maxTokens` | number | Maximum output tokens |

```ts
import { openaiProvider } from "@supaproxy/providers";
```
## Adding a custom provider

To add a new provider (e.g. a self-hosted model via Ollama):

### 1. Implement the interface
```ts
import { z } from "zod";
import type { ProviderPlugin, AIRequest, AIResponse } from "@supaproxy/providers";

const configSchema = z.object({
  baseUrl: z.string().url().default("http://localhost:11434"),
  model: z.string().default("llama3"),
});

type OllamaConfig = z.infer<typeof configSchema>;

export const ollamaProvider: ProviderPlugin<OllamaConfig> = {
  type: "ollama",
  name: "Ollama",
  description: "Self-hosted models via Ollama.",
  configSchema,
  async complete(request, config): Promise<AIResponse> {
    const response = await fetch(`${config.baseUrl}/api/chat`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        model: config.model,
        messages: request.messages,
        tools: request.tools,
        stream: false, // Ollama streams by default; request a single JSON response
      }),
    });
    if (!response.ok) {
      throw new Error(`Ollama request failed: ${response.status}`);
    }
    const data = await response.json();
    // Normalize the Ollama response to the shared AIResponse shape
    return {
      content: data.message.content,
      toolCalls: parseToolCalls(data.message.tool_calls),
      usage: {
        inputTokens: data.prompt_eval_count,
        outputTokens: data.eval_count,
        cost_usd: 0, // Self-hosted, no API cost
      },
      stopReason: data.message.tool_calls ? "tool_use" : "end_turn",
    };
  },
};
```
### 2. Register the plugin

```ts
import { providerRegistry } from "@supaproxy/providers";
import { ollamaProvider } from "./ollama";

providerRegistry.register(ollamaProvider);
```

The server will use the registered provider when a workspace is configured with `type: "ollama"`.
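To make the resolution step concrete, here is a sketch of how a registry keyed by plugin `type` can map a workspace's configured provider to the right plugin. The `ProviderRegistry` class and its `register`/`get` methods are a hypothetical shape for illustration; the library's actual `providerRegistry` API may differ, and the types are minimal local stand-ins:

```typescript
// Minimal stand-ins for the library types (illustrative only).
interface AIRequest { messages: { role: string; content: string }[] }
interface AIResponse { content: string; stopReason: string }
interface ProviderPlugin<TConfig = unknown> {
  type: string;
  complete(request: AIRequest, config: TConfig): Promise<AIResponse>;
}

// Hypothetical registry: plugins are stored by their unique `type` string.
class ProviderRegistry {
  private plugins = new Map<string, ProviderPlugin<any>>();

  register(plugin: ProviderPlugin<any>): void {
    this.plugins.set(plugin.type, plugin);
  }

  get(type: string): ProviderPlugin<any> {
    const plugin = this.plugins.get(type);
    if (!plugin) throw new Error(`Unknown provider type: ${type}`);
    return plugin;
  }
}

const registry = new ProviderRegistry();

// Register a stub plugin standing in for ollamaProvider.
registry.register({
  type: "ollama",
  async complete(_request) {
    return { content: "ok", stopReason: "end_turn" };
  },
});

// A workspace configured with type: "ollama" resolves to the registered plugin.
const plugin = registry.get("ollama");
plugin.complete({ messages: [] }, {}).then((res) => console.log(res.content)); // → "ok"
```

Keying on `type` is what makes new providers drop-in: registering a plugin is the only change needed for the server to route requests to it.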