# Providers
Providers are thin adapters that connect PromptC to LLM APIs.
## Supported Providers
| Provider | SDK Required | Default Model |
|---|---|---|
| openai | openai | gpt-4o-mini |
| anthropic | @anthropic-ai/sdk | claude-3-5-sonnet-20241022 |
| google | @google/generative-ai | gemini-2.0-flash |
| groq | None (uses fetch) | llama-3.3-70b-versatile |
| cerebras | None (uses fetch) | llama3.1-8b |
| ollama | None (uses fetch) | llama3.2 |
## Creating Providers

Use the `createProvider` factory function:
```ts
import { createProvider } from "@mzhub/promptc";

// OpenAI
const openai = createProvider("openai", {
  apiKey: process.env.OPENAI_API_KEY,
  defaultModel: "gpt-4o" // Optional
});

// Anthropic
const anthropic = createProvider("anthropic", {
  apiKey: process.env.ANTHROPIC_API_KEY,
  defaultModel: "claude-3-5-sonnet-20241022"
});

// Google Gemini
const google = createProvider("google", {
  apiKey: process.env.GOOGLE_API_KEY
});

// Groq (fast inference)
const groq = createProvider("groq", {
  apiKey: process.env.GROQ_API_KEY
});

// Cerebras (fast inference)
const cerebras = createProvider("cerebras", {
  apiKey: process.env.CEREBRAS_API_KEY
});

// Ollama (local, no API key needed)
const ollama = createProvider("ollama", {
  baseUrl: "http://localhost:11434" // Optional, this is the default
});
```

## Provider Configuration
| Option | Type | Description |
|---|---|---|
| apiKey | string | API key (reads from env by default) |
| baseUrl | string | Custom API endpoint |
| defaultModel | string | Model to use for all requests |
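For example, `baseUrl` can point a provider at a self-hosted or proxy endpoint. A minimal sketch (the proxy URL below is hypothetical):

```ts
// Route OpenAI requests through a hypothetical OpenAI-compatible proxy.
const proxied = createProvider("openai", {
  apiKey: process.env.OPENAI_API_KEY,
  baseUrl: "https://llm-proxy.example.com/v1", // hypothetical endpoint
  defaultModel: "gpt-4o-mini"
});
```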
## Environment Variables

If you don't provide an `apiKey`, providers automatically read from environment variables: `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, `GOOGLE_API_KEY`, `GROQ_API_KEY`, `CEREBRAS_API_KEY`.
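For example, with `GROQ_API_KEY` exported in your shell, the key can be omitted. A minimal sketch, assuming an empty options object is accepted in that case:

```ts
// GROQ_API_KEY is read from the environment; no apiKey is passed here.
// Assumption: an empty options object is accepted when the key comes from env.
const groq = createProvider("groq", {});
```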
## Switching Providers
You can use the same schema with different providers:
```ts
import { defineSchema, Predict, createProvider, z } from "@mzhub/promptc";

const schema = defineSchema({
  description: "Summarize text",
  inputs: { text: z.string() },
  outputs: { summary: z.string() }
});

// Create multiple providers
const openai = createProvider("openai", { apiKey: "..." });
const anthropic = createProvider("anthropic", { apiKey: "..." });

// Same schema, different providers
const openaiProgram = new Predict(schema, openai);
const anthropicProgram = new Predict(schema, anthropic);

// Compare outputs
const result1 = await openaiProgram.run({ text: "..." });
const result2 = await anthropicProgram.run({ text: "..." });
```

## Local Models with Ollama
Run models locally with Ollama:
```bash
# Install Ollama from https://ollama.ai
# Pull a model
ollama pull llama3.2
```

```ts
const ollama = createProvider("ollama", {
  defaultModel: "llama3.2"
});

const program = new Predict(schema, ollama);
const result = await program.run({ text: "..." });
```

> **No API Costs**
> Ollama runs entirely locally, so you can iterate without API costs. Great for development and testing.
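Because every provider exposes the same interface, a common pattern is to run against Ollama locally and a hosted provider in production. A sketch, with the environment check as an illustrative assumption:

```ts
// Hypothetical setup: local Ollama in development, OpenAI in production.
const provider =
  process.env.NODE_ENV === "production"
    ? createProvider("openai", { apiKey: process.env.OPENAI_API_KEY })
    : createProvider("ollama", { defaultModel: "llama3.2" });

const program = new Predict(schema, provider);
```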
## Provider Interface
All providers implement this interface:
```ts
interface LLMProvider {
  name: string;
  defaultModel: string;
  complete(params: CompletionParams): Promise<CompletionResult>;
}

interface CompletionParams {
  prompt: string;
  model?: string;
  temperature?: number;
  maxTokens?: number;
  responseFormat?: "text" | "json";
}

interface CompletionResult {
  content: string;
  usage: {
    inputTokens: number;
    outputTokens: number;
  };
}
```
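Any object satisfying this interface can, in principle, be passed to `Predict` in place of a built-in provider. Below is a minimal stub sketch for testing; it assumes these types are exported from `@mzhub/promptc` (declare them locally if not), and the echo behavior and zero token counts are placeholders:

```ts
import type { LLMProvider, CompletionParams, CompletionResult } from "@mzhub/promptc";

// Hypothetical stub provider: echoes the prompt back instead of calling an API.
const echoProvider: LLMProvider = {
  name: "echo",
  defaultModel: "echo-1",
  async complete(params: CompletionParams): Promise<CompletionResult> {
    return {
      content: params.prompt,
      usage: { inputTokens: 0, outputTokens: 0 }
    };
  }
};
```

Next: Evaluators →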