# createProvider
Factory function to create LLM provider instances.
## Signature
```ts
function createProvider(
  name: ProviderName,
  config?: ProviderConfig
): LLMProvider

type ProviderName =
  | "openai"
  | "anthropic"
  | "google"
  | "groq"
  | "cerebras"
  | "ollama";
```

## Parameters
| Parameter | Type | Description |
|---|---|---|
| name | ProviderName | Provider identifier |
| config.apiKey | string? | API key (reads from env if not provided) |
| config.baseUrl | string? | Custom API endpoint |
| config.defaultModel | string? | Model to use for requests |
## Returns
An `LLMProvider` instance:

```ts
interface LLMProvider {
  name: string;
  defaultModel: string;
  complete(params: CompletionParams): Promise<CompletionResult>;
}

interface CompletionParams {
  prompt: string;
  model?: string;
  temperature?: number;
  maxTokens?: number;
  responseFormat?: "text" | "json";
}

interface CompletionResult {
  content: string;
  usage: {
    inputTokens: number;
    outputTokens: number;
  };
}
```

## Environment Variables
If `apiKey` is not provided, providers read it from:
| Provider | Environment Variable |
|---|---|
| openai | OPENAI_API_KEY |
| anthropic | ANTHROPIC_API_KEY |
| google | GOOGLE_API_KEY |
| groq | GROQ_API_KEY |
| cerebras | CEREBRAS_API_KEY |
| ollama | (none required) |
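The fallback above can be sketched as a map from provider name to environment variable. This helper is purely illustrative — the package resolves keys internally from `process.env`, and `resolveApiKey` is not part of its API:

```typescript
type ProviderName =
  | "openai" | "anthropic" | "google" | "groq" | "cerebras" | "ollama";

// Env-var names per the table above; ollama has no entry because
// it requires no key.
const ENV_VARS: Partial<Record<ProviderName, string>> = {
  openai: "OPENAI_API_KEY",
  anthropic: "ANTHROPIC_API_KEY",
  google: "GOOGLE_API_KEY",
  groq: "GROQ_API_KEY",
  cerebras: "CEREBRAS_API_KEY",
};

// Illustrative sketch of the documented precedence: an explicit
// config.apiKey wins; otherwise fall back to the environment.
// (env is a parameter here for testability; the real package
// reads process.env directly.)
function resolveApiKey(
  name: ProviderName,
  env: Record<string, string | undefined>,
  explicit?: string
): string | undefined {
  if (explicit) return explicit;
  const envVar = ENV_VARS[name];
  return envVar ? env[envVar] : undefined;
}
```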
## Default Models
| Provider | Default Model |
|---|---|
| openai | gpt-4o-mini |
| anthropic | claude-3-5-sonnet-20241022 |
| google | gemini-2.0-flash |
| groq | llama-3.3-70b-versatile |
| cerebras | llama3.1-8b |
| ollama | llama3.2 |
## Examples
```ts
import { createProvider } from "@mzhub/promptc";

// OpenAI with default settings
const openai = createProvider("openai");

// OpenAI with a custom model
const gpt4 = createProvider("openai", {
  apiKey: process.env.OPENAI_API_KEY,
  defaultModel: "gpt-4o"
});

// Anthropic
const claude = createProvider("anthropic", {
  apiKey: process.env.ANTHROPIC_API_KEY
});

// Ollama (local)
const ollama = createProvider("ollama", {
  baseUrl: "http://localhost:11434",
  defaultModel: "llama3.2"
});

// Direct completion (rarely needed - use Programs instead)
const result = await openai.complete({
  prompt: "Hello, world!",
  temperature: 0.7,
  maxTokens: 100
});
```

See also: Providers Guide
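The examples above never set `responseFormat`. A minimal sketch of requesting JSON output, using a stub that implements the `LLMProvider` shape (the stub and its return values are assumptions for illustration, not part of the package):

```typescript
interface CompletionParams {
  prompt: string;
  model?: string;
  temperature?: number;
  maxTokens?: number;
  responseFormat?: "text" | "json";
}

interface CompletionResult {
  content: string;
  usage: { inputTokens: number; outputTokens: number };
}

// Stub provider for illustration only; a real instance would come
// from createProvider(...).
const stub = {
  name: "stub",
  defaultModel: "stub-model",
  async complete(params: CompletionParams): Promise<CompletionResult> {
    // A real provider would call the model; the stub echoes valid JSON
    // when responseFormat is "json", plain text otherwise.
    const content =
      params.responseFormat === "json" ? '{"greeting":"hello"}' : "hello";
    return { content, usage: { inputTokens: 3, outputTokens: 2 } };
  },
};

// With responseFormat: "json", content is a JSON string you can parse.
async function demo(): Promise<{ greeting: string }> {
  const result = await stub.complete({
    prompt: "Return a greeting as JSON",
    responseFormat: "json",
  });
  return JSON.parse(result.content);
}
```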