# Multi-Provider Example
Test the same program across different LLM providers to compare performance, latency, and cost.
## What You'll Learn
- How to use different LLM providers with the same schema
- How to benchmark latency across providers
- How to handle missing API keys gracefully
- Using `Predict` for simple tasks
## Supported Providers
PromptC supports multiple LLM providers out of the box:
| Provider | API Key Env Var | Notes |
|---|---|---|
| openai | OPENAI_API_KEY | GPT-4, GPT-3.5 |
| anthropic | ANTHROPIC_API_KEY | Claude models |
| google | GOOGLE_API_KEY | Gemini models |
| groq | GROQ_API_KEY | Fast inference |
| cerebras | CEREBRAS_API_KEY | Ultra-fast inference |
| ollama | (none) | Local models, no API key needed |
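The table above maps directly to a runtime check. Here is a minimal sketch (plain TypeScript, no PromptC APIs; the names simply mirror the table) that reports which providers the current environment can use:

```typescript
// Provider names paired with their API key env vars (ollama needs none).
const PROVIDERS: Array<[name: string, envVar: string | null]> = [
  ["openai", "OPENAI_API_KEY"],
  ["anthropic", "ANTHROPIC_API_KEY"],
  ["google", "GOOGLE_API_KEY"],
  ["groq", "GROQ_API_KEY"],
  ["cerebras", "CEREBRAS_API_KEY"],
  ["ollama", null],
];

// Return the providers usable given an environment object.
function configuredProviders(env: Record<string, string | undefined>): string[] {
  return PROVIDERS.filter(([, key]) => key === null || Boolean(env[key])).map(
    ([name]) => name,
  );
}

// Example: an environment where only OPENAI_API_KEY is set
// logs the names "openai" and "ollama".
console.log(configuredProviders({ OPENAI_API_KEY: "sk-..." }));
```

Pass `process.env` at runtime to check your own shell; this is the same skip logic `testProvider` applies per provider in step 2.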
## 1. Define a Simple Schema
We'll use sentiment analysis as our test case—simple enough to compare providers without confounding factors.
```ts
import { defineSchema, Predict, createProvider, z } from "@mzhub/promptc";

const SentimentAnalyzer = defineSchema({
  description: "Analyze the sentiment of text as positive, negative, or neutral",
  inputs: { text: z.string() },
  outputs: {
    sentiment: z.enum(["positive", "negative", "neutral"]),
    confidence: z.number(),
  },
});
```

> **Using Predict:** For simple classification tasks, Predict is faster than ChainOfThought since it skips the reasoning step. Use it when you don't need to see the model's thinking.

## 2. Create a Provider Testing Function
This function tests a single provider and handles errors gracefully:
```ts
async function testProvider(providerName, apiKeyEnv) {
  const apiKey = process.env[apiKeyEnv];
  if (!apiKey) {
    console.log(`⏭️ Skipping ${providerName} (no ${apiKeyEnv})`);
    return;
  }

  try {
    const provider = createProvider(providerName, { apiKey });
    const program = new Predict(SentimentAnalyzer, provider);

    const start = Date.now();
    const result = await program.run({
      text: "I absolutely love this product!",
    });
    const latency = Date.now() - start;

    console.log(`\n✅ ${providerName}`);
    console.log(`  Sentiment: ${result.result.sentiment}`);
    console.log(`  Confidence: ${(result.result.confidence * 100).toFixed(0)}%`);
    console.log(`  Latency: ${latency}ms`);
    console.log(`  Tokens: ${result.trace.usage.inputTokens + result.trace.usage.outputTokens}`);
  } catch (error) {
    console.log(`\n❌ ${providerName}: ${error.message}`);
  }
}
```

## 3. Test All Providers
Loop through each provider. The function skips providers without API keys:
```ts
async function main() {
  console.log("Testing sentiment analysis across providers...");

  await testProvider("openai", "OPENAI_API_KEY");
  await testProvider("anthropic", "ANTHROPIC_API_KEY");
  await testProvider("google", "GOOGLE_API_KEY");
  await testProvider("groq", "GROQ_API_KEY");
  await testProvider("cerebras", "CEREBRAS_API_KEY");

  // Ollama (local, no API key)
  try {
    const provider = createProvider("ollama");
    const program = new Predict(SentimentAnalyzer, provider);

    const start = Date.now();
    const result = await program.run({
      text: "I absolutely love this product!",
    });

    console.log(`\n✅ ollama (local)`);
    console.log(`  Sentiment: ${result.result.sentiment}`);
    console.log(`  Latency: ${Date.now() - start}ms`);
  } catch {
    console.log(`\n⏭️ Skipping ollama (not running locally)`);
  }
}

main().catch(console.error);
```
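The providers are awaited one at a time above, which keeps latency measurements from interfering with each other. If you only want the results rather than a clean benchmark, the calls are independent and can run concurrently. A sketch using a stand-in async task (substitute `testProvider` from step 2 for the real thing):

```typescript
// Measure how long an async task takes; returns [label, elapsed ms].
async function timed(label: string, task: () => Promise<void>): Promise<[string, number]> {
  const start = Date.now();
  await task();
  return [label, Date.now() - start];
}

async function runConcurrently() {
  // Promise.allSettled never rejects, so one failing provider
  // doesn't hide the others' results.
  const results = await Promise.allSettled([
    timed("openai", () => new Promise<void>((resolve) => setTimeout(resolve, 20))),
    timed("groq", () => new Promise<void>((resolve) => setTimeout(resolve, 10))),
  ]);
  for (const r of results) {
    if (r.status === "fulfilled") {
      console.log(`${r.value[0]}: ${r.value[1]}ms`);
    } else {
      console.log(`failed: ${r.reason}`);
    }
  }
}

runConcurrently();
```

Prefer the sequential version when comparing latency: concurrent requests share bandwidth and can skew the numbers.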
## Example Output

```
Testing sentiment analysis across providers...

✅ openai
  Sentiment: positive
  Confidence: 95%
  Latency: 823ms
  Tokens: 47

✅ anthropic
  Sentiment: positive
  Confidence: 92%
  Latency: 1204ms
  Tokens: 52

✅ cerebras
  Sentiment: positive
  Confidence: 90%
  Latency: 156ms
  Tokens: 45

⏭️ Skipping google (no GOOGLE_API_KEY)

⏭️ Skipping ollama (not running locally)
```

## Choosing a Provider
- Best quality: OpenAI GPT-4 or Anthropic Claude
- Fastest: Cerebras or Groq
- Privacy: Ollama (runs locally)
- Cost-effective: Groq or Cerebras for high volume
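These trade-offs can also be combined at runtime. A hypothetical `withFallback` helper (not part of PromptC) tries a preference-ordered list of attempts, e.g. the fastest provider first with a higher-quality one as backup, and returns the first success:

```typescript
// Try each attempt in order; return the first successful result.
// In the real example, each attempt would wrap a call like
// `program.run(input)` for one provider.
async function withFallback<T>(attempts: Array<() => Promise<T>>): Promise<T> {
  let lastError: unknown;
  for (const attempt of attempts) {
    try {
      return await attempt();
    } catch (err) {
      lastError = err; // remember the failure, move on to the next provider
    }
  }
  throw lastError ?? new Error("no attempts given");
}
```

For example, `withFallback([groqRun, openaiRun])` prefers Groq's speed but falls back to OpenAI if the request fails.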
## Full Example
`multi-provider.ts`

```ts
import { defineSchema, Predict, createProvider, z } from "@mzhub/promptc";

const SentimentAnalyzer = defineSchema({
  description: "Analyze the sentiment of text as positive, negative, or neutral",
  inputs: { text: z.string() },
  outputs: {
    sentiment: z.enum(["positive", "negative", "neutral"]),
    confidence: z.number(),
  },
});

async function testProvider(providerName, apiKeyEnv) {
  const apiKey = process.env[apiKeyEnv];
  if (!apiKey) {
    console.log(`⏭️ Skipping ${providerName} (no ${apiKeyEnv})`);
    return;
  }

  try {
    const provider = createProvider(providerName, { apiKey });
    const program = new Predict(SentimentAnalyzer, provider);

    const start = Date.now();
    const result = await program.run({ text: "I absolutely love this product!" });

    console.log(`\n✅ ${providerName} - ${result.result.sentiment} (${Date.now() - start}ms)`);
  } catch (error) {
    console.log(`\n❌ ${providerName}: ${error.message}`);
  }
}

async function main() {
  console.log("Testing sentiment analysis across providers...");

  await testProvider("openai", "OPENAI_API_KEY");
  await testProvider("anthropic", "ANTHROPIC_API_KEY");
  await testProvider("cerebras", "CEREBRAS_API_KEY");
}

main().catch(console.error);
```

Next: Load Compiled Example →