# Compilers
Compilers automatically optimize your prompts based on evaluation data.
## BootstrapFewShot
Searches your training set for the best-performing combination of few-shot examples:
```ts
import { BootstrapFewShot, exactMatch } from "@mzhub/promptc";

const compiler = new BootstrapFewShot(exactMatch());

const result = await compiler.compile(program, trainset, {
  candidates: 10,          // Number of example combinations to try
  concurrency: 5,          // Parallel API calls
  examplesPerCandidate: 3, // Examples per prompt
  validationSplit: 0.3,    // 30% of the data held out for validation
  earlyStopThreshold: 0.5, // Skip candidates scoring below this
});
```
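The metric passed to the constructor defines what "best" means. Besides the built-in `exactMatch`, you could in principle supply your own. The sketch below assumes a metric is a function from a prediction/expected pair to a score in [0, 1]; check the library's actual metric type before relying on this shape.

```ts
// Hypothetical custom metric (not part of @mzhub/promptc): case-insensitive
// match instead of strict equality. The (prediction, expected) => number
// signature is an assumption.
const caseInsensitiveMatch = (prediction: string, expected: string): number =>
  prediction.trim().toLowerCase() === expected.trim().toLowerCase() ? 1 : 0;

const compiler = new BootstrapFewShot(caseInsensitiveMatch);
```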
## InstructionRewrite

Uses an LLM to generate and test instruction variations:
```ts
import { InstructionRewrite, exactMatch, createProvider } from "@mzhub/promptc";

const optimizerProvider = createProvider("openai", {
  apiKey: process.env.OPENAI_API_KEY,
});

const compiler = new InstructionRewrite(exactMatch());

const result = await compiler.compile(program, trainset, {
  provider: optimizerProvider, // LLM used to generate variations
  instructionVariations: 5,    // Number of instruction rewrites
  candidates: 5,               // Few-shot combinations per instruction
  concurrency: 3,
});
```

### How InstructionRewrite Works
The compiler asks an LLM to generate variations of your schema description, then tests each variation against different few-shot combinations to find the best-performing prompt.
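Conceptually, the search loop looks something like the sketch below. This is an illustration of the strategy, not the library's actual implementation; `generateVariations` and `evaluateCandidate` are hypothetical stand-ins for internals.

```ts
// Conceptual sketch of the generate-and-test search, not real internals.
type Example = { input: string; output: string };

async function instructionRewriteSketch(
  baseInstructions: string,
  trainset: Example[],
  generateVariations: (base: string, n: number) => Promise<string[]>,
  evaluateCandidate: (instructions: string, fewShot: Example[]) => Promise<number>,
) {
  let best = { instructions: baseInstructions, fewShot: [] as Example[], score: -Infinity };

  // 1. Ask an LLM to rewrite the schema description several ways.
  for (const instructions of await generateVariations(baseInstructions, 5)) {
    // 2. Pair each rewrite with several sampled few-shot combinations.
    for (let i = 0; i < 5; i++) {
      const fewShot = [...trainset].sort(() => Math.random() - 0.5).slice(0, 3);
      // 3. Score the candidate prompt on held-out validation data.
      const score = await evaluateCandidate(instructions, fewShot);
      if (score > best.score) best = { instructions, fewShot, score };
    }
  }
  return best; // The highest-scoring (instructions, examples) pair wins.
}
```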
## Compilation Options
| Option | Default | Description |
|---|---|---|
| `candidates` | `10` | Number of candidate prompts to evaluate |
| `concurrency` | `5` | Parallel API calls during evaluation |
| `examplesPerCandidate` | `3` | Few-shot examples per prompt |
| `validationSplit` | `0.3` | Fraction of the training set held out for validation |
| `seed` | `undefined` | Random seed for reproducible candidate sampling |
| `earlyStopThreshold` | `0` | Skip candidates scoring below this value |
| `budget.maxTokens` | `undefined` | Stop compilation once this many tokens are used |
| `budget.onBudgetWarning` | `undefined` | Callback invoked when nearing the budget |
| `onProgress` | `undefined` | Progress callback |
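These options can be combined; a fuller call using most of them (values are illustrative, not recommendations) might look like:

```ts
const result = await compiler.compile(program, trainset, {
  candidates: 20,
  concurrency: 5,
  examplesPerCandidate: 4,
  validationSplit: 0.25,   // hold out 25% of trainset for scoring
  seed: 42,                // reproducible candidate sampling
  earlyStopThreshold: 0.4, // skip weak candidates early
  budget: { maxTokens: 200_000 },
  onProgress: (p) => console.log(`${p.candidatesEvaluated}/${p.totalCandidates}`),
});
```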
## Progress Tracking
Monitor compilation progress in real-time:
```ts
const result = await compiler.compile(program, trainset, {
  candidates: 20,
  onProgress: ({
    candidatesEvaluated,
    totalCandidates,
    currentBestScore,
    tokensUsed,
  }) => {
    const pct = ((candidatesEvaluated / totalCandidates) * 100).toFixed(0);
    console.log(`[${pct}%] Best: ${(currentBestScore * 100).toFixed(1)}%`);
    console.log(`Tokens: ${tokensUsed}`);
  },
});
```

## Budget Control
Limit token usage during compilation:
```ts
const result = await compiler.compile(program, trainset, {
  candidates: 50,
  budget: {
    maxTokens: 100000, // Stop after 100k tokens
    onBudgetWarning: (used, max) => {
      console.warn(`Budget: ${used}/${max} tokens used`);
    },
  },
});
```

## Cost Estimation
Estimate costs before running:
```ts
const estimate = compiler.estimateCost(trainset.length, {
  candidates: 20,
});

console.log("Estimated API calls:", estimate.estimatedCalls);
console.log("Estimated tokens:", estimate.estimatedTokens);

// Rough cost estimate (depends entirely on your model's pricing)
const costPer1kTokens = 0.002; // Illustrative rate; substitute your model's actual price
const estimatedCost = (estimate.estimatedTokens / 1000) * costPer1kTokens;
console.log("Estimated cost: $" + estimatedCost.toFixed(2));
```
## Compilation Result

The compiler returns a structured result:
```ts
interface CompilationResult {
  meta: {
    score: number;      // Best validation score (0-1)
    compiledAt: string; // ISO timestamp
    strategy: string;   // "BootstrapFewShot" or "InstructionRewrite"
    tokenUsage: {
      inputTokens: number;
      outputTokens: number;
      totalTokens: number;
      calls: number;    // Number of API calls made
    };
  };
  config: {
    instructions: string;       // Optimized instructions
    fewShotExamples: unknown[]; // Selected examples (element type defined by the library)
  };
}
```
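Since scoring and usage metadata travel with the result, you can sanity-check a run before shipping it, for example:

```ts
console.log(`Validation score: ${(result.meta.score * 100).toFixed(1)}%`);
console.log(`Strategy: ${result.meta.strategy}`);
console.log(
  `Spent ${result.meta.tokenUsage.totalTokens} tokens over ${result.meta.tokenUsage.calls} calls`,
);
```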
## Using Compiled Results

```ts
import fs from "node:fs";
import { createCompiledProgram } from "@mzhub/promptc";

// Create a production-ready program from the compilation result
const compiled = createCompiledProgram(program, result);

// Use .run() for inference
const output = await compiled.run({ text: "..." });

// Serialize to JSON and persist for later runs
const json = compiled.toJSON();
fs.writeFileSync("prompt.json", json);
```

Next: API Reference →