Programs
Programs define how PromptC executes calls to your LLM. PromptC provides two built-in program types.
Predict
The simplest program type. Sends input directly to the LLM and returns the output:
import { Predict } from "@mzhub/promptc";
const program = new Predict(schema, provider);
const result = await program.run({ text: "Hello world" });
console.log(result.result); // The LLM output
console.log(result.trace); // Execution metadata

ChainOfThought
Adds a reasoning step before generating the final output. This often improves accuracy for complex tasks:
import { ChainOfThought } from "@mzhub/promptc";
const program = new ChainOfThought(schema, provider);
const result = await program.run({ text: "Complex question here" });
console.log(result.trace.reasoning); // Step-by-step reasoning
console.log(result.result); // Final answer

When to use ChainOfThought
Use ChainOfThought for tasks requiring logic, math, multi-step reasoning, or when you want to understand the model's thought process.
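This guidance can be encoded as a simple routing heuristic. The sketch below is illustrative only; the TaskTraits type and the traits it checks are invented for this example and are not part of PromptC:

```typescript
// Hypothetical helper: route a task to Predict or ChainOfThought
// based on whether it needs reasoning. Traits are illustrative.
type TaskTraits = {
  requiresMath: boolean;
  multiStep: boolean;
  needsExplanation: boolean;
};

function chooseProgramType(traits: TaskTraits): "Predict" | "ChainOfThought" {
  // Reasoning-heavy tasks benefit from the extra reasoning step;
  // simple extraction or classification can use the cheaper Predict.
  if (traits.requiresMath || traits.multiStep || traits.needsExplanation) {
    return "ChainOfThought";
  }
  return "Predict";
}

// A multi-step math problem routes to ChainOfThought.
console.log(
  chooseProgramType({ requiresMath: true, multiStep: true, needsExplanation: false })
);
```

In practice you might construct both program instances up front and pick between them per request.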
Program Output
Both program types return the same structure:
interface ProgramOutput<T> {
  result: T;              // The validated output matching your schema
  trace: {
    reasoning?: string;   // Only for ChainOfThought
    rawResponse: string;  // Raw LLM response
    usage: {
      inputTokens: number;
      outputTokens: number;
    };
  };
}

Program Configuration
Customize execution with runtime options:
const result = await program.run(
  { text: "Input here" },
  {
    // Override the default instructions
    instructions: "Be very concise",
    // Provide few-shot examples
    fewShotExamples: [
      { input: { text: "Example 1" }, output: { result: "Output 1" } },
      { input: { text: "Example 2" }, output: { result: "Output 2" } }
    ]
  }
);

| Option | Type | Description |
|---|---|---|
| instructions | string | Override the schema description |
| fewShotExamples | Array | Examples to include in the prompt |
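The trace returned by either program type can be consumed directly, for example to total token consumption across a call. A minimal sketch, with the ProgramOutput interface repeated locally and invented sample values:

```typescript
// ProgramOutput shape copied from the interface above; the sample
// values below are illustrative placeholders, not real LLM output.
interface ProgramOutput<T> {
  result: T;
  trace: {
    reasoning?: string;
    rawResponse: string;
    usage: { inputTokens: number; outputTokens: number };
  };
}

function totalTokens(out: ProgramOutput<unknown>): number {
  const { inputTokens, outputTokens } = out.trace.usage;
  return inputTokens + outputTokens;
}

const sample: ProgramOutput<{ result: string }> = {
  result: { result: "Output 1" },
  trace: {
    rawResponse: '{"result":"Output 1"}',
    usage: { inputTokens: 120, outputTokens: 45 },
  },
};

console.log(totalTokens(sample)); // 165
```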
Model Selection
The model is determined by the provider. To use a different model, configure it when creating the provider:
// Use a specific model
const provider = createProvider("openai", {
  apiKey: process.env.OPENAI_API_KEY,
  defaultModel: "gpt-4o" // Override the default
});
// Or create multiple providers
const fast = createProvider("openai", { defaultModel: "gpt-4o-mini" });
const smart = createProvider("openai", { defaultModel: "gpt-4o" });
// Use different providers for different programs
const quickProgram = new Predict(schema, fast);
const accurateProgram = new ChainOfThought(schema, smart);

Error Handling
Programs throw errors on validation failures or API errors:
try {
  const result = await program.run({ text: "Input" });
} catch (error) {
  if (error.message.includes("validation")) {
    // Schema validation failed
    console.error("Invalid output from LLM");
  } else {
    // API or network error
    console.error("API error:", error.message);
  }
}

Validation Failures
If the LLM returns output that doesn't match your schema, run throws a validation error. Consider using more lenient schema types or adding field descriptions to guide the model.
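A common mitigation is to retry validation failures while letting API errors propagate immediately. A sketch under stated assumptions: the runWithRetry helper and the mock program below are hypothetical, not part of PromptC, and the "validation" substring check mirrors the error-handling example above.

```typescript
// Hypothetical retry wrapper: retries only validation failures,
// rethrows API/network errors on the spot.
async function runWithRetry<I, O>(
  run: (input: I) => Promise<O>,
  input: I,
  maxAttempts = 3,
): Promise<O> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await run(input);
    } catch (error) {
      lastError = error;
      // Only validation failures are worth retrying.
      if (!(error instanceof Error) || !error.message.includes("validation")) {
        throw error;
      }
    }
  }
  throw lastError;
}

// Mock program: fails validation once, then returns a valid result.
let calls = 0;
const mockRun = async (input: { text: string }) => {
  calls++;
  if (calls === 1) throw new Error("validation failed: expected object");
  return { result: `echo: ${input.text}` };
};

runWithRetry(mockRun, { text: "Input" }).then((out) => console.log(out.result));
```

Each retry re-sends the prompt, so cap maxAttempts to keep token costs bounded.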
Next: Providers →