Load Compiled Example

Learn how to load a previously compiled program from JSON for production deployment.

What You'll Learn

  • How to save compiled configurations as JSON
  • How to load and use them in production
  • Best practices for version control of prompts
  • Running optimized inference with minimal overhead

The Compilation Workflow

The typical workflow separates compilation (expensive, done once) from inference (fast, done many times):

🔧 Compile → 💾 Save JSON → 🚀 Load & Run

1. Save After Compilation

After running compiler.compile(), save the result as JSON:

import { writeFileSync } from "fs";

// After compilation...
const result = await compiler.compile(program, trainset, { candidates: 10 });

// Save to JSON file
writeFileSync("name-extractor.json", JSON.stringify(result, null, 2));

What's in the JSON?
The JSON contains the optimized configuration including the best instructions, selected few-shot examples, compilation score, and token usage stats.
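The exact layout depends on the library version, but the saved file generally looks something like this (the field names below are illustrative, based on the properties used later in this guide):

```json
{
  "meta": { "strategy": "ChainOfThought", "score": 0.92 },
  "config": {
    "instructions": "Extract proper names of people from text...",
    "fewShotExamples": [
      { "input": { "text": "Ada Lovelace wrote notes." }, "output": { "names": ["Ada Lovelace"] } }
    ]
  }
}
```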

2. Load in Production

In your production code, load the JSON and create a compiled program:

import { readFileSync } from "fs";
import {
  defineSchema,
  ChainOfThought,
  loadCompiledProgram,
  createProvider,
  z,
} from "@mzhub/promptc";

// Must match the schema used during compilation
const NameExtractor = defineSchema({
  description: "Extract proper names of people from text",
  inputs: { text: z.string() },
  outputs: { names: z.array(z.string()) },
});

// Load the saved config
const savedJson = readFileSync("name-extractor.json", "utf-8");

// Create the same program (schema + provider)
const provider = createProvider("openai", {
  apiKey: process.env.OPENAI_API_KEY,
});
const program = new ChainOfThought(NameExtractor, provider);

// Load the compiled program
const compiled = loadCompiledProgram(savedJson, program);

Schema Must Match
The schema used when loading must exactly match the schema used during compilation. Different schemas will cause runtime errors.
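Because `loadCompiledProgram` cannot recover from a mismatched schema, it can help to fail fast with a clear check before loading. Below is a minimal, hypothetical pre-flight guard; it assumes the saved few-shot examples carry `input` and `output` objects whose keys mirror the schema (that shape, and the `checkExampleKeys` helper, are assumptions for illustration, not part of the library):

```typescript
// Hypothetical guard: verify a saved example's keys line up with the
// schema's declared input/output keys before loading the compiled program.
function checkExampleKeys(
  example: { input: Record<string, unknown>; output: Record<string, unknown> },
  inputKeys: string[],
  outputKeys: string[],
): boolean {
  const has = (obj: Record<string, unknown>, keys: string[]) =>
    keys.every((k) => k in obj);
  return has(example.input, inputKeys) && has(example.output, outputKeys);
}

// Keys for the NameExtractor schema above: input "text", output "names".
const ok = checkExampleKeys(
  { input: { text: "Ada Lovelace wrote notes." }, output: { names: ["Ada Lovelace"] } },
  ["text"],
  ["names"],
);
console.log(ok); // true
```

A guard like this turns a confusing runtime failure deep inside inference into an immediate, descriptive error at startup.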

3. Inspect the Loaded Program

You can inspect the compiled configuration:

console.log("Loaded compiled program:");
console.log(`  Strategy: ${compiled.meta.strategy}`);
console.log(`  Score: ${(compiled.meta.score * 100).toFixed(1)}%`);
console.log(`  Examples: ${compiled.config.fewShotExamples.length}`);

4. Run Inference

Use the compiled program for fast, optimized inference:

const testCases = [
  "Warren Buffett is a legendary investor.",
  "The meeting with Sarah and John went well.",
  "No specific people were mentioned.",
];

console.log("Running inference:");
for (const text of testCases) {
  const result = await compiled.run({ text });
  console.log(`  Input: "${text}"`);
  console.log(`  Names: ${result.result.names.join(", ") || "(none)"}`);
}
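The loop above awaits each call in turn, but inference calls are independent, so they can also run concurrently with `Promise.all`. A sketch of the pattern — `compiled.run` is stubbed here with a trivial regex extractor so the snippet is self-contained; in real code you would use the program loaded in step 2:

```typescript
// Stub standing in for the loaded program so the sketch runs on its own;
// replace with the `compiled` returned by loadCompiledProgram.
const compiled = {
  run: async ({ text }: { text: string }) => ({
    result: { names: text.match(/[A-Z][a-z]+ [A-Z][a-z]+/g) ?? [] },
  }),
};

const inputs = [
  "Warren Buffett is a legendary investor.",
  "No specific people were mentioned.",
];

async function runAll() {
  // Fire all requests concurrently and wait for every result.
  const results = await Promise.all(inputs.map((text) => compiled.run({ text })));
  results.forEach((r, i) =>
    console.log(`${inputs[i]} → ${r.result.names.join(", ") || "(none)"}`),
  );
}

runAll();
```

Concurrency trades latency for burst load on the provider, so keep batch sizes within your API rate limits.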

Version Control Best Practices

Treat compiled JSON files like code:

  • Commit to git: Track changes to your optimized prompts
  • Use semantic versioning: e.g., name-extractor-v1.2.json
  • Include metadata: Add compilation date and score in the filename or a manifest
  • Roll back easily: If a new prompt performs worse in production, revert to the previous version
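One lightweight way to record the metadata above is a manifest entry written alongside each compiled file. A sketch of the idea — the manifest layout is an assumption for illustration, not something the library prescribes:

```typescript
import { writeFileSync } from "fs";

// Hypothetical manifest entry recording when, and how well, a config compiled.
// In real code, score and strategy would come from `compiled.meta` (step 3).
const entry = {
  file: "name-extractor-v1.2.json",
  compiledAt: new Date().toISOString(),
  score: 0.92,
  strategy: "ChainOfThought",
};

// Append-or-create a manifest next to the compiled configs.
writeFileSync("manifest.json", JSON.stringify([entry], null, 2));
```

A manifest like this makes it easy to compare scores across versions and pick the right file to roll back to.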

Full Example

load-compiled.ts
import { readFileSync } from "fs";
import {
  defineSchema,
  ChainOfThought,
  loadCompiledProgram,
  createProvider,
  z,
} from "@mzhub/promptc";

// Must match the schema used during compilation
const NameExtractor = defineSchema({
  description: "Extract proper names of people from text",
  inputs: { text: z.string() },
  outputs: { names: z.array(z.string()) },
});

async function main() {
  // 1. Load the saved config
  const savedJson = readFileSync("name-extractor.json", "utf-8");

  // 2. Create the same program (schema + provider)
  const provider = createProvider("openai", {
    apiKey: process.env.OPENAI_API_KEY,
  });
  const program = new ChainOfThought(NameExtractor, provider);

  // 3. Load the compiled program
  const compiled = loadCompiledProgram(savedJson, program);

  console.log("Loaded compiled program:");
  console.log(`  Strategy: ${compiled.meta.strategy}`);
  console.log(`  Score: ${(compiled.meta.score * 100).toFixed(1)}%`);

  // 4. Use in production
  const testCases = [
    "Warren Buffett is a legendary investor.",
    "The meeting with Sarah and John went well.",
    "No specific people were mentioned.",
  ];

  console.log("\nRunning inference:");
  for (const text of testCases) {
    const result = await compiled.run({ text });
    console.log(`  "${text}" → ${result.result.names.join(", ") || "(none)"}`);
  }
}

main().catch(console.error);