Now in public beta

Compile Your Prompts.
Ship Faster.

Type-safe LLM program compiler for JavaScript. Define schemas, write programs, and let the compiler optimize your prompts automatically using evaluation data.

$ npm install @mzhub/promptc
example.js
import { defineSchema, ChainOfThought, BootstrapFewShot, exactMatch, createProvider, z } from "@mzhub/promptc";

// 1. Define your schema
const NameExtractor = defineSchema({
  description: "Extract proper names from text",
  inputs: { text: z.string() },
  outputs: { names: z.array(z.string()) },
});

// 2. Create provider and program
const provider = createProvider("cerebras", { apiKey: process.env.CEREBRAS_API_KEY });
const program = new ChainOfThought(NameExtractor, provider);

// 3. Compile with training data
// ("trainset" is an array of labeled examples you supply: inputs paired with expected outputs)
const compiler = new BootstrapFewShot(exactMatch());
const result = await compiler.compile(program, trainset);

console.log("Best score:", result.meta.score); // 0.92

Everything You Need

A complete toolkit for building, optimizing, and deploying LLM pipelines with confidence.

Type-Safe Schemas

Define input/output contracts with Zod. Get full TypeScript inference and runtime validation.
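
For example, an illustrative schema sketch (the field names here are made up; defineSchema and z are the same imports used in the example above):

// Illustrative schema: inputs and outputs are plain Zod shapes, so TypeScript
// infers the input/output types and values are validated at runtime.
const SentimentClassifier = defineSchema({
  description: "Classify the sentiment of a product review",
  inputs: { review: z.string() },
  outputs: {
    sentiment: z.enum(["positive", "neutral", "negative"]),
    confidence: z.number().min(0).max(1),
  },
});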

DSPy-Style Compilation

BootstrapFewShot and InstructionRewrite compilers automatically find optimal prompts.
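
A minimal sketch of swapping compilers, assuming InstructionRewrite is constructed with an evaluator the same way BootstrapFewShot is (program and trainset as in the example above):

import { InstructionRewrite, exactMatch } from "@mzhub/promptc"; // export name assumed

// Assumption: InstructionRewrite takes an evaluator, like BootstrapFewShot does.
const rewriter = new InstructionRewrite(exactMatch());
const rewritten = await rewriter.compile(program, trainset);
console.log("Rewritten instruction score:", rewritten.meta.score);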

Multiple Evaluators

Exact match, partial match, and array overlap, plus an LLM-as-judge evaluator for complex tasks.
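
Only exactMatch appears in the example above; here is a hedged sketch of picking an evaluator, with the other factory names standing in as hypothetical placeholders for the evaluators described:

// exactMatch() is taken from the example above; partialMatch() and llmJudge()
// are hypothetical placeholder names for the other evaluators.
const strict = new BootstrapFewShot(exactMatch());
// const lenient = new BootstrapFewShot(partialMatch());         // hypothetical
// const judged = new BootstrapFewShot(llmJudge({ provider }));  // hypothetical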

Provider Agnostic

OpenAI, Anthropic, Google, Cerebras, Groq, or Ollama. Switch providers without changing code.
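
Because the provider is a single constructor argument, switching backends is one changed line. Only "cerebras" is confirmed by the example above; the other identifiers are assumed to mirror the provider names:

// Same program, different backend: only the createProvider line changes.
const provider = createProvider("openai", { apiKey: process.env.OPENAI_API_KEY });
// const provider = createProvider("anthropic", { apiKey: process.env.ANTHROPIC_API_KEY });
const program = new ChainOfThought(NameExtractor, provider);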

Cost Tracking

Estimate costs before running and track token usage during compilation.
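
A sketch of what cost-aware compilation could look like; estimateCost and result.meta.usage are hypothetical names for the estimation and token-tracking described here, not confirmed package API:

// Hypothetical names: estimateCost() and result.meta.usage stand in for the
// cost-estimation and token-tracking this feature describes.
const estimate = await estimateCost(program, trainset); // hypothetical helper
console.log("Estimated compile cost:", estimate);

const result = await compiler.compile(program, trainset);
console.log("Token usage:", result.meta.usage); // hypothetical field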

Production Ready

Save compiled configs to JSON, then load and run them in production without recompiling.
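
A minimal sketch of persisting a compiled result with Node's fs module; whether the compiled config is JSON-serializable as-is, and how it is applied back to a program, are assumptions rather than confirmed package API:

import { writeFileSync, readFileSync } from "node:fs";

// Persist the compiled result so it can ship with your app.
// Assumption: the compiled config is JSON-serializable as-is.
writeFileSync("compiled.json", JSON.stringify(result, null, 2));

// In production: load the saved config instead of recompiling.
// Assumption: how the config is applied back to a program is package-specific.
const compiled = JSON.parse(readFileSync("compiled.json", "utf8"));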

Works With Your Favorite LLM Providers

Swap providers with a single line of code

OpenAI
Anthropic
Google
Cerebras
Groq
Ollama

Ready to Optimize Your Prompts?

Start compiling your prompts today. Free tier available with 50 compilations per month.