Vizpy's PromptGrad and ContraPrompt find where your module fails, extract the rule behind each failure, and inject precise corrections into your instructions in a single API call.
import dspy
import vizpy
# Use any model you have access to
dspy.configure(lm=dspy.LM("your_provider/model"))
# ContraPrompt — best for classification
optimizer = vizpy.ContraPromptOptimizer(metric=my_metric)
# PromptGrad — best for generation
optimizer = vizpy.PromptGradOptimizer(metric=my_metric)
# Outperforms GEPA on the benchmarks below
optimized = optimizer.optimize(module, train_examples)
All benchmarks are open source. Read more about how these optimizers work in our launch blogs: PromptGrad and ContraPrompt.
Benchmarks (Naive CoT baseline shown):
BBH (Naive CoT: 26.11%)
HotPotQA (Naive CoT: 26.99%)
GPQA Diamond (Naive CoT: 58.11%)
GDPR-Bench (Naive CoT: 10.12%)
Pass your module, examples, and metric. Get back a module with better instructions — and a plain-English explanation of every rule that was added.
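A metric is just a callable that scores one prediction against one example. The exact signature vizpy expects is not shown here, so this sketch assumes the common dspy-style `(example, prediction, trace)` shape; `my_metric` and the `"answer"` field are illustrative names, not part of the documented API:

```python
# Hypothetical exact-match metric; the field name "answer" is an assumption.
def my_metric(example, prediction, trace=None):
    """Return 1.0 when the predicted answer matches the gold answer, else 0.0."""
    gold = example["answer"].strip().lower()
    pred = prediction["answer"].strip().lower()
    return 1.0 if gold == pred else 0.0

print(my_metric({"answer": "Paris"}, {"answer": " paris "}))  # 1.0
```

Continuous metrics (e.g. F1 or a 0-to-1 rubric score) work the same way: return a float instead of a binary 0/1.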
PromptGrad runs epoch-based failure analysis: it computes textual gradients across batches and accumulates correction rules in a separate local rules section, while your base instructions stay frozen. Best for generation tasks and continuous metrics.
ContraPrompt mines contrastive pairs, a wrong attempt paired with its corrected counterpart, and extracts the rule that separates them. Best for classification tasks with clear right/wrong labels.
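A rough sketch of the pair-mining step, again as plain Python rather than vizpy internals. The record fields and the rule template are invented for illustration; in ContraPrompt an LM writes the rule that separates each pair:

```python
def mine_contrastive_pairs(records):
    # Keep only wrong attempts; the gold label is the "corrected" side.
    return [(r["predicted"], r["gold"], r["input"])
            for r in records if r["predicted"] != r["gold"]]

def extract_rule(wrong, right, text):
    # Stand-in for the LM call that names what separates the pair.
    return f"Inputs like {text!r} belong to {right!r}, not {wrong!r}."

records = [
    {"input": "refund request", "predicted": "spam", "gold": "support"},
    {"input": "hi there", "predicted": "greeting", "gold": "greeting"},
]
rules = [extract_rule(*pair) for pair in mine_contrastive_pairs(records)]
print(rules[0])
```

Only the misclassified record yields a pair; correct predictions contribute nothing, which is why this style of mining suits tasks with unambiguous labels.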
Pass any dspy.Module, get back an optimized module. Your signature and structure are unchanged — only the instructions improve.
Start free, scale as you need.