Tag: testing

Benchmarking with Confident AI

Inspect the original prompt language first, then copy or adapt it once you know how it fits your workflow.

Linked challenge: Autonomous High-Throughput Data Labeling with LlamaIndex Workflows

Format: Text-first
Lines: 1
Sections: 1

Prompt source

Original prompt text with formatting preserved for inspection.

1 line
1 section
No variables
0 checklist items
Integrate Confident AI's DeepEval into the workflow. Write a test case that sends a batch of processed labels to Confident AI to evaluate 'Answer Relevancy' and 'Contextual Precision'. Print a report of the labeling metrics after the workflow finishes.
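As a starting point, here is a minimal sketch of what the requested test could look like, assuming DeepEval's LLMTestCase, AnswerRelevancyMetric, and ContextualPrecisionMetric APIs; the labeled_batch shape is a hypothetical stand-in for whatever your LlamaIndex workflow actually emits.

```python
# A minimal sketch, assuming DeepEval's standard metric API.
# `labeled_batch` is a hypothetical shape; match it to your workflow output.
from deepeval import evaluate
from deepeval.metrics import AnswerRelevancyMetric, ContextualPrecisionMetric
from deepeval.test_case import LLMTestCase

# Hypothetical output of the labeling workflow: one dict per labeled record.
labeled_batch = [
    {
        "input": "Classify the sentiment of: 'The checkout flow is painless.'",
        "label": "positive",
        "expected": "positive",
        "context": ["Sentiment labels are one of: positive, negative, neutral."],
    },
]

test_cases = [
    LLMTestCase(
        input=record["input"],
        actual_output=record["label"],
        expected_output=record["expected"],   # required by Contextual Precision
        retrieval_context=record["context"],
    )
    for record in labeled_batch
]

metrics = [
    AnswerRelevancyMetric(threshold=0.7),
    ContextualPrecisionMetric(threshold=0.7),
]

# evaluate() prints a per-case, per-metric report after the run finishes
# and, when you are logged in to Confident AI, uploads the results there.
evaluate(test_cases=test_cases, metrics=metrics)
```

Run `deepeval login` first if you want the results mirrored on the Confident AI dashboard; otherwise the report is only printed locally.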

Adaptation plan

Treat the source prompt as a stable baseline, then make changes in a predictable order so each run is easier to evaluate against the last.

Keep stable

Preserve the rubric, target behavior, and pass-fail criteria as the baseline for evaluation.

Tune next

Adjust fixtures, mocks, and thresholds to the system under test instead of weakening the assertions.
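As an illustration of tuning thresholds rather than weakening assertions, here is a sketch using DeepEval's metric API; the 0.6 threshold and the test case contents are illustrative assumptions, not recommendations.

```python
# A sketch of "tune the threshold, keep the assertion", assuming
# DeepEval's metric API. The 0.6 value is an illustrative assumption.
from deepeval.metrics import AnswerRelevancyMetric
from deepeval.test_case import LLMTestCase

# Lower the pass bar to fit a noisier system under test...
relevancy = AnswerRelevancyMetric(threshold=0.6)

case = LLMTestCase(
    input="Classify the sentiment of: 'Support never replied.'",
    actual_output="negative",
)

# ...but keep the pass-fail check intact: success still flips to False
# whenever the score lands below the threshold.
relevancy.measure(case)
print(relevancy.score, relevancy.success)
```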

Verify after

Make sure the prompt catches regressions instead of just mirroring the happy-path examples.
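One way to check this, sketched with DeepEval's pytest-style assert_test; the deliberately irrelevant output is a hypothetical fixture.

```python
# A sketch of a regression guard, assuming DeepEval's assert_test,
# which raises AssertionError when a metric fails.
import pytest
from deepeval import assert_test
from deepeval.metrics import AnswerRelevancyMetric
from deepeval.test_case import LLMTestCase

def test_flags_an_irrelevant_label():
    # Hypothetical bad fixture: an output unrelated to the input.
    bad_case = LLMTestCase(
        input="Classify the sentiment of: 'The app crashes constantly.'",
        actual_output="The weather is nice today.",
    )
    # The suite must fail this case; if it passes, the evaluation is
    # mirroring the happy path rather than catching regressions.
    with pytest.raises(AssertionError):
        assert_test(bad_case, [AnswerRelevancyMetric(threshold=0.7)])
```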