Back to Prompt Library
testing

Set up Giskard for Agent Evaluation

Inspect the original prompt language first, then copy or adapt it once you know how it fits your workflow.

Linked challenge: Real-time Voice Assistant with Personalized Context

Format
Text-first
Lines
1
Sections
1
Linked challenge
Real-time Voice Assistant with Personalized Context

Prompt source

Original prompt text with formatting preserved for inspection.

1 lines
1 sections
No variables
0 checklist items
Configure Giskard to evaluate the safety, factual accuracy, and coherence of your OpenAI Agent's responses. Create an initial test suite that includes checks for hallucination, bias, and adherence to specific content policies. Describe how you would integrate Giskard into a CI/CD pipeline for continuous evaluation. Provide a basic Python code example for a Giskard test.

Adaptation plan

Keep the source stable, then change the prompt in a predictable order so the next run is easier to evaluate.

Keep stable

Preserve the rubric, target behavior, and pass-fail criteria as the baseline for evaluation.

Tune next

Adjust fixtures, mocks, and thresholds to the system under test instead of weakening the assertions.

Verify after

Make sure the prompt catches regressions instead of just mirroring the happy-path examples.