
Setting up Evaluation with Evidently AI

Inspect the original prompt language first, then copy or adapt it once you know how it fits your workflow.

Linked challenge: Cyberthreat Orchestrator Agent

Format: Code-aware
Lines: 22
Sections: 5
Variables: none
Code blocks: 1

Prompt source

Original prompt text with formatting preserved for inspection.

Design an evaluation pipeline using Evidently AI to assess the 'ThreatDetectionAndClassification' task. Describe how you would collect the agent's outputs and ground truth, and define key metrics Evidently AI should track, such as 'ThreatTypeAccuracy' and 'SeverityClassificationF1Score'. Provide a Python snippet demonstrating how to initialize an Evidently AI monitoring dashboard and log relevant data from your agent's performance.

```python
import pandas as pd
from evidently.report import Report
from evidently.metric_preset import DataDriftPreset
from evidently.metric_preset import TextOverviewPreset  # for text/LLM-specific outputs

def evaluate_threat_detection(actual_outputs, ground_truth):
    # Prepare data for Evidently AI: one row per evaluated sample,
    # e.g. lists of dicts with the parsed threat type and severity fields.
    current_df = pd.DataFrame(actual_outputs)
    reference_df = pd.DataFrame(ground_truth)

    report = Report(metrics=[
        DataDriftPreset(),  # input/output data drift between reference and current data
        # Add custom metrics for classification accuracy, F1 score, etc.
        # Evidently may require custom metric definitions for direct LLM evaluation,
        # or conversion of LLM outputs to structured data first.
    ])
    report.run(reference_data=reference_df, current_data=current_df)
    report.save_html("threat_detection_report.html")
    return report

# Conceptual usage:
# agent_output = threat_detection_agent(sample_input)
# expected_output = get_ground_truth(sample_input)
# evaluate_threat_detection([agent_output], [expected_output])
```
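
The snippet above only initializes the report and checks drift; the two metrics the prompt names, 'ThreatTypeAccuracy' and 'SeverityClassificationF1Score', still need a classification view. A minimal sketch of one way to track them, assuming Evidently's legacy Report API and hypothetical column names (threat_type_true, threat_type_pred) in DataFrames that already hold the parsed labels:

```python
import pandas as pd
from evidently import ColumnMapping
from evidently.report import Report
from evidently.metric_preset import ClassificationPreset

def classification_quality_report(current_df: pd.DataFrame,
                                  reference_df: pd.DataFrame) -> Report:
    # Hypothetical column names: tell Evidently which columns hold the
    # ground-truth label and the agent's predicted label.
    mapping = ColumnMapping(target="threat_type_true",
                            prediction="threat_type_pred")
    # ClassificationPreset reports accuracy, precision, recall, and F1,
    # covering the accuracy- and F1-style metrics the prompt asks for.
    report = Report(metrics=[ClassificationPreset()])
    report.run(reference_data=reference_df,
               current_data=current_df,
               column_mapping=mapping)
    return report
```

The same pattern would apply to severity: a second mapping over the severity columns yields the F1 figure to track over time.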

Adaptation plan

Keep the source stable, then change the prompt in a predictable order so the next run is easier to evaluate.

Keep stable

Preserve the rubric, target behavior, and pass-fail criteria as the baseline for evaluation.

Tune next

Adjust fixtures, mocks, and thresholds to the system under test instead of weakening the assertions.

Verify after

Make sure the prompt catches regressions instead of just mirroring the happy-path examples.
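
One way to make that concrete is a small regression check over deliberately hard inputs. A minimal pytest-style sketch, assuming a hypothetical classify_severity() call into the agent, a tricky_cases fixture of non-happy-path samples, and an assumed 0.85 threshold tuned to the system under test rather than to the examples:

```python
from sklearn.metrics import f1_score

SEVERITY_F1_THRESHOLD = 0.85  # assumed threshold; tune to the system under test

def test_severity_classification_regression(tricky_cases):
    # tricky_cases is a hypothetical fixture of hard, non-happy-path samples:
    # [{"input": ..., "expected_severity": ...}, ...]
    # classify_severity() stands in for the agent call under test.
    predictions = [classify_severity(case["input"]) for case in tricky_cases]
    expected = [case["expected_severity"] for case in tricky_cases]
    score = f1_score(expected, predictions, average="macro")
    assert score >= SEVERITY_F1_THRESHOLD, f"severity F1 regressed to {score:.2f}"
```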