implementation

Develop Privacy Auditor Logic and Capably-Compatible Reporting

Inspect the original prompt language first, then copy or adapt it once you know how it fits your workflow.

Linked challenge: Multi-Agent Ad Policy Auditor

Format: Code-aware
Lines: 18
Sections: 4

Prompt source

Original prompt text with formatting preserved for inspection.

18 lines · 4 sections · no variables · 1 code block
Implement the `PrivacyAuditor` agent's logic to evaluate proposed ad strategies against a given privacy policy. This agent should identify and report violations. Finally, design how Capably would monitor this AutoGen workflow (conceptually, as direct integration may vary) and output a compliance report. You'll need to simulate the Capably integration by ensuring the AutoGen system produces structured outputs that Capably could consume.

```python
# PrivacyAuditor logic (conceptual)
import autogen

class PrivacyAuditorAgent(autogen.AssistantAgent):
    def __init__(self, name, llm_config, privacy_policy):
        super().__init__(
            name,
            llm_config=llm_config,
            system_message=(
                "You are a privacy expert. Evaluate ad strategies against the "
                f"policy: {privacy_policy}. Report any violations."
            ),
        )
        self.privacy_policy = privacy_policy

    def check_compliance(self, ad_strategy_proposal: dict, user_topics: list) -> list:
        """Return a list of human-readable policy violations (empty if compliant)."""
        violations = []
        # Implement your privacy checking logic here,
        # e.g., flag cases where user_topics were used directly for
        # fine-grained targeting without consent.
        if "direct_targeting_violation" in ad_strategy_proposal.get("flags", []):
            violations.append("Direct user topic targeting without consent.")
        # ... more policy checks
        return violations

# Integrate Capably by ensuring structured output that can be parsed by an external system.
# For example, agents could write structured JSON logs that Capably would ingest.
```
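One way to simulate the Capably side is to serialize each audit result as a structured JSON record that any external monitor could ingest. The sketch below is an assumption about a reasonable log shape, not an actual Capably schema; `build_compliance_report` and its field names are illustrative:

```python
import json
from datetime import datetime, timezone

def build_compliance_report(agent_name: str, proposal_id: str, violations: list) -> str:
    """Serialize one audit result as a JSON line an external monitor could ingest.

    The field names here are hypothetical, chosen for illustration only.
    """
    report = {
        "agent": agent_name,
        "proposal_id": proposal_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "compliant": not violations,   # compliant when no violations were found
        "violations": violations,
    }
    return json.dumps(report)

# A violation found for a hypothetical proposal "ad-42":
line = build_compliance_report(
    "PrivacyAuditor", "ad-42", ["Direct user topic targeting without consent."]
)
```

Emitting one JSON object per audit (e.g. appended to a log file) keeps the AutoGen side decoupled from whatever ingestion mechanism the monitoring tool actually provides.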

Adaptation plan

Keep the source stable, then change the prompt in a predictable order so the next run is easier to evaluate.

Keep stable

Hold the task contract and output shape stable so generated implementations remain comparable.

Tune next

Update libraries, interfaces, and environment assumptions to match the stack you actually run.

Verify after

Test failure handling, edge cases, and any code paths that depend on hidden context or secrets.
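As a sketch of the "verify after" step, the flag-checking logic can be pulled out as a standalone function and pinned down with bare assertions on its edge cases (no flags, unknown flags, sensitive topics). The `sensitive` set below is a hypothetical extra policy check, not part of the original prompt:

```python
def check_compliance(ad_strategy_proposal: dict, user_topics: list) -> list:
    """Standalone version of the auditor's flag check, extracted for testability."""
    violations = []
    if "direct_targeting_violation" in ad_strategy_proposal.get("flags", []):
        violations.append("Direct user topic targeting without consent.")
    # Hypothetical additional check: sensitive topics must never reach targeting.
    sensitive = {"health", "religion"}
    if sensitive.intersection(user_topics):
        violations.append("Sensitive user topics present in targeting input.")
    return violations

# Edge cases: empty proposal, unknown flag, and a sensitive topic.
assert check_compliance({}, []) == []
assert check_compliance({"flags": ["some_other_flag"]}, []) == []
assert check_compliance({"flags": ["direct_targeting_violation"]}, []) == [
    "Direct user topic targeting without consent."
]
assert check_compliance({}, ["health"]) == [
    "Sensitive user topics present in targeting input."
]
```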