
Initial Audit Task Execution

Inspect the original prompt language first, then copy or adapt it once you know how it fits your workflow.

Linked challenge: AI Policy Audit Agent with OpenAI Agents

Format: Code-aware
Lines: 54
Sections: 9

Prompt source

Original prompt text with formatting preserved for inspection.

No variables
1 code block
Simulate an initial audit by providing the agent with a document excerpt and a specific policy area (e.g., 'Data Privacy'). The agent should:
1. Use the `policy_retriever` tool to find relevant policy documents.
2. Analyze the document excerpt in the context of retrieved policies.
3. Identify any potential violations or risks.
4. Use the `mem0_saver` tool to store a summary of its initial findings for future reference.

```python
import json
import time

# ... (previous agent setup code)

thread = client.beta.threads.create()

message = client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="Please audit the following document excerpt for Data Privacy policy compliance: 'Our internal analytics system collects IP addresses without notifying users, storing them indefinitely.'"
)

run = client.beta.threads.runs.create(
    thread_id=thread.id,
    assistant_id=assistant.id
)

# Polling mechanism to check run status and handle tool calls
while run.status in ['queued', 'in_progress', 'requires_action']:
    if run.status == 'requires_action':
        print('Agent requires action (tool call)...')
        tool_outputs = []
        for tool_call in run.required_action.submit_tool_outputs.tool_calls:
            if tool_call.function.name == 'policy_retriever':
                # Tool arguments arrive as a JSON string; parse them safely
                args = json.loads(tool_call.function.arguments)
                output = policy_retriever(args['query'])
                tool_outputs.append({
                    "tool_call_id": tool_call.id,
                    "output": str(output)
                })
            elif tool_call.function.name == 'mem0_saver':
                args = json.loads(tool_call.function.arguments)
                output = mem0_saver(args['key'], args['value'])
                tool_outputs.append({
                    "tool_call_id": tool_call.id,
                    "output": str(output)
                })
        if tool_outputs:
            run = client.beta.threads.runs.submit_tool_outputs(
                thread_id=thread.id,
                run_id=run.id,
                tool_outputs=tool_outputs
            )
        else:
            # Handle cases where no tool outputs are generated but action is required
            break

    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)
    time.sleep(1)  # brief delay so the loop doesn't hammer the API

if run.status == 'completed':
    messages = client.beta.threads.messages.list(thread_id=thread.id)
    for msg in messages.data:
        if msg.role == 'assistant':
            print(f"Agent: {msg.content[0].text.value}")
else:
    # Surface failed, cancelled, or expired runs instead of exiting silently
    print(f"Run ended with status: {run.status}")
```

Adaptation plan

Keep the source stable, then change the prompt in a predictable order so the next run is easier to evaluate.

Keep stable

Hold the task contract and output shape stable so generated implementations remain comparable.

Tune next

Update libraries, interfaces, and environment assumptions to match the stack you actually run.

Verify after

Test failure handling, edge cases, and any code paths that depend on hidden context or secrets.
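One way to exercise the failure paths before a live run is to pull the tool-dispatch logic out of the polling loop and unit-test it against stub tool calls. A sketch under that assumption; the `SimpleNamespace` fixtures below are test stand-ins, not objects from the OpenAI SDK, and `dispatch_tool_call` is a hypothetical helper mirroring the loop body:

```python
import json
from types import SimpleNamespace

def dispatch_tool_call(tool_call, handlers):
    """Run one tool call through a name -> handler map (mirrors the polling loop)."""
    handler = handlers.get(tool_call.function.name)
    if handler is None:
        # Surface unknown tools instead of silently producing no output
        return {"tool_call_id": tool_call.id,
                "output": f"ERROR: unknown tool {tool_call.function.name}"}
    args = json.loads(tool_call.function.arguments)
    return {"tool_call_id": tool_call.id, "output": str(handler(args))}

# Stub fixtures standing in for the SDK's tool-call objects
call = SimpleNamespace(
    id="call_1",
    function=SimpleNamespace(
        name="policy_retriever",
        arguments=json.dumps({"query": "Data Privacy"}),
    ),
)
handlers = {"policy_retriever": lambda a: ["policy text for " + a["query"]]}
```

Testing the unknown-tool branch this way is cheap insurance: in the live loop, an unmatched function name would otherwise leave `tool_outputs` empty and the run stuck in `requires_action`.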