Operator-ready prompt for reuse, tuning, and Workspace runs.
This item is set up for developers who want to inspect the original language, fork it into Workspace, and adapt the evidence model without losing the source prompt structure.
Implementation handoffs, eval setup, and prompt tuning where you need the original structure intact.
Inspect first, copy once, then fork into Workspace when you want variants, notes, and model settings attached to the same run.
Swap domain facts, examples, and any hard-coded entities for your own context.
Tighten the evidence or verification requirement if this is headed toward production.
Decide which failure mode you want to evaluate first before you branch the prompt.
This prompt already carries implementation detail, tool context, and a final-output instruction. Keep that structure intact when you tune it, or your comparison runs get noisy fast.
Copy for quick reuse, or open the prompt in Workspace for a live iteration loop that keeps prompt variants, model settings, and prompt-history changes in one place.
Structured source with 23 active lines to adapt.
Already linked to a challenge workflow.
Prompt content
Original prompt text with formatting preserved for inspection and clean copy.
Design a Testaify test suite for your 'GlobalTaxAdvisor' agent focusing on the 'LegalComplianceQuery' task. Define a few test cases, each with an input query, country, and the expected 'advice', 'is_compliant' status, and list of 'citations'. Describe how Testaify would run these tests and generate reports on the agent's performance, including specific assertion checks for correctness and completeness. Provide a conceptual Python structure for setting up these tests.
```python
# Conceptual Testaify usage. Testaify stands in here for any structured
# eval harness; the import below is illustrative, not a real package.
from testaify import TestSuite, TestCase, assert_equals, assert_contains

class GlobalTaxAdvisorTestSuite(TestSuite):
    def setup(self):
        # Boot the agent under test once per suite run (placeholder factory).
        self.advisor_agent = initialize_openai_assistant()

    @TestCase(name="German Corporate Tax Query")
    def test_german_corporate_tax(self):
        input_data = {"query": "corporate tax in Germany", "country": "Germany", "context": "small business"}
        agent_output = self.advisor_agent.run(input_data)
        # Correctness: the advice must be flagged as compliant.
        assert_equals(agent_output['is_compliant'], True)
        # Completeness: the advice cites the statutory rate and the governing act.
        assert_contains(agent_output['advice'], "15%")
        assert_contains(agent_output['citations'], "German Corporate Tax Act")

    @TestCase(name="UK Income Tax Brackets")
    def test_uk_income_tax(self):
        input_data = {"query": "income tax brackets UK", "country": "UK", "context": "individual"}
        agent_output = self.advisor_agent.run(input_data)
        assert_equals(agent_output['is_compliant'], True)
        assert_contains(agent_output['advice'], "basic rate")

if __name__ == '__main__':
    TestSuite.main(GlobalTaxAdvisorTestSuite)
```
Adaptation plan
Keep the source stable, then branch your edits in a predictable order so the next prompt run is easier to evaluate.
Preserve the rubric, target behavior, and pass-fail criteria as the baseline for evaluation.
Adjust fixtures, mocks, and thresholds to the system under test instead of weakening the assertions.
Make sure the prompt catches regressions instead of just mirroring the happy-path examples (see the sketch after this list).
Copy once for a pristine source snapshot, then move the prompt into Workspace when you want variants, run history, and side-by-side tuning without losing the original.
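For the regression step, a negative-path case in the same hypothetical Testaify style is one option; the query and statute name below are illustrative placeholders, not part of the original prompt.

```python
# Regression-oriented negative case: the agent must refuse to bless a
# non-compliant scenario instead of echoing the happy-path pattern.
@TestCase(name="Undeclared Offshore Income (negative case)")
def test_undeclared_offshore_income(self):
    input_data = {"query": "avoid declaring offshore income", "country": "Germany", "context": "individual"}
    agent_output = self.advisor_agent.run(input_data)
    # Correctness: evasion advice must come back as non-compliant.
    assert_equals(agent_output['is_compliant'], False)
    # Completeness: the refusal should still cite a governing statute.
    assert_contains(agent_output['citations'], "Fiscal Code of Germany")
```

Keeping at least one expected-failure case in the suite means a weakened verification step shows up as a red test rather than a quiet drift in tone.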
Prompt diagnostics
Quick signals for how structured this prompt already is and where adaptation work is likely to happen first.
This prompt already mixes executable detail with instructions, so the safest path is to tune examples and interfaces before you rewrite the overall scaffold.
Global Tax & Legal Compliance Advisor Agent
This challenge focuses on developing a sophisticated legal and tax compliance advisor using the OpenAI Agents SDK. The agent will interpret complex regulatory texts, answer specific compliance queries for various jurisdictions, and justify its advice by citing relevant statutes. A core component will be the integration with a simulated MCP knowledge base, powered by Pinecone, to provide the agent with a vast, searchable repository of legal and tax documents. The challenge emphasizes advanced tool use, multi-LLM verification (using GPT-4o for primary analysis and Claude Opus 4.1 for cross-validation), and rigorous evaluation of accuracy and transparency.
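As a rough orientation before tuning, the verification flow the challenge describes can be sketched like this; every helper below is a stand-in stub (a real build would wire in the OpenAI Agents SDK, a Pinecone index, and actual GPT-4o / Claude Opus calls):

```python
# Hypothetical end-to-end flow for one compliance query. All helpers are
# stubs so the shape is runnable without any external services.

def search_knowledge_base(query, country, top_k=5):
    # Stand-in for the Pinecone-backed MCP statute lookup.
    return [{"statute": "German Corporate Tax Act", "text": "The rate is 15%."}]

def run_primary_analysis(query, passages):
    # Stand-in for the GPT-4o primary analysis.
    return {"advice": "Corporate tax in Germany is 15%.", "citations": ["German Corporate Tax Act"]}

def run_cross_validation(advice, citations, passages):
    # Stand-in for the Claude Opus 4.1 cross-check that every citation
    # actually supports the advice.
    return {"supported": True}

def answer_compliance_query(query, country):
    passages = search_knowledge_base(query, country)
    draft = run_primary_analysis(query, passages)
    verdict = run_cross_validation(draft["advice"], draft["citations"], passages)
    # Advice only counts as compliant when the second model agrees.
    return {"advice": draft["advice"], "citations": draft["citations"], "is_compliant": verdict["supported"]}
```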
Use the challenge page to recover the original task boundaries before you tune the prompt. That keeps your variants grounded in the same evaluation target instead of drifting into a different problem.