Design the LlamaIndex Financial Agent and Tools
Inspect the original prompt language first, then copy or adapt it once you know how it fits your workflow.
Linked challenge: Agent for Auditable Financial Model Generation
Prompt source
Original prompt text with formatting preserved for inspection.
Design a LlamaIndex agent named 'FinancialAnalystAgent' that takes structured financial data (e.g., CSV content) and projection parameters as input. Define custom tools for this agent, such as `parse_financial_csv`, `calculate_projection`, and `generate_audit_step`. These tools will use GPT-5 for complex reasoning within `calculate_projection` and Claude Sonnet 4 for generating concise `generate_audit_step` summaries. Describe how these tools would be integrated into the LlamaIndex agent's query engine.

```python
from llama_index.core.agent import FunctionCallingAgentWorker, AgentRunner
from llama_index.llms.openai import OpenAI # Assuming GPT-5 via OpenAI API
from llama_index.llms.anthropic import Anthropic # Assuming Claude Sonnet 4 via Anthropic API
from llama_index.core.tools import FunctionTool

# Define tool functions here
def parse_financial_csv(csv_content: str) -> dict:
    # Implementation to parse CSV into structured data
    return {"data": []}

def calculate_projection(data: dict, params: dict) -> dict:
    # Use GPT-5 for complex calculation logic
    llm_gpt5 = OpenAI(model="gpt-5")
    # ... logic here ...
    return {"projection": {}}

def generate_audit_step(action_description: str) -> str:
    # Use Claude Sonnet 4 for audit trail generation
    llm_claude_sonnet_4 = Anthropic(model="claude-sonnet-4")
    # ... logic here ...
    return f"Audit: {action_description}"

# Instantiate tools
parse_tool = FunctionTool.from_defaults(fn=parse_financial_csv, name="parse_financial_csv", description="...")
# ... other tool instantiations ...

# Create agent worker and runner
# worker = FunctionCallingAgentWorker.from_tools([parse_tool, ...], llm=OpenAI(model="gpt-5"))
# agent = AgentRunner(worker)
```
Adaptation plan
Keep the source stable, then change the prompt in a predictable order so the next run is easier to evaluate.
Keep stable
Preserve the role framing, objective, and reporting structure so comparison runs stay coherent.
Tune next
Swap in your own domain constraints, anomaly thresholds, and examples before you branch variants.
Verify after
Check whether the prompt asks for the right evidence, confidence signal, and escalation path.
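As part of that verification, the three tool contracts from the prompt can be exercised with a dependency-free sketch before any LlamaIndex or LLM wiring is in place. This is a minimal sketch under stated assumptions: the LLM-backed steps (GPT-5 in `calculate_projection`, Claude Sonnet 4 in `generate_audit_step`) are stubbed with plain Python, and the flat-growth projection logic and `revenue` column name are illustrative, not part of the original prompt.

```python
import csv
import io

def parse_financial_csv(csv_content: str) -> dict:
    """Parse CSV text into a list of row dicts."""
    reader = csv.DictReader(io.StringIO(csv_content))
    return {"data": [dict(row) for row in reader]}

def calculate_projection(data: dict, params: dict) -> dict:
    """Project a 'revenue' column forward at a flat growth rate.
    (In the full agent this step would delegate to an LLM.)"""
    rate = params.get("growth_rate", 0.0)
    years = params.get("years", 1)
    base = sum(float(row["revenue"]) for row in data["data"])
    return {"projection": {f"year_{y}": round(base * (1 + rate) ** y, 2)
                           for y in range(1, years + 1)}}

def generate_audit_step(action_description: str) -> str:
    """Produce a one-line audit-trail entry.
    (In the full agent this summary would come from an LLM.)"""
    return f"Audit: {action_description}"

# Exercise the pipeline end to end, as the agent's tool chain would.
rows = parse_financial_csv("revenue\n100\n200\n")
projection = calculate_projection(rows, {"growth_rate": 0.10, "years": 2})
audit = generate_audit_step("Projected revenue for 2 years at 10% growth")
```

Running the chain by hand like this confirms that each tool's output shape matches the next tool's input before the functions are wrapped in `FunctionTool.from_defaults` and handed to the agent.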