Operator-ready prompt for reuse, tuning, and Workspace runs.
This item is set up for developers who want to inspect the original language, fork it into Workspace, and adapt the evidence model without losing the source prompt structure.
Implementation handoffs, eval setup, and prompt tuning where you need the original structure intact.
Inspect first, copy once, then fork into Workspace when you want variants, notes, and model settings attached to the same run.
Swap domain facts, examples, and any hard-coded entities for your own context.
Tighten the evidence or verification requirement if this is headed toward production.
Decide which failure mode you want to evaluate first before you branch the prompt.
This prompt already carries implementation detail, tool context, and a final-output instruction. Keep that structure intact when you tune it, or your comparison runs get noisy fast.
Open this prompt inside Workspace when you want a live iteration loop.
Copy for quick reuse, or run it in Workspace to keep prompt variants, model settings, and prompt-history changes in one place.
Structured source with 9 active lines to adapt.
Already linked to a challenge workflow.
Prompt content
Original prompt text with formatting preserved for inspection and clean copy.
Outline a strategy for deploying the GPT-5 and Claude Sonnet 4 models used by your LlamaIndex agent using Ray Serve and Novita AI. Describe how Ray Serve would manage the inference endpoints for both models, ensuring scalability and reliability. Explain how Novita AI's capabilities could be integrated to optimize the inference runtime and cost for the financial analysis tasks. Provide conceptual code snippets for setting up a Ray Serve deployment.
```python
from ray import serve


@serve.deployment
class GPT5Model:
    def __init__(self):
        # Initialize GPT-5 client
        pass

    async def __call__(self, text: str):
        # Call GPT-5 API
        return {"output": "..."}


@serve.deployment
class ClaudeSonnet4Model:
    def __init__(self):
        # Initialize Claude Sonnet 4 client
        pass

    async def __call__(self, text: str):
        # Call Claude Sonnet 4 API
        return {"output": "..."}


# Run each deployment as its own application under its own route prefix:
# serve.run(GPT5Model.bind(), name="gpt5", route_prefix="/gpt5")
# serve.run(ClaudeSonnet4Model.bind(), name="sonnet4", route_prefix="/sonnet4")

# Conceptual: LlamaIndex agent configuration to use Serve endpoints
# llm_gpt5 = OpenAI(model="hosted-gpt5", api_base="http://localhost:8000/gpt5")
```
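One hedged way to make that last commented line concrete is LlamaIndex's OpenAI-compatible client. The sketch below is illustrative only: the model ids, the Novita base URL, and the API-key handling are assumptions rather than documented values, and it presumes the Serve routes answer OpenAI-style requests.
```python
# Minimal sketch: point the LlamaIndex agent at the Serve-hosted GPT-5
# route and send summarization traffic to Novita AI's (assumed)
# OpenAI-compatible endpoint. All ids and URLs below are placeholders.
from llama_index.llms.openai_like import OpenAILike

llm_gpt5 = OpenAILike(
    model="hosted-gpt5",                        # placeholder model id
    api_base="http://localhost:8000/gpt5",      # Serve route_prefix from above
    api_key="not-needed-for-local-serve",
    is_chat_model=True,
)

llm_sonnet4 = OpenAILike(
    model="claude-sonnet-4",                    # placeholder model id
    api_base="https://api.novita.ai/v3/openai", # assumed Novita base URL
    api_key="YOUR_NOVITA_API_KEY",
    is_chat_model=True,
)
```
Splitting traffic this way keeps heavy reasoning behind Ray Serve's replica autoscaling while cheaper summarization calls go straight to the hosted provider.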
Adaptation plan
Keep the source stable, then branch your edits in a predictable order so the next prompt run is easier to evaluate.
Preserve the source structure until you know which part of the prompt is actually driving the result quality.
Change domain facts, examples, and tool context first before you rewrite the instruction scaffold.
Validate one failure mode at a time so prompt changes stay attributable instead of turning noisy (a minimal harness is sketched below).
Copy once for a pristine source snapshot, then move the prompt into Workspace when you want variants, run history, and side-by-side tuning without losing the original.
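As a concrete version of the one-failure-mode step, a harness might look like the sketch below; `run_prompt`, the failure case, and the variant names are hypothetical stand-ins for whatever your Workspace run actually exposes.
```python
# Hypothetical harness: score prompt variants against a single failure
# mode so any quality change is attributable to one edit at a time.
FAILURE_CASES = [
    {"input": "Q3 revenue CSV with a merged header row",
     "must_contain": "audit trail"},
]

def run_prompt(variant: str, case_input: str) -> str:
    # Stand-in: wire this to your model endpoint or Workspace run.
    raise NotImplementedError

def score(variant: str) -> float:
    hits = sum(
        1 for case in FAILURE_CASES
        if case["must_contain"] in run_prompt(variant, case["input"])
    )
    return hits / len(FAILURE_CASES)

# Compare the pristine source against one branched variant:
# for v in ["source", "variant-a"]:
#     print(v, score(v))
```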
Prompt diagnostics
Quick signals for how structured this prompt already is and where adaptation work is likely to happen first.
This prompt already mixes executable detail with instructions, so the safest path is to tune examples and interfaces before you rewrite the overall scaffold.
Agent for Auditable Financial Model Generation
To make financial modeling predictable and auditable, this challenge focuses on building an AI agent system with LlamaIndex for advanced financial analysis. Unlike traditional multi-agent tool-calling applications, it emphasizes LlamaIndex's agentic capabilities for structured data processing, tool use, and complex reasoning without relying on heavyweight multi-agent orchestration. Participants will design an agent that can ingest raw financial data (e.g., CSV, JSON), apply business logic, generate financial models, and produce comprehensive audit trails. The system will use GPT-5 for core reasoning and model generation, with Claude Sonnet 4 for summarization and clarification. Ray Serve and Novita AI will be leveraged for efficient, scalable deployment of these models, ensuring reliable inference. The agent will interact with simulated financial APIs and spreadsheet tools, producing auditable outputs that explain its reasoning and data transformations, enhancing trust and transparency in AI-driven financial insights.
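As a sketch of what "auditable" can mean at the code level: wrap every tool the agent can call so each invocation is recorded alongside its inputs and outputs. The tool, the data shape, and the agent hookup below are illustrative assumptions; `ReActAgent` and `FunctionTool` are LlamaIndex's standard tool-calling API.
```python
import csv
from datetime import datetime, timezone
from functools import wraps

# Illustrative audit trail: every tool call is recorded so a generated
# financial model can be traced back to its inputs and transformations.
AUDIT_TRAIL: list[dict] = []

def audited(fn):
    """Wrap a tool function so each invocation lands in the audit trail."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        result = fn(*args, **kwargs)
        AUDIT_TRAIL.append({
            "tool": fn.__name__,
            "inputs": repr(args) + repr(kwargs),
            "result_preview": repr(result)[:200],
            "at": datetime.now(timezone.utc).isoformat(),
        })
        return result
    return wrapper

@audited
def load_rows(path: str) -> list[dict]:
    """Ingest raw financial data from a CSV file."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

# Conceptual hookup, assuming llm_gpt5 is the Serve-backed client
# sketched earlier in the prompt content:
# from llama_index.core.agent import ReActAgent
# from llama_index.core.tools import FunctionTool
# agent = ReActAgent.from_tools(
#     [FunctionTool.from_defaults(fn=load_rows)],
#     llm=llm_gpt5,
# )
```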
Use the challenge page to recover the original task boundaries before you tune the prompt. That keeps your variants grounded in the same evaluation target instead of drifting into a different problem.