implementation
Build RAG Pipeline with LlamaIndex
Inspect the original prompt language first, then copy or adapt it once you know how it fits your workflow.
Linked challenge: Intuit's Fin-Analyst: OpenAI o3 with DSPy & Model Context Protocol for Advanced Financial Insights
Prompt source
Original prompt text with formatting preserved for inspection.
Create a RAG pipeline using LlamaIndex to index a corpus of simulated financial regulations, IRS publications, and Intuit help articles. Integrate this RAG system into your DSPy Modules, allowing OpenAI o3 to retrieve and cite relevant documents when performing tax calculations or explaining financial concepts. Emphasize how the agent will decide when to use RAG versus direct model reasoning.
Adaptation plan
Keep the source stable, then change the prompt in a predictable order so the next run is easier to evaluate.
Keep stable
Hold the task contract and output shape stable so generated implementations remain comparable.
Tune next
Update libraries, interfaces, and environment assumptions to match the stack you actually run.
Verify after
Test failure handling, edge cases, and any code paths that depend on hidden context or secrets.