planning
Design LlamaIndex RAG Pipeline Architecture
Inspect the original prompt language first, then copy or adapt it once you know how it fits your workflow.
Linked challenge: Document AI: Summarize & Extract from Enterprise Content
Format: text-first · 1 line · 1 section
Prompt source
Original prompt text with formatting preserved for inspection.
1 line · 1 section · no variables · 0 checklist items
Design a LlamaIndex-based RAG pipeline for processing heterogeneous enterprise documents. Focus on creating custom document loaders for PDF and audio transcripts (e.g., using `pypdf` for PDFs, or simple regex for `.txt` transcripts), defining appropriate chunking strategies for Gemini 2.5 Pro, and outlining the indexing process using MongoDB Atlas Vector Search. Describe how to integrate LlamaIndex's knowledge graph functionality to enrich document understanding. Ensure your architecture can support generating summaries and answering complex queries.
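The loader and chunking pieces the prompt asks for can be sketched before committing to the full LlamaIndex and MongoDB stack. A minimal illustration, using only the Python standard library: the `Speaker: utterance` transcript format, the function names, and the character budgets are all hypothetical placeholders, and the sliding-window chunker stands in for a token-aware splitter you would tune to the target model's context window.

```python
import re

def load_transcript(text: str) -> list[dict]:
    """Parse a plain-text transcript into speaker-turn records using a
    simple regex. Assumes (hypothetically) 'Speaker: utterance' lines."""
    turns = []
    for line in text.splitlines():
        m = re.match(r"^(\w[\w ]*):\s+(.*)$", line)
        if m:
            turns.append({"speaker": m.group(1), "text": m.group(2)})
    return turns

def chunk_text(text: str, max_chars: int = 200, overlap: int = 40) -> list[str]:
    """Sliding-window chunker with overlap, so context straddling a
    boundary appears in two adjacent chunks. A real pipeline would
    size these windows in tokens, not characters."""
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + max_chars])
        if start + max_chars >= len(text):
            break
        start += max_chars - overlap
    return chunks

transcript = "Alice: Quarterly revenue grew ten percent.\nBob: Costs were flat."
turns = load_transcript(transcript)
print(turns[0]["speaker"])  # Alice
```

In an actual LlamaIndex build, each record would become a `Document` object, the chunker would be replaced by a node parser, and the resulting nodes would be embedded and pushed into the Atlas vector store; the sketch only pins down the parsing and windowing decisions the prompt asks you to make first.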
Adaptation plan
Keep the source stable, then change the prompt in a predictable order so the next run is easier to evaluate.
Keep stable
Preserve the role framing, objective, and reporting structure so comparison runs stay coherent.
Tune next
Swap in your own domain constraints, anomaly thresholds, and examples before you branch variants.
Verify after
Check whether the prompt asks for the right evidence, confidence signal, and escalation path.