
Design LlamaIndex Data Connectors

Inspect the original prompt language first, then copy or adapt it once you know how it fits your workflow.

Linked challenge: LLM-Powered Legal & Market Intelligence

Format: Text-first
Lines: 2
Sections: 2

Prompt source

Original prompt text with formatting preserved for inspection.

No variables · 0 checklist items
Using LlamaIndex, design and implement a data ingestion pipeline that can pull data from a simulated legal document repository (e.g., local PDF files for filings), a news API (simulated with local JSON files), and a public company website (simulated via web scraping of a local HTML file).

Your task is to use `SimpleDirectoryReader`, `WebPageReader`, or custom `BaseReader` implementations to load these documents, and then create a `VectorStoreIndex` using `PineconeVectorStore` for efficient retrieval. Provide Python code snippets for initializing LlamaIndex, defining your readers, and setting up the index.
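Before adapting the prompt, it can help to see the shape of the pipeline it asks for. Below is a minimal, framework-free sketch of the reader pattern: small classes sharing a `load_data` contract (mirroring the contract LlamaIndex's `BaseReader` subclasses expose) that pull from simulated JSON news files and a local HTML page. The `Document`, `NewsJSONReader`, and `LocalHTMLReader` names are hypothetical stand-ins; a real implementation would swap them for `SimpleDirectoryReader`, a web page reader, and a `PineconeVectorStore`-backed `VectorStoreIndex`.

```python
import json
import tempfile
from dataclasses import dataclass
from html.parser import HTMLParser
from pathlib import Path


@dataclass
class Document:
    # Minimal stand-in for a LlamaIndex Document: raw text plus source metadata.
    text: str
    metadata: dict


class NewsJSONReader:
    """Hypothetical reader for the simulated news API (local JSON files)."""

    def load_data(self, directory: Path) -> list[Document]:
        docs = []
        for path in sorted(directory.glob("*.json")):
            record = json.loads(path.read_text())
            docs.append(Document(text=record["body"],
                                 metadata={"source": path.name, "kind": "news"}))
        return docs


class _TextExtractor(HTMLParser):
    # Collects visible text nodes, ignoring whitespace-only runs.
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        if data.strip():
            self.chunks.append(data.strip())


class LocalHTMLReader:
    """Hypothetical reader for the simulated company website (one local HTML file)."""

    def load_data(self, path: Path) -> list[Document]:
        extractor = _TextExtractor()
        extractor.feed(path.read_text())
        return [Document(text=" ".join(extractor.chunks),
                         metadata={"source": path.name, "kind": "web"})]


# Demo against temporary simulated sources.
with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    (root / "story1.json").write_text(json.dumps({"body": "Acme acquires Beta Corp."}))
    (root / "site.html").write_text(
        "<html><body><h1>Acme</h1><p>Investor relations.</p></body></html>")

    corpus = NewsJSONReader().load_data(root) + LocalHTMLReader().load_data(root / "site.html")
    for doc in corpus:
        print(doc.metadata["kind"], "->", doc.text)
```

Keeping every reader behind the same `load_data` signature is what lets the downstream index treat legal filings, news, and web pages uniformly; that is the property to preserve when you substitute real LlamaIndex classes.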

Adaptation plan

Keep the source stable, then change the prompt in a predictable order so the next run is easier to evaluate.

Keep stable

Hold the task contract and output shape stable so generated implementations remain comparable.

Tune next

Update libraries, interfaces, and environment assumptions to match the stack you actually run.
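One low-effort way to do this is to pin the stack explicitly so successive runs stay comparable. A hypothetical requirements fragment (package split follows LlamaIndex's post-0.10 namespaced layout; version pins are illustrative, not recommendations):

```text
llama-index-core
llama-index-vector-stores-pinecone
llama-index-readers-web
pinecone-client
```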

Verify after

Test failure handling, edge cases, and any code paths that depend on hidden context or secrets.
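As a concrete instance of this step, the check below sketches one failure-handling test worth running: a missing source should raise rather than silently produce an empty index. `load_filings` is a hypothetical helper standing in for whichever reader you actually wire up.

```python
from pathlib import Path


def load_filings(directory: str) -> list[str]:
    # Hypothetical loader standing in for a LlamaIndex reader; the point under
    # test is that it fails loudly when the source directory is absent.
    root = Path(directory)
    if not root.is_dir():
        raise FileNotFoundError(f"filing repository not found: {directory}")
    return [p.read_text() for p in sorted(root.glob("*.txt"))]


# Edge case: a missing repository must raise, not return an empty list.
try:
    load_filings("nonexistent-filings-dir")
except FileNotFoundError as exc:
    print(f"caught expected error: {exc}")
```

The same pattern extends to malformed JSON from the simulated news API and to HTML pages that parse to no visible text.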