Implementation
Implement Agentic Query Planning
Inspect the original prompt language first, then copy or adapt it once you know how it fits your workflow.
Linked challenge: LLM-Powered Legal & Market Intelligence
Format: Code-aware
Lines: 21
Sections: 7
Prompt source
Original prompt text with formatting preserved for inspection.
No variables
1 code block
Building upon your LlamaIndex data pipeline, implement an agentic query engine capable of handling complex queries. The agent should be able to break down a high-level query like 'Analyze the strategic implications of the Musk vs. OpenAI lawsuit by summarizing key legal points and market reactions' into sub-queries. Utilize `QueryEngineTool` and `LlamaPack` or a custom `RouterQueryEngine` to orchestrate this process with GPT-4o. Show how the agent routes questions to specific sub-query engines or tools. Include Python code.
```python
from llama_index.core import VectorStoreIndex, Document
from llama_index.core.tools import QueryEngineTool, ToolMetadata
from llama_index.core.agent import AgentRunner
from llama_index.llms.openai import OpenAI
from llama_index.vector_stores.pinecone import PineconeVectorStore
# ... other necessary imports
# Assume 'legal_index' and 'news_index' are already created VectorStoreIndex instances
# backed by PineconeVectorStore
legal_query_engine = legal_index.as_query_engine(similarity_top_k=3)
news_query_engine = news_index.as_query_engine(similarity_top_k=5)
legal_tool = QueryEngineTool(query_engine=legal_query_engine, metadata=ToolMetadata(name='legal_analyzer', description='Provides summaries and context from legal documents and filings.'))
news_tool = QueryEngineTool(query_engine=news_query_engine, metadata=ToolMetadata(name='market_news_analyzer', description='Provides insights and sentiment from market news articles and reports.'))
llm = OpenAI(model='gpt-4o', api_key='YOUR_OPENAI_API_KEY')
# Your task: Initialize an agent (e.g., FunctionCallingAgentWorker or ReActAgent) with these tools
# and demonstrate how it handles a complex query.
# For example, using AgentRunner with a custom agent worker:
# agent = AgentRunner(your_agent_worker)
# response = agent.chat('Analyze the strategic implications of the Musk vs. OpenAI lawsuit...')
```

Adaptation plan
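The routing behavior the prompt asks the agent to demonstrate can be sketched without any LlamaIndex or OpenAI dependency. The illustrative router below decomposes a high-level query into sub-queries and dispatches each to the engine with the best keyword overlap; the `legal_analyzer` and `market_news_analyzer` names mirror the `ToolMetadata` in the prompt, while the keyword scoring is a deliberately simple stand-in for the LLM-based selection a real `RouterQueryEngine` or function-calling agent would perform.

```python
# Illustrative sketch only: keyword scoring stands in for the LLM-based
# tool selection that LlamaIndex's RouterQueryEngine / agent would perform.
from dataclasses import dataclass, field


@dataclass
class SubQueryEngine:
    """Stand-in for a QueryEngineTool: a name, a description, a keyword set."""
    name: str
    description: str
    keywords: set = field(default_factory=set)

    def query(self, question: str) -> str:
        # A real engine would retrieve from its index; here we just echo.
        return f"[{self.name}] answer to: {question}"


def route(question: str, engines: list) -> SubQueryEngine:
    """Pick the engine whose keywords overlap the question most."""
    words = set(question.lower().split())
    return max(engines, key=lambda e: len(e.keywords & words))


legal = SubQueryEngine(
    name="legal_analyzer",
    description="Summaries and context from legal documents and filings.",
    keywords={"lawsuit", "legal", "filing", "court"},
)
news = SubQueryEngine(
    name="market_news_analyzer",
    description="Insights and sentiment from market news articles.",
    keywords={"market", "news", "sentiment", "reactions"},
)

# Decompose the high-level query into sub-queries, then route each one.
sub_queries = [
    "Summarize the key legal points of the Musk vs. OpenAI lawsuit",
    "Summarize market reactions to the lawsuit in recent news",
]
for sq in sub_queries:
    engine = route(sq, [legal, news])
    print(engine.name, "->", engine.query(sq))
```

In a real adaptation, `route` is replaced by the LLM selector (e.g. a `RouterQueryEngine` or a function-calling agent over the two `QueryEngineTool`s), but the decompose-then-dispatch shape stays the same.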
Keep the source stable, then change the prompt in a predictable order so the next run is easier to evaluate.
Keep stable
Hold the task contract and output shape stable so generated implementations remain comparable.
Tune next
Update libraries, interfaces, and environment assumptions to match the stack you actually run.
Verify after
Test failure handling, edge cases, and any code paths that depend on hidden context or secrets.
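One concrete verification hook for that last step (function and names hypothetical): a pre-flight check that fails fast on a missing API-key secret or duplicate tool names before any agent run touches hidden context.

```python
import os


def preflight(tool_names, api_key_env="OPENAI_API_KEY"):
    """Return a list of problems to fix before running the agent (empty = OK)."""
    problems = []
    if not os.environ.get(api_key_env):
        problems.append(f"missing env var: {api_key_env}")
    if len(set(tool_names)) != len(tool_names):
        problems.append("duplicate tool names")
    return problems


# e.g. preflight(["legal_analyzer", "market_news_analyzer"])
# returns [] when the key is set, or a list of problems otherwise.
```

Running this before each evaluation keeps environment drift (renamed tools, rotated or absent secrets) from masquerading as a prompt regression.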