
Initialize LlamaIndex Agent Team for Trend Analysis

Inspect the original prompt language first, then copy or adapt it once you know how it fits your workflow.

Linked challenge: Developer Sentiment & AI Trend Analysis Agent

Format: Code-aware
Lines: 42
Sections: 1

Prompt source

Original prompt text with formatting preserved for inspection.

42 lines · 1 section · no variables · 1 code block
Set up a LlamaIndex `AgentRunner` with two main agents: a 'Trend Analyst' and a 'Sentiment Processor'. The 'Trend Analyst' will use a `QueryEngineTool` connected to an index built from a collection of tech news articles and forum posts. The 'Sentiment Processor' will use a custom tool to interact with Hume AI for emotional analysis of voice transcripts. Both agents should leverage Claude 4 Sonnet as their underlying LLM for reasoning and text generation.
```python
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
from llama_index.core.agent import ReActAgent
from llama_index.core.tools import FunctionTool, QueryEngineTool, ToolMetadata
from llama_index.llms.anthropic import Anthropic
import os

# Initialize the Claude 4 Sonnet LLM.
# Ensure ANTHROPIC_API_KEY is set in your environment, and confirm the model
# id against Anthropic's current model list before running.
claude_sonnet_llm = Anthropic(
    model="claude-sonnet-4-0",
    api_key=os.getenv("ANTHROPIC_API_KEY"),
)

# 1. Trend Analyst setup: index tech news and forum posts.
# 'data/tech_news' and 'data/forum_posts' are assumed to contain relevant
# documents. In a real scenario, use LlamaIndex data connectors; indexing
# also uses the default embedding model unless you configure another.
news_docs = SimpleDirectoryReader(input_dir="./data/tech_news").load_data()
forum_docs = SimpleDirectoryReader(input_dir="./data/forum_posts").load_data()
index = VectorStoreIndex.from_documents(news_docs + forum_docs)
query_engine = index.as_query_engine(llm=claude_sonnet_llm)

trend_search_tool = QueryEngineTool(
    query_engine=query_engine,
    metadata=ToolMetadata(
        name="trend_search_engine",
        description="Searches indexed tech news and developer forums for AI trends and topics.",
    ),
)

# 2. Sentiment Processor setup: Hume AI tool (placeholder implementation).
def hume_ai_sentiment_analysis(text_or_audio_path: str) -> str:
    """Analyzes text or audio (via path) for emotions and sentiment using Hume AI."""
    # A real implementation would call the Hume AI API, securely
    # authenticated via Aembit. For now, return mock data.
    if "thrilled" in text_or_audio_path or "exciting" in text_or_audio_path:
        return '{"sentiment": "positive", "emotions": [{"emotion": "joy", "score": 0.9}]}'
    return '{"sentiment": "neutral", "emotions": []}'

hume_sentiment_tool = FunctionTool.from_defaults(
    fn=hume_ai_sentiment_analysis,
    name="hume_ai_sentiment_analyzer",
    description=(
        "Analyzes input text or audio transcript for sentiment and emotional "
        "cues using Hume AI. Input can be a string of text or a path to an "
        "audio file."
    ),
)

# Initialize the ReAct agent with both tools.
agent = ReActAgent.from_tools(
    [trend_search_tool, hume_sentiment_tool],
    llm=claude_sonnet_llm,
    verbose=True,
)

# Example interaction:
response = agent.chat(
    "Analyze the latest AI trends in inference hardware and gather developer "
    "sentiment regarding recent announcements."
)
print(response)
```
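The mock Hume AI tool above returns its result as a JSON string. A downstream step would typically parse that payload before acting on it; the helper below is a minimal sketch of that parsing using only the standard library (`summarize_sentiment` is a hypothetical name introduced here, not part of the prompt):

```python
import json

def summarize_sentiment(raw: str) -> str:
    """Turn a Hume-style JSON payload into a one-line summary."""
    # `raw` is assumed to match the mock tool's output shape:
    # {"sentiment": ..., "emotions": [{"emotion": ..., "score": ...}, ...]}
    payload = json.loads(raw)
    emotions = ", ".join(
        f"{e['emotion']} ({e['score']:.1f})" for e in payload["emotions"]
    )
    return payload["sentiment"] + (f": {emotions}" if emotions else "")

print(summarize_sentiment(
    '{"sentiment": "positive", "emotions": [{"emotion": "joy", "score": 0.9}]}'
))  # → positive: joy (0.9)
```

A real Hume AI response carries more structure than this mock, so treat the keys here as placeholders for whatever schema your actual tool returns.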

Adaptation plan

Keep the source stable, then change the prompt in a predictable order so the next run is easier to evaluate.

Keep stable

Preserve the role framing, objective, and reporting structure so comparison runs stay coherent.

Tune next

Swap in your own domain constraints, anomaly thresholds, and examples before you branch variants.
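If you introduce anomaly thresholds, it helps to keep them as explicit parameters rather than burying them in prose. A minimal sketch, assuming the emotion payload shape from the prompt's mock Hume tool (`flag_strong_emotions` and its default threshold are illustrative, not part of the prompt):

```python
def flag_strong_emotions(emotions, threshold=0.8):
    """Keep only emotions whose score meets or exceeds the tuning threshold."""
    # `threshold` is the knob to swap per domain before branching variants.
    return [e for e in emotions if e["score"] >= threshold]

emotions = [{"emotion": "joy", "score": 0.9}, {"emotion": "doubt", "score": 0.4}]
print(flag_strong_emotions(emotions))  # → [{'emotion': 'joy', 'score': 0.9}]
```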

Verify after

Check whether the prompt asks for the right evidence, confidence signal, and escalation path.
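One way to make that check mechanical is to assert that each agent report carries the fields you expect. A minimal sketch, assuming responses are coerced into a dict; the `evidence`, `confidence`, and `escalation` keys are hypothetical names for this illustration, not part of the prompt:

```python
REQUIRED_FIELDS = ("evidence", "confidence", "escalation")

def missing_fields(report: dict) -> list:
    """Return which required reporting fields the report failed to include."""
    # A field counts as missing if absent, None, or an empty string.
    return [f for f in REQUIRED_FIELDS if report.get(f) in (None, "")]

report = {"evidence": ["forum post"], "confidence": 0.7, "escalation": "none"}
print(missing_fields(report))  # → []
```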