Operator-ready prompt for reuse, tuning, and workspace runs.
This item is set up for developers who want to inspect the original language, fork it into Workspace, and adapt the evidence model without losing the source prompt structure.
Best suited to implementation handoffs, eval setup, and prompt tuning where you need the original structure intact.
Inspect first, copy once, then fork into Workspace when you want variants, notes, and model settings attached to the same run.
Swap domain facts, examples, and any hard-coded entities for your own context.
Tighten the evidence or verification requirement if this is headed toward production.
Decide which failure mode you want to evaluate first before you branch the prompt.
This prompt already carries implementation detail, tool context, and a final-output instruction. Keep that structure intact when you tune it, or your comparison runs get noisy fast.
Open this prompt inside Workspace when you want a live iteration loop.
Copy for quick reuse, or run it in Workspace to keep prompt variants, model settings, and prompt-history changes in one place.
Structured source with 32 active lines to adapt.
Already linked to a challenge workflow.
Prompt content
Original prompt text with formatting preserved for inspection and clean copy.
Using `LangGraph` within `LangChain`, define an initial graph with at least three nodes: `researcher`, `analyst`, and `synthesizer`. The `researcher` should use a web search tool (e.g., `SerpAPI`) to gather information. The `analyst` should process this information. The `synthesizer` should compile a report. Define the state for your graph and the edges between these nodes, considering how information flows sequentially.
```python
from typing import List, Annotated, TypedDict
from langchain_core.messages import BaseMessage
from langgraph.graph import StateGraph, END


class AgentState(TypedDict):
    research_query: str
    raw_search_results: List[str]
    analyzed_data: str
    final_report: str


def researcher_node(state: AgentState):
    # Simulate SerpAPI call
    print(f'Researcher is searching for: {state["research_query"]}')
    return {'raw_search_results': ['Search Result 1', 'Search Result 2']}


def analyst_node(state: AgentState):
    print('Analyst is processing search results...')
    # Simulate analysis with Claude Opus 4.1
    return {'analyzed_data': 'Processed insights from results.'}


def synthesizer_node(state: AgentState):
    print('Synthesizer is generating report...')
    # Simulate report generation with Claude Opus 4.1
    return {'final_report': 'Comprehensive Report.'}


workflow = StateGraph(AgentState)
workflow.add_node('researcher', researcher_node)
workflow.add_node('analyst', analyst_node)
workflow.add_node('synthesizer', synthesizer_node)
workflow.set_entry_point('researcher')
workflow.add_edge('researcher', 'analyst')
workflow.add_edge('analyst', 'synthesizer')
workflow.add_edge('synthesizer', END)
app = workflow.compile()
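
# Example invocation (illustrative, not part of the original prompt): seed the
# state with just the research query; each node fills in its own keys as the
# graph runs.
result = app.invoke({'research_query': 'robotics foundation models'})
print(result['final_report'])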
```

Adaptation plan
Keep the source stable, then branch your edits in a predictable order so the next prompt run is easier to evaluate.
Preserve the role framing, objective, and reporting structure so comparison runs stay coherent.
Swap in your own domain constraints, anomaly thresholds, and examples before you branch variants.
Check whether the prompt asks for the right evidence, confidence signal, and escalation path.
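If you do tighten that last check, the sketch below shows one way to carry an evidence and escalation signal in the graph state. It is not part of the original prompt: the `confidence`, `final_report`, and `escalation_note` fields, the reviewer logic, and the 0.7 threshold are all hypothetical placeholders for your own verification step.

```python
from typing import TypedDict

from langgraph.graph import StateGraph, END


class ReviewedState(TypedDict):
    # Hypothetical fields; rename to match your own evidence model.
    analyzed_data: str
    confidence: float
    final_report: str
    escalation_note: str


def reviewer_node(state: ReviewedState):
    # Placeholder scoring; swap in your real verification logic.
    return {'confidence': 0.9 if state['analyzed_data'] else 0.2}


def route_on_confidence(state: ReviewedState) -> str:
    # Send low-confidence runs to an escalation branch instead of the report.
    return 'synthesize' if state['confidence'] >= 0.7 else 'escalate'


def synthesizer_node(state: ReviewedState):
    return {'final_report': f"Report based on: {state['analyzed_data']}"}


def escalate_node(state: ReviewedState):
    return {'escalation_note': 'Confidence below threshold; needs human review.'}


workflow = StateGraph(ReviewedState)
workflow.add_node('reviewer', reviewer_node)
workflow.add_node('synthesizer', synthesizer_node)
workflow.add_node('escalate', escalate_node)
workflow.set_entry_point('reviewer')
workflow.add_conditional_edges(
    'reviewer',
    route_on_confidence,
    {'synthesize': 'synthesizer', 'escalate': 'escalate'},
)
workflow.add_edge('synthesizer', END)
workflow.add_edge('escalate', END)
app = workflow.compile()
```

Because the routing function only inspects state, the same graph compiles whether the reviewer is a model call or a deterministic check, which keeps comparison runs between variants stable.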
Copy once for a pristine source snapshot, then move the prompt into Workspace when you want variants, run history, and side-by-side tuning without losing the original.
Prompt diagnostics
Quick signals for how structured this prompt already is and where adaptation work is likely to happen first.
This prompt already mixes executable detail with instructions, so the safest path is to tune examples and interfaces before you rewrite the overall scaffold.
Robotics & Biotech Research Navigator Agent
Inspired by the advancements in robotics foundation models and the push for AI in traditional Chinese medicine, this challenge focuses on building a sophisticated multi-agent research system. Your task is to design and implement an autonomous research navigator that can ingest vast amounts of scientific literature (e.g., papers on robotics, biotechnology, or drug discovery), identify key trends, synthesize novel insights, and generate structured summaries or reports. Utilizing LangGraph, you will orchestrate a team of specialized agents, each with a distinct role—such as a 'Researcher' for information gathering, an 'Analyst' for data interpretation, and a 'Synthesizer' for report generation. The system should manage complex, stateful workflows, allowing agents to collaborate, iterate on findings, and dynamically adapt their research path based on intermediate results. This challenge emphasizes robust information retrieval, advanced reasoning, and structured output generation for scientific applications.
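As a first step toward that retrieval requirement, the researcher node's simulated call can be swapped for a real search tool. A minimal sketch, assuming the `SerpAPIWrapper` from `langchain_community` is installed and a `SERPAPI_API_KEY` environment variable is set; the state keys mirror the prompt's example and are not a required design:

```python
from typing import List, TypedDict

from langchain_community.utilities import SerpAPIWrapper


class AgentState(TypedDict):
    research_query: str
    raw_search_results: List[str]


def researcher_node(state: AgentState):
    # Real web search in place of the simulated call; requires the serpapi
    # package and a SERPAPI_API_KEY environment variable.
    search = SerpAPIWrapper()
    results = search.run(state['research_query'])
    # SerpAPIWrapper.run returns a single string summary, so wrap it to keep
    # the list-of-strings shape the rest of the graph expects.
    return {'raw_search_results': [results]}
```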
Use the challenge page to recover the original task boundaries before you tune the prompt. That keeps your variants grounded in the same evaluation target instead of drifting into a different problem.