Refine Reasoning for Hallucination Reduction

Testing Challenge

Prompt Content

Analyze the `reasoning_trace` generated by Llama 3.3 during initial simulation runs. Identify instances where the agent's reasoning seems to deviate from the explicit supply chain model or produces illogical actions (hallucinations). Refine your LangGraph state transitions, tool definitions, and Llama 3.3 prompts (e.g., by adding more constraints, few-shot examples, or explicit validation steps) to mitigate these issues and improve reasoning coherence and accuracy.
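As a minimal sketch of one mitigation the prompt asks for, an explicit validation step between the agent and tool execution, the LangGraph fragment below checks each proposed action against the supply chain model and routes failures back to the agent. The state fields, the `VALID_SKUS` and `VALID_WAREHOUSES` sets, and the node functions are illustrative assumptions, not part of any existing workspace or codebase.

```python
# Hypothetical sketch: a validation node that rejects hallucinated actions
# (unknown SKUs, unknown warehouses, non-positive quantities) before they
# reach tool execution, and loops invalid actions back to the agent.
from typing import List, Optional, TypedDict

from langgraph.graph import StateGraph, END

# Explicit supply chain model the agent must stay within (assumed contents).
VALID_SKUS = {"SKU-100", "SKU-200", "SKU-300"}
VALID_WAREHOUSES = {"WH-EAST", "WH-WEST"}


class AgentState(TypedDict):
    reasoning_trace: List[str]       # accumulated reasoning steps from the model
    proposed_action: Optional[dict]  # e.g. {"tool": ..., "sku": ..., "from": ..., "to": ..., "qty": ...}
    validation_errors: List[str]


def agent(state: AgentState) -> AgentState:
    # Stand-in for the Llama 3.3 call. In the real graph, validation_errors
    # would be folded back into the prompt so the model can correct itself.
    demo_action = {"tool": "transfer_stock", "sku": "SKU-100",
                   "from": "WH-EAST", "to": "WH-WEST", "qty": 25}
    trace = state.get("reasoning_trace", []) + ["proposed transfer of SKU-100"]
    return {**state, "proposed_action": demo_action, "reasoning_trace": trace}


def validate_action(state: AgentState) -> AgentState:
    # Check the proposed action against the explicit supply chain model.
    errors: List[str] = []
    action = state.get("proposed_action") or {}
    if action.get("sku") not in VALID_SKUS:
        errors.append(f"Unknown SKU: {action.get('sku')!r}")
    for key in ("from", "to"):
        if key in action and action[key] not in VALID_WAREHOUSES:
            errors.append(f"Unknown warehouse in {key!r}: {action[key]!r}")
    if action.get("qty", 0) <= 0:
        errors.append("Quantity must be positive")
    return {**state, "validation_errors": errors}


def route_after_validation(state: AgentState) -> str:
    # Invalid actions go back to the agent with error feedback; valid ones proceed.
    return "agent" if state["validation_errors"] else "execute_tool"


def execute_tool(state: AgentState) -> AgentState:
    # Stand-in for the real tool call that updates the simulated inventory.
    return state


graph = StateGraph(AgentState)
graph.add_node("agent", agent)
graph.add_node("validate", validate_action)
graph.add_node("execute_tool", execute_tool)
graph.set_entry_point("agent")
graph.add_edge("agent", "validate")
graph.add_conditional_edges("validate", route_after_validation,
                            {"agent": "agent", "execute_tool": "execute_tool"})
graph.add_edge("execute_tool", END)
app = graph.compile()

if __name__ == "__main__":
    result = app.invoke({"reasoning_trace": [], "proposed_action": None,
                         "validation_errors": []})
    print(result["reasoning_trace"], result["validation_errors"])
```

Routing validation failures back to the agent, with the errors appended to its prompt, turns the "explicit validation steps" mentioned above into a correction loop rather than a hard stop, which tends to keep the reasoning trace coherent without discarding the whole run.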


Usage Tips

Copy the prompt and paste it into your preferred AI tool (Claude, ChatGPT, Gemini)

Customize placeholder values with your specific requirements and context

For best results, provide clear examples and test different variations