Tags: implementation, testing

Langfuse Observability and Monitoring

Inspect the original prompt language first, then copy or adapt it once you know how it fits your workflow.

Linked challenge: Intelligent Hospitality Agent for Personalized Guest Services

Format: Text-first
Lines: 1
Sections: 1

Prompt source

Original prompt text with formatting preserved for inspection.

No variables
0 checklist items
Integrate Langfuse into your OpenAI Agents SDK project. Configure it to trace all agent interactions, including LLM calls, tool executions, and intermediate steps. Demonstrate how to view a complete trace of a multi-turn conversation in the Langfuse UI, highlighting agent decisions and data flow. Explain how this helps debug and improve agent performance and reliability.
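A minimal sketch of the wiring this prompt asks for, assuming Langfuse's documented OpenTelemetry (OTLP) export endpoint and its Basic-auth scheme built from the project's public and secret keys. The key values and host below are placeholders, not real credentials:

```python
# Sketch: point OpenTelemetry exports at Langfuse so agent traces land there.
# Assumes the LANGFUSE_* environment variables hold real project keys.
import base64
import os

LANGFUSE_HOST = os.environ.get("LANGFUSE_HOST", "https://cloud.langfuse.com")


def otlp_auth_header(public_key: str, secret_key: str) -> str:
    """Build the Basic-auth OTLP header value Langfuse expects:
    base64("public_key:secret_key")."""
    token = base64.b64encode(f"{public_key}:{secret_key}".encode()).decode()
    return f"Authorization=Basic {token}"


# Langfuse exposes an OTLP endpoint under /api/public/otel; standard OTel
# exporters pick these variables up automatically.
os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = f"{LANGFUSE_HOST}/api/public/otel"
os.environ["OTEL_EXPORTER_OTLP_HEADERS"] = otlp_auth_header(
    os.environ.get("LANGFUSE_PUBLIC_KEY", "pk-..."),  # placeholder
    os.environ.get("LANGFUSE_SECRET_KEY", "sk-..."),  # placeholder
)
```

With the exporter configured, an OpenTelemetry instrumentation for the Agents SDK (for example Pydantic Logfire's `logfire.instrument_openai_agents()`) emits one span per LLM call, tool execution, and handoff, so a multi-turn conversation shows up in the Langfuse UI as a single nested trace you can walk step by step.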

Adaptation plan

Keep the source stable, then change the prompt in a predictable order so the next run is easier to evaluate.

Keep stable

Preserve the source structure until you know which part of the prompt is actually driving the result quality.

Tune next

Change domain facts, examples, and tool context before you rewrite the instruction scaffold.

Verify after

Validate one failure mode at a time so each prompt change stays attributable to a specific result, rather than blurring together.