Operator-ready prompt for reuse, tuning, and Workspace runs.
This item is set up for developers who want to inspect the original language, fork it into Workspace, and adapt the evidence model without losing the source prompt structure.
Best suited to implementation handoffs, eval setup, and prompt tuning where you need the original structure intact.
Inspect first, copy once, then fork into Workspace when you want variants, notes, and model settings attached to the same run.
Swap domain facts, examples, and any hard-coded entities for your own context.
Tighten the evidence or verification requirement if this is headed toward production.
Decide which failure mode you want to evaluate first before you branch the prompt.
This prompt already carries implementation detail, tool context, and a final-output instruction. Keep that structure intact when you tune it, or your comparison runs get noisy fast.
Open this prompt inside Workspace when you want a live iteration loop.
Copy for quick reuse, or run it in Workspace to keep prompt variants, model settings, and prompt-history changes in one place.
Structured source with 23 active lines to adapt.
Already linked to a challenge workflow.
Prompt content
Original prompt text with formatting preserved for inspection and clean copy.
Write the Python code to integrate VAPI for voice input/output and ERNIE 4.0 as the primary generative model within your LangChain agent. This should include setting up VAPI for streaming audio and creating a custom LangChain LLM wrapper for ERNIE 4.0, or utilizing a pre-existing integration if available. Show how the transcribed text from VAPI is passed to the LangChain agent and how the agent's text response is converted back to speech. Include error handling for API calls.
```python
from langchain.agents import AgentExecutor, create_react_agent
from langchain.memory import ConversationBufferMemory
from langchain_core.prompts import PromptTemplate
from langchain_core.tools import Tool

# Assuming VAPI and ERNIE 4.0 wrappers/SDKs are installed
# from vapi_sdk import VapiClient
# from ernie_llm import ErnieLLM

# ... (define custom tools like MusicSearchTool, PlaylistBuilderTool)
# llm = ErnieLLM(api_key="YOUR_ERNIE_API_KEY")
# vapi_client = VapiClient(api_key="YOUR_VAPI_API_KEY")
# tools = [MusicSearchTool(), PlaylistBuilderTool()]
# prompt = PromptTemplate(...)
# agent = create_react_agent(llm, tools, prompt)
# agent_executor = AgentExecutor(
#     agent=agent, tools=tools, verbose=True, memory=ConversationBufferMemory()
# )

# def handle_voice_input(audio_data):
#     try:
#         # Use VAPI to transcribe streaming audio to text
#         transcribed_text = vapi_client.transcribe(audio_data)
#         # Run the agent on the transcription
#         response_text = agent_executor.invoke({"input": transcribed_text})["output"]
#         # Use VAPI to synthesize the agent's reply back to speech
#         vapi_client.synthesize_speech(response_text)
#         return response_text
#     except Exception as exc:
#         # Error handling for VAPI / agent API calls, as the prompt requires
#         raise RuntimeError(f"Voice pipeline failed: {exc}") from exc
```
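The prompt asks for a custom LangChain LLM wrapper for ERNIE 4.0 with error handling, which the skeleton above leaves behind a commented import. A minimal sketch of one way to fill that in, using LangChain's `LLM` base class (subclass it and implement `_call` and `_llm_type`); the endpoint URL, payload shape, and `result` response field are placeholder assumptions, not Baidu's actual API:

```python
from typing import Any, List, Optional

import requests
from langchain_core.language_models.llms import LLM


class ErnieLLM(LLM):
    """Sketch of a custom LangChain wrapper for a hypothetical ERNIE 4.0 HTTP endpoint."""

    api_key: str
    endpoint: str = "https://example.invalid/ernie/v4/chat"  # placeholder URL

    @property
    def _llm_type(self) -> str:
        return "ernie-4.0"

    def _call(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        **kwargs: Any,
    ) -> str:
        try:
            resp = requests.post(
                self.endpoint,
                headers={"Authorization": f"Bearer {self.api_key}"},
                json={"prompt": prompt, "stop": stop},  # assumed payload shape
                timeout=30,
            )
            resp.raise_for_status()
            return resp.json()["result"]  # assumed response field
        except requests.RequestException as exc:
            # Error handling for the API call, as the prompt requires
            raise RuntimeError(f"ERNIE 4.0 request failed: {exc}") from exc
```

Because subclassing `LLM` is LangChain's documented extension point, this wrapper drops in where the skeleton's `from ernie_llm import ErnieLLM` import would otherwise resolve.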
Adaptation plan
Keep the source stable, then branch your edits in a predictable order so the next prompt run is easier to evaluate.
Hold the task contract and output shape stable so generated implementations remain comparable.
Update libraries, interfaces, and environment assumptions to match the stack you actually run.
Test failure handling, edge cases, and any code paths that depend on hidden context or secrets (see the retry sketch below).
Copy once for a pristine source snapshot, then move the prompt into Workspace when you want variants, run history, and side-by-side tuning without losing the original.
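For the failure-handling pass, wrapping each external call in a retry helper gives you one consistent place to inject faults and observe behavior. A minimal sketch in plain Python; `vapi_client.transcribe` in the usage comment is a hypothetical SDK call standing in for whatever your stack actually exposes:

```python
import time
from typing import Any, Callable


def call_with_retries(
    fn: Callable[..., Any],
    *args: Any,
    attempts: int = 3,
    base_delay: float = 1.0,
    **kwargs: Any,
) -> Any:
    """Retry a flaky API call with exponential backoff before giving up."""
    for attempt in range(1, attempts + 1):
        try:
            return fn(*args, **kwargs)
        except Exception as exc:
            if attempt == attempts:
                raise RuntimeError(f"Call failed after {attempts} attempts") from exc
            time.sleep(base_delay * 2 ** (attempt - 1))


# Usage (hypothetical SDK call):
# text = call_with_retries(vapi_client.transcribe, audio_data, attempts=5)
```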
Prompt diagnostics
Quick signals for how structured this prompt already is and where adaptation work is likely to happen first.
This prompt already mixes executable detail with instructions, so the safest path is to tune examples and interfaces before you rewrite the overall scaffold.
Voice-Activated Dynamic Playlist Generator
Develop a cutting-edge voice-activated AI agent that generates dynamic, personalized music playlists based on user prompts, mood, and past listening habits. The agent should leverage advanced generative AI capabilities to create unique playlist narratives and adapt in real-time. Emphasize fairness in recommendations and seamless deployment. This challenge involves building a sophisticated LangChain application that integrates a voice interface and a powerful large language model for creative content generation and robust evaluation for ethical AI practices. Focus on designing an extensible system capable of handling complex user interactions and evolving content preferences.

The system should process natural language voice inputs, interpret nuanced requests, and curate playlists. This requires not just matching keywords but understanding the emotional tone and contextual needs of the user to deliver truly personalized musical experiences. The solution should also demonstrate how to monitor and mitigate potential biases in AI-generated recommendations, ensuring a diverse and equitable output.
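The bias-monitoring requirement in the final sentence is easiest to make concrete with a quantitative check on each generated playlist. A minimal sketch, assuming genre labels are available per track; the 0.5 threshold is an illustrative assumption to tune against your own catalog:

```python
import math
from collections import Counter
from typing import List


def diversity_score(genres: List[str]) -> float:
    """Normalized Shannon entropy of genre counts: 0 = single genre, 1 = even spread."""
    counts = Counter(genres)
    total = sum(counts.values())
    if len(counts) < 2:
        return 0.0
    entropy = -sum((c / total) * math.log(c / total) for c in counts.values())
    return entropy / math.log(len(counts))


# Flag playlists that collapse onto a narrow slice of the catalog
playlist_genres = ["pop", "pop", "indie", "jazz", "pop"]
if diversity_score(playlist_genres) < 0.5:  # threshold is an assumption
    print("Playlist may be under-diverse; consider re-ranking.")
```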
Use the challenge page to recover the original task boundaries before you tune the prompt. That keeps your variants grounded in the same evaluation target instead of drifting into a different problem.