planning

Initialize CrewAI Agents and Tools

Inspect the original prompt language first, then copy or adapt it once you know how it fits your workflow.

Linked challenge: Agents for Prompt-Driven Brand Sentiment & Affinity

Format
Code-aware
Lines
6
Sections
1

Prompt source

Original prompt text with formatting preserved for inspection.

No variables
1 code block
Begin by setting up your CrewAI environment. Define three core agents: a 'PromptMonitor' to ingest and filter raw prompts, a 'BrandAnalyzer' to extract brands and determine sentiment using DeepSeek R1, and a 'ReportGenerator' to synthesize findings. Each agent should have distinct roles and goals. Configure tools for `ZapierInterfaces` to receive new prompts (simulate via webhook) and `KoreAIAssistant` for custom workflow automation (e.g., triggering alerts). Integrate DeepSeek R1 via its API for the BrandAnalyzer's core reasoning. Use the following snippet to start your agent definitions:

```python
from crewai import Agent, Task, Crew, Process
from langchain_community.llms import DeepSeekLLM  # Example integration

class CustomZapierTool:
    # Placeholder for actual Zapier integration
    # ... methods for Zapier interactions
    pass

class CustomKoreAITool:
    # Placeholder for actual Kore.ai interactions
    # ... methods for Kore.ai interactions
    pass

deepseek_llm = DeepSeekLLM(model='DeepSeek R1', api_key='YOUR_DEEPSEEK_API_KEY')

prompt_monitor = Agent(
    role='Prompt Monitor',
    goal='Ingest and pre-process raw AI-generated prompts.',
    backstory='Expert in data ingestion and filtering, ensuring only relevant prompts are passed for analysis.',
    tools=[CustomZapierTool()],
    llm=deepseek_llm,
    verbose=True
)

# Define other agents and their tasks here...
```
Focus on defining the agents' roles, goals, and the initial set of tools they will use. Make sure the BrandAnalyzer explicitly uses DeepSeek R1 for sentiment analysis.

Adaptation plan

Keep the source stable, then change the prompt in a predictable order so the next run is easier to evaluate.

Keep stable

Preserve the role framing, objective, and reporting structure so comparison runs stay coherent.

Tune next

Swap in your own domain constraints, anomaly thresholds, and examples before you branch variants.
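One way to make that swap predictable is to isolate the tunable pieces in a template so each variant changes only named knobs. A minimal sketch, with assumed knob names and values:

```python
from string import Template

# Illustrative knobs to tune before branching variants; the names and
# thresholds here are assumptions, not part of the original prompt.
variant = {
    "domain": "consumer electronics",
    "sentiment_threshold": 0.6,   # below this score, label sentiment 'mixed'
    "alert_threshold": 0.85,      # above this negative score, trigger an alert
}

tune_block = Template(
    "Restrict analysis to the $domain domain. "
    "Treat sentiment scores below $sentiment_threshold as 'mixed'. "
    "Trigger a Kore.ai alert when negative sentiment exceeds $alert_threshold."
).substitute(variant)

print(tune_block)
```

Appending `tune_block` to the otherwise unchanged prompt keeps the stable framing intact, so differences between runs trace back to the knobs alone.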

Verify after

Check whether the prompt asks for the right evidence, confidence signal, and escalation path.
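That verification step can be partly mechanized if the prompt asks for structured output. A minimal sketch, assuming the ReportGenerator is asked to return JSON with `evidence`, `confidence`, and `escalation` fields (the field names and allowed values are assumptions for illustration):

```python
import json

REQUIRED_FIELDS = {"evidence", "confidence", "escalation"}

def verify_report(raw: str) -> list[str]:
    """Return a list of problems found in a report entry; empty means it passes."""
    problems = []
    try:
        entry = json.loads(raw)
    except json.JSONDecodeError:
        return ["output is not valid JSON"]
    missing = REQUIRED_FIELDS - entry.keys()
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")
    conf = entry.get("confidence")
    if not (isinstance(conf, (int, float)) and 0.0 <= conf <= 1.0):
        problems.append("confidence should be a number in [0, 1]")
    if entry.get("escalation") not in {"none", "notify", "urgent", None}:
        problems.append("escalation should be one of: none, notify, urgent")
    return problems

good = '{"evidence": ["prompt #12 praises Acme"], "confidence": 0.82, "escalation": "none"}'
bad = '{"confidence": 1.7}'
print(verify_report(good))  # []
```

Running a check like this on a few outputs after each variant quickly shows whether the prompt is actually eliciting the evidence, confidence signal, and escalation path you asked for.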