Back to Prompt Library
implementation

Implement Lakera Guard for AI Security

Inspect the original prompt language first, then copy or adapt it once you know how it fits your workflow.

Linked challenge: Google ADK Multi-Model Inference Routing with DeepSeek R1 for Cerebras/Trainium Optimization

Format
Code-aware
Lines
4
Sections
1
Linked challenge
Google ADK Multi-Model Inference Routing with DeepSeek R1 for Cerebras/Trainium Optimization

Prompt source

Original prompt text with formatting preserved for inspection.

4 lines
1 sections
No variables
1 code block
Integrate Lakera Guard into your Google ADK agent's input processing pipeline. Before any inference request is routed to DeepSeek R1 or the simulated Trainium, the Lakera API should scan the prompt for potential security threats like prompt injection, hate speech, or PII. If a threat is detected, the agent should block the request and log the incident. Provide the Python code for Lakera integration. ```python
import requests LAKERA_API_URL = "https://api.lakera.ai/v1/prompt_moderation"
LAKERA_API_KEY = "YOUR_LAKERA_API_KEY" def scan_prompt_with_lakera(prompt: str) -> dict: headers = {"Authorization": f"Bearer {LAKERA_API_KEY}"} payload = {"input": prompt} try: response = requests.post(LAKERA_API_URL, headers=headers, json=payload) response.raise_for_status() return response.json() except requests.exceptions.RequestException as e: print(f"Lakera API error: {e}") return {"flagged": False, "reason": "API_ERROR"} def secure_inference_router(prompt: str, user_id: str) -> str: scan_result = scan_prompt_with_lakera(prompt) if scan_result.get('flagged'): print(f"Prompt blocked by Lakera: {scan_result.get('reason')}") return "Your request was blocked due to security concerns." # If not flagged, proceed with existing routing logic # return agent_with_memory(prompt, user_id) # Call your routing agent return "Prompt allowed. Proceeding to inference."
```

Adaptation plan

Keep the source stable, then change the prompt in a predictable order so the next run is easier to evaluate.

Keep stable

Hold the task contract and output shape stable so generated implementations remain comparable.

Tune next

Update libraries, interfaces, and environment assumptions to match the stack you actually run.

Verify after

Test failure handling, edge cases, and any code paths that depend on hidden context or secrets.