
Integrate Letta for Persistent Memory and Great Expectations for Data Quality

Inspect the original prompt language first, then copy or adapt it once you know how it fits your workflow.

Linked challenge: Policy Impact Analysis Agent

Format: Code-aware
Lines: 7
Sections: 1

Prompt source

Original prompt text with formatting preserved for inspection.

7 lines · 1 section · no variables · 1 code block
Enhance your agent with persistent memory using Letta. When creating a new thread for the agent, pass a Letta session ID. Develop a 'validate_data' tool using Great Expectations that the agent can call to check the quality of input legislative texts or market data.

```python
# Assuming Letta is installed and configured
from letta import LettaSession

def validate_data_with_gx(data: str, expectation_suite_name: str) -> str:
    # Simulate Great Expectations validation
    if "critical_error" in data:  # Example check
        return f"Data validation failed for suite {expectation_suite_name}: Critical error detected."
    return f"Data validation passed for suite {expectation_suite_name}."

# Update assistant with new tool
# (You'd usually update via client.beta.assistants.update or define from scratch)
# Add a new function tool for validate_data_with_gx
# And ensure thread creation uses LettaSession to maintain context
```
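The prompt's trailing comments mention registering the validator as a function tool. A minimal sketch of what that registration payload could look like, assuming an OpenAI-style function-tool schema (the field names here follow that convention and are not taken from the prompt itself):

```python
# Hypothetical function-tool schema for exposing validate_data_with_gx to the
# agent; the "function" shape follows the OpenAI-style tool convention.
validate_data_tool = {
    "type": "function",
    "function": {
        "name": "validate_data_with_gx",
        "description": (
            "Validate input legislative texts or market data "
            "against a named Great Expectations suite."
        ),
        "parameters": {
            "type": "object",
            "properties": {
                "data": {"type": "string"},
                "expectation_suite_name": {"type": "string"},
            },
            "required": ["data", "expectation_suite_name"],
        },
    },
}
```

You would pass this dict in the assistant's `tools` list when creating or updating it; the exact client call depends on the SDK version you run.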

Adaptation plan

Keep the source stable, then change the prompt in a predictable order so the next run is easier to evaluate.

Keep stable

Hold the task contract and output shape stable so generated implementations remain comparable.

Tune next

Update libraries, interfaces, and environment assumptions to match the stack you actually run.
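One way to make that swap cheap is to put a seam between the tool's message contract and the validation backend. This sketch (the factory name and lambda backend are illustrative, not from the prompt) keeps the return strings stable while letting you replace the simulated check with a real Great Expectations call later:

```python
from typing import Callable

# Hypothetical seam: the tool's output contract stays fixed while the
# backend (simulated check vs. real data-quality library) is injectable.
def make_validate_tool(
    run_suite: Callable[[str, str], bool],
) -> Callable[[str, str], str]:
    def validate_data_with_gx(data: str, expectation_suite_name: str) -> str:
        if run_suite(data, expectation_suite_name):
            return f"Data validation passed for suite {expectation_suite_name}."
        return f"Data validation failed for suite {expectation_suite_name}."
    return validate_data_with_gx

# Simulated backend mirroring the prompt's example "critical_error" check
simulated = make_validate_tool(lambda data, _suite: "critical_error" not in data)
```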

Verify after

Test failure handling, edge cases, and any code paths that depend on hidden context or secrets.
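For the simulated validator in the prompt, that verification can start as bare assertions covering both message paths and an empty-input edge case; the function is redefined here so the snippet runs standalone:

```python
# Copy of the prompt's simulated validator, redefined for a standalone check
def validate_data_with_gx(data: str, expectation_suite_name: str) -> str:
    if "critical_error" in data:
        return f"Data validation failed for suite {expectation_suite_name}: Critical error detected."
    return f"Data validation passed for suite {expectation_suite_name}."

# Failure path: the sentinel substring triggers the failure message
assert "failed" in validate_data_with_gx("contains critical_error", "market_data_suite")
# Happy path, plus the empty-string edge case, both pass the simulated check
assert "passed" in validate_data_with_gx("clean input", "market_data_suite")
assert "passed" in validate_data_with_gx("", "market_data_suite")
```

Once the real Great Expectations backend replaces the simulation, the same assertions should hold against fixture data that you know violates or satisfies the suite.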