
Integrate Gemini 2.5 Flash and LIME

Inspect the original prompt language first, then copy or adapt it once you know how it fits your workflow.

Linked challenge: Multi-Agent Ad Policy Auditor

Format: Code-aware
Lines: 17
Sections: 5

Prompt source

Original prompt text with formatting preserved for inspection.

17 lines · 5 sections · No variables · 1 code block
Implement the `UserTopicAnalyzer` agent to call Gemini 2.5 Flash for extracting topics from simulated user data. Subsequently, integrate LIME to explain the `AdStrategist`'s targeting decisions and the `PrivacyAuditor`'s flagging decisions. This might involve creating a wrapper function or a custom tool for the agents to invoke LIME.

```python
# Example of how UserTopicAnalyzer might use Gemini.
# Note: autogen.Completion is the legacy (pre-0.2) AutoGen API; on newer
# versions, point the agent's llm_config at a Gemini model instead.
def analyze_topics(user_data: str) -> str:
    response = autogen.Completion.create(context=user_data, **llm_config)
    return autogen.Completion.extract_text(response)[0]

# Integrate this function into UserTopicAnalyzer's capabilities or tools.

# Consider a custom tool for LIME. In AutoGen 0.2+, tools are registered
# with autogen.register_function(...) or via the agent methods
# assistant.register_for_llm(...) / user_proxy.register_for_execution(...).
# from lime.lime_text import LimeTextExplainer

# def explain_decision(text_input: str, model_prediction_function, class_names: list) -> str:
#     explainer = LimeTextExplainer(class_names=class_names)
#     explanation = explainer.explain_instance(
#         text_input, model_prediction_function, num_features=6
#     )
#     # LIME explanations expose (feature, weight) pairs via as_list();
#     # there is no plain-text method on the Explanation object.
#     return "\n".join(f"{feat}: {weight:+.3f}" for feat, weight in explanation.as_list())
# # ... then register this tool with the relevant agents
```
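LIME's `Explanation` object reports its output as `(feature, weight)` pairs via `as_list()`. A small formatting helper can turn those pairs into text the agents can quote in a chat turn; the function and its `decision_label` parameter below are hypothetical names, not part of LIME or AutoGen:

```python
def format_lime_explanation(pairs, decision_label):
    """Render LIME (feature, weight) pairs as a readable report.

    `pairs` is the shape of data Explanation.as_list() returns;
    `decision_label` names the decision being explained.
    """
    lines = [f"Top features for decision '{decision_label}':"]
    # Sort by absolute weight so the most influential features come first.
    for feature, weight in sorted(pairs, key=lambda p: abs(p[1]), reverse=True):
        direction = "supports" if weight >= 0 else "opposes"
        lines.append(f"  {feature}: {weight:+.3f} ({direction})")
    return "\n".join(lines)

report = format_lime_explanation(
    [("gambling", 0.42), ("sports", -0.10)], "flag_ad"
)
```

Returning a string (rather than the raw `Explanation` object) keeps the tool's output directly usable as an agent message.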

Adaptation plan

Keep the source stable, then change the prompt in a predictable order so the next run is easier to evaluate.

Keep stable

Hold the task contract and output shape stable so generated implementations remain comparable.

Tune next

Update libraries, interfaces, and environment assumptions to match the stack you actually run.
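As a starting point for the library swap, a minimal `llm_config` for AutoGen's Gemini support might look like the sketch below. The `"api_type": "google"` key follows AutoGen's Gemini convention, but the exact keys and model string should be verified against the AutoGen version you actually run:

```python
import os

# Sketch of an AutoGen llm_config pointing at Gemini 2.5 Flash.
# Assumes AutoGen's Gemini integration ("api_type": "google");
# verify the exact keys against your installed AutoGen version.
llm_config = {
    "config_list": [
        {
            "model": "gemini-2.5-flash",
            "api_key": os.environ.get("GOOGLE_API_KEY", ""),
            "api_type": "google",
        }
    ],
    "temperature": 0.2,
}
```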

Verify after

Test failure handling, edge cases, and any code paths that depend on hidden context or secrets.
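One concrete way to exercise failure handling is to wrap the topic-analysis call with retries and a fallback. `safe_analyze_topics` below is a hypothetical wrapper written for this document, not an AutoGen API:

```python
def safe_analyze_topics(user_data, analyze_fn, retries=2, fallback="unknown"):
    """Call analyze_fn with retries; return fallback on empty input,
    empty output, or persistent errors. (Hypothetical wrapper.)"""
    if not user_data or not user_data.strip():
        return fallback
    for attempt in range(retries + 1):
        try:
            result = analyze_fn(user_data)
        except Exception:
            continue  # transient failure: retry until attempts run out
        if result and result.strip():
            return result.strip()
    return fallback

# Simulate a model call that times out once, then succeeds.
calls = {"n": 0}
def flaky(_):
    calls["n"] += 1
    if calls["n"] == 1:
        raise TimeoutError("simulated API timeout")
    return " sports, finance "
```

Driving the wrapper with a deterministic fake like `flaky` lets you assert on retry behavior without spending API quota or depending on hidden keys.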