implementation
Implement Code Analysis Tool
Inspect the original prompt language first, then copy or adapt it once you know how it fits your workflow.
Linked challenge: AI Code Audit & Optimization Agent
Format: Code-aware · Lines: 18 · Sections: 4
Prompt source
Original prompt text with formatting preserved for inspection.
No variables · 1 code block
Implement a Python function, `analyze_python_code(code: str) -> dict`, that simulates a code analysis tool. This function should take a Python code string and return a dictionary of potential issues (e.g., syntax errors, common linting problems, simple security patterns). This will be registered as a `tool_function` with the OpenAI Assistants API. Ensure it can be called by your agents.
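One way to start the simulated analysis is a syntax check via the standard-library `ast` module. The sketch below is illustrative only: the issue categories (`syntax`, `lint`) and description strings are assumptions, not part of the prompt's spec.

```python
import ast

def analyze_python_code(code: str) -> dict:
    """Simulated analysis: flag syntax errors and bare excepts."""
    findings = []
    try:
        tree = ast.parse(code)
    except SyntaxError as exc:
        findings.append({
            'type': 'syntax',
            'description': f'Syntax error at line {exc.lineno}: {exc.msg}',
        })
        return {'issues': findings}
    for node in ast.walk(tree):
        # A bare `except:` swallows all exceptions, a common lint finding
        if isinstance(node, ast.ExceptHandler) and node.type is None:
            findings.append({
                'type': 'lint',
                'description': f'Bare except clause at line {node.lineno}',
            })
    return {'issues': findings}
```

AST parsing keeps the checks structural (no false positives from strings or comments), at the cost of bailing out entirely when the code does not parse.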
```python
import openai

# Placeholder for your code analysis tool function
def analyze_python_code(code: str) -> dict:
    # Implement your simulated analysis logic here,
    # e.g. using a simple regex or AST parsing for basic checks
    findings = []
    if "print(f'User password:" in code:
        findings.append({'type': 'security', 'description': 'Potential logging of sensitive data.'})
    # Add more analysis logic
    return {'issues': findings}

# Later, when defining the assistant:
# assistant = client.beta.assistants.create(
#     tools=[{"type": "function", "function": {"name": "analyze_python_code", ...}}],
#     ...
# )
```
Adaptation plan
Keep the source prompt stable, then change it in a predictable order so each new run stays easy to evaluate.
Keep stable
Hold the task contract and output shape stable so generated implementations remain comparable.
Tune next
Update libraries, interfaces, and environment assumptions to match the stack you actually run.
Verify after
Test failure handling, edge cases, and any code paths that depend on hidden context or secrets.
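As part of that verification, the tool's function schema can be checked before any API call. The snippet below only builds the tool-definition dict in the OpenAI function-calling format, so it can be unit-tested offline; the `description` strings are assumptions added for illustration.

```python
def build_analysis_tool() -> dict:
    # Tool definition in the OpenAI function-calling schema format;
    # constructed locally so it can be asserted on without an API key.
    return {
        "type": "function",
        "function": {
            "name": "analyze_python_code",
            "description": "Analyze a Python code string and return potential issues.",
            "parameters": {
                "type": "object",
                "properties": {
                    "code": {
                        "type": "string",
                        "description": "Python source to analyze.",
                    }
                },
                "required": ["code"],
            },
        },
    }
```

In use, the dict would be passed as `tools=[build_analysis_tool()]` to `client.beta.assistants.create(...)`, matching the commented-out registration in the prompt's code block.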