
Implementing Threat Detection Tool Integration

Inspect the original prompt language first, then copy or adapt it once you know how it fits your workflow.

Linked challenge: Cyberthreat Orchestrator Agent

Format: Code-aware
Lines: 21
Sections: 7

Prompt source

Original prompt text with formatting preserved for inspection.

Variables: none
Code blocks: 1
Develop a custom LangChain tool called 'SecurityScanner' that simulates scanning a system for vulnerabilities based on a given log entry. This tool should take a 'log_entry' and 'system_id' as input and return a dictionary indicating 'vulnerabilities_found' (boolean) and a list of 'potential_threat_types'. Provide the Python code for this tool and demonstrate how to integrate it into a LangGraph agent responsible for 'Threat Detection'.

```python
from langchain.tools import BaseTool
from pydantic import BaseModel, Field
from typing import Type

class SecurityScannerInput(BaseModel):
    log_entry: str = Field(description="The security log entry to scan.")
    system_id: str = Field(description="The ID of the system from which the log originated.")

class SecurityScannerTool(BaseTool):
    # Annotations are required here: BaseTool is a Pydantic model, and
    # un-annotated class attributes are rejected under Pydantic v2.
    name: str = "security_scanner"
    description: str = "Scans a system for vulnerabilities based on log entries."
    args_schema: Type[BaseModel] = SecurityScannerInput

    def _run(self, log_entry: str, system_id: str) -> dict:
        # ... simulate scanning logic here ...
        if "failed login" in log_entry.lower():
            return {"vulnerabilities_found": True, "potential_threat_types": ["Brute-Force"]}
        return {"vulnerabilities_found": False, "potential_threat_types": []}

    async def _arun(self, log_entry: str, system_id: str) -> dict:
        # The simulated scan is synchronous, so delegate to _run rather
        # than failing when the tool is called from an async agent.
        return self._run(log_entry, system_id)

# Integration with LangGraph agent ...
```
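The `# Integration with LangGraph agent ...` stub above is left open in the original prompt. As a library-free sketch of what that integration's control flow might look like, the snippet below mirrors the tool's `_run` logic and routes on its result; the node and state names here are hypothetical, not part of the prompt:

```python
# Library-free sketch of the Threat Detection routing an agent could apply.
# The scan logic mirrors SecurityScannerTool._run; node names are invented.

def security_scan(log_entry: str, system_id: str) -> dict:
    """Simulated scan, matching the tool's _run behaviour."""
    if "failed login" in log_entry.lower():
        return {"vulnerabilities_found": True,
                "potential_threat_types": ["Brute-Force"]}
    return {"vulnerabilities_found": False, "potential_threat_types": []}

def threat_detection_node(state: dict) -> dict:
    """Graph node: run the scan and record the result in shared state."""
    result = security_scan(state["log_entry"], state["system_id"])
    return {**state, "scan_result": result}

def route_after_scan(state: dict) -> str:
    """Conditional edge: escalate when vulnerabilities were found."""
    if state["scan_result"]["vulnerabilities_found"]:
        return "escalate"
    return "log_and_close"

state = threat_detection_node(
    {"log_entry": "Repeated failed login from 10.0.0.5", "system_id": "srv-01"})
print(route_after_scan(state))  # → escalate
```

In a real LangGraph build, `threat_detection_node` and `route_after_scan` would be registered as a node and a conditional edge on a state graph; the plain functions above capture the same decision without pinning the sketch to a specific library version.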

Adaptation plan

Keep the source stable, then change the prompt in a predictable order so the next run is easier to evaluate.

Keep stable

Hold the task contract and output shape stable so generated implementations remain comparable.
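One lightweight way to hold that output shape stable across runs is a contract check on the returned dictionary. A sketch, assuming the contract stated in the prompt; the helper name is ours, not part of the prompt:

```python
# Hedged sketch: verify a generated scanner keeps the agreed output shape,
# i.e. a bool 'vulnerabilities_found' and a list 'potential_threat_types'.

def check_scan_contract(result: dict) -> bool:
    """Return True only if result matches the prompt's output contract."""
    return (isinstance(result.get("vulnerabilities_found"), bool)
            and isinstance(result.get("potential_threat_types"), list))

print(check_scan_contract(
    {"vulnerabilities_found": True,
     "potential_threat_types": ["Brute-Force"]}))       # → True
print(check_scan_contract({"vulnerabilities_found": "yes"}))  # → False
```

Running this check against each regenerated implementation makes it immediate to see whether a prompt tweak silently broke the comparison baseline.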

Tune next

Update libraries, interfaces, and environment assumptions to match the stack you actually run.

Verify after

Test failure handling, edge cases, and any code paths that depend on hidden context or secrets.
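For this verification step, a few edge-case assertions against the simulated scan logic (reproduced inline so the sketch stays self-contained) make regressions visible between runs:

```python
# Edge-case checks for the simulated scanner; logic copied from the tool's _run.

def security_scan(log_entry: str, system_id: str) -> dict:
    if "failed login" in log_entry.lower():
        return {"vulnerabilities_found": True,
                "potential_threat_types": ["Brute-Force"]}
    return {"vulnerabilities_found": False, "potential_threat_types": []}

# Case-insensitivity: the substring match must survive mixed case.
assert security_scan("FAILED LOGIN for root", "srv-01")["vulnerabilities_found"]

# Empty input: an empty log entry should report nothing, not raise.
assert security_scan("", "srv-01") == {"vulnerabilities_found": False,
                                       "potential_threat_types": []}

# Benign entry: no false positive on routine traffic.
assert not security_scan("user logged in successfully",
                         "srv-02")["vulnerabilities_found"]

print("all edge cases pass")  # → all edge cases pass
```

Cases that depend on hidden context, such as real log formats or credentials, cannot be covered this way and still need a run against your actual environment.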