Implement Code Generation Workflow
Inspect the original prompt language first, then copy or adapt it once you know how it fits your workflow.
Linked challenge: Accelerated Code Dev & Review Agent
Format: Code-aware · 23 lines · 3 sections

Prompt source
Original prompt text with formatting preserved for inspection. No variables; 1 code block.
Extend your Mastra AI project. Implement a workflow where the 'CodeGenerator' agent receives a feature description (e.g., 'a function to sort a list of dictionaries by a key'). It should then use Claude Sonnet 4 to generate the Python function code and corresponding unit tests. Implement a custom tool that allows the agent to 'save_file(filename, content)' to a mock file system. The agent should invoke this tool twice: once for the function, once for the tests. Ensure it follows PEP8 where possible.
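The mock file system the prompt asks for can be backed by a plain in-memory map. A minimal sketch in TypeScript, independent of Mastra (the class and method names here are illustrative, not library APIs):

```typescript
// Hypothetical in-memory mock file system backing the save_file tool.
// Nothing touches disk; files live in a Map for the lifetime of the run.
class MockFileSystem {
  private files = new Map<string, string>();

  saveFile(filename: string, content: string): string {
    this.files.set(filename, content);
    return `File ${filename} saved successfully.`;
  }

  readFile(filename: string): string | undefined {
    return this.files.get(filename);
  }

  listFiles(): string[] {
    return [...this.files.keys()];
  }
}

// Simulated agent behavior: one save for the function, one for its tests.
const mockFs = new MockFileSystem();
mockFs.saveFile('sort_dicts.py', 'def sort_dicts(items, key): ...');
mockFs.saveFile('test_sort_dicts.py', 'def test_sort_dicts(): ...');
```

The tool's `execute` handler can then delegate to `mockFs.saveFile`, which keeps the workflow deterministic and easy to inspect after a run.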
```typescript
// Example of a custom tool (sketch; the exact createTool options and
// import path can differ between Mastra versions, so check your installed API)
import { createTool } from '@mastra/core/tools';

const saveFileTool = createTool({
  id: 'save_file',
  description: 'Saves content to a specified file.',
  schema: {
    type: 'object',
    properties: {
      filename: { type: 'string' },
      content: { type: 'string' },
    },
    required: ['filename', 'content'],
  },
  async execute({ filename, content }) {
    console.log(`Saving ${filename}...`);
    // Simulate the file save, e.g., write to a local temp dir or an in-memory map
    return `File ${filename} saved successfully.`;
  },
});

codeGenerator.addTool(saveFileTool);
// ... Workflow to use the tool
```

Adaptation plan
Keep the source stable, then change the prompt in a predictable order so the next run is easier to evaluate.
Keep stable
Hold the task contract and output shape stable so generated implementations remain comparable.
Tune next
Update libraries, interfaces, and environment assumptions to match the stack you actually run.
Verify after
Test failure handling, edge cases, and any code paths that depend on hidden context or secrets.
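The "verify after" step can be made concrete by recording each `save_file` call and asserting on the log once the run finishes. A minimal sketch, assuming a hypothetical call-recording shape (none of these names are Mastra APIs):

```typescript
// Hypothetical call log for verifying tool usage after a run.
type SaveCall = { filename: string; content: string };

const calls: SaveCall[] = [];

// Stand-in for the tool's execute handler, instrumented to record each call.
function recordSave(filename: string, content: string): string {
  calls.push({ filename, content });
  return `File ${filename} saved successfully.`;
}

// Simulated agent behavior: one call for the function, one for its tests.
recordSave('sort_dicts.py', 'def sort_dicts(items, key):\n    ...');
recordSave('test_sort_dicts.py', 'def test_sort_dicts():\n    ...');

// Verify the contract: exactly two saves, one implementation file and one
// test file (here distinguished by a test_ filename prefix, an assumption).
function verifyRun(log: SaveCall[]): boolean {
  const names = log.map((c) => c.filename);
  return (
    log.length === 2 &&
    names.some((n) => n.startsWith('test_')) &&
    names.some((n) => !n.startsWith('test_'))
  );
}
```

Checks like this stay valid across prompt revisions as long as the task contract (two files, one tool) is held stable, which is what makes successive runs comparable.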