Orchestrate Self-Improvement with Dagster and DeepEval
Inspect the original prompt language first, then copy or adapt it once you know how it fits your workflow.
Linked challenge: Self-Improving GPT-5.3-Codex Agent for Code Generation & Refinement
Prompt source
Original prompt text with formatting preserved for inspection.
Design a Dagster pipeline (`@job` and `@op` definitions) that orchestrates the iterative self-improvement loop of your agent. This pipeline should include steps for invoking the OpenAI agent to generate/refine code, executing tests via your custom tool, and then using DeepEval to analyze the test results and provide structured feedback to the agent for its next iteration. Show how DeepEval can assess aspects like code correctness and adherence to best practices. Provide Python code for the Dagster pipeline and DeepEval integration.
Adaptation plan
Keep the source prompt stable, then change it in a predictable order so each new run is easier to evaluate against the last.
Keep stable
Hold the task contract and output shape stable so generated implementations remain comparable.
Tune next
Update libraries, interfaces, and environment assumptions to match the stack you actually run.
Verify after
Test failure handling, edge cases, and any code paths that depend on hidden context or secrets.