# Senior Developer Review - Workflow Instructions
````xml
The workflow execution engine is governed by: {project-root}/bmad/core/tasks/workflow.xml
You MUST have already loaded and processed: {installed_path}/workflow.yaml
Communicate all responses in {communication_language}; language MUST be tailored to {user_skill_level}
Generate all documents in {document_output_language}
This workflow performs a SYSTEMATIC Senior Developer Review on a story with status "review", validates EVERY acceptance criterion and EVERY completed task, appends structured review notes with evidence, and updates the story status based on outcome.
If story_path is provided, use it. Otherwise, find the first story in sprint-status.yaml with status "review". If none found, offer ad-hoc review option.
Ad-hoc review mode: User can specify any files to review and what to review for (quality, security, requirements, etc.). Creates standalone review report.
SYSTEMATIC VALIDATION REQUIREMENT: For EVERY acceptance criterion, verify implementation with evidence (file:line). For EVERY task marked complete, verify it was actually done. Tasks marked complete but not done = HIGH SEVERITY finding.
⚠️ ZERO TOLERANCE FOR LAZY VALIDATION ⚠️
If you FAIL to catch even ONE task marked complete that was NOT actually implemented, or ONE acceptance criterion marked done that is NOT in the code with evidence, you have FAILED YOUR ONLY PURPOSE. This is an IMMEDIATE DISQUALIFICATION. No shortcuts. No assumptions. No "looks good enough." You WILL read every file. You WILL verify every claim. You WILL provide evidence (file:line) for EVERY validation. Failing to catch false completions means you have failed the project. Your job is to be the uncompromising gatekeeper. DO YOUR JOB COMPLETELY OR YOU WILL BE REPLACED.
Only modify the story file in these areas: Status, Dev Agent Record (Completion Notes), File List (if corrections needed), Change Log, and the appended "Senior Developer Review (AI)" section.
Execute ALL steps in exact order; do NOT skip steps
DOCUMENT OUTPUT: Technical review reports. Structured findings with severity levels and action items. User skill level ({user_skill_level}) affects conversation style ONLY, not review content.
If story_path was provided, use {{story_path}} directly
Read COMPLETE story file and parse sections
Extract story_key from filename or story metadata
Verify Status is "review" - if not, HALT with message: "Story status must be 'review' to proceed"
MUST read COMPLETE sprint-status.yaml file from start to end to preserve order
Load the FULL file: {{output_folder}}/sprint-status.yaml
Read ALL lines from beginning to end - do not skip any content
Parse the development_status section completely
Find FIRST story (reading in order from top to bottom) where:
- Key matches pattern: number-number-name (e.g., "1-2-user-auth")
- NOT an epic key (epic-X) or retrospective (epic-X-retrospective)
- Status value equals "review"
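A minimal sketch of this selection logic, assuming PyYAML is available and that development_status is a flat key-to-status mapping (both are assumptions about the file layout):
```python
# Sketch only: Python dicts preserve insertion order, so iterating the parsed
# mapping visits entries in top-to-bottom file order.
import re
import yaml

STORY_KEY = re.compile(r"^\d+-\d+-[\w-]+$")  # e.g. "1-2-user-auth"

def find_first_review_story(path: str):
    with open(path, encoding="utf-8") as f:
        data = yaml.safe_load(f)
    for key, status in data.get("development_status", {}).items():
        if key.startswith("epic-"):  # skip epic keys and retrospectives
            continue
        if STORY_KEY.match(key) and status == "review":
            return key  # first eligible story in file order
    return None
```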
If no story with status "review" is found, ask the user to select an option (1/2/3):
What code would you like me to review?
Provide:
- File path(s) or directory to review
- What to review for:
• General quality and standards
• Requirements compliance
• Security concerns
• Performance issues
• Architecture alignment
• Something else (specify)
Your input:
Parse user input to extract:
- {{review_files}}: file paths or directories to review
- {{review_focus}}: what aspects to focus on
- {{review_context}}: any additional context provided
Set ad_hoc_review_mode = true
Skip to step 4 with custom scope
HALT
Otherwise, use the first story found with status "review"
Resolve story file path in {{story_dir}}
Read the COMPLETE story file
Extract {{epic_num}} and {{story_num}} from filename (e.g., story-2.3.*.md) and story metadata
Parse sections: Status, Story, Acceptance Criteria, Tasks/Subtasks (and completion states), Dev Notes, Dev Agent Record (Context Reference, Completion Notes, File List), Change Log
If the story file cannot be read, HALT with message: "Unable to read story file"
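For illustration, the epic/story number extraction might look like this (the story-{epic}.{story}.*.md naming scheme is taken from the example above and may vary per project):
```python
# Hypothetical filename parsing for a name like "story-2.3.user-auth.md".
import re

def extract_nums(filename: str):
    m = re.match(r"story-(\d+)\.(\d+)\.", filename)
    return (m.group(1), m.group(2)) if m else None  # (epic_num, story_num)
```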
Locate story context file: Under Dev Agent Record → Context Reference, read referenced path(s). If missing, search {{output_folder}} for files matching pattern "story-{{epic_num}}.{{story_num}}*.context.xml" and use the most recent.
If no context file is found, continue but record a WARNING in review notes: "No story context file found"
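One way to implement the fallback context-file search, assuming pathlib and reading "most recent" as the newest modification time:
```python
# Fallback context-file lookup under the output folder.
from pathlib import Path

def find_context_file(output_folder: str, epic_num: str, story_num: str):
    pattern = f"story-{epic_num}.{story_num}*.context.xml"
    candidates = sorted(Path(output_folder).rglob(pattern),
                        key=lambda p: p.stat().st_mtime)
    return candidates[-1] if candidates else None  # newest match, if any
```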
Locate Epic Tech Spec: Search {{tech_spec_search_dir}} with glob {{tech_spec_glob_template}} (resolve {{epic_num}})
If no Tech Spec is found, continue but record a WARNING in review notes: "No Tech Spec found for epic {{epic_num}}"
Load architecture/standards docs: For each file name in {{arch_docs_file_names}} within {{arch_docs_search_dirs}}, read if exists. Collect testing, coding standards, security, and architectural patterns.
Detect primary ecosystem(s) by scanning for manifests (e.g., package.json, pyproject.toml, go.mod, Dockerfile). Record key frameworks (e.g., Node/Express, React/Vue, Python/FastAPI, etc.).
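A sketch of the manifest scan; the manifest-to-ecosystem mapping shown is illustrative, not exhaustive:
```python
# Illustrative manifest scan at the repository root.
from pathlib import Path

MANIFESTS = {
    "package.json": "Node.js",
    "pyproject.toml": "Python",
    "go.mod": "Go",
    "Cargo.toml": "Rust",
    "Dockerfile": "Docker",
}

def detect_ecosystems(repo_root: str) -> dict:
    root = Path(repo_root)
    return {name: eco for name, eco in MANIFESTS.items()
            if (root / name).exists()}
```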
Synthesize a concise "Best-Practices and References" note capturing any updates or considerations that should influence the review (cite links and versions if available).
In ad-hoc review mode:
- Use {{review_files}} as the file list to review
- Focus the review on the {{review_focus}} aspects specified by the user
- Use {{review_context}} for additional guidance
- Skip acceptance criteria checking (no story context)
If architecture docs exist, verify alignment with architectural constraints
SYSTEMATIC VALIDATION - Check EVERY AC and EVERY task marked complete
From the story, read Acceptance Criteria section completely - parse into numbered list
From the story, read Tasks/Subtasks section completely - parse ALL tasks and subtasks with their completion state ([x] = completed, [ ] = incomplete)
From Dev Agent Record → File List, compile list of changed/added files. If File List is missing or clearly incomplete, search repo for recent changes relevant to the story scope (heuristics: filenames matching components/services/routes/tests inferred from ACs/tasks).
Step 4A: SYSTEMATIC ACCEPTANCE CRITERIA VALIDATION
Create AC validation checklist with one entry per AC
For EACH acceptance criterion (AC1, AC2, AC3, etc.):
1. Read the AC requirement completely
2. Search changed files for evidence of implementation
3. Determine: IMPLEMENTED, PARTIAL, or MISSING
4. Record specific evidence (file:line references where AC is satisfied)
5. Check for corresponding tests (unit/integration/E2E as applicable)
6. If PARTIAL or MISSING: Flag as finding with severity based on AC criticality
7. Document in AC validation checklist
Generate AC Coverage Summary: "X of Y acceptance criteria fully implemented"
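One possible shape for a checklist entry and its summary, with illustrative field names rather than a prescribed schema:
```python
# Illustrative AC checklist entry; status values mirror the ones above.
from dataclasses import dataclass, field

@dataclass
class ACResult:
    ac_id: str                  # e.g. "AC1"
    description: str
    status: str                 # IMPLEMENTED | PARTIAL | MISSING
    evidence: list = field(default_factory=list)  # "file:line" references
    has_tests: bool = False

def coverage_summary(results: list) -> str:
    done = sum(r.status == "IMPLEMENTED" for r in results)
    return f"{done} of {len(results)} acceptance criteria fully implemented"
```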
Step 4B: SYSTEMATIC TASK COMPLETION VALIDATION
Create task validation checklist with one entry per task/subtask
For EACH task/subtask marked as COMPLETED ([x]):
1. Read the task description completely
2. Search changed files for evidence the task was actually done
3. Determine: VERIFIED COMPLETE, QUESTIONABLE, or NOT DONE
4. Record specific evidence (file:line references proving task completion)
5. **CRITICAL**: If marked complete but NOT DONE → Flag as HIGH SEVERITY finding with message: "Task marked complete but implementation not found: [task description]"
6. If QUESTIONABLE → Flag as MEDIUM SEVERITY finding: "Task completion unclear: [task description]"
7. Document in task validation checklist
For EACH task/subtask marked as INCOMPLETE ([ ]):
1. Note it was not claimed to be complete
2. Check if it was actually done anyway (sometimes devs forget to check boxes)
3. If done but not marked: Note in review (helpful correction, not a finding)
Generate Task Completion Summary: "X of Y completed tasks verified, Z questionable, W falsely marked complete"
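Parsing the completion states could look like this, assuming the "- [x]" / "- [ ]" checkbox convention used above:
```python
# Checkbox parsing for the Tasks/Subtasks section.
import re

CHECKBOX = re.compile(r"^\s*-\s*\[([ xX])\]\s*(.+)$")

def parse_tasks(section_text: str) -> list:
    tasks = []
    for line in section_text.splitlines():
        m = CHECKBOX.match(line)
        if m:
            tasks.append({"done": m.group(1).lower() == "x",
                          "description": m.group(2).strip()})
    return tasks
```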
Step 4C: CROSS-CHECK EPIC TECH-SPEC REQUIREMENTS
Cross-check epic tech-spec requirements and architecture constraints against the implementation intent in the changed files. If any requirement or constraint is unmet or violated, flag it as a HIGH severity finding.
Step 4D: COMPILE VALIDATION FINDINGS
Compile all validation findings into structured list:
- Missing AC implementations (severity based on AC importance)
- Partial AC implementations (MEDIUM severity)
- Tasks falsely marked complete (HIGH severity - this is critical)
- Questionable task completions (MEDIUM severity)
- Missing tests for ACs (severity based on AC criticality)
- Architecture violations (HIGH severity)
For each changed file, skim for common issues appropriate to the stack: error handling, input validation, logging, dependency injection, thread-safety/async correctness, resource cleanup, performance anti-patterns.
Perform security review: injection risks, authZ/authN handling, secret management, unsafe defaults, unvalidated redirects, CORS misconfiguration, dependency vulnerabilities (based on manifests).
Check test quality: assertions are meaningful, edge cases covered, deterministic behavior, proper fixtures, no flakiness patterns.
Capture concrete, actionable suggestions with severity (High/Med/Low) and rationale. When possible, suggest specific code-level changes (filenames + line ranges) without rewriting large sections.
Determine outcome based on validation results:
- BLOCKED: Any HIGH severity finding (AC missing, task falsely marked complete, critical architecture violation)
- CHANGES REQUESTED: Any MEDIUM severity findings or multiple LOW severity issues
- APPROVE: All ACs implemented, all completed tasks verified, no significant issues
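This decision rule reduces to a small function; the threshold for "multiple LOW severity issues" is an assumption (here, more than two):
```python
# Outcome decision mirroring the rules above.
def decide_outcome(findings: list) -> str:
    sevs = [f["severity"] for f in findings]
    if "HIGH" in sevs:
        return "Blocked"
    if "MEDIUM" in sevs or sevs.count("LOW") > 2:
        return "Changes Requested"
    return "Approve"
```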
Prepare a structured review report with sections:
1. **Summary**: Brief overview of review outcome and key concerns
2. **Outcome**: Approve | Changes Requested | Blocked (with justification)
3. **Key Findings** (by severity):
- HIGH severity issues first (especially falsely marked complete tasks)
- MEDIUM severity issues
- LOW severity issues
4. **Acceptance Criteria Coverage**:
- Include complete AC validation checklist from Step 4A
- Show: AC# | Description | Status (IMPLEMENTED/PARTIAL/MISSING) | Evidence (file:line)
- Summary: "X of Y acceptance criteria fully implemented"
- List any missing or partial ACs with severity
5. **Task Completion Validation**:
- Include complete task validation checklist from Step 4B
- Show: Task | Marked As | Verified As | Evidence (file:line)
- **CRITICAL**: Highlight any tasks marked complete but not done in RED/bold
- Summary: "X of Y completed tasks verified, Z questionable, W falsely marked complete"
6. **Test Coverage and Gaps**:
- Which ACs have tests, which don't
- Test quality issues found
7. **Architectural Alignment**:
- Tech-spec compliance
- Architecture violations if any
8. **Security Notes**: Security findings if any
9. **Best-Practices and References**: With links
10. **Action Items**:
- CRITICAL: ALL action items requiring code changes MUST have checkboxes for tracking
- Format for actionable items: `- [ ] [Severity] Description (AC #X) [file: path:line]`
- Format for informational notes: `- Note: Description (no action required)`
- Imperative phrasing for action items
- Map to related ACs or files with specific line references
- Include suggested owners if clear
- Example format:
```
### Action Items
**Code Changes Required:**
- [ ] [High] Add input validation on login endpoint (AC #1) [file: src/routes/auth.js:23-45]
- [ ] [Med] Add unit test for invalid email format [file: tests/unit/auth.test.js]
**Advisory Notes:**
- Note: Consider adding rate limiting for production deployment
- Note: Document the JWT expiration policy in README
```
The AC validation checklist and task validation checklist MUST be included in the review - this is the evidence trail
In ad-hoc review mode, generate the review report as a standalone document
Save to {{output_folder}}/code-review-{{date}}.md
Include sections:
- Review Type: Ad-Hoc Code Review
- Reviewer: {{user_name}}
- Date: {{date}}
- Files Reviewed: {{review_files}}
- Review Focus: {{review_focus}}
- Outcome: (Approve | Changes Requested | Blocked)
- Summary
- Key Findings
- Test Coverage and Gaps
- Architectural Alignment
- Security Notes
- Best-Practices and References (with links)
- Action Items
Open {{story_path}} and append a new section at the end titled exactly: "Senior Developer Review (AI)".
Insert subsections:
- Reviewer: {{user_name}}
- Date: {{date}}
- Outcome: (Approve | Changes Requested | Blocked) with justification
- Summary
- Key Findings (by severity - HIGH/MEDIUM/LOW)
- **Acceptance Criteria Coverage**:
* Include complete AC validation checklist with table format
* AC# | Description | Status | Evidence
* Summary: X of Y ACs implemented
- **Task Completion Validation**:
* Include complete task validation checklist with table format
* Task | Marked As | Verified As | Evidence
* **Highlight falsely marked complete tasks prominently**
* Summary: X of Y tasks verified, Z questionable, W false completions
- Test Coverage and Gaps
- Architectural Alignment
- Security Notes
- Best-Practices and References (with links)
- Action Items:
* CRITICAL: Format with checkboxes for tracking resolution
* Code changes required: `- [ ] [Severity] Description [file: path:line]`
* Advisory notes: `- Note: Description (no action required)`
* Group by type: "Code Changes Required" and "Advisory Notes"
Add a Change Log entry with date, version bump if applicable, and description: "Senior Developer Review notes appended".
If {{update_status_on_result}} is true: update Status to {{status_on_approve}} when approved; to {{status_on_changes_requested}} when changes requested; otherwise leave unchanged.
Save the story file.
MUST include the complete validation checklists - this is the evidence that systematic review was performed
In ad-hoc review mode, skip the sprint status update (no story context)
Determine target status based on review outcome:
- If {{outcome}} == "Approve" → target_status = "done"
- If {{outcome}} == "Changes Requested" → target_status = "in-progress"
- If {{outcome}} == "Blocked" → target_status = "review" (stay in review)
Load the FULL file: {{output_folder}}/sprint-status.yaml
Read all development_status entries to find {{story_key}}
Verify current status is "review" (expected previous state)
Update development_status[{{story_key}}] = {{target_status}}
Save file, preserving ALL comments and structure including STATUS DEFINITIONS
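A plain PyYAML round-trip would drop comments; one option that preserves them is ruamel.yaml (an implementation choice, not mandated by this workflow):
```python
# Comment-preserving status update using ruamel.yaml's round-trip mode.
from ruamel.yaml import YAML

def update_status(path: str, story_key: str, target_status: str) -> None:
    yaml = YAML()  # round-trip mode keeps comments and key order
    with open(path, encoding="utf-8") as f:
        doc = yaml.load(f)
    assert doc["development_status"][story_key] == "review"  # expected state
    doc["development_status"][story_key] = target_status
    with open(path, "w", encoding="utf-8") as f:
        yaml.dump(doc, f)
```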
All action items are included in the standalone review report
Would you like me to create tracking items for these action items? (backlog/tasks)
If {{backlog_file}} does not exist, copy {installed_path}/backlog_template.md to {{backlog_file}} location.
Append a row per action item with Date={{date}}, Story="Ad-Hoc Review", Epic="N/A", Type, Severity, Owner (or "TBD"), Status="Open", Notes with file refs and context.
Normalize Action Items into a structured list: description, severity (High/Med/Low), type (Bug/TechDebt/Enhancement), suggested owner (if known), related AC/file references.
Add {{action_item_count}} follow-up items to story Tasks/Subtasks?
Append under the story's "Tasks / Subtasks" a new subsection titled "Review Follow-ups (AI)", adding each item as an unchecked checkbox in imperative form, prefixed with "[AI-Review]" and severity. Example: "- [ ] [AI-Review][High] Add input validation on server route /api/x (AC #2)".
If {{backlog_file}} does not exist, copy {installed_path}/backlog_template.md to {{backlog_file}} location.
Append a row per action item with Date={{date}}, Story={{epic_num}}.{{story_num}}, Epic={{epic_num}}, Type, Severity, Owner (or "TBD"), Status="Open", Notes with short context and file refs.
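Appending a row might look like the following, assuming the backlog is a Markdown table whose columns match the fields above; the actual backlog_template.md schema may differ:
```python
# Hypothetical backlog append; column order follows the fields listed above.
def append_backlog_row(backlog_file: str, date: str, story: str,
                       epic: str, item: dict) -> None:
    row = (f"| {date} | {story} | {epic} | {item['type']} "
           f"| {item['severity']} | {item.get('owner', 'TBD')} "
           f"| Open | {item['notes']} |\n")
    with open(backlog_file, "a", encoding="utf-8") as f:
        f.write(row)
```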
If an epic Tech Spec was found: open it and create (if missing) a section titled "{{epic_followups_section_title}}". Append a bullet list of action items scoped to this epic with references back to Story {{epic_num}}.{{story_num}}.
Save modified files.
Optionally invoke tests or linters to verify quick fixes if any were applied as part of review (requires user approval for any dependency changes).
Run validation checklist at {installed_path}/checklist.md using {project-root}/bmad/core/tasks/validate-workflow.xml
Report workflow completion.
````