bmad initialization

2025-11-01 19:22:39 +08:00
parent 5b21dc0bd5
commit 426ae41f54
447 changed files with 80633 additions and 0 deletions

View File

@@ -0,0 +1,221 @@
# Phase 4: Implementation
## Overview
Phase 4 is where planning transitions into actual development. This phase manages the iterative implementation of stories defined in the epic files, tracking their progress through a well-defined status workflow.
## Status Definitions
### Epic Status
Epics progress through a simple two-state flow:
| Status | Description | Next Status |
| ------------- | ----------------------------------------------------------------------------------------------------------------------------------------------- | ----------- |
| **backlog** | Epic exists in epic file but technical context has not been created | contexted |
| **contexted** | Epic technical context has been created via `epic-tech-context` workflow. This is a prerequisite before any stories in the epic can be drafted. | - |
**File Indicators:**
- `backlog`: No `epic-{n}-context.md` file exists
- `contexted`: `{output_folder}/epic-{n}-context.md` file exists
### Story Status
Stories progress through a six-state flow representing their journey from idea to implementation:
| Status | Description | Set By | Next Status |
| ----------------- | ---------------------------------------------------------------------------------- | ------------- | ------------- |
| **backlog** | Story only exists in the epic file, no work has begun | Initial state | drafted |
| **drafted** | Story file has been created via `create-story` workflow | SM Agent | ready-for-dev |
| **ready-for-dev** | Story has been drafted, approved, and context created via `story-context` workflow | SM Agent | in-progress |
| **in-progress** | Developer is actively implementing the story | Dev Agent | review |
| **review** | Implementation complete, under SM review via `code-review` workflow | Dev Agent | done |
| **done** | Story has been reviewed and completed | Dev Agent | - |
**File Indicators:**
- `backlog`: No story file exists
- `drafted`: `{story_dir}/{story-key}.md` file exists (e.g., `1-1-user-auth.md`)
- `ready-for-dev`: Both story file and context exist (e.g., `1-1-user-auth-context.md`)
- `in-progress`, `review`, `done`: Manual status updates in sprint-status.yaml
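For illustration, here is a minimal sketch of how those file indicators could be checked programmatically. It assumes Python and hypothetical path/key variables; the real sprint-planning workflow defines its own detection logic.

```python
from pathlib import Path

def detect_story_status(story_dir: Path, story_key: str) -> str:
    """Infer the file-based portion of a story's status (hypothetical helper).

    Manual states (in-progress, review, done) live only in sprint-status.yaml,
    so file inspection can only distinguish the first three states.
    """
    story_file = story_dir / f"{story_key}.md"
    context_file = story_dir / f"{story_key}-context.md"

    if story_file.exists() and context_file.exists():
        return "ready-for-dev"  # story draft and context both exist
    if story_file.exists():
        return "drafted"        # story file created via create-story
    return "backlog"            # story only exists in the epic file
```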
### Retrospective Status
Optional retrospectives can be completed after an epic:
| Status | Description |
| ------------- | -------------------------------------------------- |
| **optional** | Retrospective can be completed but is not required |
| **completed** | Retrospective has been completed |
## Status Transitions
### Epic Flow
```mermaid
graph LR
backlog --> contexted[contexted via epic-tech-context]
```
### Story Flow
```mermaid
graph LR
backlog --> drafted[drafted via create-story]
drafted --> ready[ready-for-dev via story-context]
ready --> progress[in-progress - dev starts]
progress --> review[review via code-review]
review --> done[done - dev completes]
```
## Sprint Status Management
The `sprint-status.yaml` file is the single source of truth for tracking all work items. It contains:
- All epics with their current status
- All stories with their current status
- Retrospective placeholders for each epic
- Clear documentation of status definitions
### Example Sprint Status File
```yaml
development_status:
  epic-1: contexted
  1-1-project-foundation: done
  1-2-app-shell: done
  1-3-user-authentication: in-progress
  1-4-plant-data-model: ready-for-dev
  1-5-add-plant-manual: drafted
  1-6-photo-identification: backlog
  1-7-plant-naming: backlog
  epic-1-retrospective: optional
  epic-2: backlog
  2-1-personality-system: backlog
  2-2-chat-interface: backlog
  2-3-llm-integration: backlog
  epic-2-retrospective: optional
```
## Workflows in Phase 4
### Core Workflows
| Workflow | Purpose | Updates Status |
| --------------------- | -------------------------------------------------- | ----------------------------------- |
| **sprint-planning** | Generate/update sprint-status.yaml from epic files | Auto-detects file-based statuses |
| **epic-tech-context** | Create technical context for an epic | epic: backlog → contexted |
| **create-story** | Draft a story from epics/PRD | story: backlog → drafted |
| **story-context** | Add implementation context to story | story: drafted → ready-for-dev |
| **dev-story** | Developer implements the story | story: ready-for-dev → in-progress |
| **code-review** | SM reviews implementation | story: in-progress → review |
| **retrospective** | Conduct epic retrospective | retrospective: optional → completed |
| **correct-course** | Course correction when needed | Various status updates |
### Sprint Planning Workflow
The `sprint-planning` workflow is the foundation of Phase 4. It:
1. **Parses all epic files** (`epic*.md` or `epics.md`)
2. **Extracts all epics and stories** maintaining their order
3. **Auto-detects current status** based on existing files:
- Checks for epic context files
- Checks for story files
- Checks for story context files
4. **Generates sprint-status.yaml** with current reality
5. **Preserves manual status updates** (won't downgrade statuses)
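As a rough illustration of steps 3-5, the sketch below shows how file-detected statuses can be merged with existing manual statuses without downgrading them. Python and PyYAML are assumptions made purely for the example, not a claim about how the workflow is actually implemented, and only story statuses are handled here.

```python
import yaml  # PyYAML is assumed for this sketch
from pathlib import Path

# Ordered from least to most advanced; a more advanced status is never downgraded.
STATUS_ORDER = ["backlog", "drafted", "ready-for-dev", "in-progress", "review", "done"]

def merge_statuses(detected: dict[str, str], existing: dict[str, str]) -> dict[str, str]:
    """Keep the more advanced status per story key (illustrative helper)."""
    merged = {}
    for key, file_status in detected.items():
        manual_status = existing.get(key, "backlog")
        merged[key] = max(file_status, manual_status, key=STATUS_ORDER.index)
    return merged

def write_sprint_status(path: Path, statuses: dict[str, str]) -> None:
    """Regenerate sprint-status.yaml with the merged view of current reality."""
    path.write_text(yaml.safe_dump({"development_status": statuses}, sort_keys=False))
```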
Run this workflow:
- Initially after Phase 3 completion
- After creating epic contexts
- Periodically to sync file-based status
- To verify current project state
### Workflow Guidelines
1. **Epic Context First**: Epics should be contexted before drafting their stories
2. **Flexible Parallelism**: Multiple stories can be in-progress based on team capacity
3. **Sequential Default**: Stories within an epic are typically worked in order
4. **Learning Transfer**: SM drafts next story after previous is done, incorporating learnings
5. **Review Flow**: Dev moves to review, SM reviews, Dev moves to done
## Agent Responsibilities
### SM (Scrum Master) Agent
- Run `sprint-planning` to generate initial status
- Create epic contexts (`epic-tech-context`)
- Draft stories (`create-story`)
- Create story contexts (`story-context`)
- Review completed work (`code-review`)
- Update status in sprint-status.yaml
### Developer Agent
- Check sprint-status.yaml for `ready-for-dev` stories
- Update status to `in-progress` when starting
- Implement stories (`dev-story`)
- Move to `review` when complete
- Address review feedback
- Update to `done` after approval
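A minimal sketch of the lookup a Developer Agent performs when picking up work, assuming Python and PyYAML purely for illustration:

```python
import yaml
from pathlib import Path

def next_ready_story(sprint_status_path: Path) -> str | None:
    """Return the first story key whose status is ready-for-dev (hypothetical helper)."""
    data = yaml.safe_load(sprint_status_path.read_text())
    for key, status in data.get("development_status", {}).items():
        if key.startswith("epic-"):
            continue  # skip epic and retrospective entries; story keys look like "1-2-user-auth"
        if status == "ready-for-dev":
            return key
    return None
```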
### Test Architect
- Monitor stories entering `review` status
- Track epic progress
- Identify when retrospectives are needed
- Validate implementation quality
## Best Practices
1. **Always run sprint-planning first** to establish current state
2. **Update status immediately** as work progresses
3. **Check sprint-status.yaml** before starting any work
4. **Preserve learning** by drafting stories sequentially when possible
5. **Document decisions** in story and context files
## Naming Conventions
### Story File Naming
- Format: `{epic}-{story}-{kebab-title}.md`
- Example: `1-1-user-authentication.md`
- Avoids YAML float parsing issues (1.1 vs 1.10)
- Makes files self-descriptive
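The float issue is easy to reproduce. A small sketch, assuming Python and PyYAML for illustration only:

```python
import yaml

# If dotted IDs were used as sprint-status keys, YAML would parse them as floats,
# so 1.1 and 1.10 collide on the same key and the second entry silently wins:
print(yaml.safe_load("1.1: done\n1.10: drafted"))              # {1.1: 'drafted'}

# Kebab-case keys stay plain strings and remain distinct:
print(yaml.safe_load("1-1-auth: done\n1-10-search: drafted"))  # {'1-1-auth': 'done', '1-10-search': 'drafted'}
```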
### Git Branch Naming
- Format: `feat/{epic}-{story}-{kebab-title}`
- Example: `feat/1-1-user-authentication`
- Consistent with story file naming
- Clean for branch management
## File Structure
```
{output_folder}/
├── sprint-status.yaml # Sprint status tracking
├── epic*.md or epics.md # Epic definitions
├── epic-1-context.md # Epic technical contexts
├── epic-2-context.md
└── stories/
    ├── 1-1-user-authentication.md # Story drafts
    ├── 1-1-user-authentication-context.md # Story contexts
    ├── 1-2-account-management.md
    ├── 1-2-account-management-context.md
    └── ...
```
## Next Steps
After Phase 4 implementation, projects typically move to:
- Deployment and release
- User acceptance testing
- Production monitoring
- Maintenance and updates
The sprint-status.yaml file provides a complete audit trail of the development process and can be used for project reporting and retrospectives.

View File

@@ -0,0 +1,69 @@
# Review Story (Senior Developer Code Review)
Perform an AI-driven Senior Developer Code Review on a story flagged "Ready for Review". The workflow ingests the story, its Story Context, and the epic's Tech Spec, consults local docs, uses enabled MCP servers for up-to-date best practices (with web search fallback), and appends structured review notes to the story.
## What It Does
- Auto-discovers the target story or accepts an explicit `story_path`
- Verifies the story is in a reviewable state (e.g., Ready for Review/Review)
- Loads Story Context (from Dev Agent Record → Context Reference or auto-discovery)
- Locates the epic Tech Spec and relevant architecture/standards docs
- Uses MCP servers for best-practices and security references; falls back to web search
- Reviews implementation vs Acceptance Criteria, Tech Spec, and repo standards
- Evaluates code quality, security, and test coverage
- Appends a "Senior Developer Review (AI)" section to the story with findings and action items
- Optionally updates story Status based on outcome
## How to Invoke
- Tell the Dev Agent to perform a `*code-review` after loading the dev agent. This is a context-intensive operation, so do not run it in the same context as a just-completed dev session: clear the context, reload the dev agent, then run `*code-review`.
## Inputs and Variables
- `story_path` (optional): Explicit path to a story file
- `story_dir` (from config): the `dev_story_location` value in `{project-root}/bmad/bmm/config.yaml`
- `allow_status_values`: Defaults include `Ready for Review`, `Review`
- `auto_discover_context` (default: true)
- `auto_discover_tech_spec` (default: true)
- `tech_spec_glob_template`: `tech-spec-epic-{{epic_num}}*.md`
- `arch_docs_search_dirs`: Defaults to `docs/` and `outputs/`
- `enable_mcp_doc_search` (default: true)
- `enable_web_fallback` (default: true)
- `update_status_on_result` (default: false)
## Story Updates
- Appends a section titled: `Senior Developer Review (AI)` at the end
- Adds a Change Log entry: "Senior Developer Review notes appended"
- If enabled, updates `Status` based on outcome
## Persistence and Backlog
To ensure review findings become actionable work, the workflow can persist action items to multiple targets (configurable):
- Story tasks: Inserts unchecked items under `Tasks / Subtasks` in a "Review Follow-ups (AI)" subsection so `dev-story` can pick them up next.
- Story review section: Keeps a full list under "Senior Developer Review (AI) → Action Items".
- Backlog file: Appends normalized rows to `docs/backlog.md` (created if missing) for cross-cutting or longer-term improvements.
- Epic follow-ups: If an epic Tech Spec is found, appends to its `Post-Review Follow-ups` section.
Configure via `workflow.yaml` variables:
- `persist_action_items` (default: true)
- `persist_targets`: `story_tasks`, `story_review_section`, `backlog_file`, `epic_followups`
- `backlog_file` (default: `{project-root}/docs/backlog.md`)
- `update_epic_followups` (default: true)
- `epic_followups_section_title` (default: `Post-Review Follow-ups`)
Routing guidance:
- Put must-fix items needed to ship the story into the story's tasks.
- Put same-epic but non-blocking improvements into the epic Tech Spec follow-ups.
- Put cross-cutting, future, or refactor work into the backlog file.
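A plain-Python sketch of that routing rule; the return values mirror the `persist_targets` names above, and everything else is illustrative:

```python
def route_action_item(blocks_story: bool, same_epic: bool) -> str:
    """Map a review action item to a persistence target per the guidance above."""
    if blocks_story:
        return "story_tasks"      # must-fix to ship the story
    if same_epic:
        return "epic_followups"   # non-blocking improvement within the same epic's Tech Spec
    return "backlog_file"         # cross-cutting, future, or refactor work
```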
## Related Workflows
- `dev-story` — Implements tasks/subtasks and can act on review action items
- `story-context` — Generates Story Context for a single story
- `tech-spec` — Generates epic Tech Spec documents
_Part of the BMAD Method v6 — Implementation Phase_

View File

@@ -0,0 +1,12 @@
# Engineering Backlog
This backlog collects cross-cutting or future action items that emerge from reviews and planning.
Routing guidance:
- Use this file for non-urgent optimizations, refactors, or follow-ups that span multiple stories/epics.
- Must-fix items needed to ship a story belong in that story's `Tasks / Subtasks`.
- Same-epic improvements may also be captured under the epic Tech Spec `Post-Review Follow-ups` section.
| Date | Story | Epic | Type | Severity | Owner | Status | Notes |
| ---- | ----- | ---- | ---- | -------- | ----- | ------ | ----- |

View File

@@ -0,0 +1,22 @@
# Senior Developer Review - Validation Checklist
- [ ] Story file loaded from `{{story_path}}`
- [ ] Story Status verified as one of: {{allow_status_values}}
- [ ] Epic and Story IDs resolved ({{epic_num}}.{{story_num}})
- [ ] Story Context located or warning recorded
- [ ] Epic Tech Spec located or warning recorded
- [ ] Architecture/standards docs loaded (as available)
- [ ] Tech stack detected and documented
- [ ] MCP doc search performed (or web fallback) and references captured
- [ ] Acceptance Criteria cross-checked against implementation
- [ ] File List reviewed and validated for completeness
- [ ] Tests identified and mapped to ACs; gaps noted
- [ ] Code quality review performed on changed files
- [ ] Security review performed on changed files and dependencies
- [ ] Outcome decided (Approve/Changes Requested/Blocked)
- [ ] Review notes appended under "Senior Developer Review (AI)"
- [ ] Change Log updated with review entry
- [ ] Status updated according to settings (if enabled)
- [ ] Story saved successfully
_Reviewer: {{user_name}} on {{date}}_

View File

@@ -0,0 +1,391 @@
# Senior Developer Review - Workflow Instructions
````xml
<critical>The workflow execution engine is governed by: {project-root}/bmad/core/tasks/workflow.xml</critical>
<critical>You MUST have already loaded and processed: {installed_path}/workflow.yaml</critical>
<critical>Communicate all responses in {communication_language} and language MUST be tailored to {user_skill_level}</critical>
<critical>Generate all documents in {document_output_language}</critical>
<critical>This workflow performs a SYSTEMATIC Senior Developer Review on a story with status "review", validates EVERY acceptance criterion and EVERY completed task, appends structured review notes with evidence, and updates the story status based on outcome.</critical>
<critical>If story_path is provided, use it. Otherwise, find the first story in sprint-status.yaml with status "review". If none found, offer ad-hoc review option.</critical>
<critical>Ad-hoc review mode: User can specify any files to review and what to review for (quality, security, requirements, etc.). Creates standalone review report.</critical>
<critical>SYSTEMATIC VALIDATION REQUIREMENT: For EVERY acceptance criterion, verify implementation with evidence (file:line). For EVERY task marked complete, verify it was actually done. Tasks marked complete but not done = HIGH SEVERITY finding.</critical>
<critical>⚠️ ZERO TOLERANCE FOR LAZY VALIDATION ⚠️</critical>
<critical>If you FAIL to catch even ONE task marked complete that was NOT actually implemented, or ONE acceptance criterion marked done that is NOT in the code with evidence, you have FAILED YOUR ONLY PURPOSE. This is an IMMEDIATE DISQUALIFICATION. No shortcuts. No assumptions. No "looks good enough." You WILL read every file. You WILL verify every claim. You WILL provide evidence (file:line) for EVERY validation. Failure to catch false completions = you failed humanity and the project. Your job is to be the uncompromising gatekeeper. DO YOUR JOB COMPLETELY OR YOU WILL BE REPLACED.</critical>
<critical>Only modify the story file in these areas: Status, Dev Agent Record (Completion Notes), File List (if corrections needed), Change Log, and the appended "Senior Developer Review (AI)" section.</critical>
<critical>Execute ALL steps in exact order; do NOT skip steps</critical>
<critical>DOCUMENT OUTPUT: Technical review reports. Structured findings with severity levels and action items. User skill level ({user_skill_level}) affects conversation style ONLY, not review content.</critical>
<workflow>
<step n="1" goal="Find story ready for review" tag="sprint-status">
<check if="{{story_path}} is provided">
<action>Use {{story_path}} directly</action>
<action>Read COMPLETE story file and parse sections</action>
<action>Extract story_key from filename or story metadata</action>
<action>Verify Status is "review" - if not, HALT with message: "Story status must be 'review' to proceed"</action>
</check>
<check if="{{story_path}} is NOT provided">
<critical>MUST read COMPLETE sprint-status.yaml file from start to end to preserve order</critical>
<action>Load the FULL file: {{output_folder}}/sprint-status.yaml</action>
<action>Read ALL lines from beginning to end - do not skip any content</action>
<action>Parse the development_status section completely</action>
<action>Find FIRST story (reading in order from top to bottom) where:
- Key matches pattern: number-number-name (e.g., "1-2-user-auth")
- NOT an epic key (epic-X) or retrospective (epic-X-retrospective)
- Status value equals "review"
</action>
<check if="no story with status 'review' found">
<output>📋 No stories with status "review" found
**What would you like to do?**
1. Run `dev-story` to implement and mark a story ready for review
2. Check sprint-status.yaml for current story states
3. Tell me what code to review and what to review it for
</output>
<ask>Select an option (1/2/3):</ask>
<check if="option 3 selected">
<ask>What code would you like me to review?
Provide:
- File path(s) or directory to review
- What to review for:
• General quality and standards
• Requirements compliance
• Security concerns
• Performance issues
• Architecture alignment
• Something else (specify)
Your input:</ask>
<action>Parse user input to extract:
- {{review_files}}: file paths or directories to review
- {{review_focus}}: what aspects to focus on
- {{review_context}}: any additional context provided
</action>
<action>Set ad_hoc_review_mode = true</action>
<action>Skip to step 4 with custom scope</action>
</check>
<check if="option 1 or 2 or no option 3">
<action>HALT</action>
</check>
</check>
<action>Use the first story found with status "review"</action>
<action>Resolve story file path in {{story_dir}}</action>
<action>Read the COMPLETE story file</action>
</check>
<action>Extract {{epic_num}} and {{story_num}} from filename (e.g., story-2.3.*.md) and story metadata</action>
<action>Parse sections: Status, Story, Acceptance Criteria, Tasks/Subtasks (and completion states), Dev Notes, Dev Agent Record (Context Reference, Completion Notes, File List), Change Log</action>
<action if="story cannot be read">HALT with message: "Unable to read story file"</action>
</step>
<step n="2" goal="Resolve story context file and specification inputs">
<action>Locate story context file: Under Dev Agent Record → Context Reference, read referenced path(s). If missing, search {{output_folder}} for files matching pattern "story-{{epic_num}}.{{story_num}}*.context.xml" and use the most recent.</action>
<action if="no story context file found">Continue but record a WARNING in review notes: "No story context file found"</action>
<action>Locate Epic Tech Spec: Search {{tech_spec_search_dir}} with glob {{tech_spec_glob_template}} (resolve {{epic_num}})</action>
<action if="no tech spec found">Continue but record a WARNING in review notes: "No Tech Spec found for epic {{epic_num}}"</action>
<action>Load architecture/standards docs: For each file name in {{arch_docs_file_names}} within {{arch_docs_search_dirs}}, read if exists. Collect testing, coding standards, security, and architectural patterns.</action>
</step>
<step n="3" goal="Detect tech stack and establish best-practice reference set">
<action>Detect primary ecosystem(s) by scanning for manifests (e.g., package.json, pyproject.toml, go.mod, Dockerfile). Record key frameworks (e.g., Node/Express, React/Vue, Python/FastAPI, etc.).</action>
<action>Synthesize a concise "Best-Practices and References" note capturing any updates or considerations that should influence the review (cite links and versions if available).</action>
</step>
<step n="4" goal="Systematic validation of implementation against acceptance criteria and tasks">
<check if="ad_hoc_review_mode == true">
<action>Use {{review_files}} as the file list to review</action>
<action>Focus review on {{review_focus}} aspects specified by user</action>
<action>Use {{review_context}} for additional guidance</action>
<action>Skip acceptance criteria checking (no story context)</action>
<action>If architecture docs exist, verify alignment with architectural constraints</action>
</check>
<check if="ad_hoc_review_mode != true">
<critical>SYSTEMATIC VALIDATION - Check EVERY AC and EVERY task marked complete</critical>
<action>From the story, read Acceptance Criteria section completely - parse into numbered list</action>
<action>From the story, read Tasks/Subtasks section completely - parse ALL tasks and subtasks with their completion state ([x] = completed, [ ] = incomplete)</action>
<action>From Dev Agent Record → File List, compile list of changed/added files. If File List is missing or clearly incomplete, search repo for recent changes relevant to the story scope (heuristics: filenames matching components/services/routes/tests inferred from ACs/tasks).</action>
<critical>Step 4A: SYSTEMATIC ACCEPTANCE CRITERIA VALIDATION</critical>
<action>Create AC validation checklist with one entry per AC</action>
<action>For EACH acceptance criterion (AC1, AC2, AC3, etc.):
1. Read the AC requirement completely
2. Search changed files for evidence of implementation
3. Determine: IMPLEMENTED, PARTIAL, or MISSING
4. Record specific evidence (file:line references where AC is satisfied)
5. Check for corresponding tests (unit/integration/E2E as applicable)
6. If PARTIAL or MISSING: Flag as finding with severity based on AC criticality
7. Document in AC validation checklist
</action>
<action>Generate AC Coverage Summary: "X of Y acceptance criteria fully implemented"</action>
<critical>Step 4B: SYSTEMATIC TASK COMPLETION VALIDATION</critical>
<action>Create task validation checklist with one entry per task/subtask</action>
<action>For EACH task/subtask marked as COMPLETED ([x]):
1. Read the task description completely
2. Search changed files for evidence the task was actually done
3. Determine: VERIFIED COMPLETE, QUESTIONABLE, or NOT DONE
4. Record specific evidence (file:line references proving task completion)
5. **CRITICAL**: If marked complete but NOT DONE → Flag as HIGH SEVERITY finding with message: "Task marked complete but implementation not found: [task description]"
6. If QUESTIONABLE → Flag as MEDIUM SEVERITY finding: "Task completion unclear: [task description]"
7. Document in task validation checklist
</action>
<action>For EACH task/subtask marked as INCOMPLETE ([ ]):
1. Note it was not claimed to be complete
2. Check if it was actually done anyway (sometimes devs forget to check boxes)
3. If done but not marked: Note in review (helpful correction, not a finding)
</action>
<action>Generate Task Completion Summary: "X of Y completed tasks verified, Z questionable, W falsely marked complete"</action>
<critical>Step 4C: CROSS-CHECK EPIC TECH-SPEC REQUIREMENTS</critical>
<action>Cross-check epic tech-spec requirements and architecture constraints against the implementation intent in files.</action>
<action if="critical architecture constraints are violated (e.g., layering, dependency rules)">flag as High Severity finding.</action>
<critical>Step 4D: COMPILE VALIDATION FINDINGS</critical>
<action>Compile all validation findings into structured list:
- Missing AC implementations (severity based on AC importance)
- Partial AC implementations (MEDIUM severity)
- Tasks falsely marked complete (HIGH severity - this is critical)
- Questionable task completions (MEDIUM severity)
- Missing tests for ACs (severity based on AC criticality)
- Architecture violations (HIGH severity)
</action>
</check>
</step>
<step n="5" goal="Perform code quality and risk review">
<action>For each changed file, skim for common issues appropriate to the stack: error handling, input validation, logging, dependency injection, thread-safety/async correctness, resource cleanup, performance anti-patterns.</action>
<action>Perform security review: injection risks, authZ/authN handling, secret management, unsafe defaults, unvalidated redirects, CORS misconfiguration, dependency vulnerabilities (based on manifests).</action>
<action>Check test quality: assertions are meaningful, edge cases covered, deterministic behavior, proper fixtures, no flakiness patterns.</action>
<action>Capture concrete, actionable suggestions with severity (High/Med/Low) and rationale. When possible, suggest specific code-level changes (filenames + line ranges) without rewriting large sections.</action>
</step>
<step n="6" goal="Decide review outcome and prepare comprehensive notes">
<action>Determine outcome based on validation results:
- BLOCKED: Any HIGH severity finding (AC missing, task falsely marked complete, critical architecture violation)
- CHANGES REQUESTED: Any MEDIUM severity findings or multiple LOW severity issues
- APPROVE: All ACs implemented, all completed tasks verified, no significant issues
</action>
<action>Prepare a structured review report with sections:
1. **Summary**: Brief overview of review outcome and key concerns
2. **Outcome**: Approve | Changes Requested | Blocked (with justification)
3. **Key Findings** (by severity):
- HIGH severity issues first (especially falsely marked complete tasks)
- MEDIUM severity issues
- LOW severity issues
4. **Acceptance Criteria Coverage**:
- Include complete AC validation checklist from Step 4A
- Show: AC# | Description | Status (IMPLEMENTED/PARTIAL/MISSING) | Evidence (file:line)
- Summary: "X of Y acceptance criteria fully implemented"
- List any missing or partial ACs with severity
5. **Task Completion Validation**:
- Include complete task validation checklist from Step 4B
- Show: Task | Marked As | Verified As | Evidence (file:line)
- **CRITICAL**: Highlight any tasks marked complete but not done in RED/bold
- Summary: "X of Y completed tasks verified, Z questionable, W falsely marked complete"
6. **Test Coverage and Gaps**:
- Which ACs have tests, which don't
- Test quality issues found
7. **Architectural Alignment**:
- Tech-spec compliance
- Architecture violations if any
8. **Security Notes**: Security findings if any
9. **Best-Practices and References**: With links
10. **Action Items**:
- CRITICAL: ALL action items requiring code changes MUST have checkboxes for tracking
- Format for actionable items: `- [ ] [Severity] Description (AC #X) [file: path:line]`
- Format for informational notes: `- Note: Description (no action required)`
- Imperative phrasing for action items
- Map to related ACs or files with specific line references
- Include suggested owners if clear
- Example format:
```
### Action Items
**Code Changes Required:**
- [ ] [High] Add input validation on login endpoint (AC #1) [file: src/routes/auth.js:23-45]
- [ ] [Med] Add unit test for invalid email format [file: tests/unit/auth.test.js]
**Advisory Notes:**
- Note: Consider adding rate limiting for production deployment
- Note: Document the JWT expiration policy in README
```
</action>
<critical>The AC validation checklist and task validation checklist MUST be included in the review - this is the evidence trail</critical>
</step>
<step n="7" goal="Append review to story and update metadata">
<check if="ad_hoc_review_mode == true">
<action>Generate review report as a standalone document</action>
<action>Save to {{output_folder}}/code-review-{{date}}.md</action>
<action>Include sections:
- Review Type: Ad-Hoc Code Review
- Reviewer: {{user_name}}
- Date: {{date}}
- Files Reviewed: {{review_files}}
- Review Focus: {{review_focus}}
- Outcome: (Approve | Changes Requested | Blocked)
- Summary
- Key Findings
- Test Coverage and Gaps
- Architectural Alignment
- Security Notes
- Best-Practices and References (with links)
- Action Items
</action>
<output>Review saved to: {{output_folder}}/code-review-{{date}}.md</output>
</check>
<check if="ad_hoc_review_mode != true">
<action>Open {{story_path}} and append a new section at the end titled exactly: "Senior Developer Review (AI)".</action>
<action>Insert subsections:
- Reviewer: {{user_name}}
- Date: {{date}}
- Outcome: (Approve | Changes Requested | Blocked) with justification
- Summary
- Key Findings (by severity - HIGH/MEDIUM/LOW)
- **Acceptance Criteria Coverage**:
* Include complete AC validation checklist with table format
* AC# | Description | Status | Evidence
* Summary: X of Y ACs implemented
- **Task Completion Validation**:
* Include complete task validation checklist with table format
* Task | Marked As | Verified As | Evidence
* **Highlight falsely marked complete tasks prominently**
* Summary: X of Y tasks verified, Z questionable, W false completions
- Test Coverage and Gaps
- Architectural Alignment
- Security Notes
- Best-Practices and References (with links)
- Action Items:
* CRITICAL: Format with checkboxes for tracking resolution
* Code changes required: `- [ ] [Severity] Description [file: path:line]`
* Advisory notes: `- Note: Description (no action required)`
* Group by type: "Code Changes Required" and "Advisory Notes"
</action>
<action>Add a Change Log entry with date, version bump if applicable, and description: "Senior Developer Review notes appended".</action>
<action>If {{update_status_on_result}} is true: update Status to {{status_on_approve}} when approved; to {{status_on_changes_requested}} when changes requested; otherwise leave unchanged.</action>
<action>Save the story file.</action>
<critical>MUST include the complete validation checklists - this is the evidence that systematic review was performed</critical>
</check>
</step>
<step n="8" goal="Update sprint status based on review outcome" tag="sprint-status">
<check if="ad_hoc_review_mode == true">
<action>Skip sprint status update (no story context)</action>
<output>📋 Ad-hoc review complete - no sprint status to update</output>
</check>
<check if="ad_hoc_review_mode != true">
<action>Determine target status based on review outcome:
- If {{outcome}} == "Approve" → target_status = "done"
- If {{outcome}} == "Changes Requested" → target_status = "in-progress"
- If {{outcome}} == "Blocked" → target_status = "review" (stay in review)
</action>
<action>Load the FULL file: {{output_folder}}/sprint-status.yaml</action>
<action>Read all development_status entries to find {{story_key}}</action>
<action>Verify current status is "review" (expected previous state)</action>
<action>Update development_status[{{story_key}}] = {{target_status}}</action>
<action>Save file, preserving ALL comments and structure including STATUS DEFINITIONS</action>
<check if="update successful">
<output>✅ Sprint status updated: review → {{target_status}}</output>
</check>
<check if="story key not found">
<output>⚠️ Could not update sprint-status: {{story_key}} not found
Review was saved to story file, but sprint-status.yaml may be out of sync.
</output>
</check>
</check>
</step>
<step n="9" goal="Persist action items to tasks/backlog/epic">
<check if="ad_hoc_review_mode == true">
<action>All action items are included in the standalone review report</action>
<ask if="action items exist">Would you like me to create tracking items for these action items? (backlog/tasks)</ask>
<action if="user confirms">
If {{backlog_file}} does not exist, copy {installed_path}/backlog_template.md to {{backlog_file}} location.
Append a row per action item with Date={{date}}, Story="Ad-Hoc Review", Epic="N/A", Type, Severity, Owner (or "TBD"), Status="Open", Notes with file refs and context.
</action>
</check>
<check if="ad_hoc_review_mode != true">
<action>Normalize Action Items into a structured list: description, severity (High/Med/Low), type (Bug/TechDebt/Enhancement), suggested owner (if known), related AC/file references.</action>
<ask if="action items exist and 'story_tasks' in {{persist_targets}}">Add {{action_item_count}} follow-up items to story Tasks/Subtasks?</ask>
<action if="user confirms or no ask needed">
Append under the story's "Tasks / Subtasks" a new subsection titled "Review Follow-ups (AI)", adding each item as an unchecked checkbox in imperative form, prefixed with "[AI-Review]" and severity. Example: "- [ ] [AI-Review][High] Add input validation on server route /api/x (AC #2)".
</action>
<action if="{{persist_action_items}} == true and 'backlog_file' in {{persist_targets}}">
If {{backlog_file}} does not exist, copy {installed_path}/backlog_template.md to {{backlog_file}} location.
Append a row per action item with Date={{date}}, Story={{epic_num}}.{{story_num}}, Epic={{epic_num}}, Type, Severity, Owner (or "TBD"), Status="Open", Notes with short context and file refs.
</action>
<action if="{{persist_action_items}} == true and ('epic_followups' in {{persist_targets}} or {{update_epic_followups}} == true)">
If an epic Tech Spec was found: open it and create (if missing) a section titled "{{epic_followups_section_title}}". Append a bullet list of action items scoped to this epic with references back to Story {{epic_num}}.{{story_num}}.
</action>
<action>Save modified files.</action>
<action>Optionally invoke tests or linters to verify quick fixes if any were applied as part of review (requires user approval for any dependency changes).</action>
</check>
</step>
<step n="10" goal="Validation and completion">
<invoke-task>Run validation checklist at {installed_path}/checklist.md using {project-root}/bmad/core/tasks/validate-workflow.xml</invoke-task>
<action>Report workflow completion.</action>
<check if="ad_hoc_review_mode == true">
<output>**✅ Ad-Hoc Code Review Complete, {user_name}!**
**Review Details:**
- Files Reviewed: {{review_files}}
- Review Focus: {{review_focus}}
- Review Outcome: {{outcome}}
- Action Items: {{action_item_count}}
- Review Report: {{output_folder}}/code-review-{{date}}.md
**Next Steps:**
1. Review the detailed findings in the review report
2. If changes requested: Address action items in the code
3. If blocked: Resolve blockers before proceeding
4. Re-run review on updated code if needed
</output>
</check>
<check if="ad_hoc_review_mode != true">
<output>**✅ Story Review Complete, {user_name}!**
**Story Details:**
- Story: {{epic_num}}.{{story_num}}
- Story Key: {{story_key}}
- Review Outcome: {{outcome}}
- Sprint Status: {{target_status}}
- Action Items: {{action_item_count}}
**Next Steps:**
1. Review the Senior Developer Review notes appended to story
2. If approved: Story is marked done, continue with next story
3. If changes requested: Address action items and re-run `dev-story`
4. If blocked: Resolve blockers before proceeding
</output>
</check>
</step>
</workflow>
````

View File

@@ -0,0 +1,54 @@
# Review Story Workflow
name: code-review
description: "Perform a Senior Developer code review on a completed story flagged Ready for Review, leveraging story-context, epic tech-spec, repo docs, MCP servers for latest best-practices, and web search as fallback. Appends structured review notes to the story."
author: "BMad"
# Critical variables from config
config_source: "{project-root}/bmad/bmm/config.yaml"
output_folder: "{config_source}:output_folder"
user_name: "{config_source}:user_name"
communication_language: "{config_source}:communication_language"
user_skill_level: "{config_source}:user_skill_level"
document_output_language: "{config_source}:document_output_language"
date: system-generated
# Workflow components
installed_path: "{project-root}/bmad/bmm/workflows/4-implementation/code-review"
instructions: "{installed_path}/instructions.md"
validation: "{installed_path}/checklist.md"
# This is an action workflow (no output template document)
template: false
# Variables (can be provided by caller)
variables:
  story_path: "" # Optional: Explicit path to story file. If not provided, finds first story with status "review"
  story_dir: "{config_source}:dev_story_location" # Directory containing story files
  tech_spec_search_dir: "{project-root}/docs"
  tech_spec_glob_template: "tech-spec-epic-{{epic_num}}*.md"
  arch_docs_search_dirs: |
    - "{project-root}/docs"
    - "{output_folder}"
  arch_docs_file_names: |
    - architecture.md
  enable_mcp_doc_search: true # Prefer enabled MCP servers for doc/best-practice lookup
  enable_web_fallback: true # Fallback to web search/read-url if MCP not available
  # Persistence controls for review action items and notes
  persist_action_items: true
  # Valid targets: story_tasks, story_review_section, backlog_file, epic_followups
  persist_targets: |
    - story_review_section
    - story_tasks
    - backlog_file
    - epic_followups
  backlog_file: "{project-root}/docs/backlog.md"
  update_epic_followups: true
  epic_followups_section_title: "Post-Review Follow-ups"
# Recommended inputs
recommended_inputs:
  - story: "Path to the story file (auto-discovered if omitted - finds first story with status 'review')"
  - tech_spec: "Epic technical specification document (auto-discovered)"
  - story_context_file: "Story context file (.context.xml) (auto-discovered)"
standalone: true

View File

@@ -0,0 +1,73 @@
---
last-redoc-date: 2025-10-01
---
# Correct Course Workflow
The correct-course workflow is v6's adaptive response mechanism for stories that encounter issues during development or review, enabling intelligent course correction without restarting from scratch. Run by the Scrum Master (SM) or Team Lead, this workflow analyzes why a story is blocked or has issues and determines whether the story scope needs adjustment, the requirements need clarification, the technical approach needs revision, or the story should be split or reconsidered. This embodies the agile principle of embracing change even late in the development cycle, while doing so in a structured way that captures learning and maintains project coherence.
Unlike simply abandoning failed work or blindly pushing forward, correct-course provides a systematic approach to diagnosing issues and determining appropriate remediation. The workflow examines the original story specification, what was actually implemented, what issues arose during development or review, and the broader epic context to recommend the best path forward. This might involve clarifying requirements, adjusting acceptance criteria, revising technical approach, splitting the story into smaller pieces, or even determining the story should be deprioritized.
The critical value of this workflow is that it prevents thrashing—endless cycles of implementation and rework without clear direction. By stepping back to analyze what went wrong and charting a clear path forward, the correct-course workflow enables teams to learn from difficulties and adapt intelligently rather than stubbornly proceeding with a flawed approach.
## Usage
```bash
# Analyze issues and recommend course correction
bmad run correct-course
```
The workflow should be run when:
- Review identifies critical issues requiring rethinking
- Developer encounters blocking issues during implementation
- Acceptance criteria prove infeasible or unclear
- Technical approach needs significant revision
- Story scope needs adjustment based on discoveries
## Inputs
**Required Context:**
- **Story Document**: Original story specification
- **Implementation Attempts**: Code changes and approaches tried
- **Review Feedback**: Issues and concerns identified
- **Epic Context**: Overall epic goals and current progress
- **Technical Constraints**: Architecture decisions and limitations discovered
**Analysis Focus:**
- Root cause of issues or blockages
- Feasibility of current story scope
- Clarity of requirements and acceptance criteria
- Appropriateness of technical approach
- Whether story should be split or revised
## Outputs
**Primary Deliverable:**
- **Course Correction Report** (`[story-id]-correction.md`): Analysis and recommendations including:
- Issue root cause analysis
- Impact assessment on epic and project
- Recommended corrective actions with rationale
- Revised story specification if applicable
- Updated acceptance criteria if needed
- Technical approach adjustments
- Timeline and effort implications
**Possible Outcomes:**
1. **Clarify Requirements**: Update story with clearer acceptance criteria
2. **Revise Scope**: Adjust story scope to be more achievable
3. **Split Story**: Break into multiple smaller stories
4. **Change Approach**: Recommend different technical approach
5. **Deprioritize**: Determine story should be deferred or cancelled
6. **Unblock**: Identify and address blocking dependencies
**Artifacts:**
- Updated story document if revision needed
- New story documents if splitting story
- Updated epic backlog reflecting changes
- Lessons learned for retrospective

View File

@@ -0,0 +1,279 @@
# Change Navigation Checklist
<critical>This checklist is executed as part of: {project-root}/bmad/bmm/workflows/4-implementation/correct-course/workflow.yaml</critical>
<critical>Work through each section systematically with the user, recording findings and impacts</critical>
<checklist>
<section n="1" title="Understand the Trigger and Context">
<check-item id="1.1">
<prompt>Identify the triggering story that revealed this issue</prompt>
<action>Document story ID and brief description</action>
<status>[ ] Done / [ ] N/A / [ ] Action-needed</status>
</check-item>
<check-item id="1.2">
<prompt>Define the core problem precisely</prompt>
<action>Categorize issue type:</action>
- Technical limitation discovered during implementation
- New requirement emerged from stakeholders
- Misunderstanding of original requirements
- Strategic pivot or market change
- Failed approach requiring different solution
<action>Write clear problem statement</action>
<status>[ ] Done / [ ] N/A / [ ] Action-needed</status>
</check-item>
<check-item id="1.3">
<prompt>Assess initial impact and gather supporting evidence</prompt>
<action>Collect concrete examples, error messages, stakeholder feedback, or technical constraints</action>
<action>Document evidence for later reference</action>
<status>[ ] Done / [ ] N/A / [ ] Action-needed</status>
</check-item>
<halt-condition>
<action if="trigger is unclear">HALT: "Cannot proceed without understanding what caused the need for change"</action>
<action if="no evidence provided">HALT: "Need concrete evidence or examples of the issue before analyzing impact"</action>
</halt-condition>
</section>
<section n="2" title="Epic Impact Assessment">
<check-item id="2.1">
<prompt>Evaluate current epic containing the trigger story</prompt>
<action>Can this epic still be completed as originally planned?</action>
<action>If no, what modifications are needed?</action>
<status>[ ] Done / [ ] N/A / [ ] Action-needed</status>
</check-item>
<check-item id="2.2">
<prompt>Determine required epic-level changes</prompt>
<action>Check each scenario:</action>
- Modify existing epic scope or acceptance criteria
- Add new epic to address the issue
- Remove or defer epic that's no longer viable
- Completely redefine epic based on new understanding
<action>Document specific epic changes needed</action>
<status>[ ] Done / [ ] N/A / [ ] Action-needed</status>
</check-item>
<check-item id="2.3">
<prompt>Review all remaining planned epics for required changes</prompt>
<action>Check each future epic for impact</action>
<action>Identify dependencies that may be affected</action>
<status>[ ] Done / [ ] N/A / [ ] Action-needed</status>
</check-item>
<check-item id="2.4">
<prompt>Check if issue invalidates future epics or necessitates new ones</prompt>
<action>Does this change make any planned epics obsolete?</action>
<action>Are new epics needed to address gaps created by this change?</action>
<status>[ ] Done / [ ] N/A / [ ] Action-needed</status>
</check-item>
<check-item id="2.5">
<prompt>Consider if epic order or priority should change</prompt>
<action>Should epics be resequenced based on this issue?</action>
<action>Do priorities need adjustment?</action>
<status>[ ] Done / [ ] N/A / [ ] Action-needed</status>
</check-item>
</section>
<section n="3" title="Artifact Conflict and Impact Analysis">
<check-item id="3.1">
<prompt>Check PRD for conflicts</prompt>
<action>Does issue conflict with core PRD goals or objectives?</action>
<action>Do requirements need modification, addition, or removal?</action>
<action>Is the defined MVP still achievable or does scope need adjustment?</action>
<status>[ ] Done / [ ] N/A / [ ] Action-needed</status>
</check-item>
<check-item id="3.2">
<prompt>Review Architecture document for conflicts</prompt>
<action>Check each area for impact:</action>
- System components and their interactions
- Architectural patterns and design decisions
- Technology stack choices
- Data models and schemas
- API designs and contracts
- Integration points
<action>Document specific architecture sections requiring updates</action>
<status>[ ] Done / [ ] N/A / [ ] Action-needed</status>
</check-item>
<check-item id="3.3">
<prompt>Examine UI/UX specifications for conflicts</prompt>
<action>Check for impact on:</action>
- User interface components
- User flows and journeys
- Wireframes or mockups
- Interaction patterns
- Accessibility considerations
<action>Note specific UI/UX sections needing revision</action>
<status>[ ] Done / [ ] N/A / [ ] Action-needed</status>
</check-item>
<check-item id="3.4">
<prompt>Consider impact on other artifacts</prompt>
<action>Review additional artifacts for impact:</action>
- Deployment scripts
- Infrastructure as Code (IaC)
- Monitoring and observability setup
- Testing strategies
- Documentation
- CI/CD pipelines
<action>Document any secondary artifacts requiring updates</action>
<status>[ ] Done / [ ] N/A / [ ] Action-needed</status>
</check-item>
</section>
<section n="4" title="Path Forward Evaluation">
<check-item id="4.1">
<prompt>Evaluate Option 1: Direct Adjustment</prompt>
<action>Can the issue be addressed by modifying existing stories?</action>
<action>Can new stories be added within the current epic structure?</action>
<action>Would this approach maintain project timeline and scope?</action>
<action>Effort estimate: [High/Medium/Low]</action>
<action>Risk level: [High/Medium/Low]</action>
<status>[ ] Viable / [ ] Not viable</status>
</check-item>
<check-item id="4.2">
<prompt>Evaluate Option 2: Potential Rollback</prompt>
<action>Would reverting recently completed stories simplify addressing this issue?</action>
<action>Which stories would need to be rolled back?</action>
<action>Is the rollback effort justified by the simplification gained?</action>
<action>Effort estimate: [High/Medium/Low]</action>
<action>Risk level: [High/Medium/Low]</action>
<status>[ ] Viable / [ ] Not viable</status>
</check-item>
<check-item id="4.3">
<prompt>Evaluate Option 3: PRD MVP Review</prompt>
<action>Is the original PRD MVP still achievable with this issue?</action>
<action>Does MVP scope need to be reduced or redefined?</action>
<action>Do core goals need modification based on new constraints?</action>
<action>What would be deferred to post-MVP if scope is reduced?</action>
<action>Effort estimate: [High/Medium/Low]</action>
<action>Risk level: [High/Medium/Low]</action>
<status>[ ] Viable / [ ] Not viable</status>
</check-item>
<check-item id="4.4">
<prompt>Select recommended path forward</prompt>
<action>Based on analysis of all options, choose the best path</action>
<action>Provide clear rationale considering:</action>
- Implementation effort and timeline impact
- Technical risk and complexity
- Impact on team morale and momentum
- Long-term sustainability and maintainability
- Stakeholder expectations and business value
<action>Selected approach: [Option 1 / Option 2 / Option 3 / Hybrid]</action>
<action>Justification: [Document reasoning]</action>
<status>[ ] Done / [ ] N/A / [ ] Action-needed</status>
</check-item>
</section>
<section n="5" title="Sprint Change Proposal Components">
<check-item id="5.1">
<prompt>Create identified issue summary</prompt>
<action>Write clear, concise problem statement</action>
<action>Include context about discovery and impact</action>
<status>[ ] Done / [ ] N/A / [ ] Action-needed</status>
</check-item>
<check-item id="5.2">
<prompt>Document epic impact and artifact adjustment needs</prompt>
<action>Summarize findings from Epic Impact Assessment (Section 2)</action>
<action>Summarize findings from Artifact Conflict Analysis (Section 3)</action>
<action>Be specific about what changes are needed and why</action>
<status>[ ] Done / [ ] N/A / [ ] Action-needed</status>
</check-item>
<check-item id="5.3">
<prompt>Present recommended path forward with rationale</prompt>
<action>Include selected approach from Section 4</action>
<action>Provide complete justification for recommendation</action>
<action>Address trade-offs and alternatives considered</action>
<status>[ ] Done / [ ] N/A / [ ] Action-needed</status>
</check-item>
<check-item id="5.4">
<prompt>Define PRD MVP impact and high-level action plan</prompt>
<action>State clearly if MVP is affected</action>
<action>Outline major action items needed for implementation</action>
<action>Identify dependencies and sequencing</action>
<status>[ ] Done / [ ] N/A / [ ] Action-needed</status>
</check-item>
<check-item id="5.5">
<prompt>Establish agent handoff plan</prompt>
<action>Identify which roles/agents will execute the changes:</action>
- Development team (for implementation)
- Product Owner / Scrum Master (for backlog changes)
- Product Manager / Architect (for strategic changes)
<action>Define responsibilities for each role</action>
<status>[ ] Done / [ ] N/A / [ ] Action-needed</status>
</check-item>
</section>
<section n="6" title="Final Review and Handoff">
<check-item id="6.1">
<prompt>Review checklist completion</prompt>
<action>Verify all applicable sections have been addressed</action>
<action>Confirm all [Action-needed] items have been documented</action>
<action>Ensure analysis is comprehensive and actionable</action>
<status>[ ] Done / [ ] N/A / [ ] Action-needed</status>
</check-item>
<check-item id="6.2">
<prompt>Verify Sprint Change Proposal accuracy</prompt>
<action>Review complete proposal for consistency and clarity</action>
<action>Ensure all recommendations are well-supported by analysis</action>
<action>Check that proposal is actionable and specific</action>
<status>[ ] Done / [ ] N/A / [ ] Action-needed</status>
</check-item>
<check-item id="6.3">
<prompt>Obtain explicit user approval</prompt>
<action>Present complete proposal to user</action>
<action>Get clear yes/no approval for proceeding</action>
<action>Document approval and any conditions</action>
<status>[ ] Done / [ ] N/A / [ ] Action-needed</status>
</check-item>
<check-item id="6.4">
<prompt>Confirm next steps and handoff plan</prompt>
<action>Review handoff responsibilities with user</action>
<action>Ensure all stakeholders understand their roles</action>
<action>Confirm timeline and success criteria</action>
<status>[ ] Done / [ ] N/A / [ ] Action-needed</status>
</check-item>
<halt-condition>
<action if="any critical section cannot be completed">HALT: "Cannot proceed to proposal without complete impact analysis"</action>
<action if="user approval not obtained">HALT: "Must have explicit approval before implementing changes"</action>
<action if="handoff responsibilities unclear">HALT: "Must clearly define who will execute the proposed changes"</action>
</halt-condition>
</section>
</checklist>
<execution-notes>
<note>This checklist is for SIGNIFICANT changes affecting project direction</note>
<note>Work interactively with user - they make final decisions</note>
<note>Be factual, not blame-oriented when analyzing issues</note>
<note>Handle changes professionally as opportunities to improve the project</note>
<note>Maintain conversation context throughout - this is collaborative work</note>
</execution-notes>

View File

@@ -0,0 +1,201 @@
# Correct Course - Sprint Change Management Instructions
<critical>The workflow execution engine is governed by: {project-root}/bmad/core/tasks/workflow.xml</critical>
<critical>You MUST have already loaded and processed: {project-root}/bmad/bmm/workflows/4-implementation/correct-course/workflow.yaml</critical>
<critical>Communicate all responses in {communication_language} and language MUST be tailored to {user_skill_level}</critical>
<critical>Generate all documents in {document_output_language}</critical>
<critical>DOCUMENT OUTPUT: Updated epics, stories, or PRD sections. Clear, actionable changes. User skill level ({user_skill_level}) affects conversation style ONLY, not document updates.</critical>
<workflow>
<step n="1" goal="Initialize Change Navigation">
<action>Confirm change trigger and gather user description of the issue</action>
<action>Ask: "What specific issue or change has been identified that requires navigation?"</action>
<action>Verify access to required project documents:</action>
- PRD (Product Requirements Document)
- Current Epics and Stories
- Architecture documentation
- UI/UX specifications
<action>Ask user for mode preference:</action>
- **Incremental** (recommended): Refine each edit collaboratively
- **Batch**: Present all changes at once for review
<action>Store mode selection for use throughout workflow</action>
<action if="change trigger is unclear">HALT: "Cannot navigate change without clear understanding of the triggering issue. Please provide specific details about what needs to change and why."</action>
<action if="core documents are unavailable">HALT: "Need access to project documents (PRD, Epics, Architecture, UI/UX) to assess change impact. Please ensure these documents are accessible."</action>
</step>
<step n="2" goal="Execute Change Analysis Checklist">
<action>Load and execute the systematic analysis from: {checklist}</action>
<action>Work through each checklist section interactively with the user</action>
<action>Record status for each checklist item:</action>
- [x] Done - Item completed successfully
- [N/A] Skip - Item not applicable to this change
- [!] Action-needed - Item requires attention or follow-up
<action>Maintain running notes of findings and impacts discovered</action>
<action>Present checklist progress after each major section</action>
<action if="checklist cannot be completed">Identify blocking issues and work with user to resolve before continuing</action>
</step>
<step n="3" goal="Draft Specific Change Proposals">
<action>Based on checklist findings, create explicit edit proposals for each identified artifact</action>
<action>For Story changes:</action>
- Show old → new text format
- Include story ID and section being modified
- Provide rationale for each change
- Example format:
```
Story: [STORY-123] User Authentication
Section: Acceptance Criteria
OLD:
- User can log in with email/password
NEW:
- User can log in with email/password
- User can enable 2FA via authenticator app
Rationale: Security requirement identified during implementation
```
<action>For PRD modifications:</action>
- Specify exact sections to update
- Show current content and proposed changes
- Explain impact on MVP scope and requirements
<action>For Architecture changes:</action>
- Identify affected components, patterns, or technology choices
- Describe diagram updates needed
- Note any ripple effects on other components
<action>For UI/UX specification updates:</action>
- Reference specific screens or components
- Show wireframe or flow changes needed
- Connect changes to user experience impact
<check if="mode is Incremental">
<action>Present each edit proposal individually</action>
<ask>Review and refine this change? Options: Approve [a], Edit [e], Skip [s]</ask>
<action>Iterate on each proposal based on user feedback</action>
</check>
<action if="mode is Batch">Collect all edit proposals and present together at end of step</action>
</step>
<step n="4" goal="Generate Sprint Change Proposal">
<action>Compile comprehensive Sprint Change Proposal document with following sections:</action>
<action>Section 1: Issue Summary</action>
- Clear problem statement describing what triggered the change
- Context about when/how the issue was discovered
- Evidence or examples demonstrating the issue
<action>Section 2: Impact Analysis</action>
- Epic Impact: Which epics are affected and how
- Story Impact: Current and future stories requiring changes
- Artifact Conflicts: PRD, Architecture, UI/UX documents needing updates
- Technical Impact: Code, infrastructure, or deployment implications
<action>Section 3: Recommended Approach</action>
- Present chosen path forward from checklist evaluation:
- Direct Adjustment: Modify/add stories within existing plan
- Potential Rollback: Revert completed work to simplify resolution
- MVP Review: Reduce scope or modify goals
- Provide clear rationale for recommendation
- Include effort estimate, risk assessment, and timeline impact
<action>Section 4: Detailed Change Proposals</action>
- Include all refined edit proposals from Step 3
- Group by artifact type (Stories, PRD, Architecture, UI/UX)
- Ensure each change includes before/after and justification
<action>Section 5: Implementation Handoff</action>
- Categorize change scope:
- Minor: Direct implementation by dev team
- Moderate: Backlog reorganization needed (PO/SM)
- Major: Fundamental replan required (PM/Architect)
- Specify handoff recipients and their responsibilities
- Define success criteria for implementation
<action>Present complete Sprint Change Proposal to user</action>
<action>Write Sprint Change Proposal document to {default_output_file}</action>
<ask>Review complete proposal. Continue [c] or Edit [e]?</ask>
</step>
<step n="5" goal="Finalize and Route for Implementation">
<action>Get explicit user approval for complete proposal</action>
<ask>Do you approve this Sprint Change Proposal for implementation? (yes/no/revise)</ask>
<check if="no or revise">
<action>Gather specific feedback on what needs adjustment</action>
<action>Return to appropriate step to address concerns</action>
<goto step="3">If changes needed to edit proposals</goto>
<goto step="4">If changes needed to overall proposal structure</goto>
</check>
<check if="yes the proposal is approved by the user">
<action>Finalize Sprint Change Proposal document</action>
<action>Determine change scope classification:</action>
- **Minor**: Can be implemented directly by development team
- **Moderate**: Requires backlog reorganization and PO/SM coordination
- **Major**: Needs fundamental replan with PM/Architect involvement
<action>Provide appropriate handoff based on scope:</action>
</check>
<check if="Minor scope">
<action>Route to: Development team for direct implementation</action>
<action>Deliverables: Finalized edit proposals and implementation tasks</action>
</check>
<check if="Moderate scope">
<action>Route to: Product Owner / Scrum Master agents</action>
<action>Deliverables: Sprint Change Proposal + backlog reorganization plan</action>
</check>
<check if="Major scope">
<action>Route to: Product Manager / Solution Architect</action>
<action>Deliverables: Complete Sprint Change Proposal + escalation notice</action>
<action>Confirm handoff completion and next steps with user</action>
<action>Document handoff in workflow execution log</action>
</check>
</step>
<step n="6" goal="Workflow Completion">
<action>Summarize workflow execution:</action>
- Issue addressed: {{change_trigger}}
- Change scope: {{scope_classification}}
- Artifacts modified: {{list_of_artifacts}}
- Routed to: {{handoff_recipients}}
<action>Confirm all deliverables produced:</action>
- Sprint Change Proposal document
- Specific edit proposals with before/after
- Implementation handoff plan
<action>Report workflow completion to user with personalized message: "✅ Correct Course workflow complete, {user_name}!"</action>
<action>Remind user of success criteria and next steps for implementation team</action>
</step>
</workflow>

View File

@@ -0,0 +1,43 @@
# Correct Course - Sprint Change Management Workflow
name: "correct-course"
description: "Navigate significant changes during sprint execution by analyzing impact, proposing solutions, and routing for implementation"
author: "BMad Method"
config_source: "{project-root}/bmad/bmm/config.yaml"
output_folder: "{config_source}:output_folder"
user_name: "{config_source}:user_name"
communication_language: "{config_source}:communication_language"
user_skill_level: "{config_source}:user_skill_level"
document_output_language: "{config_source}:document_output_language"
date: system-generated
installed_path: "{project-root}/bmad/bmm/workflows/4-implementation/correct-course"
template: false
instructions: "{installed_path}/instructions.md"
validation: "{installed_path}/checklist.md"
checklist: "{installed_path}/checklist.md"
default_output_file: "{output_folder}/sprint-change-proposal-{date}.md"
# Workflow execution mode (interactive: step-by-step with user, non-interactive: automated)
mode: interactive
required_inputs:
- change_trigger: "Description of the issue or change that triggered this workflow"
- project_documents: "Access to PRD, Epics/Stories, Architecture, UI/UX specs"
output_artifacts:
- sprint_change_proposal: "Comprehensive proposal documenting issue, impact, and recommended changes"
- artifact_edits: "Specific before/after edits for affected documents"
- handoff_plan: "Clear routing for implementation based on change scope"
halt_conditions:
- "Change trigger unclear or undefined"
- "Core project documents unavailable"
- "Impact analysis incomplete"
- "User approval not obtained"
execution_modes:
- incremental: "Recommended - Refine each edit with user collaboration"
- batch: "Present all changes at once for review"
standalone: true

View File

@@ -0,0 +1,146 @@
# Create Story Workflow
Just-in-time story generation creating one story at a time based on epic backlog state. Run by Scrum Master (SM) agent to ensure planned stories align with approved epics.
## Table of Contents
- [Usage](#usage)
- [Key Features](#key-features)
- [Inputs & Outputs](#inputs--outputs)
- [Workflow Behavior](#workflow-behavior)
- [Integration](#integration)
## Usage
```bash
# SM initiates next story creation
bmad sm *create-story
```
**When to run:**
- Sprint has capacity for new work
- Previous story is Done/Approved
- Team ready for next planned story
## Key Features
### Strict Planning Enforcement
- **Only creates stories enumerated in epics.md**
- Halts if story not found in epic plan
- Prevents scope creep through validation
### Intelligent Document Discovery
- Auto-finds tech spec: `tech-spec-epic-{N}-*.md`
- Discovers architecture docs across directories
- Builds prioritized requirement sources
### Source Document Grounding
- Every requirement traced to source
- No invention of domain facts
- Citations included in output
### Non-Interactive Mode
- Default "#yolo" mode minimizes prompts
- Smooth automated story preparation
- Only prompts when critical
## Inputs & Outputs
### Required Files
| File | Purpose | Priority |
| ------------------------ | ----------------------------- | -------- |
| epics.md | Story enumeration (MANDATORY) | Critical |
| tech-spec-epic-{N}-\*.md | Epic technical spec | High |
| PRD.md | Product requirements | Medium |
| Architecture docs | Technical constraints | Low |
### Auto-Discovered Docs
- `tech-stack.md`, `unified-project-structure.md`
- `testing-strategy.md`, `backend/frontend-architecture.md`
- `data-models.md`, `database-schema.md`, `api-specs.md`
### Output
**Story Document:** `{story_dir}/story-{epic}.{story}.md`
- User story statement (role, action, benefit)
- Acceptance criteria from tech spec/epics
- Tasks mapped to ACs
- Testing requirements
- Dev notes with sources
- Status: "Draft"
## Workflow Behavior
### Story Number Management
- Auto-detects next story number
- No duplicates or skipped numbers
- Maintains epic.story convention
### Update vs Create
- **If current story not Done:** Updates existing
- **If current story Done:** Creates next (if planned); see the numbering sketch below
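A minimal sketch of how this numbering and update-vs-create decision could look, assuming stories are saved as `{epic}-{story}-{slug}.md` files in the configured stories directory. The helper names and the simple Status-line check are illustrative assumptions, not the workflow's actual internals.
```python
import re
from pathlib import Path

STORY_FILE = re.compile(r"^(\d+)-(\d+)-[\w-]+\.md$")  # e.g. 1-2-user-auth.md

def next_story_number(story_dir: str, epic_num: int) -> int:
    """Next story number for an epic, based on the story files already on disk."""
    highest = 0
    for path in Path(story_dir).glob(f"{epic_num}-*.md"):
        match = STORY_FILE.match(path.name)
        if match and int(match.group(1)) == epic_num:
            highest = max(highest, int(match.group(2)))
    return highest + 1  # continue the epic.story sequence from the highest existing story

def should_update_existing(story_file: Path) -> bool:
    """Update the current story if its Status line is not Done; otherwise create the next one."""
    text = story_file.read_text(encoding="utf-8")
    status = next((line for line in text.splitlines() if line.lower().startswith("status")), "")
    return "done" not in status.lower()
```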
### Validation Safeguards
**No Story Found:**
```
"No planned next story found in epics.md for epic {N}.
Run *correct-course to add/modify epic stories."
```
**Missing Config:**
Ensure `dev_story_location` is set in config.yaml
## Integration
### v6 Implementation Cycle
1. **create-story** ← Current step (defines "what")
2. story-context (adds technical "how")
3. dev-story (implementation)
4. code-review (validation)
5. correct-course (if needed)
6. retrospective (after epic)
### Document Priority
1. **tech_spec_file** - Epic-specific spec
2. **epics_file** - Story breakdown
3. **prd_file** - Business requirements
4. **architecture_docs** - Technical guidance
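The document discovery behind this priority list could be sketched roughly as follows. The glob pattern mirrors the `tech-spec-epic-{N}-*.md` convention and the most-recently-modified tiebreak described earlier; the directory list and doc names are assumptions drawn from this README, not a definitive implementation.
```python
from pathlib import Path
from typing import Optional

ARCH_DOC_NAMES = [
    "tech-stack.md", "unified-project-structure.md", "testing-strategy.md",
    "backend-architecture.md", "frontend-architecture.md",
    "data-models.md", "database-schema.md", "api-specs.md",
]

def find_tech_spec(search_dir: str, epic_num: int) -> Optional[Path]:
    """Find the epic's tech spec; if several match, pick the most recently modified."""
    matches = sorted(
        Path(search_dir).rglob(f"tech-spec-epic-{epic_num}*.md"),
        key=lambda p: p.stat().st_mtime,
        reverse=True,
    )
    return matches[0] if matches else None

def discover_architecture_docs(search_dirs: list) -> list:
    """Collect known architecture docs from candidate directories, in priority order."""
    found = []
    for directory in search_dirs:
        for name in ARCH_DOC_NAMES:
            candidate = Path(directory) / name
            if candidate.is_file():
                found.append(candidate)
    return found
```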
## Configuration
```yaml
# bmad/bmm/config.yaml
dev_story_location: ./stories
output_folder: ./output
# workflow.yaml defaults
non_interactive: true
auto_run_context: true
```
## Troubleshooting
| Issue | Solution |
| ----------------------- | ------------------------------------------ |
| "No planned next story" | Run `*correct-course` to add story to epic |
| Missing story_dir | Set `dev_story_location` in config |
| Tech spec not found | Use naming: `tech-spec-epic-{N}-*.md` |
| No architecture docs | Add docs to docs/ or output/ folder |
---
For workflow details, see [instructions.md](./instructions.md) and [checklist.md](./checklist.md).

View File

@@ -0,0 +1,240 @@
# Create Story Quality Validation Checklist
```xml
<critical>This validation runs in a FRESH CONTEXT by an independent validator agent</critical>
<critical>The validator audits story quality and offers to improve if issues are found</critical>
<critical>Load only the story file and necessary source documents - do NOT load workflow instructions</critical>
<validation-checklist>
<expectations>
**What create-story workflow should have accomplished:**
1. **Previous Story Continuity:** If a previous story exists (status: done/review/in-progress), current story should have "Learnings from Previous Story" subsection in Dev Notes that references: new files created, completion notes, architectural decisions, unresolved review items
2. **Source Document Coverage:** Story should cite tech spec (if exists), epics, PRD, and relevant architecture docs (architecture.md, testing-strategy.md, coding-standards.md, unified-project-structure.md)
3. **Requirements Traceability:** ACs sourced from tech spec (preferred) or epics, not invented
4. **Dev Notes Quality:** Specific guidance with citations, not generic advice
5. **Task-AC Mapping:** Every AC has tasks, every task references AC, testing subtasks present
6. **Structure:** Status="drafted", proper story statement, Dev Agent Record sections initialized
</expectations>
## Validation Steps
### 1. Load Story and Extract Metadata
- [ ] Load story file: {{story_file_path}}
- [ ] Parse sections: Status, Story, ACs, Tasks, Dev Notes, Dev Agent Record, Change Log
- [ ] Extract: epic_num, story_num, story_key, story_title
- [ ] Initialize issue tracker (Critical/Major/Minor)
### 2. Previous Story Continuity Check
**Find previous story:**
- [ ] Load {output_folder}/sprint-status.yaml
- [ ] Find current {{story_key}} in development_status
- [ ] Identify story entry immediately above (previous story)
- [ ] Check previous story status
**If previous story status is done/review/in-progress:**
- [ ] Load previous story file: {story_dir}/{{previous_story_key}}.md
- [ ] Extract: Dev Agent Record (Completion Notes, File List with NEW/MODIFIED)
- [ ] Extract: Senior Developer Review section if present
- [ ] Count unchecked [ ] items in Review Action Items
- [ ] Count unchecked [ ] items in Review Follow-ups (AI)
**Validate current story captured continuity:**
- [ ] Check: "Learnings from Previous Story" subsection exists in Dev Notes
- If MISSING and previous story has content → **CRITICAL ISSUE**
- [ ] If subsection exists, verify it includes:
- [ ] References to NEW files from previous story → If missing → **MAJOR ISSUE**
- [ ] Mentions completion notes/warnings → If missing → **MAJOR ISSUE**
- [ ] Calls out unresolved review items (if any exist) → If missing → **CRITICAL ISSUE**
- [ ] Cites previous story: [Source: stories/{{previous_story_key}}.md]
**If previous story status is backlog/drafted:**
- [ ] No continuity expected (note this)
**If no previous story exists:**
- [ ] First story in epic, no continuity expected
### 3. Source Document Coverage Check
**Build available docs list:**
- [ ] Check exists: tech-spec-epic-{{epic_num}}*.md in {tech_spec_search_dir}
- [ ] Check exists: {output_folder}/epics.md
- [ ] Check exists: {output_folder}/PRD.md
- [ ] Check exists in {output_folder}/ or {project-root}/docs/:
- architecture.md, testing-strategy.md, coding-standards.md
- unified-project-structure.md, tech-stack.md
- backend-architecture.md, frontend-architecture.md, data-models.md
**Validate story references available docs:**
- [ ] Extract all [Source: ...] citations from story Dev Notes
- [ ] Tech spec exists but not cited → **CRITICAL ISSUE**
- [ ] Epics exists but not cited → **CRITICAL ISSUE**
- [ ] Architecture.md exists → Read for relevance → If relevant but not cited → **MAJOR ISSUE**
- [ ] Testing-strategy.md exists → Check Dev Notes mentions testing standards → If not → **MAJOR ISSUE**
- [ ] Testing-strategy.md exists → Check Tasks have testing subtasks → If not → **MAJOR ISSUE**
- [ ] Coding-standards.md exists → Check Dev Notes references standards → If not → **MAJOR ISSUE**
- [ ] Unified-project-structure.md exists → Check Dev Notes has "Project Structure Notes" subsection → If not → **MAJOR ISSUE**
**Validate citation quality:**
- [ ] Verify cited file paths are correct and files exist → Bad citations → **MAJOR ISSUE**
- [ ] Check citations include section names, not just file paths → Vague citations → **MINOR ISSUE**
### 4. Acceptance Criteria Quality Check
- [ ] Extract Acceptance Criteria from story
- [ ] Count ACs: {{ac_count}} (if 0 → **CRITICAL ISSUE** and halt)
- [ ] Check story indicates AC source (tech spec, epics, PRD)
**If tech spec exists:**
- [ ] Load tech spec
- [ ] Search for this story number
- [ ] Extract tech spec ACs for this story
- [ ] Compare story ACs vs tech spec ACs → If mismatch → **MAJOR ISSUE**
**If no tech spec but epics.md exists:**
- [ ] Load epics.md
- [ ] Search for Epic {{epic_num}}, Story {{story_num}}
- [ ] Story not found in epics → **CRITICAL ISSUE** (should have halted)
- [ ] Extract epics ACs
- [ ] Compare story ACs vs epics ACs → If mismatch without justification → **MAJOR ISSUE**
**Validate AC quality:**
- [ ] Each AC is testable (measurable outcome)
- [ ] Each AC is specific (not vague)
- [ ] Each AC is atomic (single concern)
- [ ] Vague ACs found → **MINOR ISSUE**
### 5. Task-AC Mapping Check
- [ ] Extract Tasks/Subtasks from story
- [ ] For each AC: Search tasks for "(AC: #{{ac_num}})" reference
- [ ] AC has no tasks → **MAJOR ISSUE**
- [ ] For each task: Check if references an AC number
- [ ] Tasks without AC refs (and not testing/setup) → **MINOR ISSUE**
- [ ] Count tasks with testing subtasks
- [ ] Testing subtasks < ac_count → **MAJOR ISSUE**
### 6. Dev Notes Quality Check
**Check required subsections exist:**
- [ ] Architecture patterns and constraints
- [ ] References (with citations)
- [ ] Project Structure Notes (if unified-project-structure.md exists)
- [ ] Learnings from Previous Story (if previous story has content)
- [ ] Missing required subsections → **MAJOR ISSUE**
**Validate content quality:**
- [ ] Architecture guidance is specific (not generic "follow architecture docs") → If generic → **MAJOR ISSUE**
- [ ] Count citations in References subsection
- [ ] No citations → **MAJOR ISSUE**
- [ ] < 3 citations and multiple arch docs exist → **MINOR ISSUE**
- [ ] Scan for suspicious specifics without citations:
- API endpoints, schema details, business rules, tech choices
- [ ] Likely invented details found → **MAJOR ISSUE**
### 7. Story Structure Check
- [ ] Status = "drafted" → If not → **MAJOR ISSUE**
- [ ] Story section has "As a / I want / so that" format → If malformed → **MAJOR ISSUE**
- [ ] Dev Agent Record has required sections:
- Context Reference, Agent Model Used, Debug Log References, Completion Notes List, File List
- [ ] Missing sections → **MAJOR ISSUE**
- [ ] Change Log initialized → If missing → **MINOR ISSUE**
- [ ] File in correct location: {story_dir}/{{story_key}}.md → If not → **MAJOR ISSUE**
### 8. Unresolved Review Items Alert
**CRITICAL CHECK for incomplete review items from previous story:**
- [ ] If previous story has "Senior Developer Review (AI)" section:
- [ ] Count unchecked [ ] items in "Action Items"
- [ ] Count unchecked [ ] items in "Review Follow-ups (AI)"
- [ ] If unchecked items > 0:
- [ ] Check current story "Learnings from Previous Story" mentions these
- [ ] If NOT mentioned → **CRITICAL ISSUE** with details:
- List all unchecked items with severity
- Note: "These may represent epic-wide concerns"
- Required: Add to Learnings section with note about pending items
## Validation Report Generation
**Calculate severity counts:**
- Critical: {{critical_count}}
- Major: {{major_count}}
- Minor: {{minor_count}}
**Determine outcome:**
- Critical > 0 OR Major > 3 → **FAIL**
- Major ≤ 3 and Critical = 0 → **PASS with issues**
- All = 0 → **PASS**
**Generate report:**
```
# Story Quality Validation Report
Story: {{story_key}} - {{story_title}}
Outcome: {{outcome}} (Critical: {{critical_count}}, Major: {{major_count}}, Minor: {{minor_count}})
## Critical Issues (Blockers)
{{list_each_with_description_and_evidence}}
## Major Issues (Should Fix)
{{list_each_with_description_and_evidence}}
## Minor Issues (Nice to Have)
{{list_each_with_description}}
## Successes
{{list_what_was_done_well}}
```
## User Alert and Remediation
**If FAIL:**
- Show issues summary and top 3 issues
- Offer options: (1) Auto-improve story, (2) Show detailed findings, (3) Fix manually, (4) Accept as-is
- If option 1: Re-load source docs, regenerate affected sections, re-run validation
**If PASS with issues:**
- Show issues list
- Ask: "Improve story? (y/n)"
- If yes: Enhance story with missing items
**If PASS:**
- Confirm: All quality standards met
- List successes
- Ready for story-context generation
</validation-checklist>
```
## Quick Reference
**Validation runs in fresh context and checks:**
1. ✅ Previous story continuity captured (files, notes, **unresolved review items**)
2. ✅ All relevant source docs discovered and cited
3. ✅ ACs match tech spec/epics exactly
4. ✅ Tasks cover all ACs with testing
5. ✅ Dev Notes have specific guidance with citations (not generic)
6. ✅ Structure and metadata complete
**Severity Levels:**
- **CRITICAL** = Missing previous story reference, missing tech spec cite, unresolved review items not called out, story not in epics
- **MAJOR** = Missing arch docs, missing files from previous story, vague Dev Notes, ACs don't match source, no testing subtasks
- **MINOR** = Vague citations, orphan tasks, missing Change Log
**Outcome Triggers:**
- **FAIL** = Any critical OR >3 major issues
- **PASS with issues** = ≤3 major issues, no critical
- **PASS** = All checks passed
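A minimal sketch of the outcome decision these triggers describe. The thresholds come directly from the checklist; treating minor-only findings as "PASS with issues" is one reading of the rules, and the function and field names are illustrative.
```python
from dataclasses import dataclass

@dataclass
class IssueCounts:
    critical: int = 0
    major: int = 0
    minor: int = 0

def validation_outcome(counts: IssueCounts) -> str:
    """Apply the outcome triggers: any critical or more than 3 major issues fails."""
    if counts.critical > 0 or counts.major > 3:
        return "FAIL"
    if counts.major > 0 or counts.minor > 0:
        # Assumption: minor-only findings still count as "with issues" rather than a clean pass.
        return "PASS with issues"
    return "PASS"
```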

View File

@@ -0,0 +1,254 @@
# Create Story - Workflow Instructions (Spec-compliant, non-interactive by default)
````xml
<critical>The workflow execution engine is governed by: {project_root}/bmad/core/tasks/workflow.xml</critical>
<critical>You MUST have already loaded and processed: {installed_path}/workflow.yaml</critical>
<critical>Generate all documents in {document_output_language}</critical>
<critical>This workflow creates or updates the next user story from epics/PRD and architecture context, saving to the configured stories directory and optionally invoking Story Context.</critical>
<critical>DOCUMENT OUTPUT: Concise, technical, actionable story specifications. Use tables/lists for acceptance criteria and tasks.</critical>
<workflow>
<step n="1" goal="Load config and initialize">
<action>Resolve variables from config_source: story_dir (dev_story_location), output_folder, user_name, communication_language. If story_dir missing and {{non_interactive}} == false → ASK user to provide a stories directory and update variable. If {{non_interactive}} == true and missing, HALT with a clear message.</action>
<action>Create {{story_dir}} if it does not exist</action>
<action>Resolve installed component paths from workflow.yaml: template, instructions, validation</action>
<action>Resolve recommended inputs if present: epics_file, prd_file, architecture_file</action>
</step>
<step n="2" goal="Discover and load source documents">
<critical>PREVIOUS STORY CONTINUITY: Essential for maintaining context and learning from prior development</critical>
<action>Find the previous completed story to extract dev agent learnings and review findings:
1. Load {{output_folder}}/sprint-status.yaml COMPLETELY
2. Find current {{story_key}} in development_status section
3. Identify the story entry IMMEDIATELY ABOVE current story (previous row in file order)
4. If previous story exists:
- Extract {{previous_story_key}}
- Check previous story status (done, in-progress, review, etc.)
- If status is "done", "review", or "in-progress" (has some completion):
* Construct path: {{story_dir}}/{{previous_story_key}}.md
* Load the COMPLETE previous story file
* Parse ALL sections comprehensively:
A) Dev Agent Record → Completion Notes List:
- New patterns/services created (to reuse, not recreate)
- Architectural deviations or decisions made
- Technical debt deferred to future stories
- Warnings or recommendations for next story
- Interfaces/methods created for reuse
B) Dev Agent Record → Debug Log References:
- Issues encountered and solutions
- Gotchas or unexpected challenges
- Workarounds applied
C) Dev Agent Record → File List:
- Files created (NEW) - understand new capabilities
- Files modified (MODIFIED) - track evolving components
- Files deleted (DELETED) - removed functionality
D) Dev Notes:
- Any "future story" notes or TODOs
- Patterns established
- Constraints discovered
E) Senior Developer Review (AI) section (if present):
- Review outcome (Approve/Changes Requested/Blocked)
- Unresolved action items (unchecked [ ] items)
- Key findings that might affect this story
- Architectural concerns raised
F) Senior Developer Review → Action Items (if present):
- Check for unchecked [ ] items still pending
- Note any systemic issues that apply to multiple stories
G) Review Follow-ups (AI) tasks (if present):
- Check for unchecked [ ] review tasks still pending
- Determine if they're epic-wide concerns
H) Story Status:
- If "review" or "in-progress" - incomplete, note what's pending
- If "done" - confirmed complete
* Store ALL findings as {{previous_story_learnings}} with structure:
- new_files: [list]
- modified_files: [list]
- new_services: [list with descriptions]
- architectural_decisions: [list]
- technical_debt: [list]
- warnings_for_next: [list]
- review_findings: [list if review exists]
- pending_items: [list of unchecked action items]
- If status is "backlog" or "drafted":
* Set {{previous_story_learnings}} = "Previous story not yet implemented"
5. If no previous story exists (first story in epic):
- Set {{previous_story_learnings}} = "First story in epic - no predecessor context"
</action>
<action>If {{tech_spec_file}} empty: derive from {{tech_spec_glob_template}} with {{epic_num}} and search {{tech_spec_search_dir}} recursively. If multiple, pick most recent by modified time.</action>
<action>Build a prioritized document set for this epic:
1) tech_spec_file (epic-scoped)
2) epics_file (acceptance criteria and breakdown)
3) prd_file (business requirements and constraints)
4) architecture_file (architecture constraints)
5) Architecture docs under docs/ and output_folder/: tech-stack.md, unified-project-structure.md, coding-standards.md, testing-strategy.md, backend-architecture.md, frontend-architecture.md, data-models.md, database-schema.md, rest-api-spec.md, external-apis.md (include if present)
</action>
<action>READ COMPLETE FILES for all items found in the prioritized set. Store content and paths for citation.</action>
</step>
<step n="3" goal="Find next backlog story to draft" tag="sprint-status">
<critical>MUST read COMPLETE sprint-status.yaml file from start to end to preserve order</critical>
<action>Load the FULL file: {{output_folder}}/sprint-status.yaml</action>
<action>Read ALL lines from beginning to end - do not skip any content</action>
<action>Parse the development_status section completely to understand story order</action>
<action>Find the FIRST story (by reading in order from top to bottom) where:
- Key matches pattern: number-number-name (e.g., "1-2-user-auth")
- NOT an epic key (epic-X) or retrospective (epic-X-retrospective)
- Status value equals "backlog"
</action>
<check if="no backlog story found">
<output>📋 No backlog stories found in sprint-status.yaml
All stories are either already drafted or completed.
**Options:**
1. Run sprint-planning to refresh story tracking
2. Load PM agent and run correct-course to add more stories
3. Check if current sprint is complete
</output>
<action>HALT</action>
</check>
<action>Extract from found story key (e.g., "1-2-user-authentication"):
- epic_num: first number before dash (e.g., "1")
- story_num: second number after first dash (e.g., "2")
- story_title: remainder after second dash (e.g., "user-authentication")
</action>
<action>Set {{story_id}} = "{{epic_num}}.{{story_num}}"</action>
<action>Store story_key for later use (e.g., "1-2-user-authentication")</action>
<action>Verify story is enumerated in {{epics_file}}. If not found, HALT with message:</action>
<action>"Story {{story_key}} not found in epics.md. Please load PM agent and run correct-course to sync epics, then rerun create-story."</action>
<action>Check if story file already exists at expected path in {{story_dir}}</action>
<check if="story file exists">
<output> Story file already exists: {{story_file_path}}
Will update existing story file rather than creating new one.
</output>
<action>Set update_mode = true</action>
</check>
</step>
<step n="4" goal="Extract requirements and derive story statement">
<action>From tech_spec_file (preferred) or epics_file: extract epic {{epic_num}} title/summary, acceptance criteria for the next story, and any component references. If not present, fall back to PRD sections mapping to this epic/story.</action>
<action>From architecture and architecture docs: extract constraints, patterns, component boundaries, and testing guidance relevant to the extracted ACs. ONLY capture information that directly informs implementation of this story.</action>
<action>Derive a clear user story statement (role, action, benefit) grounded strictly in the above sources. If ambiguous and {{non_interactive}} == false → ASK user to clarify. If {{non_interactive}} == true → generate the best grounded statement WITHOUT inventing domain facts.</action>
<template-output file="{default_output_file}">requirements_context_summary</template-output>
</step>
<step n="5" goal="Project structure alignment and lessons learned">
<action>Review {{previous_story_learnings}} and extract actionable intelligence:
- New patterns/services created → Note for reuse (DO NOT recreate)
- Architectural deviations → Understand and maintain consistency
- Technical debt items → Assess if this story should address them
- Files modified → Understand current state of evolving components
- Warnings/recommendations → Apply to this story's approach
- Review findings → Learn from issues found in previous story
- Pending action items → Determine if epic-wide concerns affect this story
</action>
<action>If unified-project-structure.md present: align expected file paths, module names, and component locations; note any potential conflicts.</action>
<action>Cross-reference {{previous_story_learnings}}.new_files with project structure to understand where new capabilities are located.</action>
<template-output file="{default_output_file}">structure_alignment_summary</template-output>
</step>
<step n="6" goal="Assemble acceptance criteria and tasks">
<action>Assemble acceptance criteria list from tech_spec or epics. If gaps exist, derive minimal, testable criteria from PRD verbatim phrasing (NO invention).</action>
<action>Create tasks/subtasks directly mapped to ACs. Include explicit testing subtasks per testing-strategy and existing tests framework. Cite architecture/source documents for any technical mandates.</action>
<template-output file="{default_output_file}">acceptance_criteria</template-output>
<template-output file="{default_output_file}">tasks_subtasks</template-output>
</step>
<step n="7" goal="Create or update story document">
<action>Resolve output path: {default_output_file} using current {{epic_num}} and {{story_num}}. If targeting an existing story for update, use its path.</action>
<action>Initialize from template.md if creating a new file; otherwise load existing file for edit.</action>
<action>Compute a concise story_title from epic/story context; if missing, synthesize from PRD feature name and epic number.</action>
<template-output file="{default_output_file}">story_header</template-output>
<template-output file="{default_output_file}">story_body</template-output>
<template-output file="{default_output_file}">dev_notes_with_citations</template-output>
<action>If {{previous_story_learnings}} contains actionable items (not "First story" or "not yet implemented"):
- Add "Learnings from Previous Story" subsection to Dev Notes
- Include relevant completion notes, new files/patterns, deviations
- Cite previous story file as reference [Source: stories/{{previous_story_key}}.md]
- Highlight interfaces/services to REUSE (not recreate)
- Note any technical debt to address in this story
- List pending review items that affect this story (if any)
- Reference specific files created: "Use {{file_path}} for {{purpose}}"
- Format example:
```
### Learnings from Previous Story
**From Story {{previous_story_key}} (Status: {{previous_status}})**
- **New Service Created**: `AuthService` base class available at `src/services/AuthService.js` - use `AuthService.register()` method
- **Architectural Change**: Switched from session-based to JWT authentication
- **Schema Changes**: User model now includes `passwordHash` field, migration applied
- **Technical Debt**: Email verification skipped, should be included in this or subsequent story
- **Testing Setup**: Auth test suite initialized at `tests/integration/auth.test.js` - follow patterns established there
- **Pending Review Items**: Rate limiting mentioned in review - consider for this story
[Source: stories/{{previous_story_key}}.md#Dev-Agent-Record]
```
</action>
<template-output file="{default_output_file}">change_log</template-output>
</step>
<step n="8" goal="Validate, save, and mark story drafted" tag="sprint-status">
<invoke-task>Validate against checklist at {installed_path}/checklist.md using bmad/core/tasks/validate-workflow.xml</invoke-task>
<action>Save document unconditionally (non-interactive default). In interactive mode, allow user confirmation.</action>
<!-- Mark story as drafted in sprint status -->
<action>Update {{output_folder}}/sprint-status.yaml</action>
<action>Load the FULL file and read all development_status entries</action>
<action>Find development_status key matching {{story_key}}</action>
<action>Verify current status is "backlog" (expected previous state)</action>
<action>Update development_status[{{story_key}}] = "drafted"</action>
<action>Save file, preserving ALL comments and structure including STATUS DEFINITIONS</action>
<check if="story key not found in file">
<output>⚠️ Could not update story status: {{story_key}} not found in sprint-status.yaml
Story file was created successfully, but sprint-status.yaml was not updated.
You may need to run sprint-planning to refresh tracking, or manually set the story row status to `drafted`.
</output>
</check>
<action>Report created/updated story path</action>
<output>**✅ Story Created Successfully, {user_name}!**
**Story Details:**
- Story ID: {{story_id}}
- Story Key: {{story_key}}
- File: {{story_file}}
- Status: drafted (was backlog)
**⚠️ Important:** The following workflows are context-intensive. It's recommended to clear context and restart the SM agent before running the next command.
**Next Steps:**
1. Review the drafted story in {{story_file}}
2. **[RECOMMENDED]** Run `story-context` to generate technical context XML and mark story ready for development (combines context + ready in one step)
3. Or run `story-ready` to manually mark the story ready without generating technical context
</output>
</step>
</workflow>
````

View File

@@ -0,0 +1,51 @@
# Story {{epic_num}}.{{story_num}}: {{story_title}}
Status: drafted
## Story
As a {{role}},
I want {{action}},
so that {{benefit}}.
## Acceptance Criteria
1. [Add acceptance criteria from epics/PRD]
## Tasks / Subtasks
- [ ] Task 1 (AC: #)
- [ ] Subtask 1.1
- [ ] Task 2 (AC: #)
- [ ] Subtask 2.1
## Dev Notes
- Relevant architecture patterns and constraints
- Source tree components to touch
- Testing standards summary
### Project Structure Notes
- Alignment with unified project structure (paths, modules, naming)
- Detected conflicts or variances (with rationale)
### References
- Cite all technical details with source paths and sections, e.g. [Source: docs/<file>.md#Section]
## Dev Agent Record
### Context Reference
<!-- Path(s) to story context XML will be added here by context workflow -->
### Agent Model Used
{{agent_model_name_version}}
### Debug Log References
### Completion Notes List
### File List

View File

@@ -0,0 +1,47 @@
name: create-story
description: "Create the next user story markdown from epics/PRD and architecture, using a standard template and saving to the stories folder"
author: "BMad"
# Critical variables from config
config_source: "{project-root}/bmad/bmm/config.yaml"
output_folder: "{config_source}:output_folder"
user_name: "{config_source}:user_name"
communication_language: "{config_source}:communication_language"
date: system-generated
# Workflow components
installed_path: "{project-root}/bmad/bmm/workflows/4-implementation/create-story"
template: "{installed_path}/template.md"
instructions: "{installed_path}/instructions.md"
validation: "{installed_path}/checklist.md"
# Variables and inputs
variables:
story_dir: "{config_source}:dev_story_location" # Directory where stories are stored
epics_file: "{output_folder}/epics.md" # Preferred source for epic/story breakdown
prd_file: "{output_folder}/PRD.md" # Fallback for requirements
architecture_file: "{output_folder}/architecture.md" # Optional architecture context
tech_spec_file: "" # Will be auto-discovered from docs as tech-spec-epic-{{epic_num}}-*.md
tech_spec_search_dir: "{project-root}/docs"
tech_spec_glob_template: "tech-spec-epic-{{epic_num}}*.md"
arch_docs_search_dirs: |
- "{project-root}/docs"
- "{output_folder}"
arch_docs_file_names: |
- architecture.md
- infrastructure-architecture.md
story_title: "" # Will be elicited if not derivable
epic_num: 1
story_num: 1
non_interactive: true # Generate without elicitation; avoid interactive prompts
# Output configuration
# Uses story_key from sprint-status.yaml (e.g., "1-2-user-authentication")
default_output_file: "{story_dir}/{{story_key}}.md"
recommended_inputs:
- epics: "Epic breakdown (epics.md)"
- prd: "PRD document"
- architecture: "Architecture (optional)"
standalone: true

View File

@@ -0,0 +1,367 @@
# Workflow Audit Report
**Workflow:** dev-story
**Audit Date:** 2025-10-25
**Auditor:** Audit Workflow (BMAD v6)
**Workflow Type:** Action Workflow
**Module:** BMM (BMad Method)
---
## Executive Summary
**Overall Status:** GOOD - Minor issues to address
- Critical Issues: 0
- Important Issues: 3
- Cleanup Recommendations: 2
The dev-story workflow is well-structured and follows most BMAD v6 standards. It correctly sets `web_bundle: false`, as expected for implementation workflows. However, several variables referenced in the instructions are not defined in the YAML, and there are minor consistency issues in variable usage.
---
## 1. Standard Config Block Validation
**Status:** PASS ✓
The workflow.yaml contains all required standard config variables:
- ✓ `config_source: "{project-root}/bmad/bmm/config.yaml"` - Correctly defined
- ✓ `output_folder: "{config_source}:output_folder"` - Pulls from config_source
- ✓ `user_name: "{config_source}:user_name"` - Pulls from config_source
- ✓ `communication_language: "{config_source}:communication_language"` - Pulls from config_source
- ✓ `date: system-generated` - Correctly set
All standard config variables are present and properly formatted using {project-root} variable syntax.
---
## 2. YAML/Instruction/Template Alignment
**Variables Analyzed:** 9 (excluding standard config)
**Used in Instructions:** 6
**Unused (Bloat):** 3
### YAML Variables Defined
1. `story_dir` - USED in instructions (file paths)
2. `context_path` - UNUSED (appears to duplicate story_dir)
3. `story_file` - USED in instructions
4. `context_file` - USED in instructions
5. `installed_path` - USED in instructions (workflow.xml reference)
6. `instructions` - USED in instructions (self-reference in critical tag)
7. `validation` - USED in instructions (checklist reference)
8. `web_bundle` - CONFIGURATION (correctly set to false)
9. `date` - USED in instructions (config variable)
### Variables Used in Instructions But NOT Defined in YAML
**IMPORTANT ISSUE:** The following variables are referenced in instructions.md but are NOT defined in workflow.yaml:
1. `{user_skill_level}` - Used 4 times (lines 6, 13, 173, 182)
2. `{document_output_language}` - Used 1 time (line 7)
3. `{run_until_complete}` - Used 1 time (line 108)
4. `{run_tests_command}` - Used 1 time (line 120)
These variables appear to be pulling from config.yaml but are not explicitly defined in the workflow.yaml file. While the config_source mechanism may provide these, workflow.yaml should document all variables used in the workflow for clarity.
### Unused Variables (Bloat)
1. **context_path** - Defined as `"{config_source}:dev_story_location"` but never used. This duplicates `story_dir` functionality.
---
## 3. Config Variable Usage
**Communication Language:** PASS ✓
**User Name:** PASS ✓
**Output Folder:** PASS ✓
**Date:** PASS ✓
### Detailed Analysis
**Communication Language:**
- ✓ Used in line 6: "Communicate all responses in {communication_language}"
- ✓ Properly used as agent instruction variable (not in template)
**User Name:**
- ✓ Used in line 169: "Communicate to {user_name} that story implementation is complete"
- ✓ Appropriately used for personalization
**Output Folder:**
- ✓ Used multiple times for sprint-status.yaml file paths
- ✓ All file operations target {output_folder} correctly
- ✓ No hardcoded paths detected
**Date:**
- ✓ Available for agent use (system-generated)
- ✓ Used appropriately in context of workflow execution
### Additional Config Variables
**IMPORTANT ISSUE:** The workflow uses additional variables that appear to come from config but are not explicitly documented:
1. `{user_skill_level}` - Used to tailor communication style
2. `{document_output_language}` - Used for document generation
3. `{run_until_complete}` - Used for execution control
4. `{run_tests_command}` - Used for test execution
These should either be:
- Added to workflow.yaml with proper config_source references, OR
- Documented as optional config variables with defaults
---
## 4. Web Bundle Validation
**Web Bundle Present:** No (Intentional)
**Status:** EXPECTED ✓
The workflow correctly sets `web_bundle: false`. This is the expected configuration for implementation workflows that:
- Run locally in the development environment
- Don't need to be bundled for web deployment
- Are IDE-integrated workflows
**No issues found** - This is the correct configuration for dev-story.
---
## 5. Bloat Detection
**Bloat Percentage:** 11% (1 unused field / 9 total fields)
**Cleanup Potential:** Low
### Unused YAML Fields
1. **context_path** (line 11 in workflow.yaml)
- Defined as: `"{config_source}:dev_story_location"`
- Never referenced in instructions.md
- Duplicates functionality of `story_dir` variable
- **Recommendation:** Remove this variable as `story_dir` serves the same purpose
### Hardcoded Values
No significant hardcoded values that should be variables were detected. The workflow properly uses variables for:
- File paths ({output_folder}, {story_dir})
- User personalization ({user_name})
- Communication style ({communication_language}, {user_skill_level})
### Calculation
- Total yaml fields: 9 (excluding standard config and metadata)
- Used fields: 8
- Unused fields: 1 (context_path)
- Bloat percentage: 11%
**Status:** Acceptable (under 15% threshold)
---
## 6. Template Variable Mapping
**Not Applicable** - This is an action workflow, not a document workflow.
No template.md file exists, which is correct for action-type workflows.
---
## 7. Instructions Quality Analysis
### Structure
- ✓ Steps numbered sequentially (1, 1.5, 2-7)
- ✓ Each step has clear goal attributes
- ✓ Proper use of XML tags (<action>, <check>, <goto>, <anchor>, <output>)
- ✓ Logical flow control with anchors and conditional checks
- ✓ Repeat patterns used appropriately (step 2-5 loop)
### Critical Tags
- ✓ Critical blocks present and well-defined
- ✓ Clear references to workflow execution engine
- ✓ Workflow.yaml load requirement specified
- ✓ Communication preferences documented
### Variable Usage Consistency
**ISSUE:** Inconsistent variable syntax found:
1. Lines 4, 5 use `{project_root}` (underscore)
2. Line 166 uses `{project-root}` (hyphen)
**Recommendation:** Standardize to `{project-root}` throughout (hyphen is the standard in BMAD v6)
### Step Quality
**Excellent:**
- Steps are focused and single-purpose
- Clear HALT conditions defined
- Comprehensive validation checks
- Good error handling patterns
- Iterative execution model well-structured
**Areas for improvement:**
- Step 1 is complex and could potentially be split
- Some <action if="..."> conditionals could be clearer with <check> blocks
---
## Recommendations
### Critical (Fix Immediately)
None - No critical issues detected.
### Important (Address Soon)
1. **Document or Define Missing Variables**
- Add explicit definitions in workflow.yaml for: `user_skill_level`, `document_output_language`, `run_until_complete`, `run_tests_command`
- OR document these as optional config variables with defaults
- These variables are used in instructions but not defined in YAML
- **Impact:** Reduces clarity and may cause confusion about variable sources
2. **Standardize project-root Variable Syntax**
- Change line 4 `{project_root}` to `{project-root}` (hyphen)
- Ensure consistency with BMAD v6 standard naming convention
- **Impact:** Maintains consistency with framework standards
3. **Remove or Use context_path Variable**
- Variable `context_path` is defined but never used
- Since `story_dir` serves the same purpose, remove `context_path`
- OR if there's a semantic difference, document why both exist
- **Impact:** Reduces bloat and potential confusion
### Cleanup (Nice to Have)
1. **Consider Splitting Step 1**
- Step 1 handles both story discovery AND file loading
- Could be split into "1. Find Story" and "2. Load Story Files"
- Would improve clarity and maintainability
- **Impact:** Minor improvement to workflow structure
2. **Add Variable Documentation Comment**
- Add a comment block in workflow.yaml listing all variables used by this workflow
- Include both explicit YAML variables and config-pulled variables
- Example format:
```yaml
# Workflow-specific variables
# - story_file: Path to story markdown
# - story_dir: Directory containing stories
#
# Config-pulled variables (from bmm/config.yaml)
# - user_skill_level: User's technical skill level
# - document_output_language: Language for generated docs
```
- **Impact:** Improves developer understanding and maintenance
---
## Validation Checklist
### Structure ✓
- [x] workflow.yaml loads without YAML syntax errors
- [x] instructions.md exists and is properly formatted
- [x] No template.md (correct for action workflow)
- [x] All critical headers present in instructions
- [x] Workflow type correctly identified (action)
- [x] All referenced files exist
- [x] No placeholder text remains
### Standard Config Block ✓
- [x] config_source points to correct module config
- [x] output_folder pulls from config_source
- [x] user_name pulls from config_source
- [x] communication_language pulls from config_source
- [x] date is system-generated
- [x] Config source uses {project-root} variable
- [x] Standard config comment present
### Config Variable Usage ✓
- [x] Instructions communicate in {communication_language}
- [x] Instructions address {user_name}
- [x] All file outputs use {output_folder}
- [x] No hardcoded paths
- [x] Date available for agent awareness
### YAML/Instruction/Template Alignment ⚠️
- [⚠️] Some variables used in instructions not defined in YAML
- [x] Template variables N/A (action workflow)
- [x] Variable names are descriptive
- [⚠️] One unused yaml field (context_path)
### Web Bundle Validation ✓
- [x] web_bundle: false is correct for this workflow
- [x] No web_bundle section needed
- [x] Workflow is local/IDE-integrated only
### Instructions Quality ✓
- [x] Steps numbered sequentially
- [x] Clear goal attributes
- [x] Proper XML tag usage
- [x] Logical flow control
- [⚠️] Minor inconsistency: {project_root} vs {project-root}
### Bloat Detection ✓
- [x] Bloat percentage: 11% (acceptable, under 15%)
- [x] No significant hardcoded values
- [x] No redundant configuration
- [x] One cleanup recommendation (context_path)
---
## Next Steps
1. **Define missing variables** - Add explicit YAML definitions or document as config-pulled variables
2. **Standardize variable syntax** - Change `{project_root}` to `{project-root}`
3. **Remove context_path** - Clean up unused variable
4. **Re-run audit** - Verify improvements after fixes
---
## Additional Notes
### Strengths
1. **Comprehensive Workflow Logic:** The dev-story workflow is well-thought-out with proper error handling, validation gates, and iterative execution
2. **Config Integration:** Excellent use of config variables for user personalization and output management
3. **Clear Documentation:** Instructions are detailed with specific HALT conditions and validation checkpoints
4. **Proper Web Bundle Setting:** Correctly identifies this as a local-only workflow with web_bundle: false
5. **Step Flow:** Excellent use of anchors, goto, and conditional checks for complex flow control
### Workflow Purpose
This workflow executes user stories by:
- Finding ready-for-dev stories from sprint status
- Implementing tasks and subtasks incrementally
- Writing comprehensive tests
- Validating against acceptance criteria
- Updating story status through sprint lifecycle
- Supporting different user skill levels with adaptive communication
The workflow is a critical part of the BMM implementation phase and shows mature design patterns.
---
**Audit Complete** - Generated by audit-workflow v1.0
**Pass Rate:** 89% (62 passed / 70 total checks)
**Recommendation:** Good - Minor fixes needed
The dev-story workflow is production-ready with minor improvements recommended. The issues identified are primarily documentation and consistency improvements rather than functional problems.

View File

@@ -0,0 +1,206 @@
# Dev Story Workflow
The dev-story workflow is where v6's just-in-time context approach delivers its primary value, enabling the Developer (DEV) agent to implement stories with expert-level guidance injected directly into their context. This workflow is run EXCLUSIVELY by the DEV agent and operates on a single story that has been prepared by the SM through create-story and enhanced with story-context. The DEV loads both the story specification and the dynamically-generated story context XML, then proceeds through implementation with the combined knowledge of requirements and domain-specific expertise.
The workflow operates with two critical inputs: the story file (created by SM's create-story) containing acceptance criteria, tasks, and requirements; and the story-context XML (generated by SM's story-context) providing just-in-time expertise injection tailored to the story's technical needs. This dual-input approach ensures the developer has both the "what" (from the story) and the "how" (from the context) needed for successful implementation. The workflow iterates through tasks sequentially, implementing code, writing tests, and updating the story document's allowed sections until all tasks are complete.
A critical aspect of v6 flow is that dev-story may be run multiple times for the same story. Initially run to implement the story, it may be run again after code-review identifies issues that need correction. The workflow intelligently resumes from incomplete tasks, making it ideal for both initial implementation and post-review fixes. The DEV agent maintains strict boundaries on what can be modified in the story file—only Tasks/Subtasks checkboxes, Dev Agent Record, File List, Change Log, and Status may be updated, preserving the story's requirements integrity.
## Usage
```bash
# Developer implements the story with injected context
bmad dev *develop
# Or if returning to fix review issues
bmad dev *develop # Will resume from incomplete tasks
```
The DEV runs this workflow:
- After SM completes both create-story and story-context
- When a story status is "Draft" or "Approved" (ready for development)
- After code-review identifies issues requiring fixes
- To resume work on a partially completed story
## Inputs
**Required Files:**
- **Story Document** (`{story_dir}/story-{epic}.{story}.md`): Complete specification including:
- User story statement
- Acceptance criteria (immutable during dev)
- Tasks and subtasks (checkable during implementation)
- Dev Notes section (for context and guidance)
- Status field (Draft → In Progress → Ready for Review)
- **Story Context XML** (`{story_dir}/story-{epic}.{story}-context.xml`): Domain expertise including:
- Technical patterns and best practices
- Gotchas and common pitfalls
- Testing strategies
- Relevant code references
- Architecture constraints
**Configuration:**
- `dev_story_location`: Directory containing story files (from bmm/config.yaml)
- `story_selection_limit`: Number of recent stories to show (default: 10)
- `run_tests_command`: Test command (default: auto-detected)
- `strict`: Whether to halt on test failures (default: true)
## Outputs
**Code Implementation:**
- Production code satisfying acceptance criteria
- Unit, integration, and E2E tests as appropriate
- Documentation updates
- Migration scripts if needed
**Story File Updates (allowed sections only):**
- **Tasks/Subtasks**: Checkboxes marked complete as work progresses
- **Dev Agent Record**: Debug log and completion notes
- **File List**: All files created or modified
- **Change Log**: Summary of implementation changes
- **Status**: Updated to "Ready for Review" when complete
**Validation:**
- All tests passing (existing + new)
- Acceptance criteria verified
- Code quality checks passed
- No regression in existing functionality
## Key Features
**Story-Context Integration**: The workflow loads and leverages the story-context XML throughout implementation, providing expert guidance without cluttering the main conversation. This ensures best practices are followed while maintaining developer autonomy.
**Task-by-Task Iteration**: Implements one task at a time, marking completion before moving to the next. This granular approach enables progress tracking and clean resumption if interrupted or after review feedback.
**Strict File Boundaries**: Only specific sections of the story file may be modified, preserving requirement integrity. The story's acceptance criteria, main description, and other planning sections remain immutable during development.
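A rough sketch of how such a boundary check could be enforced mechanically, assuming `## `-level markdown headings and the allowed sections listed under Outputs above. This is an illustrative guard under those assumptions, not part of the shipped workflow; the preamble (title and Status line) is treated as editable for simplicity.
```python
ALLOWED = {"Tasks / Subtasks", "Tasks and Subtasks", "Dev Agent Record", "File List", "Change Log"}

def split_sections(markdown: str) -> dict:
    """Split a story file into its '## ' sections; the preamble (title + Status) gets key ''."""
    sections, current = {}, ""
    for line in markdown.splitlines():
        if line.startswith("## "):
            current = line[3:].strip()
            sections.setdefault(current, "")
        else:
            sections[current] = sections.get(current, "") + line + "\n"
    return sections

def modified_outside_allowed(before: str, after: str) -> list:
    """List changed sections that are not in the allowed set (preamble excluded)."""
    old, new = split_sections(before), split_sections(after)
    changed = [name for name in set(old) | set(new) if old.get(name) != new.get(name)]
    return [name for name in changed if name not in ALLOWED and name != ""]
```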
**Test-Driven Approach**: For each task, the workflow emphasizes writing tests that verify acceptance criteria before or alongside implementation, ensuring requirements are actually met.
**Resumable Implementation**: If the workflow is run again on a story with incomplete tasks (e.g., after code-review finds issues), it intelligently resumes from where it left off rather than starting over.
## Integration with v6 Flow
The dev-story workflow is step 3 in the v6 implementation cycle:
1. SM: create-story (generates the story specification)
2. SM: story-context (adds JIT technical expertise)
3. **DEV: dev-story** ← You are here (initial implementation)
4. DEV/SR: code-review (validates completion)
5. If issues found: **DEV: dev-story** ← May return here for fixes
6. After epic: retrospective (captures learnings)
This workflow may be executed multiple times for the same story as part of the review-fix cycle. Each execution picks up from incomplete tasks, making it efficient for iterative improvement.
## Workflow Process
### Phase 1: Story Selection
- If `story_path` provided: Use that story directly
- Otherwise: List recent stories from `dev_story_location`
- Parse story structure and validate format
- Load corresponding story-context XML
### Phase 2: Task Implementation Loop
For each incomplete task:
1. **Plan**: Log approach in Dev Agent Record
2. **Implement**: Write code following story-context guidance
3. **Test**: Create tests verifying acceptance criteria
4. **Validate**: Run tests and quality checks
5. **Update**: Mark task complete, update File List
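A parsing sketch for this loop, assuming GitHub-style `- [ ]` / `- [x]` checkboxes under a `## Tasks and Subtasks` (or `## Tasks / Subtasks`) heading as in the story structure shown below. Function names are illustrative; the actual workflow drives this iteration through its XML instructions.
```python
import re

CHECKBOX = re.compile(r"^(\s*)- \[( |x)\] (.*)$")

def incomplete_tasks(story_markdown: str) -> list:
    """Return unchecked task/subtask lines from the Tasks and Subtasks section."""
    in_tasks = False
    pending = []
    for line in story_markdown.splitlines():
        if line.startswith("## "):
            in_tasks = line.strip().lower().startswith("## tasks")
            continue
        match = CHECKBOX.match(line)
        if in_tasks and match and match.group(2) == " ":
            pending.append(match.group(3).strip())
    return pending

def mark_task_complete(story_markdown: str, task_text: str) -> str:
    """Flip a single task's checkbox from [ ] to [x] without touching other sections."""
    return story_markdown.replace(f"- [ ] {task_text}", f"- [x] {task_text}", 1)
```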
### Phase 3: Completion
- Verify all tasks completed
- Run full test suite
- Update Status to "Ready for Review"
- Add completion notes to Dev Agent Record
## Story File Structure
The workflow expects stories with this structure:
```markdown
# Story {epic}.{story}: {Title}
**Status**: Draft|In Progress|Ready for Review|Done
**Epic**: {epic_number}
**Estimated Effort**: {estimate}
## Story
As a {role}, I want to {action} so that {benefit}
## Acceptance Criteria
- [ ] AC1...
- [ ] AC2...
## Tasks and Subtasks
- [ ] Task 1
- [ ] Subtask 1.1
- [ ] Task 2
## Dev Notes
{Context and guidance from story creation}
## Dev Agent Record
### Debug Log
{Implementation notes added during development}
### Completion Notes
{Summary added when complete}
## File List
{Files created/modified}
## Change Log
{Implementation summary}
```
## Best Practices
**Load Context First**: Always ensure the story-context XML is loaded at the start. This provides the expert guidance needed throughout implementation.
**Follow Task Order**: Implement tasks in the order listed. Dependencies are typically structured in the task sequence.
**Test Each Task**: Don't wait until the end to write tests. Test each task as you complete it to ensure it meets acceptance criteria.
**Update Incrementally**: Update the story file after each task completion rather than batching updates at the end.
**Respect Boundaries**: Never modify acceptance criteria or story description. If they seem wrong, that's a code-review or correct-course issue, not a dev-story fix.
**Use Context Guidance**: Actively reference the story-context for patterns, gotchas, and best practices. It's there to help you succeed.
## Troubleshooting
**Story Not Found**: Ensure story exists in `dev_story_location` and follows naming pattern `story-{epic}.{story}.md`
**Context XML Missing**: The story-context workflow must be run first. Check for `story-{epic}.{story}-context.xml`
**Tests Failing**: If strict mode is on, the workflow halts. Fix tests before continuing or set strict=false for development iteration.
**Can't Modify Story Section**: Remember only Tasks, Dev Agent Record, File List, Change Log, and Status can be modified. Other sections are immutable.
**Resuming After Review**: If code-review found issues, the workflow automatically picks up from incomplete tasks when run again.
## Related Workflows
- **create-story**: Creates the story specification (run by SM)
- **story-context**: Generates the context XML (run by SM)
- **code-review**: Validates implementation (run by SR/DEV)
- **correct-course**: Handles major issues or scope changes (run by SM)

View File

@@ -0,0 +1,38 @@
---
title: 'Dev Story Completion Checklist'
validation-target: 'Story markdown ({{story_path}})'
required-inputs:
- 'Story markdown file with Tasks/Subtasks, Acceptance Criteria'
optional-inputs:
- 'Test results output (if saved)'
- 'CI logs (if applicable)'
validation-rules:
- 'Only permitted sections in story were modified: Tasks/Subtasks checkboxes, Dev Agent Record (Debug Log, Completion Notes), File List, Change Log, and Status'
---
# Dev Story Completion Checklist
## Tasks Completion
- [ ] All tasks and subtasks for this story are marked complete with [x]
- [ ] Implementation aligns with every Acceptance Criterion in the story
## Tests and Quality
- [ ] Unit tests added/updated for core functionality changed by this story
- [ ] Integration tests added/updated when component interactions are affected
- [ ] End-to-end tests created for critical user flows, if applicable
- [ ] All tests pass locally (no regressions introduced)
- [ ] Linting and static checks (if configured) pass
## Story File Updates
- [ ] File List section includes every new/modified/deleted file (paths relative to repo root)
- [ ] Dev Agent Record contains relevant Debug Log and/or Completion Notes for this work
- [ ] Change Log includes a brief summary of what changed
- [ ] Only permitted sections of the story file were modified
## Final Status
- [ ] Regression suite executed successfully
- [ ] Story Status is set to "Ready for Review"

View File

@@ -0,0 +1,262 @@
# Develop Story - Workflow Instructions
```xml
<critical>The workflow execution engine is governed by: {project-root}/bmad/core/tasks/workflow.xml</critical>
<critical>You MUST have already loaded and processed: {installed_path}/workflow.yaml</critical>
<critical>Communicate all responses in {communication_language} and language MUST be tailored to {user_skill_level}</critical>
<critical>Generate all documents in {document_output_language}</critical>
<critical>Only modify the story file in these areas: Tasks/Subtasks checkboxes, Dev Agent Record (Debug Log, Completion Notes), File List, Change Log, and Status</critical>
<critical>Execute ALL steps in exact order; do NOT skip steps</critical>
<critical>Absolutely DO NOT stop because of "milestones", "significant progress", or "session boundaries". Continue in a single execution until the story is COMPLETE (all ACs satisfied and all tasks/subtasks checked) UNLESS a HALT condition is triggered or the USER gives other instruction.</critical>
<critical>Do NOT schedule a "next session" or request review pauses unless a HALT condition applies. Only Step 6 decides completion.</critical>
<critical>User skill level ({user_skill_level}) affects conversation style ONLY, not code updates.</critical>
<workflow>
<step n="1" goal="Find next ready story and load it" tag="sprint-status">
<check if="{{story_path}} is provided">
<action>Use {{story_path}} directly</action>
<action>Read COMPLETE story file</action>
<action>Extract story_key from filename or metadata</action>
<goto>task_check</goto>
</check>
<critical>MUST read COMPLETE sprint-status.yaml file from start to end to preserve order</critical>
<action>Load the FULL file: {{output_folder}}/sprint-status.yaml</action>
<action>Read ALL lines from beginning to end - do not skip any content</action>
<action>Parse the development_status section completely to understand story order</action>
<action>Find the FIRST story (by reading in order from top to bottom) where:
- Key matches pattern: number-number-name (e.g., "1-2-user-auth")
- NOT an epic key (epic-X) or retrospective (epic-X-retrospective)
- Status value equals "ready-for-dev"
</action>
<check if="no ready-for-dev or in-progress story found">
<output>📋 No ready-for-dev stories found in sprint-status.yaml
**Options:**
1. Run `story-context` to generate context file and mark drafted stories as ready
2. Run `story-ready` to quickly mark drafted stories as ready without generating context
3. Run `create-story` if no incomplete stories are drafted yet
4. Check {output_folder}/sprint-status.yaml to see current sprint status
</output>
<action>HALT</action>
</check>
<action>Store the found story_key (e.g., "1-2-user-authentication") for later status updates</action>
<action>Find matching story file in {{story_dir}} using story_key pattern: {{story_key}}.md</action>
<action>Read COMPLETE story file from discovered path</action>
<anchor id="task_check" />
<action>Parse sections: Story, Acceptance Criteria, Tasks/Subtasks, Dev Notes, Dev Agent Record, File List, Change Log, Status</action>
<action>Check if context file exists at: {{story_dir}}/{{story_key}}.context.xml</action>
<check if="context file exists">
<action>Read COMPLETE context file</action>
<action>Parse all sections: story details, artifacts (docs, code, dependencies), interfaces, constraints, tests</action>
<action>Use this context to inform implementation decisions and approaches</action>
</check>
<check if="context file does NOT exist">
<output> No context file found for {{story_key}}
Proceeding with story file only. For better context, consider running `story-context` workflow first.
</output>
</check>
<action>Identify first incomplete task (unchecked [ ]) in Tasks/Subtasks</action>
<action if="no incomplete tasks"><goto step="6">Completion sequence</goto></action>
<action if="story file inaccessible">HALT: "Cannot develop story without access to story file"</action>
<action if="incomplete task or subtask requirements ambiguous">ASK user to clarify or HALT</action>
</step>
<step n="1.5" goal="Detect review continuation and extract review context">
<critical>Determine if this is a fresh start or continuation after code review</critical>
<action>Check if "Senior Developer Review (AI)" section exists in the story file</action>
<action>Check if "Review Follow-ups (AI)" subsection exists under Tasks/Subtasks</action>
<check if="Senior Developer Review section exists">
<action>Set review_continuation = true</action>
<action>Extract from "Senior Developer Review (AI)" section:
- Review outcome (Approve/Changes Requested/Blocked)
- Review date
- Total action items with checkboxes (count checked vs unchecked)
- Severity breakdown (High/Med/Low counts)
</action>
<action>Count unchecked [ ] review follow-up tasks in "Review Follow-ups (AI)" subsection</action>
<action>Store list of unchecked review items as {{pending_review_items}}</action>
<output>⏯️ **Resuming Story After Code Review** ({{review_date}})
**Review Outcome:** {{review_outcome}}
**Action Items:** {{unchecked_review_count}} remaining to address
**Priorities:** {{high_count}} High, {{med_count}} Medium, {{low_count}} Low
**Strategy:** Will prioritize review follow-up tasks (marked [AI-Review]) before continuing with regular tasks.
</output>
</check>
<check if="Senior Developer Review section does NOT exist">
<action>Set review_continuation = false</action>
<action>Set {{pending_review_items}} = empty</action>
<output>🚀 **Starting Fresh Implementation**
Story: {{story_key}}
Context file: {{context_available}}
First incomplete task: {{first_task_description}}
</output>
</check>
</step>
<step n="1.6" goal="Mark story in-progress" tag="sprint-status">
<action>Load the FULL file: {{output_folder}}/sprint-status.yaml</action>
<action>Read all development_status entries to find {{story_key}}</action>
<action>Get current status value for development_status[{{story_key}}]</action>
<check if="current status == 'ready-for-dev'">
<action>Update development_status[{{story_key}}] = "in-progress" in sprint-status.yaml</action>
<output>🚀 Starting work on story {{story_key}}
Status updated: ready-for-dev → in-progress
</output>
</check>
<check if="current status == 'in-progress'">
<output>⏯️ Resuming work on story {{story_key}}
Story is already marked in-progress
</output>
</check>
<check if="current status is neither ready-for-dev nor in-progress">
<output>⚠️ Unexpected story status: {{current_status}}
Expected ready-for-dev or in-progress. Continuing anyway...
</output>
</check>
</step>
<step n="2" goal="Plan and implement task">
<action>Review acceptance criteria and dev notes for the selected task</action>
<action>Plan implementation steps and edge cases; write down a brief plan in Dev Agent Record → Debug Log</action>
<action>Implement the task COMPLETELY, including all subtasks, strictly following the best practices, coding patterns, and coding standards of this repo as learned from the story, the context file, or your own critical agent instructions</action>
<action>Handle error conditions and edge cases appropriately</action>
<action if="new or different than what is documented dependencies are needed">ASK user for approval before adding</action>
<action if="3 consecutive implementation failures occur">HALT and request guidance</action>
<action if="required configuration is missing">HALT: "Cannot proceed without necessary configuration files"</action>
<critical>Do not stop after partial progress; continue iterating tasks until all ACs are satisfied and tested or a HALT condition triggers</critical>
<critical>Do NOT propose to pause for review, stand-ups, or validation until Step 6 gates are satisfied</critical>
</step>
<step n="3" goal="Author comprehensive tests">
<action>Create unit tests for business logic and core functionality introduced/changed by the task</action>
<action>Add integration tests for component interactions where desired by test plan or story notes</action>
<action>Include end-to-end tests for critical user flows where desired by test plan or story notes</action>
<action>Cover edge cases and error handling scenarios noted in the test plan or story notes</action>
</step>
<step n="4" goal="Run validations and tests">
<action>Determine how to run tests for this repo (infer or use {{run_tests_command}} if provided)</action>
<action>Run all existing tests to ensure no regressions</action>
<action>Run the new tests to verify implementation correctness</action>
<action>Run linting and code quality checks if configured</action>
<action>Validate implementation meets ALL story acceptance criteria; if ACs include quantitative thresholds (e.g., test pass rate), ensure they are met before marking complete</action>
<action if="regression tests fail">STOP and fix before continuing, consider how current changes made broke regression</action>
<action if="new tests fail">STOP and fix before continuing</action>
</step>
<step n="5" goal="Mark task complete, track review resolutions, and update story">
<critical>If the task is a review follow-up, mark BOTH the task checkbox AND the corresponding action item in the review section</critical>
<action>Check if completed task has [AI-Review] prefix (indicates review follow-up task)</action>
<check if="task is review follow-up">
<action>Extract review item details (severity, description, related AC/file)</action>
<action>Add to resolution tracking list: {{resolved_review_items}}</action>
<!-- Mark task in Review Follow-ups section -->
<action>Mark task checkbox [x] in "Tasks/Subtasks → Review Follow-ups (AI)" section</action>
<!-- CRITICAL: Also mark corresponding action item in review section -->
<action>Find matching action item in "Senior Developer Review (AI) → Action Items" section by matching description</action>
<action>Mark that action item checkbox [x] as resolved</action>
<action>Add to Dev Agent Record → Completion Notes: "✅ Resolved review finding [{{severity}}]: {{description}}"</action>
</check>
<action>ONLY mark the task (and subtasks) checkbox with [x] if ALL tests pass and validation succeeds</action>
<action>Update File List section with any new, modified, or deleted files (paths relative to repo root)</action>
<action>Add completion notes to Dev Agent Record if significant changes were made (summarize intent, approach, and any follow-ups)</action>
<check if="review_continuation == true and {{resolved_review_items}} is not empty">
<action>Count total resolved review items in this session</action>
<action>Add Change Log entry: "Addressed code review findings - {{resolved_count}} items resolved (Date: {{date}})"</action>
</check>
<action>Save the story file</action>
<action>Determine if more incomplete tasks remain</action>
<action if="more tasks remain"><goto step="2">Next task</goto></action>
<action if="no tasks remain"><goto step="6">Completion</goto></action>
</step>
<step n="6" goal="Story completion and mark for review" tag="sprint-status">
<action>Verify ALL tasks and subtasks are marked [x] (re-scan the story document now)</action>
<action>Run the full regression suite (do not skip)</action>
<action>Confirm File List includes every changed file</action>
<action>Execute story definition-of-done checklist, if the story includes one</action>
<action>Update the story Status to: review</action>
<!-- Mark story ready for review -->
<action>Load the FULL file: {{output_folder}}/sprint-status.yaml</action>
<action>Find development_status key matching {{story_key}}</action>
<action>Verify current status is "in-progress" (expected previous state)</action>
<action>Update development_status[{{story_key}}] = "review"</action>
<action>Save file, preserving ALL comments and structure including STATUS DEFINITIONS</action>
<check if="story key not found in file">
<output>⚠️ Story file updated, but sprint-status update failed: {{story_key}} not found
Story is marked Ready for Review in file, but sprint-status.yaml may be out of sync.
</output>
</check>
<action if="any task is incomplete">Return to step 1 to complete remaining work (Do NOT finish with partial progress)</action>
<action if="regression failures exist">STOP and resolve before completing</action>
<action if="File List is incomplete">Update it before completing</action>
</step>
<step n="7" goal="Completion communication and user support">
<action>Optionally run the workflow validation task against the story using {project-root}/bmad/core/tasks/validate-workflow.xml</action>
<action>Prepare a concise summary in Dev Agent Record → Completion Notes</action>
<action>Communicate to {user_name} that story implementation is complete and ready for review</action>
<action>Summarize key accomplishments: story ID, story key, title, key changes made, tests added, files modified</action>
<action>Provide the story file path and current status (now "review", was "in-progress")</action>
<action>Based on {user_skill_level}, ask if user needs any explanations about:
- What was implemented and how it works
- Why certain technical decisions were made
- How to test or verify the changes
- Any patterns, libraries, or approaches used
- Anything else they'd like clarified
</action>
<check if="user asks for explanations">
<action>Provide clear, contextual explanations tailored to {user_skill_level}</action>
<action>Use examples and references to specific code when helpful</action>
</check>
<action>Once explanations are complete (or user indicates no questions), suggest logical next steps</action>
<action>Common next steps to suggest (but allow user flexibility):
- Review the implemented story yourself and test the changes
- Verify all acceptance criteria are met
- Ensure deployment readiness if applicable
- Run `code-review` workflow for peer review
- Check sprint-status.yaml to see project progress
</action>
<action>Remain flexible - allow user to choose their own path or ask for other assistance</action>
</step>
</workflow>
```
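
For reference, a minimal sketch of the sprint-status.yaml changes this workflow makes for a single story (the story key is hypothetical; steps 1.6 and 6 perform the updates):

```yaml
# Hypothetical excerpt - story key and comments are illustrative
development_status:
  1-3-user-authentication: ready-for-dev   # before dev-story runs (set by story-context)
  # Step 1.6 updates the entry when implementation begins:
  # 1-3-user-authentication: in-progress
  # Step 6 updates it again once all tasks are checked and the regression suite passes:
  # 1-3-user-authentication: review
```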

View File

@@ -0,0 +1,26 @@
name: dev-story
description: "Execute a story by implementing tasks/subtasks, writing tests, validating, and updating the story file per acceptance criteria"
author: "BMad"
# Critical variables from config
config_source: "{project-root}/bmad/bmm/config.yaml"
output_folder: "{config_source}:output_folder"
user_name: "{config_source}:user_name"
communication_language: "{config_source}:communication_language"
user_skill_level: "{config_source}:user_skill_level"
document_output_language: "{config_source}:document_output_language"
story_dir: "{config_source}:dev_story_location"
run_until_complete: "{config_source}:run_until_complete"
run_tests_command: "{config_source}:run_tests_command"
date: system-generated
story_file: "" # Explicit story path; auto-discovered if empty
# Context file uses same story_key as story file (e.g., "1-2-user-authentication.context.xml")
context_file: "{story_dir}/{{story_key}}.context.xml"
# Workflow components
installed_path: "{project-root}/bmad/bmm/workflows/4-implementation/dev-story"
instructions: "{installed_path}/instructions.md"
validation: "{installed_path}/checklist.md"
standalone: true

View File

@@ -0,0 +1,195 @@
# Technical Specification Workflow
## Overview
Generates a comprehensive Technical Specification for a single epic from the PRD, epics file, and architecture, producing a document with full acceptance criteria and traceability mapping. It creates detailed implementation guidance that bridges business requirements with technical execution.
## Key Features
- **PRD-Architecture Integration** - Synthesizes business requirements with technical constraints
- **Traceability Mapping** - Links acceptance criteria to technical components and tests
- **Multi-Input Support** - Leverages PRD, architecture, front-end specs, and brownfield notes
- **Implementation Focus** - Provides concrete technical guidance for development teams
- **Testing Integration** - Includes test strategy aligned with acceptance criteria
- **Validation Ready** - Generates specifications suitable for architectural review
## Usage
### Basic Invocation
```bash
workflow tech-spec
```
### With Input Documents
```bash
# With specific PRD and architecture
workflow tech-spec --input PRD.md --input architecture.md
# With comprehensive inputs
workflow tech-spec --input PRD.md --input architecture.md --input front-end-spec.md
```
### Configuration
- **output_folder**: Location for generated technical specification
- **epic_id**: Used in the output filename (extracted from the PRD or prompted; see the example after this list)
- **recommended_inputs**: PRD, architecture, front-end spec, brownfield notes
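
As a rough illustration (the values are hypothetical, not taken from a real config), the configuration might resolve as follows:

```yaml
# Illustrative values only
output_folder: docs
epic_id: 2        # extracted from the PRD or prompted
# with the default output setting in workflow.yaml this would yield docs/tech-spec-epic-2.md
```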
## Workflow Structure
### Files Included
```
tech-spec/
├── workflow.yaml # Configuration and metadata
├── instructions.md # Step-by-step execution guide
├── template.md # Technical specification structure
├── checklist.md # Validation criteria
└── README.md # This file
```
## Workflow Process
### Phase 1: Input Collection and Analysis (Steps 1-2)
- **Document Discovery**: Locates PRD and Architecture documents
- **Epic Extraction**: Identifies epic title and ID from PRD content
- **Template Initialization**: Creates tech-spec document from template
- **Context Analysis**: Reviews complete PRD and architecture for alignment
### Phase 2: Technical Design Specification (Step 3)
- **Service Architecture**: Defines services/modules with responsibilities and interfaces
- **Data Modeling**: Specifies entities, schemas, and relationships
- **API Design**: Documents endpoints, request/response models, and error handling
- **Workflow Definition**: Details sequence flows and data processing patterns
### Phase 3: Non-Functional Requirements (Step 4)
- **Performance Specs**: Defines measurable latency, throughput, and scalability targets
- **Security Requirements**: Specifies authentication, authorization, and data protection
- **Reliability Standards**: Documents availability, recovery, and degradation behavior
- **Observability Framework**: Defines logging, metrics, and tracing requirements
### Phase 4: Dependencies and Integration (Step 5)
- **Dependency Analysis**: Scans repository for package manifests and frameworks
- **Integration Mapping**: Identifies external systems and API dependencies
- **Version Management**: Documents version constraints and compatibility requirements
- **Infrastructure Needs**: Specifies deployment and runtime dependencies
### Phase 5: Acceptance and Testing (Step 6)
- **Criteria Normalization**: Extracts and refines acceptance criteria from PRD
- **Traceability Matrix**: Maps acceptance criteria to technical components
- **Test Strategy**: Defines testing approach for each acceptance criterion
- **Component Mapping**: Links requirements to specific APIs and modules
### Phase 6: Risk and Validation (Steps 7-8)
- **Risk Assessment**: Identifies technical risks, assumptions, and open questions
- **Mitigation Planning**: Provides strategies for addressing identified risks
- **Quality Validation**: Ensures specification meets completeness criteria
- **Checklist Verification**: Validates against workflow quality standards
## Output
### Generated Files
- **Primary output**: tech-spec-{epic_id}-{date}.md
- **Traceability data**: Embedded mapping tables linking requirements to implementation
### Output Structure
1. **Overview and Scope** - Project context and boundaries
2. **System Architecture Alignment** - Connection to architecture
3. **Detailed Design** - Services, data models, APIs, and workflows
4. **Non-Functional Requirements** - Performance, security, reliability, observability
5. **Dependencies and Integrations** - External systems and technical dependencies
6. **Acceptance Criteria** - Testable requirements from PRD
7. **Traceability Mapping** - Requirements-to-implementation mapping (see the sketch below)
8. **Test Strategy** - Testing approach and framework alignment
9. **Risks and Assumptions** - Technical risks and mitigation strategies
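
To make the traceability idea concrete, here is a minimal, hypothetical mapping entry (the AC, components, and test are invented for illustration; the columns follow the AC → Spec Section → Component/API → Test Idea shape used by this workflow):

```yaml
# Hypothetical traceability entry - all names are illustrative
- ac: "AC-3: A user can reset their password via an emailed link"
  spec_sections: ["APIs and Interfaces", "Workflows and Sequencing"]
  components: ["auth-service: POST /password-reset", "email-notification module"]
  test_idea: "Integration test: request reset, follow link, set new password, verify login"
```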
## Requirements
- **PRD Document**: Product Requirements Document with clear acceptance criteria
- **Architecture Document**: System architecture or technical design document
- **Repository Access**: For dependency analysis and framework detection
- **Epic Context**: Clear epic identification and scope definition
## Best Practices
### Before Starting
1. **Validate Inputs**: Ensure PRD and architecture documents are complete and current
2. **Review Requirements**: Confirm acceptance criteria are specific and testable
3. **Check Dependencies**: Verify repository structure supports automated dependency analysis
### During Execution
1. **Maintain Traceability**: Ensure every acceptance criterion maps to implementation
2. **Be Implementation-Specific**: Provide concrete technical guidance, not high-level concepts
3. **Consider Constraints**: Factor in existing system limitations and brownfield challenges
### After Completion
1. **Architect Review**: Have technical lead validate design decisions
2. **Developer Walkthrough**: Ensure implementation team understands specifications
3. **Test Planning**: Use traceability matrix for test case development
## Troubleshooting
### Common Issues
**Issue**: Missing or incomplete technical details
- **Solution**: Review architecture document for implementation specifics
- **Check**: Ensure PRD contains sufficient functional requirements detail
**Issue**: Acceptance criteria don't map cleanly to technical components
- **Solution**: Break down complex criteria into atomic, testable statements
- **Check**: Verify PRD acceptance criteria are properly structured
**Issue**: Dependency analysis fails or is incomplete
- **Solution**: Check repository structure and manifest file locations
- **Check**: Verify repository contains standard dependency files (package.json, etc.)
**Issue**: Non-functional requirements are too vague
- **Solution**: Extract specific performance and quality targets from architecture
- **Check**: Ensure architecture document contains measurable NFR specifications
## Customization
To customize this workflow:
1. **Modify Template**: Update template.md to match organizational technical spec standards
2. **Extend Dependencies**: Add support for additional package managers or frameworks
3. **Enhance Validation**: Extend checklist.md with company-specific technical criteria
4. **Add Sections**: Include additional technical concerns like DevOps or monitoring
## Version History
- **v6.0.0** - Comprehensive technical specification with traceability
- PRD-Architecture integration
- Automated dependency detection
- Traceability mapping
- Test strategy alignment
## Support
For issues or questions:
- Review the workflow creation guide at `/bmad/bmb/workflows/create-workflow/workflow-creation-guide.md`
- Validate output using `checklist.md`
- Ensure PRD and architecture documents are complete before starting
- Consult BMAD documentation for technical specification standards
---
_Part of the BMad Method v6 - BMM (Method) Module_

View File

@@ -0,0 +1,17 @@
# Tech Spec Validation Checklist
```xml
<checklist id="bmad/bmm/workflows/4-implementation/epic-tech-context/checklist">
<item>Overview clearly ties to PRD goals</item>
<item>Scope explicitly lists in-scope and out-of-scope</item>
<item>Design lists all services/modules with responsibilities</item>
<item>Data models include entities, fields, and relationships</item>
<item>APIs/interfaces are specified with methods and schemas</item>
<item>NFRs: performance, security, reliability, observability addressed</item>
<item>Dependencies/integrations enumerated with versions where known</item>
<item>Acceptance criteria are atomic and testable</item>
<item>Traceability maps AC → Spec → Components → Tests</item>
<item>Risks/assumptions/questions listed with mitigation/next steps</item>
<item>Test strategy covers all ACs and critical paths</item>
</checklist>
```

View File

@@ -0,0 +1,160 @@
<!-- BMAD BMM Tech Spec Workflow Instructions (v6) -->
```xml
<critical>The workflow execution engine is governed by: {project-root}/bmad/core/tasks/workflow.xml</critical>
<critical>You MUST have already loaded and processed: {installed_path}/workflow.yaml</critical>
<critical>Communicate all responses in {communication_language}</critical>
<critical>This workflow generates a comprehensive Technical Specification from PRD and Architecture, including detailed design, NFRs, acceptance criteria, and traceability mapping.</critical>
<critical>If required inputs cannot be auto-discovered, HALT with a clear message listing the missing documents and allow the user to provide them before proceeding.</critical>
<workflow>
<step n="1" goal="Collect inputs and discover next epic" tag="sprint-status">
<action>Identify PRD and Architecture documents from recommended_inputs. Attempt to auto-discover at default paths.</action>
<ask if="inputs are missing">ask the user for file paths. HALT and wait for docs to proceed</ask>
<!-- Intelligent Epic Discovery -->
<critical>MUST read COMPLETE sprint-status.yaml file to discover next epic</critical>
<action>Load the FULL file: {{output_folder}}/sprint-status.yaml</action>
<action>Read ALL development_status entries</action>
<action>Find all epics with status "backlog" (not yet contexted)</action>
<action>Identify the FIRST backlog epic as the suggested default</action>
<check if="backlog epics found">
<output>📋 **Next Epic Suggested:** Epic {{suggested_epic_id}}: {{suggested_epic_title}}</output>
<ask>Use this epic?
- [y] Yes, use {{suggested_epic_id}}
- [n] No, let me specify a different epic_id
</ask>
<check if="user selects 'n'">
<ask>Enter the epic_id you want to context</ask>
<action>Store user-provided epic_id as {{epic_id}}</action>
</check>
<check if="user selects 'y'">
<action>Use {{suggested_epic_id}} as {{epic_id}}</action>
</check>
</check>
<check if="no backlog epics found">
<output>✅ All epics are already contexted!
No epics with status "backlog" found in sprint-status.yaml.
</output>
<ask>Do you want to re-context an existing epic? Enter epic_id or [q] to quit:</ask>
<check if="user enters epic_id">
<action>Store as {{epic_id}}</action>
</check>
<check if="user enters 'q'">
<action>HALT - No work needed</action>
</check>
</check>
<action>Extract {{epic_title}} from PRD based on {{epic_id}}.</action>
<action>Resolve output file path using workflow variables and initialize by writing the template.</action>
</step>
<step n="2" goal="Validate epic exists in sprint status" tag="sprint-status">
<action>Look for epic key "epic-{{epic_id}}" in development_status (already loaded from step 1)</action>
<action>Get current status value if epic exists</action>
<check if="epic not found">
<output>⚠️ Epic {{epic_id}} not found in sprint-status.yaml
This epic hasn't been registered in the sprint plan yet.
Run sprint-planning workflow to initialize epic tracking.
</output>
<action>HALT</action>
</check>
<check if="epic status == 'contexted'">
<output> Epic {{epic_id}} already marked as contexted
Continuing to regenerate tech spec...
</output>
</check>
</step>
<step n="3" goal="Overview and scope">
<action>Read COMPLETE found {recommended_inputs}.</action>
<template-output file="{default_output_file}">
Replace {{overview}} with a concise 1-2 paragraph summary referencing PRD context and goals
Replace {{objectives_scope}} with explicit in-scope and out-of-scope bullets
Replace {{system_arch_alignment}} with a short alignment summary to the architecture (components referenced, constraints)
</template-output>
</step>
<step n="4" goal="Detailed design">
<action>Derive concrete implementation specifics from all {recommended_inputs} (CRITICAL: NO invention). If an earlier epic tech spec exists, maintain consistency with it where appropriate.</action>
<template-output file="{default_output_file}">
Replace {{services_modules}} with a table or bullets listing services/modules with responsibilities, inputs/outputs, and owners
Replace {{data_models}} with normalized data model definitions (entities, fields, types, relationships); include schema snippets where available
Replace {{apis_interfaces}} with API endpoint specs or interface signatures (method, path, request/response models, error codes)
Replace {{workflows_sequencing}} with sequence notes or diagrams-as-text (steps, actors, data flow)
</template-output>
</step>
<step n="5" goal="Non-functional requirements">
<template-output file="{default_output_file}">
Replace {{nfr_performance}} with measurable targets (latency, throughput); link to any performance requirements in PRD/Architecture
Replace {{nfr_security}} with authn/z requirements, data handling, threat notes; cite source sections
Replace {{nfr_reliability}} with availability, recovery, and degradation behavior
Replace {{nfr_observability}} with logging, metrics, tracing requirements; name required signals
</template-output>
</step>
<step n="6" goal="Dependencies and integrations">
<action>Scan repository for dependency manifests (e.g., package.json, pyproject.toml, go.mod, Unity Packages/manifest.json).</action>
<template-output file="{default_output_file}">
Replace {{dependencies_integrations}} with a structured list of dependencies and integration points with version or commit constraints when known
</template-output>
</step>
<step n="7" goal="Acceptance criteria and traceability">
<action>Extract acceptance criteria from PRD; normalize into atomic, testable statements.</action>
<template-output file="{default_output_file}">
Replace {{acceptance_criteria}} with a numbered list of testable acceptance criteria
Replace {{traceability_mapping}} with a table mapping: AC → Spec Section(s) → Component(s)/API(s) → Test Idea
</template-output>
</step>
<step n="8" goal="Risks and test strategy">
<template-output file="{default_output_file}">
Replace {{risks_assumptions_questions}} with explicit list (each item labeled as Risk/Assumption/Question) with mitigation or next step
Replace {{test_strategy}} with a brief plan (test levels, frameworks, coverage of ACs, edge cases)
</template-output>
</step>
<step n="9" goal="Validate and mark epic contexted" tag="sprint-status">
<invoke-task>Validate against checklist at {installed_path}/checklist.md using bmad/core/tasks/validate-workflow.xml</invoke-task>
<!-- Mark epic as contexted -->
<action>Load the FULL file: {{output_folder}}/sprint-status.yaml</action>
<action>Find development_status key "epic-{{epic_id}}"</action>
<action>Verify current status is "backlog" (expected previous state)</action>
<action>Update development_status["epic-{{epic_id}}"] = "contexted"</action>
<action>Save file, preserving ALL comments and structure including STATUS DEFINITIONS</action>
<check if="epic key not found in file">
<output>⚠️ Could not update epic status: epic-{{epic_id}} not found</output>
</check>
<output>**✅ Tech Spec Generated Successfully, {user_name}!**
**Epic Details:**
- Epic ID: {{epic_id}}
- Epic Title: {{epic_title}}
- Tech Spec File: {{default_output_file}}
- Epic Status: contexted (was backlog)
**Note:** This is a JIT (Just-In-Time) workflow - run again for other epics as needed.
**Next Steps:**
1. Load SM agent and run `create-story` to begin implementing the first story under this epic.
</output>
</step>
</workflow>
```
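
As a minimal sketch, the sprint-status update made in Step 9 looks roughly like this (the epic id is hypothetical):

```yaml
# Hypothetical sprint-status.yaml excerpt
development_status:
  epic-2: contexted   # was: backlog; updated after the tech spec passes checklist validation
```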

View File

@@ -0,0 +1,76 @@
# Epic Technical Specification: {{epic_title}}
Date: {{date}}
Author: {{user_name}}
Epic ID: {{epic_id}}
Status: Draft
---
## Overview
{{overview}}
## Objectives and Scope
{{objectives_scope}}
## System Architecture Alignment
{{system_arch_alignment}}
## Detailed Design
### Services and Modules
{{services_modules}}
### Data Models and Contracts
{{data_models}}
### APIs and Interfaces
{{apis_interfaces}}
### Workflows and Sequencing
{{workflows_sequencing}}
## Non-Functional Requirements
### Performance
{{nfr_performance}}
### Security
{{nfr_security}}
### Reliability/Availability
{{nfr_reliability}}
### Observability
{{nfr_observability}}
## Dependencies and Integrations
{{dependencies_integrations}}
## Acceptance Criteria (Authoritative)
{{acceptance_criteria}}
## Traceability Mapping
{{traceability_mapping}}
## Risks, Assumptions, Open Questions
{{risks_assumptions_questions}}
## Test Strategy Summary
{{test_strategy}}

View File

@@ -0,0 +1,32 @@
name: tech-spec
description: "Generate a comprehensive Technical Specification from PRD and Architecture with acceptance criteria and traceability mapping"
author: "BMAD BMM"
# Critical variables
config_source: "{project-root}/bmad/bmm/config.yaml"
output_folder: "{config_source}:output_folder"
user_name: "{config_source}:user_name"
communication_language: "{config_source}:communication_language"
date: system-generated
# Inputs expected (check output_folder or ask user if missing)
recommended_inputs:
- prd
- gdd
- spec
- architecture
- ux_spec
- ux-design
- if an index.md exists, read it to find other related docs that may be relevant
- prior epic tech-specs for model, style and consistency reference
# Workflow components
installed_path: "{project-root}/bmad/bmm/workflows/4-implementation/epic-tech-context"
template: "{installed_path}/template.md"
instructions: "{installed_path}/instructions.md"
validation: "{installed_path}/checklist.md"
# Output configuration
default_output_file: "{output_folder}/tech-spec-epic-{{epic_id}}.md"
standalone: true

View File

@@ -0,0 +1,77 @@
---
last-redoc-date: 2025-10-01
---
# Retrospective Workflow
The retrospective workflow is v6's learning capture mechanism, run by the Scrum Master (SM) after epic completion to systematically harvest insights, patterns, and improvements discovered during implementation. Unlike traditional retrospectives that focus primarily on process and team dynamics, this workflow performs deep technical and methodological analysis of the entire epic journey—from story creation through implementation to review—identifying what worked well, what could improve, and what patterns emerged. The retrospective produces actionable intelligence that informs future epics, improves workflow templates, and evolves the team's shared knowledge.
This workflow represents a critical feedback loop in the v6 methodology. Each epic is an experiment in adaptive software development, and the retrospective is where we analyze the results of that experiment. The SM examines completed stories, review feedback, course corrections made, story-context effectiveness, technical decisions, and team collaboration patterns to extract transferable learning. This learning manifests as updates to workflow templates, new story-context patterns, refined estimation approaches, and documented best practices.
The retrospective is not just a compliance ritual but a genuine opportunity for continuous improvement. By systematically analyzing what happened during the epic, the team builds institutional knowledge that makes each subsequent epic smoother, faster, and higher quality. The insights captured here directly improve future story creation, context generation, development efficiency, and review effectiveness.
## Usage
```bash
# Conduct retrospective after epic completion
bmad run retrospective
```
The SM should run this workflow:
- After all stories in an epic are completed
- Before starting the next major epic
- When significant learning has accumulated
- As preparation for improving future epic execution
## Inputs
**Required Context:**
- **Epic Document**: Complete epic specification and goals
- **All Story Documents**: Every story created for the epic
- **Review Reports**: All review feedback and findings
- **Course Corrections**: Any correct-course actions taken
- **Story Contexts**: Generated expert guidance for each story
- **Implementation Artifacts**: Code commits, test results, deployment records
**Analysis Targets:**
- Story creation accuracy and sizing
- Story-context effectiveness and relevance
- Implementation challenges and solutions
- Review findings and patterns
- Technical decisions and outcomes
- Estimation accuracy
- Team collaboration and communication
- Workflow effectiveness
## Outputs
**Primary Deliverable:**
- **Retrospective Report** (`[epic-id]-retrospective.xml`): Comprehensive analysis including:
- Executive summary of epic outcomes
- Story-by-story analysis of what was learned
- Technical insights and architecture learnings
- Story-context effectiveness assessment
- Process improvements identified
- Patterns discovered (good and bad)
- Recommendations for future epics
- Metrics and statistics (velocity, cycle time, etc.)
**Actionable Outputs:**
- **Template Updates**: Improvements to workflow templates based on learning
- **Pattern Library**: New story-context patterns for common scenarios
- **Best Practices**: Documented approaches that worked well
- **Gotchas**: Issues to avoid in future work
- **Estimation Refinements**: Better story sizing guidance
- **Process Changes**: Workflow adjustments for next epic
**Artifacts:**
- Epic marked as complete with retrospective attached
- Knowledge base updated with new patterns
- Workflow templates updated if needed
- Team learning captured for onboarding
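
As a minimal sketch (the epic number is hypothetical), the workflow's final step also flips the retrospective entry in sprint-status.yaml:

```yaml
# Hypothetical sprint-status.yaml excerpt
development_status:
  epic-1-retrospective: completed   # was: optional; set once the retrospective is saved
```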

View File

@@ -0,0 +1,479 @@
# Retrospective - Epic Completion Review Instructions
<critical>The workflow execution engine is governed by: {project-root}/bmad/core/tasks/workflow.xml</critical>
<critical>You MUST have already loaded and processed: {project-root}/bmad/bmm/workflows/4-implementation/retrospective/workflow.yaml</critical>
<critical>Communicate all responses in {communication_language} and language MUST be tailored to {user_skill_level}</critical>
<critical>Generate all documents in {document_output_language}</critical>
<critical>DOCUMENT OUTPUT: Retrospective analysis. Concise insights, lessons learned, action items. User skill level ({user_skill_level}) affects conversation style ONLY, not retrospective content.</critical>
<critical>
FACILITATION NOTES:
- Scrum Master facilitates this retrospective
- Psychological safety is paramount - NO BLAME
- Focus on systems, processes, and learning
- Everyone contributes with specific examples preferred
- Action items must be achievable with clear ownership
- Two-part format: (1) Epic Review + (2) Next Epic Preparation
</critical>
<workflow>
<step n="1" goal="Epic Context Discovery and verify completion" tag="sprint-status">
<action>Help the user identify which epic was just completed through natural conversation</action>
<action>Attempt to auto-detect by checking {output_folder}/stories/ for the highest numbered completed story and extracting the epic number</action>
<action>If auto-detection succeeds, confirm with user: "It looks like Epic {{epic_number}} was just completed - is that correct?"</action>
<action>If auto-detection fails or user indicates different epic, ask them to share which epic they just completed</action>
<action>Verify epic completion status:</action>
<action>Load the FULL file: {output_folder}/sprint-status.yaml</action>
<action>Read ALL development_status entries</action>
<action>Find all stories for epic {{epic_number}}:
- Look for keys starting with "{{epic_number}}-" (e.g., "1-1-", "1-2-", etc.)
- Exclude epic key itself ("epic-{{epic_number}}")
- Exclude retrospective key ("epic-{{epic_number}}-retrospective")
</action>
<action>Count total stories found for this epic</action>
<action>Count stories with status = "done"</action>
<action>Collect list of pending story keys (status != "done")</action>
<action>Determine if complete: true if all stories are done, false otherwise</action>
<check if="epic is not complete">
<output>⚠️ Epic {{epic_number}} is not yet complete for retrospective
**Epic Status:**
- Total Stories: {{total_stories}}
- Completed (Done): {{done_stories}}
- Pending: {{pending_count}}
**Pending Stories:**
{{pending_story_list}}
**Options:**
1. Complete remaining stories before running retrospective
2. Continue with partial retrospective (not recommended)
3. Run sprint-planning to refresh story tracking
</output>
<ask if="{{non_interactive}} == false">Epic is incomplete. Continue anyway? (yes/no)</ask>
<check if="user says no">
<action>HALT</action>
</check>
<action if="user says yes">Set {{partial_retrospective}} = true</action>
</check>
<check if="epic is complete">
<output>✅ Epic {{epic_number}} is complete - all {{done_stories}} stories done!
Ready to proceed with retrospective.
</output>
</check>
<action>Load the completed epic from: {output_folder}/prd/epic-{{epic_number}}.md</action>
<action>Extract epic details:
- Epic title and goals
- Success criteria
- Planned stories and story points
- Estimated sprint duration
- Business objectives
</action>
<action>Find all stories for this epic in {output_folder}/stories/</action>
<action>For each story, extract:
- Story number and title
- Completion status
- Story points (if tracked)
- Actual completion date
- Dev Agent Record notes
- TEA Results and testing outcomes
- PO Notes and acceptance
- Blockers encountered and resolution
- Technical debt incurred
</action>
<action>Calculate epic metrics:
- Completed stories vs. total planned
- Actual story points delivered vs. planned
- Actual sprints taken vs. estimated
- Velocity (points per sprint)
- Blocker count
- Technical debt items logged
</action>
<action>Review epic goals and compare actual outcomes vs. planned</action>
<action>Note any scope changes or descoped items</action>
<action>Document key architectural decisions made during epic</action>
<action>Identify technical debt incurred and why</action>
</step>
<step n="2" goal="Preview Next Epic">
<action>Identify the next epic in sequence</action>
<action>Load next epic from: {output_folder}/prd/epic-{{next_epic_number}}.md</action>
<action if="next epic exists">
Analyze next epic for:
- Epic title and objectives
- Planned stories and complexity
- Dependencies on completed epic work
- New technical requirements or capabilities needed
- Potential risks or unknowns
</action>
<action>Identify dependencies on completed work:
- What components from Epic {{completed_number}} does Epic {{next_number}} rely on?
- Are all prerequisites complete and stable?
- Any incomplete work that creates blocking dependencies?
</action>
<action>Note potential gaps or preparation needed:
- Technical setup required (infrastructure, tools, libraries)
- Knowledge gaps to fill (research, training, spikes)
- Refactoring needed before starting next epic
- Documentation or specifications to create
</action>
<action>Check for technical prerequisites:
- APIs or integrations that must be ready
- Data migrations or schema changes needed
- Testing infrastructure requirements
- Deployment or environment setup
</action>
</step>
<step n="3" goal="Initialize Retrospective with Context">
<action>Scrum Master opens the retrospective with context</action>
<action>Present formatted retrospective header:
```
🔄 TEAM RETROSPECTIVE - Epic {{epic_number}}: {{epic_title}}
Scrum Master facilitating
═══════════════════════════════════════════════════════════
EPIC {{epic_number}} SUMMARY:
Delivery Metrics:
- Completed: {{completed_stories}}/{{total_stories}} stories ({{completion_percentage}}%)
- Velocity: {{actual_points}} story points (planned: {{planned_points}})
- Duration: {{actual_sprints}} sprints (planned: {{planned_sprints}})
- Average velocity: {{points_per_sprint}} points/sprint
Quality and Technical:
- Blockers encountered: {{blocker_count}}
- Technical debt items: {{debt_count}}
- Test coverage: {{coverage_info}}
- Production incidents: {{incident_count}}
Business Outcomes:
- Goals achieved: {{goals_met}}/{{total_goals}}
- Success criteria: {{criteria_status}}
- Stakeholder feedback: {{feedback_summary}}
═══════════════════════════════════════════════════════════
NEXT EPIC PREVIEW: Epic {{next_number}}: {{next_epic_title}}
Dependencies on Epic {{epic_number}}:
{{list_dependencies}}
Preparation Needed:
{{list_preparation_gaps}}
Technical Prerequisites:
{{list_technical_prereqs}}
═══════════════════════════════════════════════════════════
Team assembled for reflection:
{{list_agents_based_on_story_records}}
Focus Areas:
1. Learning from Epic {{epic_number}} execution
2. Preparing for Epic {{next_number}} success
```
</action>
<action>Load agent configurations from {agent_manifest}</action>
<action>Ensure key roles from the {agent_manifest} are present: Product Owner, Scrum Master (facilitating the retro), Devs, Testing or QA, Architect, Analyst</action>
</step>
<step n="4" goal="Epic Review Discussion">
<action>Scrum Master facilitates Part 1: Reviewing the completed epic through natural, psychologically safe discussion</action>
<action>Create space for each agent to share their perspective in their unique voice and communication style, grounded in actual story data and outcomes</action>
<action>Maintain psychological safety throughout - focus on learning and systems, not blame or individual performance</action>
<action>Guide the retrospective conversation to naturally surface key themes across three dimensions:</action>
**1. Successes and Strengths:**
<action>Facilitate discussion that helps agents share what worked well during the epic - encourage specific examples from completed stories, effective practices, velocity achievements, collaboration wins, and smart technical decisions</action>
<action>Draw out concrete examples: "Can you share a specific story where that approach worked well?"</action>
**2. Challenges and Growth Areas:**
<action>Create safe space for agents to explore challenges encountered - guide them to discuss blockers, process friction, technical debt decisions, and coordination issues with curiosity rather than judgment</action>
<action>Probe for root causes: "What made that challenging? What pattern do we see here?"</action>
<action>Keep focus on systems and processes, not individuals</action>
**3. Insights and Learning:**
<action>Help the team articulate what they learned from this epic - facilitate discovery of patterns to repeat or avoid, skills gained, and process improvements worth trying</action>
<action>Connect insights to future application: "How might this insight help us in future epics?"</action>
<action>For each agent participating (loaded from {agent_manifest}):</action>
- Let them speak naturally in their role's voice and communication style
- Encourage grounding in specific story records, metrics, and real outcomes
- Allow themes to emerge organically rather than forcing a rigid structure
- Follow interesting threads with adaptive questions
- Balance celebration with honest assessment
<action>As facilitator, actively synthesize common themes and patterns as the discussion unfolds</action>
<action>Notice when multiple agents mention similar issues or successes - call these out to deepen the team's shared understanding</action>
<action>Ensure every voice is heard, inviting quieter agents to contribute</action>
</step>
<step n="5" goal="Next Epic Preparation Discussion">
<action>Scrum Master facilitates Part 2: Preparing for the next epic through forward-looking exploration</action>
<action>Shift the team's focus from reflection to readiness - guide each agent to explore preparation needs from their domain perspective</action>
<action>Facilitate discovery across critical preparation dimensions:</action>
**Dependencies and Continuity:**
<action>Guide agents to explore connections between the completed epic and the upcoming one - help them identify what components, decisions, or work from Epic {{completed_number}} the next epic relies upon</action>
<action>Probe for gaps: "What needs to be in place before we can start Epic {{next_number}}?"</action>
<action>Surface hidden dependencies: "Are there integration points we need to verify?"</action>
**Readiness and Setup:**
<action>Facilitate discussion about what preparation work is needed before the next epic can begin successfully - technical setup, knowledge development, refactoring, documentation, or infrastructure</action>
<action>Draw out specific needs: "What do you need to feel ready to start Epic {{next_number}}?"</action>
<action>Identify knowledge gaps: "What do we need to learn or research before diving in?"</action>
**Risks and Mitigation:**
<action>Create space for agents to voice concerns and uncertainties about the upcoming epic based on what they learned from this one</action>
<action>Explore proactively: "Based on Epic {{completed_number}}, what concerns do you have about Epic {{next_number}}?"</action>
<action>Develop mitigation thinking: "What could we do now to reduce that risk?"</action>
<action>Identify early warning signs: "How will we know if we're heading for that problem again?"</action>
<action>For each agent participating:</action>
- Let them share preparation needs in their natural voice
- Encourage domain-specific insights (Architect on technical setup, PM on requirements clarity, etc.)
- Follow interesting preparation threads with adaptive questions
- Help agents build on each other's observations
- Surface quick wins that could de-risk the epic early
<action>As facilitator, identify dependencies between preparation tasks as they emerge</action>
<action>Notice when preparation items from different agents connect or conflict - explore these intersections</action>
<action>Build a shared understanding of what "ready to start Epic {{next_number}}" actually means</action>
</step>
<step n="6" goal="Synthesize Action Items">
<action>Scrum Master identifies patterns across all agent feedback</action>
<action>Synthesizes common themes into team agreements</action>
<action>Creates specific, achievable action items with clear ownership</action>
<action>Develops preparation sprint tasks if significant setup needed</action>
<action>Present comprehensive action plan:
```
═══════════════════════════════════════════════════════════
📝 EPIC {{completed_number}} ACTION ITEMS:
Process Improvements:
1. {{action_item}} (Owner: {{agent}}, By: {{timeline}})
2. {{action_item}} (Owner: {{agent}}, By: {{timeline}})
3. {{action_item}} (Owner: {{agent}}, By: {{timeline}})
Technical Debt:
1. {{debt_item}} (Owner: {{agent}}, Priority: {{high/medium/low}})
2. {{debt_item}} (Owner: {{agent}}, Priority: {{high/medium/low}})
Documentation:
1. {{doc_need}} (Owner: {{agent}}, By: {{timeline}})
Team Agreements:
- {{agreement_1}}
- {{agreement_2}}
- {{agreement_3}}
═══════════════════════════════════════════════════════════
🚀 EPIC {{next_number}} PREPARATION SPRINT:
Technical Setup:
[ ] {{setup_task}} (Owner: {{agent}}, Est: {{hours/days}})
[ ] {{setup_task}} (Owner: {{agent}}, Est: {{hours/days}})
Knowledge Development:
[ ] {{research_task}} (Owner: {{agent}}, Est: {{hours/days}})
[ ] {{spike_task}} (Owner: {{agent}}, Est: {{hours/days}})
Cleanup/Refactoring:
[ ] {{refactor_task}} (Owner: {{agent}}, Est: {{hours/days}})
[ ] {{cleanup_task}} (Owner: {{agent}}, Est: {{hours/days}})
Documentation:
[ ] {{doc_task}} (Owner: {{agent}}, Est: {{hours/days}})
Total Estimated Effort: {{total_hours}} hours ({{total_days}} days)
═══════════════════════════════════════════════════════════
⚠️ CRITICAL PATH:
Blockers to Resolve Before Epic {{next_number}}:
1. {{critical_item}} (Owner: {{agent}}, Must complete by: {{date}})
2. {{critical_item}} (Owner: {{agent}}, Must complete by: {{date}})
Dependencies Timeline:
{{timeline_visualization_of_critical_dependencies}}
Risk Mitigation:
- {{risk}}: {{mitigation_strategy}}
- {{risk}}: {{mitigation_strategy}}
```
</action>
<action>Ensure every action item has clear owner and timeline</action>
<action>Prioritize preparation tasks by dependencies and criticality</action>
<action>Identify which tasks can run in parallel vs. sequential</action>
</step>
<step n="7" goal="Critical Readiness Exploration">
<action>Scrum Master leads a thoughtful exploration of whether Epic {{completed_number}} is truly complete and the team is ready for Epic {{next_number}}</action>
<action>Approach this as discovery, not interrogation - help user surface any concerns or unfinished elements that could impact the next epic</action>
<action>Guide a conversation exploring the completeness of Epic {{completed_number}} across critical dimensions:</action>
**Testing and Quality:**
<action>Explore the testing state of the epic - help user assess whether quality verification is truly complete</action>
<action>Ask thoughtfully: "Walk me through the testing that's been done for Epic {{completed_number}}. Does anything still need verification?"</action>
<action>Probe for gaps: "Are you confident the epic is production-ready from a quality perspective?"</action>
<action if="testing concerns surface">Add to Critical Path: Complete necessary testing before Epic {{next_number}}</action>
**Deployment and Release:**
<action>Understand where the epic currently stands in the deployment pipeline</action>
<action>Explore: "What's the deployment status for Epic {{completed_number}}? Is it live, scheduled, or still pending?"</action>
<action>If not yet deployed, clarify timeline: "When is deployment planned? Does that timing work for starting Epic {{next_number}}?"</action>
<action if="deployment must happen first">Add to Critical Path: Deploy Epic {{completed_number}} with clear timeline</action>
**Stakeholder Acceptance:**
<action>Guide user to reflect on business validation and stakeholder satisfaction</action>
<action>Ask: "Have stakeholders seen and accepted the Epic {{completed_number}} deliverables? Any feedback pending?"</action>
<action>Probe for risk: "Is there anything about stakeholder acceptance that could affect Epic {{next_number}}?"</action>
<action if="acceptance incomplete">Add to Critical Path: Obtain stakeholder acceptance before proceeding</action>
**Technical Health:**
<action>Create space for honest assessment of codebase stability after the epic</action>
<action>Explore: "How does the codebase feel after Epic {{completed_number}}? Stable and maintainable, or are there concerns?"</action>
<action>If concerns arise, probe deeper: "What's causing those concerns? What would it take to address them?"</action>
<action if="stability concerns exist">Document concerns and add to Preparation Sprint: Address stability issues before Epic {{next_number}}</action>
**Unresolved Blockers:**
<action>Help user surface any lingering issues that could create problems for the next epic</action>
<action>Ask: "Are there any unresolved blockers or technical issues from Epic {{completed_number}} that we need to address before moving forward?"</action>
<action>Explore impact: "How would these blockers affect Epic {{next_number}} if left unresolved?"</action>
<action if="blockers exist">Document blockers and add to Critical Path with appropriate priority</action>
<action>Synthesize the readiness discussion into a clear picture of what must happen before Epic {{next_number}} can safely begin</action>
<action>Summarize any critical items identified and ensure user agrees with the assessment</action>
</step>
<step n="8" goal="Retrospective Closure">
<action>Scrum Master closes the retrospective with summary and next steps</action>
<action>Present closure summary:</action>
```
═══════════════════════════════════════════════════════════
✅ RETROSPECTIVE COMPLETE
Epic {{completed_number}}: {{epic_title}} - REVIEWED
Key Takeaways:
- {{key_lesson_1}}
- {{key_lesson_2}}
- {{key_lesson_3}}
Action Items Committed: {{action_count}}
Preparation Tasks Defined: {{prep_task_count}}
Critical Path Items: {{critical_count}}
═══════════════════════════════════════════════════════════
🎯 NEXT STEPS:
1. Execute Preparation Sprint (Est: {{prep_days}} days)
2. Complete Critical Path items before Epic {{next_number}}
3. Review action items in next standup
4. Begin Epic {{next_number}} planning when preparation complete
═══════════════════════════════════════════════════════════
Scrum Master: "Great work team! We learned a lot from Epic {{completed_number}}.
Let's use these insights to make Epic {{next_number}} even better.
See you at sprint planning once prep work is done!"
```
<action>Save retrospective summary to: {output_folder}/retrospectives/epic-{{completed_number}}-retro-{{date}}.md</action>
</step>
<step n="9" goal="Mark retrospective completed in sprint status" tag="sprint-status">
<action>Load the FULL file: {output_folder}/sprint-status.yaml</action>
<action>Find development_status key "epic-{{completed_number}}-retrospective"</action>
<action>Verify current status is "optional" (expected previous state)</action>
<action>Update development_status["epic-{{completed_number}}-retrospective"] = "completed"</action>
<action>Save file, preserving ALL comments and structure including STATUS DEFINITIONS</action>
<check if="update successful">
<output>✅ Retrospective marked as completed in sprint-status.yaml
Retrospective key: epic-{{completed_number}}-retrospective
Status: optional → completed
</output>
</check>
<check if="retrospective key not found">
<output>⚠️ Could not update retrospective status: epic-{{completed_number}}-retrospective not found
Retrospective document was saved, but sprint-status.yaml may need manual update.
</output>
</check>
</step>
<step n="10" goal="Final summary">
<action>Confirm all action items have been captured</action>
<action>Remind user to schedule prep sprint if needed</action>
<output>**✅ Retrospective Complete, {user_name}!**
**Epic Review:**
- Epic {{completed_number}}: {{epic_title}} reviewed
- Retrospective Status: completed
- Retrospective saved: {output_folder}/retrospectives/epic-{{completed_number}}-retro-{{date}}.md
- Action Items: {{action_count}}
- Preparation Tasks: {{prep_task_count}}
- Critical Path Items: {{critical_count}}
**Next Steps:**
1. Review retrospective summary: {output_folder}/retrospectives/epic-{{completed_number}}-retro-{{date}}.md
2. Execute preparation sprint (Est: {{prep_days}} days)
3. Complete critical path items before Epic {{next_number}}
4. Begin Epic {{next_number}} planning when preparation complete
- Load PM agent and run `epic-tech-context` for next epic
- Or continue with existing contexted epics
</output>
</step>
</workflow>
<facilitation-guidelines>
<guideline>Scrum Master maintains psychological safety throughout - no blame or judgment</guideline>
<guideline>Focus on systems and processes, not individual performance</guideline>
<guideline>Encourage specific examples over general statements</guideline>
<guideline>Balance celebration of wins with honest assessment of challenges</guideline>
<guideline>Ensure every voice is heard - all agents contribute</guideline>
<guideline>Action items must be specific, achievable, and owned</guideline>
<guideline>Forward-looking mindset - how do we improve for next epic?</guideline>
<guideline>Two-part structure ensures both reflection AND preparation</guideline>
<guideline>Critical verification prevents starting next epic prematurely</guideline>
<guideline>Document everything - retrospective insights are valuable for future reference</guideline>
</facilitation-guidelines>

View File

@@ -0,0 +1,43 @@
# Retrospective - Epic Completion Review Workflow
name: "retrospective"
description: "Run after epic completion to review overall success, extract lessons learned, and explore if new information emerged that might impact the next epic"
author: "BMad"
config_source: "{project-root}/bmad/bmm/config.yaml"
output_folder: "{config_source}:output_folder"
user_name: "{config_source}:user_name"
communication_language: "{config_source}:communication_language"
user_skill_level: "{config_source}:user_skill_level"
document_output_language: "{config_source}:document_output_language"
date: system-generated
installed_path: "{project-root}/bmad/bmm/workflows/4-implementation/retrospective"
template: false
instructions: "{installed_path}/instructions.md"
mode: interactive
trigger: "Run AFTER completing an epic"
required_inputs:
- agent_manifest: "{project-root}/bmad/_cfg/agent-manifest.csv"
output_artifacts:
- retrospective_summary: "Comprehensive review of what went well and what could improve"
- lessons_learned: "Key insights for future epics"
- action_items: "Specific improvements with ownership"
- next_epic_preparation: "Dependencies, gaps, and preparation tasks for next epic"
- critical_path: "Blockers or prerequisites that must be addressed"
facilitation:
facilitator: "Bob (Scrum Master)"
tone: "Psychological safety - no blame, focus on systems and processes"
format: "Two-part: (1) Review completed epic + (2) Preview next epic preparation"
validation_required:
- testing_complete: "Has full regression testing been completed?"
- deployment_status: "Has epic been deployed to production?"
- business_validation: "Have stakeholders reviewed and accepted deliverables?"
- technical_health: "Is codebase in stable, maintainable state?"
- blocker_resolution: "Any unresolved blockers that will impact next epic?"
standalone: true

View File

@@ -0,0 +1,156 @@
# Sprint Planning Workflow
## Overview
The sprint-planning workflow generates and manages the sprint status tracking file that serves as the single source of truth for Phase 4 implementation. It extracts all epics and stories from epic files and tracks their progress through the development lifecycle.
In Agile terminology, this workflow facilitates **Sprint Planning** or **Sprint 0 Kickoff** - the transition from planning/architecture into actual development execution.
## Purpose
This workflow creates a `sprint-status.yaml` file that:
- Lists all epics, stories, and retrospectives in order
- Tracks the current status of each item
- Provides a clear view of what needs to be worked on next
- Ensures only one story is in progress at a time
- Maintains the development flow from backlog to done
## When to Use
Run this workflow:
1. **Initially** - After Phase 3 (solutioning) is complete and epics are finalized
2. **After epic context creation** - To update epic status to 'contexted'
3. **Periodically** - To auto-detect newly created story files
4. **For status checks** - To see overall project progress
## Status State Machine
### Epic Flow
```
backlog → contexted
```
### Story Flow
```
backlog → drafted → ready-for-dev → in-progress → review → done
```
### Retrospective Flow
```
optional ↔ completed
```
## Key Guidelines
1. **Epic Context Recommended**: Epics should be `contexted` before their stories can be `drafted`
2. **Flexible Parallelism**: Multiple stories can be `in-progress` based on team capacity
3. **Sequential Default**: Stories within an epic are typically worked in order, but parallel work is supported
4. **Review Flow**: Stories should go through `review` before `done`
5. **Learning Transfer**: SM typically drafts next story after previous is `done`, incorporating learnings
## File Locations
### Input Files
- **Epic Files**: `{output_folder}/epic*.md` or `{output_folder}/epics.md`
- **Epic Context**: `{output_folder}/epic-{n}-context.md`
- **Story Files**: `{story_dir}/{epic}-{story}-{title}.md`
- Example: `stories/1-1-user-authentication.md`
- **Story Context**: `{story_dir}/{epic}-{story}-{title}-context.md`
- Example: `stories/1-1-user-authentication-context.md`
### Output File
- **Status File**: `{output_folder}/sprint-status.yaml`
## Usage by Agents
### SM (Scrum Master) Agent
```yaml
Tasks:
- Check sprint-status.yaml for stories in 'done' status
- Identify next 'backlog' story to draft
- Run create-story workflow
- Update status to 'drafted'
- Create story context
- Update status to 'ready-for-dev'
```
### Developer Agent
```yaml
Tasks:
- Find stories with 'ready-for-dev' status
- Update to 'in-progress' when starting
- Implement the story
- Update to 'review' when complete
- Address review feedback
- Update to 'done' after review
```
### Test Architect
```yaml
Tasks:
- Monitor stories entering 'review'
- Track epic progress
- Identify when retrospectives are needed
```
## Example Output
```yaml
# Sprint Status
# Generated: 2025-01-20
# Project: MyPlantFamily
development_status:
epic-1: contexted
1-1-project-foundation: done
1-2-app-shell: done
1-3-user-authentication: in-progress
1-4-plant-data-model: ready-for-dev
1-5-add-plant-manual: drafted
1-6-photo-identification: backlog
epic-1-retrospective: optional
epic-2: contexted
2-1-personality-system: in-progress
2-2-chat-interface: drafted
2-3-llm-integration: backlog
2-4-reminder-system: backlog
epic-2-retrospective: optional
```
## Integration with BMM Workflow
This workflow is part of Phase 4 (Implementation) and integrates with:
1. **epic-tech-context** - Creates technical context for epics
2. **create-story** - Drafts individual story files
3. **story-context** - Adds implementation context to stories
4. **dev-story** - Developer implements the story
5. **code-review** - SM reviews implementation
6. **retrospective** - Optional epic retrospective
## Benefits
- **Clear Visibility**: Everyone knows what's being worked on
- **Flexible Capacity**: Supports both sequential and parallel work patterns
- **Learning Transfer**: SM can incorporate learnings when drafting next story
- **Progress Tracking**: Easy to see overall project status
- **Automation Friendly**: Simple YAML format for agent updates
## Tips
1. **Initial Generation**: Run immediately after epics are finalized
2. **Regular Updates**: Agents should update status as they work
3. **Manual Override**: You can manually edit the file if needed
4. **Backup First**: The workflow backs up existing status before regenerating
5. **Validation**: The workflow validates legal status transitions

View File

@@ -0,0 +1,33 @@
# Sprint Planning Validation Checklist
## Core Validation
### Complete Coverage Check
- [ ] Every epic found in epic\*.md files appears in sprint-status.yaml
- [ ] Every story found in epic\*.md files appears in sprint-status.yaml
- [ ] Every epic has a corresponding retrospective entry
- [ ] No items in sprint-status.yaml that don't exist in epic files
### Parsing Verification
Compare epic files against generated sprint-status.yaml:
```
Epic Files Contains: Sprint Status Contains:
✓ Epic 1 ✓ epic-1: [status]
✓ Story 1.1: User Auth ✓ 1-1-user-auth: [status]
✓ Story 1.2: Account Mgmt ✓ 1-2-account-mgmt: [status]
✓ Story 1.3: Plant Naming ✓ 1-3-plant-naming: [status]
✓ epic-1-retrospective: [status]
✓ Epic 2 ✓ epic-2: [status]
✓ Story 2.1: Personality Model ✓ 2-1-personality-model: [status]
✓ Story 2.2: Chat Interface ✓ 2-2-chat-interface: [status]
✓ epic-2-retrospective: [status]
```
### Final Check
- [ ] Total count of epics matches
- [ ] Total count of stories matches
- [ ] All items are in the expected order (epic, stories, retrospective)

View File

@@ -0,0 +1,221 @@
# Sprint Planning - Sprint Status Generator
<critical>The workflow execution engine is governed by: {project-root}/bmad/core/tasks/workflow.xml</critical>
<critical>You MUST have already loaded and processed: {project-root}/bmad/bmm/workflows/4-implementation/sprint-planning/workflow.yaml</critical>
<workflow>
<step n="1" goal="Parse epic files and extract all work items">
<action>Communicate in {communication_language} with {user_name}</action>
<action>Look for all files matching `{epics_pattern}` in {epics_location}</action>
<action>Could be a single `epics.md` file or multiple `epic-1.md`, `epic-2.md` files</action>
<action>For each epic file found, extract:</action>
- Epic numbers from headers like `## Epic 1:` or `## Epic 2:`
- Story IDs and titles from patterns like `### Story 1.1: User Authentication`
- Convert story format from `Epic.Story: Title` to kebab-case key: `epic-story-title`
**Story ID Conversion Rules:**
- Original: `### Story 1.1: User Authentication`
- Replace period with dash: `1-1`
- Convert title to kebab-case: `user-authentication`
- Final key: `1-1-user-authentication`
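**Illustrative sketch (assumption, not part of the workflow engine):** the conversion could be implemented roughly as follows; headings that deviate from `### Story N.M: Title` would need extra handling.
```python
import re

def story_key_from_heading(heading):
    """Turn '### Story 1.1: User Authentication' into '1-1-user-authentication' (sketch)."""
    match = re.match(r"#+\s*Story\s+(\d+)\.(\d+):\s*(.+)", heading.strip())
    if not match:
        raise ValueError(f"not a story heading: {heading!r}")
    epic, story, title = match.groups()
    kebab_title = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    return f"{epic}-{story}-{kebab_title}"

# story_key_from_heading("### Story 1.1: User Authentication") -> "1-1-user-authentication"
```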
<action>Build complete inventory of all epics and stories from all epic files</action>
</step>
<step n="2" goal="Build sprint status structure">
<action>For each epic found, create entries in this order:</action>
1. **Epic entry** - Key: `epic-{num}`, Default status: `backlog`
2. **Story entries** - Key: `{epic}-{story}-{title}`, Default status: `backlog`
3. **Retrospective entry** - Key: `epic-{num}-retrospective`, Default status: `optional`
**Example structure:**
```yaml
development_status:
epic-1: backlog
1-1-user-authentication: backlog
1-2-account-management: backlog
epic-1-retrospective: optional
```
</step>
<step n="3" goal="Apply intelligent status detection">
<action>For each epic, check if tech context file exists:</action>
- Check: `{output_folder}/epic-{num}-context.md`
- If exists → set epic status to `contexted`
- Else → keep as `backlog`
<action>For each story, detect current status by checking files:</action>
**Story file detection:**
- Check: `{story_location_absolute}/{story-key}.md` (e.g., `stories/1-1-user-authentication.md`)
- If exists → upgrade status to at least `drafted`
**Story context detection:**
- Check: `{story_location_absolute}/{story-key}-context.md` (e.g., `stories/1-1-user-authentication-context.md`)
- If exists → upgrade status to at least `ready-for-dev`
**Preservation rule:**
- If existing `{status_file}` exists and has more advanced status, preserve it
- Never downgrade status (e.g., don't change `done` to `drafted`)
**Status Flow Reference:**
- Epic: `backlog` → `contexted`
- Story: `backlog` → `drafted` → `ready-for-dev` → `in-progress` → `review` → `done`
- Retrospective: `optional` → `completed`
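**Illustrative sketch:** the file-based detection and preservation rule, assuming story statuses are ranked in the order listed above and context files follow the `{story-key}-context.md` naming.
```python
from pathlib import Path

STORY_ORDER = ["backlog", "drafted", "ready-for-dev", "in-progress", "review", "done"]

def detect_story_status(story_dir, story_key, existing=None):
    """Infer a story's status from files on disk; never downgrade an existing status (sketch)."""
    detected = "backlog"
    if (Path(story_dir) / f"{story_key}.md").exists():
        detected = "drafted"
    if (Path(story_dir) / f"{story_key}-context.md").exists():
        detected = "ready-for-dev"
    if existing in STORY_ORDER and STORY_ORDER.index(existing) > STORY_ORDER.index(detected):
        return existing  # preservation rule: keep the more advanced status
    return detected
```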
</step>
<step n="4" goal="Generate sprint status file">
<action>Create or update {status_file} with:</action>
**File Structure:**
```yaml
# generated: {date}
# project: {project_name}
# project_key: {project_key}
# tracking_system: {tracking_system}
# story_location: {story_location}
# STATUS DEFINITIONS:
# ==================
# Epic Status:
# - backlog: Epic exists in epic file but not contexted
# - contexted: Epic tech context created (required before drafting stories)
#
# Story Status:
# - backlog: Story only exists in epic file
# - drafted: Story file created in stories folder
# - ready-for-dev: Draft approved and story context created
# - in-progress: Developer actively working on implementation
# - review: Under SM review (via code-review workflow)
# - done: Story completed
#
# Retrospective Status:
# - optional: Can be completed but not required
# - completed: Retrospective has been done
#
# WORKFLOW NOTES:
# ===============
# - Epics should be 'contexted' before stories can be 'drafted'
# - Stories can be worked in parallel if team capacity allows
# - SM typically drafts next story after previous one is 'done' to incorporate learnings
# - Dev moves story to 'review', SM reviews, then Dev moves to 'done'
generated: {date}
project: {project_name}
project_key: {project_key}
tracking_system: {tracking_system}
story_location: {story_location}
development_status:
# All epics, stories, and retrospectives in order
```
<action>Write the complete sprint status YAML to {status_file}</action>
<action>CRITICAL: For story_location field, write the variable value EXACTLY as defined in workflow.yaml: "{project-root}/docs/stories"</action>
<action>CRITICAL: Do NOT resolve {project-root} to an absolute path - keep it as the literal string "{project-root}/docs/stories"</action>
<action>CRITICAL: Metadata appears TWICE - once as comments (#) for documentation, once as YAML key:value fields for parsing</action>
<action>Ensure all items are ordered: epic, its stories, its retrospective, next epic...</action>
</step>
<step n="5" goal="Validate and report">
<action>Perform validation checks:</action>
- [ ] Every epic in epic files appears in sprint-status.yaml
- [ ] Every story in epic files appears in sprint-status.yaml
- [ ] Every epic has a corresponding retrospective entry
- [ ] No items in sprint-status.yaml that don't exist in epic files
- [ ] All status values are legal (match state machine definitions)
- [ ] File is valid YAML syntax
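**Illustrative sketch:** one way to express the coverage checks, assuming the step-1 inventory and the parsed `development_status` mapping are both available as dictionaries.
```python
def coverage_problems(inventory, development_status):
    """Compare the epic-file inventory with sprint-status.yaml entries (sketch).

    inventory maps epic keys (e.g. "epic-1") to the story keys found in that epic file;
    development_status is the parsed mapping from sprint-status.yaml.
    """
    expected = set()
    for epic_key, story_keys in inventory.items():
        expected.add(epic_key)
        expected.add(f"{epic_key}-retrospective")
        expected.update(story_keys)

    missing = sorted(expected - set(development_status))
    extra = sorted(set(development_status) - expected)
    return ([f"missing from sprint-status.yaml: {k}" for k in missing] +
            [f"not found in any epic file: {k}" for k in extra])
```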
<action>Count totals:</action>
- Total epics: {{epic_count}}
- Total stories: {{story_count}}
- Epics contexted: {{contexted_count}}
- Stories in progress: {{in_progress_count}}
- Stories done: {{done_count}}
<action>Display completion summary to {user_name} in {communication_language}:</action>
**Sprint Status Generated Successfully**
- **File Location:** {status_file}
- **Total Epics:** {{epic_count}}
- **Total Stories:** {{story_count}}
- **Contexted Epics:** {{contexted_count}}
- **Stories In Progress:** {{in_progress_count}}
- **Stories Completed:** {{done_count}}
**Next Steps:**
1. Review the generated sprint-status.yaml
2. Use this file to track development progress
3. Agents will update statuses as they work
4. Re-run this workflow to refresh auto-detected statuses
</step>
</workflow>
## Additional Documentation
### Status State Machine
**Epic Status Flow:**
```
backlog → contexted
```
- **backlog**: Epic exists in epic file but tech context not created
- **contexted**: Epic tech context has been generated (prerequisite for story drafting)
**Story Status Flow:**
```
backlog → drafted → ready-for-dev → in-progress → review → done
```
- **backlog**: Story only exists in epic file
- **drafted**: Story file created (e.g., `stories/1-3-plant-naming.md`)
- **ready-for-dev**: Draft approved + story context created
- **in-progress**: Developer actively working
- **review**: Under SM review (via code-review workflow)
- **done**: Completed
**Retrospective Status:**
```
optional ↔ completed
```
- **optional**: Can be done but not required
- **completed**: Retrospective has been completed
### Guidelines
1. **Epic Context Recommended**: Epics should be `contexted` before stories can be `drafted`
2. **Sequential Default**: Stories are typically worked in order, but parallel work is supported
3. **Parallel Work Supported**: Multiple stories can be `in-progress` if team capacity allows
4. **Review Before Done**: Stories should pass through `review` before `done`
5. **Learning Transfer**: SM typically drafts next story after previous one is `done` to incorporate learnings
### Error Handling
- If epic file can't be parsed, report specific file and continue with others
- If existing status file is malformed, backup and regenerate
- Log warnings for duplicate story IDs across epics
- Validate status transitions are legal (can't go from `backlog` to `done`)
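A sketch of one possible transition check, treating each change as a single forward step; how to treat file-detection upgrades that pass through several states at once is a project decision.
```python
EPIC_FLOW = ["backlog", "contexted"]
STORY_FLOW = ["backlog", "drafted", "ready-for-dev", "in-progress", "review", "done"]
RETRO_FLOW = ["optional", "completed"]

def is_legal_transition(flow, old, new):
    """Allow staying put or advancing one step at a time; reject skips and downgrades (sketch)."""
    if old not in flow or new not in flow:
        return False
    return flow.index(new) - flow.index(old) in (0, 1)
```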

View File

@@ -0,0 +1,55 @@
# Sprint Status Template
# This is an EXAMPLE showing the expected format
# The actual file will be generated with all epics/stories from your epic files
# generated: {date}
# project: {project_name}
# project_key: {project_key}
# tracking_system: {tracking_system}
# story_location: {story_location}
# STATUS DEFINITIONS:
# ==================
# Epic Status:
# - backlog: Epic exists in epic file but not contexted
# - contexted: Epic tech context created by *epic-tech-context (required before drafting stories)
#
# Story Status:
# - backlog: Story only exists in epic file
# - drafted: Story file created in stories folder by *create-story
# - ready-for-dev: Draft approved and story context created by *story-ready
# - in-progress: Developer actively working on implementation by *dev-story
# - review: Implementation complete, ready for review by *code-review
# - done: Story completed by *story-done
#
# Retrospective Status:
# - optional: Can be completed but not required
# - completed: Retrospective has been done by *retrospective
#
# WORKFLOW NOTES:
# ===============
# - Epics should be 'contexted' before stories can be 'drafted'
# - SM typically drafts next story ONLY after previous one is 'done' to incorporate learnings
# - Dev moves story to 'review', SM reviews, then Dev moves to 'done'
# EXAMPLE STRUCTURE (your actual epics/stories will replace these):
generated: 05-06-2025 21:30
project: My Awesome Project
project_key: jira-1234
tracking_system: file-system
story_location: "{project-root}/docs/stories"
development_status:
epic-1: contexted
1-1-user-authentication: done
1-2-account-management: drafted
1-3-plant-data-model: backlog
1-4-add-plant-manual: backlog
epic-1-retrospective: optional
epic-2: backlog
2-1-personality-system: backlog
2-2-chat-interface: backlog
2-3-llm-integration: backlog
epic-2-retrospective: optional

View File

@@ -0,0 +1,39 @@
name: sprint-planning
description: "Generate and manage the sprint status tracking file for Phase 4 implementation, extracting all epics and stories from epic files and tracking their status through the development lifecycle"
author: "BMad"
# Critical variables from config
config_source: "{project-root}/bmad/bmm/config.yaml"
output_folder: "{config_source}:output_folder"
user_name: "{config_source}:user_name"
communication_language: "{config_source}:communication_language"
date: system-generated
# Workflow components
installed_path: "{project-root}/bmad/bmm/workflows/4-implementation/sprint-planning"
instructions: "{installed_path}/instructions.md"
template: "{installed_path}/sprint-status-template.yaml"
validation: "{installed_path}/checklist.md"
# Variables and inputs
variables:
# Project identification
project_name: "{config_source}:project_name"
project_key: "{config_source}:project_name" # Future: Jira project key, Linear workspace ID, etc.
# Tracking system configuration
tracking_system: "file-system" # Options: file-system, Future will support other options from config of mcp such as jira, linear, trello
story_location: "{project-root}/docs/stories" # Relative path for file-system, Future will support URL for Jira/Linear/Trello
story_location_absolute: "{config_source}:dev_story_location" # Absolute path for file operations
# Source files (file-system only)
epics_location: "{output_folder}" # Directory containing epic*.md files
epics_pattern: "epic*.md" # Pattern to find epic files
# Output configuration
status_file: "{output_folder}/sprint-status.yaml"
default_output_file: "{status_file}"
standalone: true

View File

@@ -0,0 +1,234 @@
# Story Context Assembly Workflow
## Overview
Assembles a dynamic Story Context XML by pulling latest documentation and existing code/library artifacts relevant to a drafted story. Creates comprehensive development context for AI agents and developers working on specific stories.
## Key Features
- **Automated Context Discovery** - Scans documentation and codebase for relevant artifacts
- **XML Output Format** - Structured context optimized for LLM consumption
- **Dependency Detection** - Identifies frameworks, packages, and technical dependencies
- **Interface Mapping** - Locates existing APIs and interfaces to reuse
- **Testing Integration** - Includes testing standards and generates test ideas
- **Status Tracking** - Updates story status and maintains context references
## Usage
### Basic Invocation
```bash
workflow story-context
```
### With Specific Story
```bash
# Process specific story file
workflow story-context --input /path/to/story.md
# Using story path variable
workflow story-context --story_path "docs/stories/1-2-feature-name.md"
```
### Configuration
- **story_path**: Path to the target story markdown file (if omitted, the first story with status `drafted` in sprint-status.yaml is used)
- **auto_update_status**: Whether to automatically update story status (default: false)
## Workflow Structure
### Files Included
```
story-context/
├── workflow.yaml # Configuration and metadata
├── instructions.md # Step-by-step execution guide
├── context-template.xml # XML structure template
├── checklist.md # Validation criteria
└── README.md # This file
```
## Workflow Process
### Phase 1: Story Analysis (Steps 1-2)
- **Story Location**: Finds and loads target story markdown file
- **Content Parsing**: Extracts epic ID, story ID, title, acceptance criteria, and tasks
- **Template Initialization**: Creates initial XML context structure
- **User Story Extraction**: Parses "As a... I want... So that..." components
### Phase 2: Documentation Discovery (Step 3)
- **Keyword Analysis**: Identifies relevant terms from story content
- **Document Scanning**: Searches docs and module documentation
- **Authority Prioritization**: Prefers PRDs, architecture docs, and specs
- **Context Extraction**: Captures relevant sections with snippets
### Phase 3: Code Analysis (Step 4)
- **Symbol Search**: Finds relevant modules, functions, and components
- **Interface Identification**: Locates existing APIs and interfaces
- **Constraint Extraction**: Identifies development patterns and requirements
- **Reuse Opportunities**: Highlights existing code to leverage
### Phase 4: Dependency Analysis (Step 5)
- **Manifest Detection**: Scans for package.json, requirements.txt, go.mod, etc.
- **Framework Identification**: Identifies Unity, Node.js, Python, Go ecosystems
- **Version Tracking**: Captures dependency versions where available
- **Configuration Discovery**: Finds relevant project configurations
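As a rough sketch of how such detection might work (assumes detection by well-known manifest file names at the repository root; the list is illustrative, not exhaustive):
```python
from pathlib import Path

MANIFESTS = {  # well-known manifest files per ecosystem (illustrative, not exhaustive)
    "node": ["package.json"],
    "python": ["pyproject.toml", "requirements.txt"],
    "go": ["go.mod"],
    "unity": ["Packages/manifest.json"],
}

def detect_ecosystems(repo_root):
    """Return ecosystems whose manifest files exist under the repo root (sketch)."""
    root = Path(repo_root)
    found = {}
    for ecosystem, files in MANIFESTS.items():
        present = [name for name in files if (root / name).exists()]
        if present:
            found[ecosystem] = present
    return found
```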
### Phase 5: Testing Context (Step 6)
- **Standards Extraction**: Identifies testing frameworks and patterns
- **Location Mapping**: Documents where tests should be placed
- **Test Ideas**: Generates initial test concepts for acceptance criteria
- **Framework Integration**: Links to existing test infrastructure
### Phase 6: Validation and Updates (Steps 7-8)
- **XML Validation**: Ensures proper structure and completeness
- **Status Updates**: Changes story status from `drafted` to `ready-for-dev`
- **Reference Tracking**: Adds context file reference to story document
- **Quality Assurance**: Validates against workflow checklist
## Output
### Generated Files
- **Primary output**: `{story_key}.context.xml` in the stories directory (e.g., `1-2-user-authentication.context.xml`)
- **Story updates**: Modified original story file with context references
### Output Structure
```xml
<storyContext>
<story>
<basicInfo>
<epicId>...</epicId>
<storyId>...</storyId>
<title>...</title>
<status>...</status>
</basicInfo>
<userStory>
<asA>...</asA>
<iWant>...</iWant>
<soThat>...</soThat>
</userStory>
<acceptanceCriteria>
<criterion id="1">...</criterion>
</acceptanceCriteria>
<tasks>
<task>...</task>
</tasks>
</story>
<artifacts>
<docs>
<doc path="..." title="..." section="..." snippet="..."/>
</docs>
<code>
<file path="..." kind="..." symbol="..." lines="..." reason="..."/>
</code>
<dependencies>
<node>
<package name="..." version="..."/>
</node>
</dependencies>
</artifacts>
<interfaces>
<interface name="..." kind="..." signature="..." path="..."/>
</interfaces>
<constraints>
<constraint>...</constraint>
</constraints>
<tests>
<standards>...</standards>
<locations>
<location>...</location>
</locations>
<ideas>
<idea ac="1">...</idea>
</ideas>
</tests>
</storyContext>
```
## Requirements
- **Story File**: Valid story markdown with proper structure (`{epic}-{story}-{title}.md` format, e.g., `1-2-user-auth.md`)
- **Repository Access**: Ability to scan documentation and source code
- **Documentation**: Project documentation in standard locations (docs/, src/, etc.)
## Best Practices
### Before Starting
1. **Ensure Story Quality**: Verify story has clear acceptance criteria and tasks
2. **Update Documentation**: Ensure relevant docs are current and discoverable
3. **Clean Repository**: Remove obsolete code that might confuse context assembly
### During Execution
1. **Review Extracted Context**: Verify that discovered artifacts are actually relevant
2. **Check Interface Accuracy**: Ensure identified APIs and interfaces are current
3. **Validate Dependencies**: Confirm dependency information matches project state
### After Completion
1. **Review XML Output**: Validate the generated context makes sense
2. **Test with Developer**: Have a developer review context usefulness
3. **Update Story Status**: Verify story status was properly updated
## Troubleshooting
### Common Issues
**Issue**: Context contains irrelevant or outdated information
- **Solution**: Review keyword extraction and document filtering logic
- **Check**: Ensure story title and acceptance criteria are specific and clear
**Issue**: Missing relevant code or interfaces
- **Solution**: Verify code search patterns and symbol extraction
- **Check**: Ensure relevant code follows project naming conventions
**Issue**: Dependency information is incomplete or wrong
- **Solution**: Check for multiple package manifests or unusual project structure
- **Check**: Verify dependency files are in expected locations and formats
## Customization
To customize this workflow:
1. **Modify Search Patterns**: Update instructions.md to adjust code and doc discovery
2. **Extend XML Schema**: Customize context-template.xml for additional context types
3. **Add Validation**: Extend checklist.md with project-specific quality criteria
4. **Configure Dependencies**: Adjust dependency detection for specific tech stacks
## Version History
- **v6.0.0** - XML-based context assembly with comprehensive artifact discovery
- Multi-ecosystem dependency detection
- Interface and constraint extraction
- Testing context integration
- Story status management
## Support
For issues or questions:
- Review the workflow creation guide at `/bmad/bmb/workflows/create-workflow/workflow-creation-guide.md`
- Validate output using `checklist.md`
- Ensure story files follow expected markdown structure
- Check that repository structure supports automated discovery
---
_Part of the BMad Method v6 - BMM (Method) Module_

View File

@@ -0,0 +1,16 @@
# Story Context Assembly Checklist
```xml
<checklist id="bmad/bmm/workflows/4-implementation/story-context/checklist">
<item>Story fields (asA/iWant/soThat) captured</item>
<item>Acceptance criteria list matches story draft exactly (no invention)</item>
<item>Tasks/subtasks captured as task list</item>
<item>Relevant docs (5-15) included with path and snippets</item>
<item>Relevant code references included with reason and line hints</item>
<item>Interfaces/API contracts extracted if applicable</item>
<item>Constraints include applicable dev rules and patterns</item>
<item>Dependencies detected from manifests and frameworks</item>
<item>Testing standards and locations populated</item>
<item>XML structure follows story-context template format</item>
</checklist>
```

View File

@@ -0,0 +1,34 @@
<story-context id="bmad/bmm/workflows/4-implementation/story-context/template" v="1.0">
<metadata>
<epicId>{{epic_id}}</epicId>
<storyId>{{story_id}}</storyId>
<title>{{story_title}}</title>
<status>{{story_status}}</status>
<generatedAt>{{date}}</generatedAt>
<generator>BMAD Story Context Workflow</generator>
<sourceStoryPath>{{story_path}}</sourceStoryPath>
</metadata>
<story>
<asA>{{as_a}}</asA>
<iWant>{{i_want}}</iWant>
<soThat>{{so_that}}</soThat>
<tasks>{{story_tasks}}</tasks>
</story>
<acceptanceCriteria>{{acceptance_criteria}}</acceptanceCriteria>
<artifacts>
<docs>{{docs_artifacts}}</docs>
<code>{{code_artifacts}}</code>
<dependencies>{{dependencies_artifacts}}</dependencies>
</artifacts>
<constraints>{{constraints}}</constraints>
<interfaces>{{interfaces}}</interfaces>
<tests>
<standards>{{test_standards}}</standards>
<locations>{{test_locations}}</locations>
<ideas>{{test_ideas}}</ideas>
</tests>
</story-context>

View File

@@ -0,0 +1,204 @@
<!-- BMAD BMM Story Context Assembly Instructions (v6) -->
```xml
<critical>The workflow execution engine is governed by: {project-root}/bmad/core/tasks/workflow.xml</critical>
<critical>You MUST have already loaded and processed: {installed_path}/workflow.yaml</critical>
<critical>Communicate all responses in {communication_language}</critical>
<critical>Generate all documents in {document_output_language}</critical>
<critical>This workflow assembles a Story Context file for a single drafted story by extracting acceptance criteria, tasks, relevant docs/code, interfaces, constraints, and testing guidance.</critical>
<critical>If story_path is provided, use it. Otherwise, find the first story with status "drafted" in sprint-status.yaml. If none found, HALT.</critical>
<critical>Check if context file already exists. If it does, ask user if they want to replace it, verify it, or cancel.</critical>
<critical>DOCUMENT OUTPUT: Technical context file (.context.xml). Concise, structured, project-relative paths only.</critical>
<workflow>
<step n="1" goal="Find drafted story and check for existing context" tag="sprint-status">
<check if="{{story_path}} is provided">
<action>Use {{story_path}} directly</action>
<action>Read COMPLETE story file and parse sections</action>
<action>Extract story_key from filename or story metadata</action>
<action>Verify Status is "drafted" - if not, HALT with message: "Story status must be 'drafted' to generate context"</action>
</check>
<check if="{{story_path}} is NOT provided">
<critical>MUST read COMPLETE sprint-status.yaml file from start to end to preserve order</critical>
<action>Load the FULL file: {{output_folder}}/sprint-status.yaml</action>
<action>Read ALL lines from beginning to end - do not skip any content</action>
<action>Parse the development_status section completely</action>
<action>Find FIRST story (reading in order from top to bottom) where:
- Key matches pattern: number-number-name (e.g., "1-2-user-auth")
- NOT an epic key (epic-X) or retrospective (epic-X-retrospective)
- Status value equals "drafted"
</action>
<check if="no story with status 'drafted' found">
<output>📋 No drafted stories found in sprint-status.yaml
All stories are either still in backlog or already marked ready/in-progress/done.
**Next Steps:**
1. Run `create-story` to draft more stories
2. Run `sprint-planning` to refresh story tracking
</output>
<action>HALT</action>
</check>
<action>Use the first drafted story found</action>
<action>Find matching story file in {{story_dir}} using story_key pattern</action>
<action>Read the COMPLETE story file</action>
</check>
<action>Extract {{epic_id}}, {{story_id}}, {{story_title}}, {{story_status}} from filename/content</action>
<action>Parse sections: Story, Acceptance Criteria, Tasks/Subtasks, Dev Notes</action>
<action>Extract user story fields (asA, iWant, soThat)</action>
<template-output file="{default_output_file}">story_tasks</template-output>
<template-output file="{default_output_file}">acceptance_criteria</template-output>
<!-- Check if context file already exists -->
<action>Check if file exists at {default_output_file}</action>
<check if="context file already exists">
<output>⚠️ Context file already exists: {default_output_file}
**What would you like to do?**
1. **Replace** - Generate new context file (overwrites existing)
2. **Verify** - Validate existing context file
3. **Cancel** - Exit without changes
</output>
<ask>Choose action (replace/verify/cancel):</ask>
<check if="user chooses verify">
<action>GOTO validation_step</action>
</check>
<check if="user chooses cancel">
<action>HALT with message: "Context generation cancelled"</action>
</check>
<check if="user chooses replace">
<action>Continue to generate new context file</action>
</check>
</check>
<action>Store project root path for relative path conversion: extract from {project-root} variable</action>
<action>Define path normalization function: convert any absolute path to project-relative by removing project root prefix</action>
<action>Initialize output by writing template to {default_output_file}</action>
<template-output file="{default_output_file}">as_a</template-output>
<template-output file="{default_output_file}">i_want</template-output>
<template-output file="{default_output_file}">so_that</template-output>
</step>
<step n="2" goal="Collect relevant documentation">
<action>Scan docs and src module docs for items relevant to this story's domain: search keywords from story title, ACs, and tasks.</action>
<action>Prefer authoritative sources: PRD, Architecture, Front-end Spec, Testing standards, module-specific docs.</action>
<action>For each discovered document: convert absolute paths to project-relative format by removing {project-root} prefix. Store only relative paths (e.g., "docs/prd.md" not "/Users/.../docs/prd.md").</action>
<template-output file="{default_output_file}">
Add artifacts.docs entries with {path, title, section, snippet}:
- path: PROJECT-RELATIVE path only (strip {project-root} prefix)
- title: Document title
- section: Relevant section name
- snippet: Brief excerpt (2-3 sentences max, NO invention)
</template-output>
</step>
<step n="3" goal="Analyze existing code, interfaces, and constraints">
<action>Search source tree for modules, files, and symbols matching story intent and AC keywords (controllers, services, components, tests).</action>
<action>Identify existing interfaces/APIs the story should reuse rather than recreate.</action>
<action>Extract development constraints from Dev Notes and architecture (patterns, layers, testing requirements).</action>
<action>For all discovered code artifacts: convert absolute paths to project-relative format (strip {project-root} prefix).</action>
<template-output file="{default_output_file}">
Add artifacts.code entries with {path, kind, symbol, lines, reason}:
- path: PROJECT-RELATIVE path only (e.g., "src/services/api.js" not full path)
- kind: file type (controller, service, component, test, etc.)
- symbol: function/class/interface name
- lines: line range if specific (e.g., "45-67")
- reason: brief explanation of relevance to this story
Populate interfaces with API/interface signatures:
- name: Interface or API name
- kind: REST endpoint, GraphQL, function signature, class interface
- signature: Full signature or endpoint definition
- path: PROJECT-RELATIVE path to definition
Populate constraints with development rules:
- Extract from Dev Notes and architecture
- Include: required patterns, layer restrictions, testing requirements, coding standards
</template-output>
</step>
<step n="4" goal="Gather dependencies and frameworks">
<action>Detect dependency manifests and frameworks in the repo:
- Node: package.json (dependencies/devDependencies)
- Python: pyproject.toml/requirements.txt
- Go: go.mod
- Unity: Packages/manifest.json, Assets/, ProjectSettings/
- Other: list notable frameworks/configs found</action>
<template-output file="{default_output_file}">
Populate artifacts.dependencies with keys for detected ecosystems and their packages with version ranges where present
</template-output>
</step>
<step n="5" goal="Testing standards and ideas">
<action>From Dev Notes, architecture docs, testing docs, and existing tests, extract testing standards (frameworks, patterns, locations).</action>
<template-output file="{default_output_file}">
Populate tests.standards with a concise paragraph
Populate tests.locations with directories or glob patterns where tests live
Populate tests.ideas with initial test ideas mapped to acceptance criteria IDs
</template-output>
</step>
<step n="6" goal="Validate and save">
<anchor id="validation_step" />
<action>Validate output context file structure and content</action>
<invoke-task>Validate against checklist at {installed_path}/checklist.md using bmad/core/tasks/validate-workflow.xml</invoke-task>
</step>
<step n="7" goal="Update story file and mark ready for dev" tag="sprint-status">
<action>Open {{story_path}}</action>
<action>Find the "Status:" line (usually at the top)</action>
<action>Update story file: Change Status to "ready-for-dev"</action>
<action>Under 'Dev Agent Record' → 'Context Reference' (create if missing), add or update a list item for {default_output_file}.</action>
<action>Save the story file.</action>
<!-- Update sprint status to mark ready-for-dev -->
<action>Load the FULL file: {{output_folder}}/sprint-status.yaml</action>
<action>Find development_status key matching {{story_key}}</action>
<action>Verify current status is "drafted" (expected previous state)</action>
<action>Update development_status[{{story_key}}] = "ready-for-dev"</action>
<action>Save file, preserving ALL comments and structure including STATUS DEFINITIONS</action>
<check if="story key not found in file">
<output>⚠️ Story file updated, but could not update sprint-status: {{story_key}} not found
You may need to run sprint-planning to refresh tracking.
</output>
</check>
<output>✅ Story context generated successfully, {user_name}!
**Story Details:**
- Story: {{epic_id}}.{{story_id}} - {{story_title}}
- Story Key: {{story_key}}
- Context File: {default_output_file}
- Status: drafted → ready-for-dev
**Context Includes:**
- Documentation artifacts and references
- Existing code and interfaces
- Dependencies and frameworks
- Testing standards and ideas
- Development constraints
**Next Steps:**
1. Review the context file: {default_output_file}
2. Run `dev-story` to implement the story
3. Generate context for more drafted stories if needed
</output>
</step>
</workflow>
```

View File

@@ -0,0 +1,30 @@
# Story Context Creation Workflow
name: story-context
description: "Assemble a dynamic Story Context XML by pulling latest documentation and existing code/library artifacts relevant to a drafted story"
author: "BMad"
# Critical variables
config_source: "{project-root}/bmad/bmm/config.yaml"
output_folder: "{config_source}:output_folder"
user_name: "{config_source}:user_name"
communication_language: "{config_source}:communication_language"
document_output_language: "{config_source}:document_output_language"
story_path: "{config_source}:dev_story_location"
date: system-generated
# Workflow components
installed_path: "{project-root}/bmad/bmm/workflows/4-implementation/story-context"
template: "{installed_path}/context-template.xml"
instructions: "{installed_path}/instructions.md"
validation: "{installed_path}/checklist.md"
# Variables and inputs
variables:
story_path: "" # Optional: Explicit story path. If not provided, finds first story with status "drafted"
story_dir: "{config_source}:dev_story_location"
# Output configuration
# Uses story_key from sprint-status.yaml (e.g., "1-2-user-authentication")
default_output_file: "{story_dir}/{{story_key}}.context.xml"
standalone: true

View File

@@ -0,0 +1,111 @@
# Story Approved Workflow Instructions (DEV Agent)
<critical>The workflow execution engine is governed by: {project-root}/bmad/core/tasks/workflow.xml</critical>
<critical>You MUST have already loaded and processed: {installed_path}/workflow.yaml</critical>
<critical>Communicate all responses in {communication_language}</critical>
<workflow>
<critical>This workflow is run by DEV agent AFTER user confirms a story is approved (Definition of Done is complete)</critical>
<critical>Workflow: Update story file status to Done</critical>
<step n="1" goal="Find reviewed story to mark done" tag="sprint-status">
<check if="{story_path} is provided">
<action>Use {story_path} directly</action>
<action>Read COMPLETE story file and parse sections</action>
<action>Extract story_key from filename or story metadata</action>
<action>Verify Status is "review" - if not, HALT with message: "Story status must be 'review' to mark as done"</action>
</check>
<check if="{story_path} is NOT provided">
<critical>MUST read COMPLETE sprint-status.yaml file from start to end to preserve order</critical>
<action>Load the FULL file: {output_folder}/sprint-status.yaml</action>
<action>Read ALL lines from beginning to end - do not skip any content</action>
<action>Parse the development_status section completely</action>
<action>Find FIRST story (reading in order from top to bottom) where:
- Key matches pattern: number-number-name (e.g., "1-2-user-auth")
- NOT an epic key (epic-X) or retrospective (epic-X-retrospective)
- Status value equals "review"
</action>
<check if="no story with status 'review' found">
<output>📋 No stories with status "review" found
All stories are either still in development or already done.
**Next Steps:**
1. Run `dev-story` to implement stories
2. Run `code-review` if stories need review first
3. Check sprint-status.yaml for current story states
</output>
<action>HALT</action>
</check>
<action>Use the first reviewed story found</action>
<action>Find matching story file in {story_dir} using story_key pattern</action>
<action>Read the COMPLETE story file</action>
</check>
<action>Extract story_id and story_title from the story file</action>
<action>Find the "Status:" line (usually at the top)</action>
<action>Update story file: Change Status to "done"</action>
<action>Add completion notes to Dev Agent Record section:</action>
<action>Find "## Dev Agent Record" section and add:
```
### Completion Notes
**Completed:** {date}
**Definition of Done:** All acceptance criteria met, code reviewed, tests passing
```
</action>
<action>Save the story file</action>
</step>
<step n="2" goal="Update sprint status to done" tag="sprint-status">
<action>Load the FULL file: {output_folder}/sprint-status.yaml</action>
<action>Find development_status key matching {story_key}</action>
<action>Verify current status is "review" (expected previous state)</action>
<action>Update development_status[{story_key}] = "done"</action>
<action>Save file, preserving ALL comments and structure including STATUS DEFINITIONS</action>
<check if="story key not found in file">
<output>⚠️ Story file updated, but could not update sprint-status: {story_key} not found
Story is marked Done in file, but sprint-status.yaml may be out of sync.
</output>
</check>
</step>
<step n="3" goal="Confirm completion to user">
<output>**Story Approved and Marked Done, {user_name}!**
✅ Story file updated → Status: done
✅ Sprint status updated: review → done
**Completed Story:**
- **ID:** {story_id}
- **Key:** {story_key}
- **Title:** {story_title}
- **Completed:** {date}
**Next Steps:**
1. Continue with next story in your backlog
- Run `create-story` for next backlog story
- Or run `dev-story` if ready stories exist
2. Check epic completion status
- Run `retrospective` workflow to check if epic is complete
- Epic retrospective will verify all stories are done
</output>
</step>
</workflow>

View File

@@ -0,0 +1,25 @@
# Story Done Workflow (DEV Agent)
name: story-done
description: "Marks a story as done (DoD complete) and moves it from its current status → DONE in the status file. Advances the story queue. Simple status-update workflow with no searching required."
author: "BMad"
# Critical variables from config
config_source: "{project-root}/bmad/bmm/config.yaml"
output_folder: "{config_source}:output_folder"
user_name: "{config_source}:user_name"
communication_language: "{config_source}:communication_language"
date: system-generated
# Workflow components
installed_path: "{project-root}/bmad/bmm/workflows/4-implementation/story-done"
instructions: "{installed_path}/instructions.md"
# Variables and inputs
variables:
story_path: "" # Explicit path to story file
story_dir: "{config_source}:dev_story_location" # Directory where stories are stored
# Output configuration - no output file, just status updates
default_output_file: ""
standalone: true

View File

@@ -0,0 +1,117 @@
# Story Ready Workflow Instructions (SM Agent)
<critical>The workflow execution engine is governed by: {project-root}/bmad/core/tasks/workflow.xml</critical>
<critical>You MUST have already loaded and processed: {installed_path}/workflow.yaml</critical>
<critical>Communicate all responses in {communication_language} and language MUST be tailored to {user_skill_level}</critical>
<critical>Generate all documents in {document_output_language}</critical>
<workflow>
<critical>This workflow is run by SM agent AFTER user reviews a drafted story and confirms it's ready for development</critical>
<critical>Simple workflow: Update story file status to Ready</critical>
<step n="1" goal="Find drafted story to mark ready" tag="sprint-status">
<action>If {{story_path}} is provided → use it directly; extract story_key from filename or metadata; GOTO mark_ready</action>
<critical>MUST read COMPLETE sprint-status.yaml file from start to end to preserve order</critical>
<action>Load the FULL file: {{output_folder}}/sprint-status.yaml</action>
<action>Read ALL lines from beginning to end - do not skip any content</action>
<action>Parse the development_status section completely</action>
<action>Find ALL stories (reading in order from top to bottom) where:
- Key matches pattern: number-number-name (e.g., "1-2-user-auth")
- NOT an epic key (epic-X) or retrospective (epic-X-retrospective)
- Status value equals "drafted"
</action>
<action>Collect up to 10 drafted story keys in order (limit for display purposes)</action>
<action>Count total drafted stories found</action>
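**Illustrative sketch:** the story-key filter described above, assuming `development_status` has been loaded into an insertion-ordered mapping.
```python
import re

STORY_KEY = re.compile(r"^\d+-\d+-[a-z0-9-]+$")  # "1-2-user-auth"; excludes epic-X and epic-X-retrospective

def drafted_story_keys(development_status, wanted="drafted"):
    """List story keys with the wanted status, preserving file order (sketch)."""
    return [key for key, status in development_status.items()
            if STORY_KEY.match(key) and status == wanted]
```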
<check if="no drafted stories found">
<output>📋 No drafted stories found in sprint-status.yaml
All stories are either still in backlog or already marked ready/in-progress/done.
**Options:**
1. Run `create-story` to draft more stories
2. Run `sprint-planning` to refresh story tracking
</output>
<action>HALT</action>
</check>
<action>Display available drafted stories:
**Drafted Stories Available ({{drafted_count}} found):**
{{list_of_drafted_story_keys}}
</action>
<ask if="{{non_interactive}} == false">Select the drafted story to mark as Ready (enter story key or number):</ask>
<action if="{{non_interactive}} == true">Auto-select first story from the list</action>
<action>Resolve selected story_key from user input or auto-selection</action>
<action>Find matching story file in {{story_dir}} using story_key pattern</action>
<anchor id="mark_ready" />
<action>Read the story file from resolved path</action>
<action>Extract story_id and story_title from the file</action>
<action>Find the "Status:" line (usually at the top)</action>
<action>Update story file: Change Status to "ready-for-dev"</action>
<action>Save the story file</action>
</step>
<step n="2" goal="Update sprint status to ready-for-dev" tag="sprint-status">
<action>Load the FULL file: {{output_folder}}/sprint-status.yaml</action>
<action>Find development_status key matching {{story_key}}</action>
<action>Verify current status is "drafted" (expected previous state)</action>
<action>Update development_status[{{story_key}}] = "ready-for-dev"</action>
<action>Save file, preserving ALL comments and structure including STATUS DEFINITIONS</action>
<check if="story key not found in file">
<output>⚠️ Story file updated, but could not update sprint-status: {{story_key}} not found
You may need to run sprint-planning to refresh tracking.
</output>
</check>
</step>
<step n="3" goal="Confirm completion to user">
<output>**Story Marked Ready for Development, {user_name}!**
✅ Story file updated: `{{story_file}}` → Status: ready-for-dev
✅ Sprint status updated: drafted → ready-for-dev
**Story Details:**
- **ID:** {{story_id}}
- **Key:** {{story_key}}
- **Title:** {{story_title}}
- **File:** `{{story_file}}`
- **Status:** ready-for-dev
**Next Steps:**
1. **Recommended:** Run `story-context` workflow to generate implementation context
- This creates a comprehensive context XML for the DEV agent
- Includes relevant architecture, dependencies, and existing code
2. **Alternative:** Skip context generation and go directly to `dev-story` workflow
- Faster, but DEV agent will have less context
- Only recommended for simple, well-understood stories
**To proceed:**
- For context generation: Stay with SM agent and run `story-context` workflow
- For direct implementation: Load DEV agent and run `dev-story` workflow
</output>
</step>
</workflow>

View File

@@ -0,0 +1,25 @@
# Story Ready Workflow (SM Agent)
name: story-ready
description: "Marks a drafted story as ready for development and moves it from TODO → IN PROGRESS in the status file. Simple status-update workflow with no searching required."
author: "BMad"
# Critical variables from config
config_source: "{project-root}/bmad/bmm/config.yaml"
output_folder: "{config_source}:output_folder"
user_name: "{config_source}:user_name"
communication_language: "{config_source}:communication_language"
date: system-generated
# Workflow components
installed_path: "{project-root}/bmad/bmm/workflows/4-implementation/story-ready"
instructions: "{installed_path}/instructions.md"
# Variables and inputs
variables:
story_path: "" # Explicit path to story file
story_dir: "{config_source}:dev_story_location" # Directory where stories are stored
# Output configuration - no output file, just status updates
default_output_file: ""
standalone: true